DOI: 10.1093/oso/9780197639191.001.0001
Printed by Sheridan Books, Inc., United States of America
Ryan Jenkins dedicates this book to those injured or killed in automobile accidents
the world over—and those working to bend the arc of technological progress to
minimize the human suffering that results.
David Černý dedicates this book to his fiancée, Alena, who gives meaning and joy
to his work.
Tomáš Hříbek dedicates the book to all those who are tired of being drivers and
hope to be liberated by AV technology.
Contents
Acknowledgments xi
Contributors xiii
Introduction xix
PART I AUTONOMOUS VEHICLES AND TROLLEY PROBLEMS
Introduction by David Černý
1. Ethics and Risk Distribution for Autonomous Vehicles 7
Nicholas G. Evans
2. Autonomous Vehicles, the Badness of Death, and Discrimination 20
David Černý
3. Automated Vehicles and the Ethics of Classification 41
Geoff Keeling
4. Trolleys and Autonomous Vehicles: New Foundations for the
Ethics of Machine Learning 58
Jeff Behrends and John Basl
5. The Trolley Problem and the Ethics of Autonomous Vehicles
in the Eyes of the Public: Experimental Evidence 80
Akira Inoue, Kazumi Shimizu, Daisuke Udagawa, and Yoshiki
Wakamatsu
6. Autonomous Vehicles in Drivers’ School: A Non-Western
Perspective 99
Soraj Hongladarom and Daniel D. Novotný
7. Autonomous Vehicles and Normative Pluralism 114
Saul Smilansky
8. Discrimination in Algorithmic Trolley Problems 130
Derek Leben
PART II ETHICAL ISSUES BEYOND THE TROLLEY PROBLEM
Introduction by Ryan Jenkins
9. Unintended Externalities of Highly Automated Vehicles 147
Jeffrey K. Gurney
10. The Politics of Self-Driving Cars: Soft Ethics, Hard Law, Big
Business, Social Norms 159
Ugo Pagallo
11. Autonomous Vehicles and Ethical Settings: Who Should Decide? 176
Paul Formosa
12. Algorithms of Life and Death: A Utilitarian Approach to the
Ethics of Self-Driving Cars 191
Stephen Bennett
13. Autonomous Vehicles, Business Ethics, and Risk Distribution in
Hybrid Traffic 210
Brian Berkey
14. An Epistemic Approach to Cultivating Appropriate Trust in
Autonomous Vehicles 229
Kendra Chilson
15. How Soon Is Now?: On the Timing and Conditions for Adopting
Widespread Use of Autonomous Vehicles 243
Leonard Kahn
16. The Ethics of Abuse and Unintended Consequences for
Autonomous Vehicles 257
Keith Abney
PART III PERSPECTIVES FROM POLITICAL PHILOSOPHY
Introduction by Tomáš Hříbek
17. Distributive Justice, Institutionalism, and Autonomous Vehicles 279
Patrick Taylor Smith
18. Autonomous Vehicles and the Basic Structure of Society 295
Veljko Dubljević and William A. Bauer
19. Supply Chains, Work Alternatives, and Autonomous Vehicles 316
Luke Golemon, Fritz Allhoff, and T. J. Broy
PART IV AUTONOMOUS VEHICLE TECHNOLOGY IN THE CITY
Introduction by Tomáš Hříbek
22. Fixing Congestion for Whom? The Distribution of Autonomous
Vehicles’ Effects on Congestion 375
Carole Turley Voulgaris
23. Fulfilling the Promise of Autonomous Vehicles with
a New Ethics of Transportation 390
Beaudry Kock and Yolanda Lannquist
24. Ethics, Autonomous Vehicles, and the Future City 415
Jason Borenstein, John Bucher, and Joseph Herkert
25. The Autonomous Vehicle in Asian Cities: Opportunities for
Gender Equity, Convivial Urban Relations, and Public Safety
in Seoul and Singapore 432
Jeffrey K. H. Chan and Jiwon Shim
26. Autonomous Vehicles, the Driverless City, and
the Pedestrian City 451
Tomáš Hříbek
Acknowledgments
The editors of this collection would collectively like to thank our editors at Oxford University Press, whose stewardship of the manuscript through the process of proposal, review, editing, and publication has been tremendously helpful, in particular Peter Ohlin.
Ryan Jenkins would like to thank Patrick Lin, Keith Abney, and Zachary Rentz
for innumerable enlightening and invigorating conversations about the ethical
and social implications of emerging technologies. Without them, in fact, his interest in technology might have never been stoked. His mentors and advocates
are too numerous to list, but among them are Bradley Strawser, Benjamin Hale,
Alastair Norcross, and Duncan Purves. If it is true that we are a reflection of the
people closest to us, then Ryan has been lucky to have found himself close to this
group of insightful, careful, and indefatigable thinkers. Finally, he’d like to thank
his wife, Gina, with whom he has been grateful to share both his successes and
setbacks.
David Černý would like to thank Saul Smilansky for his friendliness, kindness, and continuous support. Saul sets a fine example of philosophical sophistication, exactness, courage to explore unprecedented routes, and love of wisdom. Special thanks go to Patrick Lin for his passion for philosophy, love of discussion, and generosity. Without Patrick, this book would not have been possible.
Tomáš Hříbek joins David Černý in thanking Patrick Lin for his friendship
and collegiality, and for his invaluable support of this project. Tomáš also wishes
to extend a big thank-you to Ryan Jenkins for bearing most of the responsibility
for preselecting and contacting the candidate contributors, and communicating
with them. Without his good rapport with the contributors, the project would
have hardly gotten off the ground. Finally, Tomáš thanks his colleagues Dan
Novotný and Pavel Kalina for their advice and feedback.
Both David and Tomáš are grateful for the support of grant project
TL01000467 “Ethics of Autonomous Vehicles” of the Technology Agency of the
Czech Republic.
Contributors
Keith Abney is Senior Lecturer in the Philosophy Department and a Senior Fellow at the
Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis
Obispo. His research involves the ethics of emerging technologies, especially space ethics,
artificial intelligence/robot ethics, and bioethics, as well as autonomous vehicles.
Dr. Fritz Allhoff, JD, PhD, is Professor in the Department of Philosophy at Western
Michigan University, and Community Professor in the Program in Medical Ethics,
Humanities, and Law at the Western Michigan University Homer Stryker M.D. School of
Medicine. He publishes in ethical theory, applied ethics, and philosophy of law.
Brian Berkey is an Assistant Professor in the Legal Studies and Business Ethics
Department in the Wharton School at the University of Pennsylvania. He works in
moral and political philosophy, and he has published articles on moral demandingness,
obligations of justice, climate change, exploitation, effective altruism, ethical consumerism, and collective obligations.
Jason Borenstein, PhD, is the Director of Graduate Research Ethics Programs at the
Georgia Institute of Technology. His appointment is divided between the School of Public
Policy and the Office of Graduate Studies. His teaching and research interests include
robot and artificial intelligence ethics, engineering ethics, research ethics/RCR, and
bioethics.
Dr. Anne Brown is an Assistant Professor in the School of Planning, Public Policy, and
Management at the University of Oregon. She researches issues of transportation equity,
shared mobility, and travel behavior.
John Bucher, AICP, PMP, is a Senior Planner at Stantec and a Fellow at Tulane University’s
Disaster Resilience Leadership Academy. His work is focused on building sustainable resilience through community development, climate adaptation, and hazard mitigation.
Dr. David Černý is a Research Fellow at the Institute of State and Law of the Czech
Academy of Sciences and the Institute of Computer Science of the Czech Academy of
Sciences. He is a founding member of the Karel Čapek Center for Values in Science and
Technology.
Dr. Jeffrey K. H. Chan is an Assistant Professor in the Humanities, Arts and Social
Sciences cluster at the Singapore University of Technology and Design. His research
focuses on design and planning ethics, and he is the author of two books, Urban Ethics in
the Anthropocene and Sharing by Design.
Dr. Madhu C. Dutta-Koehler is an MIT-educated dreamer, designer, dancer, and entrepreneur. Dutta-Koehler has been a professor of architecture and planning for over two decades and recently founded The Greener Health Benefit Corporation. An award-winning practitioner, she has lectured worldwide on issues where human development and climate change collide.
Dr. Paul Formosa is an Associate Professor in Philosophy and Director of the Centre for
Agency, Values and Ethics (CAVE) at Macquarie University. Paul has published widely on
topics in moral and political philosophy, with a focus on ethical issues raised by technologies such as videogames and artificial intelligence.
Luke Golemon, MA, is a PhD student in the Department of Philosophy at the University
of Arizona. His research is focused primarily on ethics of all flavors, political philosophy,
and philosophy of science. He pays special attention to their applications to technology,
medicine, feminist theory, and scientific theorizing.
Jeffrey K. Gurney is a partner with Nelson Mullins Riley & Scarborough, LLP. He is the
author of Automated Vehicle Law: Legal Liability, Regulation, and Data Security, which
was published by the American Bar Association, and numerous publications on automated vehicles. His practice includes representing companies involved in deploying automated driving systems.
Jennifer Hatch is the Strategic Advisor for Convergence at the Center for Sustainable
Energy, where she guides strategy for decarbonization infrastructure. Previously she
was a visiting scholar at the Boston University Urban Planning Department and led the
Transportation and Utility Practice at the BU Institute for Sustainable Energy. She holds a
bachelor’s degree from Wellesley College and a master’s degree in public policy from the
Harvard Kennedy School.
Soraj Hongladarom is Professor of Philosophy and Director of the Center for Science,
Technology, and Society at Chulalongkorn University.
Dr. Tomáš Hříbek is a Research Fellow at the Institute of Philosophy of the Czech
Academy of Sciences. Together with David Černý, he is the founder of the Karel Čapek
Center for Values in Science and Technology. He also teaches at several colleges, including
Charles University and Anglo-American University.
Dr. Ryan Jenkins is an Associate Professor of Philosophy, and a Senior Fellow at the Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis Obispo.
He studies the ethics of emerging technologies, especially artificial intelligence and robot
ethics. He has published extensively on autonomous vehicles.
Leonard Kahn is the Associate Dean of the College of Arts & Sciences and an Associate
Professor of Philosophy at Loyola University New Orleans. He is also the 2021–2022
Donald and Beverly Freeman Fellow at the Stockdale Center for Ethical Leadership, US
Naval Academy.
Dr. Geoff Keeling is an Affiliate Fellow at the Institute for Human-Centered Artificial
Intelligence at Stanford University, an Associate Fellow at the Leverhulme Centre for the
Future of Intelligence at the University of Cambridge, and a Bioethicist at Google. His re-
search focuses on ethics, decision theory, and artificial intelligence.
Dr. Beaudry Kock works on mass transit products at Apple, Inc. He has previously
worked in transportation technology at the MBTA, Ford Motor Company, Daimler, and
in numerous startups. His focus is on making both cities and the mass transportation experience more pleasant, safe, equitable, and sustainable.
Yolanda Lannquist is Head of Research & Advisory at The Future Society (TFS), a US
nonprofit specializing in governance of artificial intelligence and emerging technologies.
She leads artificial intelligence policy projects with international organizations and is appointed to the OECD AI Policy Observatory expert group on implementing trustworthy
artificial intelligence. She holds a master’s degree in public policy from Harvard Kennedy
School.
Derek Leben is Associate Teaching Professor of Ethics at the Tepper School of Business
at Carnegie Mellon University. His research focuses on the ethics of artificial intelligence
and autonomous systems, and he is the author of the book Ethics for Robots: How to Design
a Moral Algorithm (Routledge, 2018).
Daniel D. Novotný received his PhD from the State University of New York at Buffalo and is an Assistant Professor of Philosophy at the University of South Bohemia. He has published on the history of philosophy, metaphysics, and applied philosophy.
Ugo Pagallo is a Full Professor of Jurisprudence at the University of Turin and Faculty
Fellow at the CTLS in London. He is a member of several high-level expert groups of
international institutions, such as the European Commission and the World Health
Organization, on the legal impact of artificial intelligence and other emerging
technologies.
Dr. Kazumi Shimizu is a Professor in the Department of Political Science and Economics,
Waseda University, Japan. His research and teaching focus on experimental economics,
behavioral economics, and decision theory. His research is not only empirical but also
examines the methodological bases of empirical research.
Dr. Patrick Taylor Smith is Resident Fellow at the Stockdale Center for Ethical Leadership
at the United States Naval Academy. He was also a Postdoctoral Fellow at the McCoy
Center for Ethics in Society at Stanford University. His published work concerns the justice of emerging climate and military technologies.
Dr. Carole Turley Voulgaris is an Assistant Professor of Urban Planning at the Harvard
University Graduate School of Design. She is trained as a transportation engineer and as a
transportation planner. Her teaching and research focus on how transportation planning
institutions use data to inform plans and policies.
Dr. Yoshiki Wakamatsu is a Professor at Gakushuin University Law School, Japan. His
research and teaching focus on legal and political philosophy. Recently, he has published
two books about paternalism in Japanese, one about libertarian paternalism and the other
about J. S. Mill.
Introduction
Ryan Jenkins, David Černý, and Tomáš Hříbek
Autonomous Vehicle Ethics. Ryan Jenkins, David Černý, and Tomáš Hříbek, Oxford University Press.
© Oxford University Press 2022. DOI: 10.1093/oso/9780197639191.001.0001
distinct from the situation of engineers, since engineers do not enjoy certainty
about the consequences of their actions. And so on.
A handful of authors have maintained the usefulness of trolley problems as a general schema for understanding contrasting or conflicting values that companies will have to negotiate, for example, how much space to afford a bicyclist on one side and an oncoming car on the other side. These are examples of the trolley problem on a micro scale: imagining a single situation that an AV is engaged in and comparing it to a trolley problem. These are still questions about balancing risks and trade-offs, and the trolley problem is still somewhat useful
here. Ultimately, however, very few philosophers still accept that the trolley problem is a perfect analogy for driverless cars, or that the situations AVs face
will resemble the forced choice of the unlucky bystander in the original thought
experiment. It is safe to say that the academic conversation around AVs is moving
beyond the trolley problem.
If the trolley problem is retained, it is in a diminished role as a metaphor rather
than a literal analogy for thinking about the design of driverless cars. That is, it is
retained as a macro-level metaphor: it is useful to illustrate other problems with
AVs, or the forced choice that developers and companies will have to confront.
For example, making certain decisions about how to design AVs and where to deploy them could have disparate impacts on the elderly, the mobility challenged, the poor, or historically disadvantaged minorities. Each of these trade-offs can be put into the frame of a trolley problem where the agent is forced to distribute benefits and burdens one way or another—and the trolley problem is perhaps supremely useful among thought experiments for making those forced choices stark and explicit.
At the same time, predictions about the benefits of autonomous cars have become more muted. AVs were once hailed for their ability to all but eliminate automobile accidents, saving roughly a million lives per year worldwide; to reduce congestion and pollution; to drastically reduce the cost of insurance or eliminate the need for car ownership altogether; and more. We now know the
truth is more complicated. Creating a car that drives as reliably as a human—in
all conditions, in unpredictable circumstances, at night and in the snow, and
so on—is a wicked problem for engineering. AVs may save lives, but they may kill people whom human drivers wouldn't have. And while they may reduce congestion, those benefits will probably only be temporary, just as new lanes added to a highway inevitably fill right back up in a few years' time. While AVs might
provide mobility to the disabled, their benefits will accrue to the wealthiest
first, potentially exacerbating inequalities. In short, all of these questions about
the ethical and social impacts of AVs require vigorous discussion. All of these
questions are beginning to overtake trolleyological questions in their importance, urgency, and concreteness. And, here, the methods of philosophy,
References
Foot, Philippa. 1967. “The Problem of Abortion and the Doctrine of Double Effect.”
Oxford Review 5: 5–15.
Thomson, Judith Jarvis. 1976. “Killing, Letting Die, and the Trolley Problem.” The Monist
59, no. 2: 204–17.
PART I
AUTONOMOUS VEHICLES AND TROLLEY PROBLEMS
Introduction by David Černý
Swerve right and kill one passerby, or swerve left and kill two (or more) persons on impact? This seemingly simple question has occupied the attention of many bright minds for decades. This should come as no surprise. Trolley-type scenarios, flourishing within scholarly publications since Philippa Foot's seminal paper on the ethics of abortion, seem to bear a close structural similarity with
collision situations that may be encountered by autonomous vehicles (AVs) on
the road. The leading assumption has been that analogical reasoning might be
employed, enabling one to transfer important moral conclusions from simplified
thought experiments to real-life situations. If, for example, the rightness of one’s
choice in trolley-type scenarios depends on the maximizing strategy (i.e., save
the most lives possible), then the same decision procedure can also be employed
in richer, nonidealized conditions of everyday traffic.
Notwithstanding the considerable effort that has gone into the development and use of trolley-type scenarios, there has been a steadily growing consensus that our ethical reflections should move beyond these scenarios toward more realistic considerations. There are still some scholars defending the importance
of trolleyology in the context of AV ethics, but many—maybe the majority—
endorse some sort of Trolley Pessimism, according to which either there are
not any relevant similarities between trolley and real-life road scenarios, or
there are insurmountable technological challenges calling into question the
very possibility of programming AVs to follow a set of ethical rules provided by
programmers.
Thus, the common thread running through all of the contributions in Part I is
the effort to go beyond the trolley problem in an attempt to address the ethical
issues raised by AVs.
We begin this section with a chapter by Nicholas G. Evans and Heidi Furey.
They draw on the existing literature on crash scenarios involving AVs but go
far beyond the traditional focus on the trolley problem and its applications in
allow AVs to determine how much weight should be given to safety depending
on how high the probability is that a perceived object classified as a pedestrian is,
in fact, a pedestrian.
Many authors working in the field of AI ethics were long confident that trolley-type scenarios represent a conceptual tool allowing one to describe and analyze possible choices leading to harm. Recently, however, a consensus has been growing that matters are far from that simple. In Chapter 4,
Jeff Behrends and John Basl take the view of Trolley Pessimists. The negation of
Trolley Pessimism is, of course, Trolley Optimism, which subscribes to the theses
that some possible collisions of AVs are structurally similar (the authors give a
precise definition of structural similarity) to trolley-type cases and, accordingly,
the engineers should work to program AVs to behave in ways conforming to the moral conclusions drawn from trolley cases. Behrends and Basl suggest that both Optimists and Pessimists have fallen victim to an inability to recognize important features of the engineering techniques deployed in the process of designing
AVs’ guiding software. Both authors endorse Trolley Pessimism and present a
novel technological case against Trolley Optimism. Their complex arguments are
based on the difference between traditional and machine learning algorithms.
We can see traditional algorithms as a set of rules enabling the transformation of
inputs into outputs according to the rules invented by programmers. However,
machine learning algorithms are radically different in that they generate new algorithmic instructions not explicitly provided by programmers. Consequently, we cannot expect them to follow a set of previously established ethical rules incorporated into their code by programmers. Therefore, engineers developing software
for AVs are not and will not be in a position to program their vehicles to respond
to the particular crash scenarios encountered on the road in a predetermined
and always consistent manner.
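The contrast between traditional and machine learning algorithms that this argument turns on can be sketched in a few lines of code. The example below is ours, purely illustrative and not from the chapter (the "training" step is a minimal stand-in, and all numbers are made up): the hand-written rule is exactly what its programmer authored, while the learned rule's threshold is derived from data and was never explicitly written down by anyone.

```python
# Illustrative contrast: a hand-coded braking rule vs. a rule induced from data.

# Traditional algorithm: the decision rule is exactly what the programmer wrote.
def brake_rule(distance_m: float) -> bool:
    return distance_m < 30.0   # threshold chosen by a human

# "Machine-learned" rule: the threshold is derived from labeled examples,
# so the effective instruction was never explicitly authored.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    braked = [d for d, label in examples if label]
    coasted = [d for d, label in examples if not label]
    # Midpoint between the two classes: a minimal stand-in for real training.
    return (max(braked) + min(coasted)) / 2

data = [(5.0, True), (12.0, True), (28.0, True), (45.0, False), (80.0, False)]
learned = learn_threshold(data)   # 36.5, derived from data, not written by hand

def brake_learned(distance_m: float) -> bool:
    return distance_m < learned

print(brake_rule(25.0), brake_learned(25.0))  # → True True
```

The point of the sketch is only that the second rule's behavior comes from the data it was given; with different data, the same unedited code would follow a different effective rule, which is why its designers cannot guarantee a predetermined response to every crash scenario.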
A great deal of discussion in the context of the ethics of AVs is predicated on
the assumption of what can be called normative monism. Normative monism
may take two forms: either we assume that, among all competing normative theories, only one is correct, or we hold that each field of applied ethics admits only one correct solution. Saul Smilansky, in his highly
original contribution in Chapter 7, questions this assumption. He considers a
scenario, a hostage situation that he calls The Situation, and demonstrates that
many competing and sometimes contrasting solutions may be invoked. By
adopting a pluralist normative worldview, Smilansky also goes beyond the classical trolley-type scenarios inviting "either-or" type responses. The combination of moral and value pluralism applied within the field of AV ethics gives rise to an open moral world with many permissible possibilities, from design ethics to the behavior of self-driving vehicles in possible crash situations. Smilansky's normative pluralism may (and, as he believes, is likely to) transform into a plurality of
Introduction
Autonomous vehicles (AVs) will be on our roads soon.1 How should they be programmed to behave? The introduction of AVs will mark the first time that artificially intelligent systems interact with humans in the real world on such a large
scale—and while travelling at such high speeds.
Current AV ethics literature concentrates on crash scenarios in which an AV
must decide how to distribute unavoidable harm; for example, an unoccupied AV
must either swerve to the left, killing the five passengers of a minivan, or swerve
to the right, killing a lone motorcyclist. Scenarios like these have been called
“trolley cases” because they resemble a series of famous thought experiments that have sparked an enormous body of ethics literature (known, somewhat derisively, as “trolleyology”).2 In the original case, a runaway tram (or “trolley”) is about to run over and kill five workers, but the driver can choose to steer from one track to another, killing one worker on the alternate track in the process.3
What’s important about these cases is not that AVs are real-world analogs to
trolleys, but that AV navigation poses difficult ethical decisions. Moreover, AVs,
in virtue of being programmable rather than relying on human instinct, must
be instructed how they ought to act in these cases (or a decision, arguably equally
morally weighty, must be made to remain silent on what the AV ought to do in
this case). Trolley-based scenarios have been used to test intuitions about the behavior of AVs, such as when it is permissible to choose (or allow) a smaller group
to be harmed in order to save a larger one, and more controversially what kinds
of people we should prioritize over others in saving them.4
When people ask how AVs could have anything to do with ethics, the trolley
problem offers a quick and obvious explanation. But trolley problem–inspired AV ethics has received considerable criticism. One central line of reasoning
is that, in the real world, we are almost never certain about our options and
their outcomes. Nyholm and Smids, as well as Goodall, have argued that we
should focus on risk management when programming AVs, and they describe
Nicholas G. Evans, Ethics and Risk Distribution for Autonomous Vehicles In: Autonomous Vehicle Ethics. Edited by:
Ryan Jenkins, David Černý, and Tomáš Hříbek, Oxford University Press. © Oxford University Press 2022.
DOI: 10.1093/oso/9780197639191.003.0001
a number of realistic cases involving risk, many of which are similar to trolley
cases but involve only probabilities of harm, including how close AVs choose to
drive to certain types of vehicles and pedestrians, when they choose to change
lanes, and which vehicles they take pains to avoid crashing into.5 Himmelreich
has argued that trolley-like problems are too rare, and too extreme, relative to the kinds of ethical issues that are more likely to face AVs on a day-to-day basis. He argues that we should instead focus more on mundane driving scenarios,
many of which involve risk. In addition to the kinds of cases Goodall mentions,
he draws attention to the risks associated with the environmental impact of
AVs and with programming AV behavior that will be repeated exactly by every
other AV.6
The discussion of risk and AVs is just beginning. We’re at the stage where (a) a
good case has been made for the importance of the discussion, and where (b) a
smattering of different scenarios and questions about risk has been posed. One
way to approach a difficult problem like finding a suitable ethical algorithm for
AVs, which is common to both engineering and philosophy,7 is to start with the
simplest or most idealized kinds of cases first. Greater complexity can be added
back into the picture as more progress is made. Hence the trolley problem—a
simple case outlining a clear issue in which choices about doing harm, or
allowing it to happen, are parsed in the clearest detail.8 What comes next for AV
ethics, however?
Our purpose here is not to reject the trolley problem, as others have done. The
trolley problem is an important thought experiment in the history of philosophy;
it serves a very specific purpose. In point of fact, we believe its purpose is precisely
the one it has served: to force people to acknowledge, and then choose a position
on, an important moral feature that is subject to disagreement. The point of the
trolley problem, put another way, is to cause problems! But the trolley problem
cannot—indeed, no philosophical problem can—solve a complex problem like
the navigation of AVs on its own. There are other challenges that are philosophically relevant to an investigation of a complex problem like AVs. The field needs to evolve, beyond the mere debate about trolleys (and whether that debate is relevant), to encompass other philosophical issues.
In what follows, we outline three ways to think about this evolution. We motivate this project first through conceptual, empirical, and metaphilosophical
concerns about the limits of the trolley problem as it applies to the ethics of
AVs. We then turn to two case studies that demonstrate the challenge ahead.
The first is how AVs should behave when they encounter each other, and where
differences in their algorithmic behavior are morally relevant, and where each is
uncertain as to the other’s algorithm. The second is considering a wide view of
AVs, and how we account for the broader question of AVs in large, even global
transportation systems.
What makes the trolley problem so convenient as a tool is that it contains a series
of assumptions that carry directly into the ethics of AVs. These assumptions, we
take it, are as follows:
In the case of the trolley problem, it is a tram or trolley that has perfect knowledge about the outcomes of its choices, of which there are two and only two. In
the case of the AV, trolley-like problems are homologous to the trolley problem
in terms of their assumptions. That is:
These assumptions drive the trolley-like problem characterized by, inter alia, the
MIT Moral Machine Project.
Three limits arise from this kind of analysis, however. Others have argued that
the determinate and binary set of options in the trolley problem makes this kind of analysis inappropriate under conditions of risk.9 We think that this, however,
misunderstands the trolley problem itself. Conceptually, the trolley problem
identifies a critical distinction in ethical decision-making around our intentions
to bring about someone’s death, commonly claimed to be between “killing” and
“letting die.”10 This is an important distinction; indeed, it guides a number of
serious, applied ethical decisions. The original applied problem in which the
trolley problem was posed was not AVs, but abortion.11 The doctrine of double
effect, which turns in part on the distinction the trolley problem picks out, is also
an important doctrine in the ethics of killing in war.12 But no one would say that
epistemic concerns about fetal viability, or about personhood, undermine the
moral problem between intending versus foreseeing harm. Certainly in the just
war literature, uncertainty about combatant status is an important issue of discussion, but no one engaged in that discussion would say that our beliefs about
the trolley problem or doctrine of double effect are perforce irrelevant in light of
the presence of uncertainty.
Thus, the conceptual issue is not that the trolley problem is limited. It was always limited. But that is hardly a weakness, any more than a diagnostic for breast
cancer is weak for not detecting glaucoma. Different tools do different things, and
if there is any weakness for trolleyology in the AV debate, it is in trying to make the
trolley problem do things it cannot. Rather, the problem that ethicists need to face
can be formulated as a question: “What other philosophical problems, outside of
doing versus allowing harm [or similar concepts, depending on our reading of the
trolley problem], are apropos when thinking through the ethics of AVs?”
If we care about risk, then, there are frameworks available to us that would
be well suited to the problem of AVs. For example, Lara Buchak’s work on rational risk aversion has been applied to how we should conceive of the ethics of
risk aversion in HIV/AIDS therapy trials that require participants to cease their
routine antiretroviral medications.13 Likewise, Adam Bjorndahl and others have
discussed how we should think about the risk of violating human dignity and
make decisions where there is a possibility that some absolutely important value
(like dignity, for some) might be violated, but it is under a condition of uncertainty.14 Finally, Seth Lazar and Chad Stronach have shown how we can account
for the ethics of risk of rights violation in armed conflict, including cases where
we are not sure of the kinds of rights those individuals possess.15
A second problem for AVs is empirical, including when we consider risk. AV
decision-making is made under risk, yes, but it may also be tunable depending
on the case we care about. What we mean by this is that it is not simply the case that
there are a number of discrete but uncertain outcomes, but that there may be an
indeterminate range of options from which the AV might choose. Consider re-
cent work in AV decision-making involving the following case:
A Tailgater (TG) is closely following the AV, and the AV is following the Front Car
(FC) in a two-lane two-way road. At the start of the scenario, all cars are cur-
rently moving at the same velocity of 90 kph, consistent with highway speeds. The
scenario starts with FC suddenly braking to a stop. The AV is responsive enough
to stop in time to prevent a collision with FC because the AV is following at a safe
distance. However, though the AV is responsive enough to avoid a collision with
the lead car, TG may not be responsive enough to avoid crashing into the AV. This
scenario is further constrained in that the AV cannot swerve out of the way (due to
oncoming traffic on the left and a barrier on the right). Intuitively, the AV appears
to have two options: it could slam on the brakes and suffer a severe rear-end col-
lision, or it could intentionally ram the forward car at a relatively low speed, re-
ducing the speed of collision of the TG with the AV. Given the AV’s superior ability
to measure speeds and distances, is there another way of managing potential inju-
ries that may not be available to a human driver?16
Ethics and Risk Distribution for AVs 11
The results of this thought experiment are not binary: not just in terms of the
possible injuries that might arise to TG, AV, and FC, but in terms of the options
available to AV. As a parametric model, AV could choose any combination of vel-
ocities and accelerations available to the vehicle. Modeling on the above vignette
gave options such as a “wake-up call” where the AV initiated a low-speed collision
to TG to encourage it to brake, or, in the case of an unresponsive TG, an
“emergency brake” in which the AV initiated a series of low-speed collisions until
those collisions became inelastic and the AV could use its own braking power to
stop both cars.
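The parametric character of this choice can be illustrated with a deliberately toy sketch. Nothing below comes from the Robinson et al. model; the harm functions and every number in them are invented for illustration. The point is only that the AV selects from a continuum of decelerations rather than from two discrete options.

```python
# A toy, invented harm model (not the model from Robinson et al.):
#   - braking too gently raises the harm of hitting the front car (FC)
#   - braking too hard raises the harm of the tailgater (TG) hitting the AV
# The AV searches a continuous range of decelerations for the minimum.

def front_harm(decel):
    """Hypothetical harm from the AV-FC collision; shrinks with harder braking."""
    return max(0.0, 10.0 - decel) ** 2

def rear_harm(decel):
    """Hypothetical harm from the TG-AV collision; grows once the AV
    brakes harder than the tailgater plausibly can (here, 4 m/s^2)."""
    return max(0.0, decel - 4.0) ** 2

def best_deceleration():
    """Grid-search decelerations from 0 to 10 m/s^2 in 0.1 steps."""
    candidates = [i / 10 for i in range(101)]
    return min(candidates, key=lambda d: front_harm(d) + rear_harm(d))

print(best_deceleration())  # lands between the two extremes
```

On this picture, the “wake-up call” and “emergency brake” behaviors are regions of a parameter space, not discrete branches of a trolley-style dilemma.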
Even in trolley cases for “unavoidable crashes,” there may be continuous
variables, such as how much braking room there is between a vehicle and
pedestrians or other vehicles; the side of the vehicle that strikes the other object;
the object’s reaction times, if it has any; and so on. These are not relevant to the
trolley problem as a thought experiment; however, because they may make meaningful
differences in the outcomes of these collisions, they may be (though
are not always) relevant to an AV’s decisions. These are empirical concerns, but
important ones in knowing what options are available to an AV, which at least
on some accounts is a precondition for having good beliefs about what the AV
ought to do.
Finally, there is a metaphilosophical problem around how we do the ethics of
AVs under conditions of risk. This arises from, but is not totally derivative of,
the first two problems above. Making decisions about the ethics of AVs requires
knowing what, philosophically, is at stake in decisions around AVs. But
it also requires empirical knowledge of the conditions under which those AVs
will make those decisions. This demands a form of collaboration, between
philosophers and empirical researchers, that is not common. While our previous
work provides a model to emulate, it does not solve larger metaphilosophical
questions about how philosophers should engage with practical and design
processes.
These kinds of risks are important because they allow us to loosen our three
assumptions around AVs. With sufficient work, we no longer need to make
a binary distinction between autonomous and human-driven cars, and we
can accept a range of levels of autonomy—levels that exist but are typically
eschewed in debates about the ethics of AVs. We can further deal with im-
portant temporal components of the deployment of AVs, from the near-future
scenario in which full autonomy is available only to a handful of cars on the
road, to the potential future in which all or nearly all cars are AVs. Finally, we
can deal with questions about what kinds of information are necessary, and
how decision-making might permissibly proceed for vehicles operating with
different kinds of data.
The potential inroads into risk described above have to do with real-world
conditions when we do AV ethics. These kinds of risk include the nature, number,
and identity of objects in collisions (human, animal, inanimate). However, there
are other, immediately important issues that the ethics of AVs ought to deal with
that invoke philosophical problems.
A preliminary and likely imminent issue is how an AV should distribute risk
in contexts where other AVs have divergent moral commitments. Human drivers
can be uncertain about how other human drivers will behave. AVs governed by
ethical algorithms could be more predictable than human-driven vehicles,17 es-
pecially if the same ethical algorithm is used across all cars of a particular make.
AVs may be able, in some circumstances, to use this information to coordinate
with other AVs and to decide which ones to help or try to stop, and so on. But this
need not always be the case.
To illustrate, let us first imagine a case where there is no uncertainty about the
drivers’ ethical codes. Let’s simplify further and focus on two kinds of moral al-
gorithm: “selfish” drivers that prioritize the drivers’ (and their passengers’) well-
being, and “selfless” drivers that always aim to minimize total harm among all
road users. Suppose that driver A finds herself in an unavoidable crash with a
selfless driver B and a selfish driver C. She can only control which vehicle she hits,
and the driver of the vehicle she hits is likely to be seriously injured or killed. Let’s
further assume each car contains a single occupant. A selfless, harm-minimizing
driver A will hit the selfish driver if they believe the spared drivers will go on to
face their own ethical situations. In these future cases, the selfless driver will act
to minimize harm, but the selfish driver may not.
What if driver A is selfish and faces the same choice? She, too, can be expected
to spare the selfless driver. After all, she might run into the driver she spares in the
future, and she can expect to fare better in an encounter with a harm minimizer
than in one with another person looking out for herself. It is better for the selfish
driver to be surrounded by selfless drivers, in other words, and so better that selfish
drivers be picked off whenever a choice between selfless and selfish is forced.
It’s unlikely that a human driver has ever been in a situation exactly like this.
But because ethical algorithms for AVs may in principle be easier to discover,
AVs may be in the position to make these kinds of decisions more often. The ac-
tual ethical algorithms we use to govern AVs will likely be complex, but so long as
some AVs follow more “selfish” algorithms (ones that favor the interests of their
owners and passengers)18 and some AVs follow more “selfless” algorithms (e.g.,
ones that try to minimize harm):
(a) AVs will have reason to distribute risk from the “selfless” AVs to the
“selfish” ones. That is, regardless of one’s culpability or safety otherwise,
The second problem here becomes more acute because even if we have some
kind of signal that takes the ambiguity out of communication between vehicles, there may be
uncertainty about the veracity of these signals. Anyone who has spent time on
the Internet will understand the perils of a network that relies on trust among a
large number of anonymous actors. But these actors are now cars weighing many
hundreds of pounds, traveling at speeds fast enough to kill humans (individually or in
groups).
An AV’s ability to accurately judge how likely it is that other AVs follow spe-
cific ethical algorithms will also be important in cases where it must coordinate
with other AVs to achieve the best outcome. A harm-minimizing algorithm will
favor collective AV actions like grouping together to act as an enormous brake
for a truck that’s out of control, or to get out of the way when an AV needs to
get its driver to an emergency room.19 No single AV could do this on its own,
and it would increase the total risk of harm for a single AV to attempt it. So
harm minimizers will need to broadcast that they are harm minimizers and will
need to determine if other AVs are, too. If the AVs are connected through wire-
less communication, this might be achieved easily. But if they’re not, a harm-
minimizing AV will still have reason to gather evidence about the other AVs’
algorithms. Either way, this kind of signaling raises questions about how we tell
whose moral commitments are whose in the AV world, and how we ensure trust
and compliance on the road between these different ethical AVs.
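One hedged way to picture the evidence-gathering just described is Bayesian updating: an AV starts with a prior probability that a nearby vehicle is a harm minimizer and revises it as it observes cooperative or uncooperative maneuvers. The likelihoods below are invented placeholders, not empirical values from any deployed system.

```python
# Illustrative Bayesian update (all probabilities are hypothetical):
# estimate P(other AV is a harm minimizer) from observed maneuvers.

def update(prior, cooperative,
           p_coop_if_selfless=0.9,   # assumed chance a harm minimizer cooperates
           p_coop_if_selfish=0.3):   # assumed chance a selfish AV cooperates
    like = p_coop_if_selfless if cooperative else 1 - p_coop_if_selfless
    alt = p_coop_if_selfish if cooperative else 1 - p_coop_if_selfish
    return like * prior / (like * prior + alt * (1 - prior))

belief = 0.5  # agnostic prior about the other AV's algorithm
for observed in [True, True, False, True]:
    belief = update(belief, observed)
print(round(belief, 3))  # confidence after four observed maneuvers
```

Even so, this only estimates behavior; it cannot by itself rule out a vehicle that broadcasts “harm minimizer” while running a selfish algorithm, which is the trust problem the chapter raises.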
Our second case is how an AV should distribute risk among more members of
society when interacting with hazardous materials. Consider how an AV might
“interact” with hazardous materials. It might be used to transport them, and it
might find itself in a potential crash scenario with a vehicle transporting them.20
We can expect there to be extra safety precautions for AVs allowed to transport
hazardous materials. Let’s call an AV allowed to carry hazardous materials a
“Hazmat AV.” Some potential safety precautions for Hazmat AVs might include
the following:
These possibilities raise questions about how Hazmat AVs should be designed
and regulated. But let’s consider a case where an AV encounters a Hazmat AV
and must decide how to distribute risk among society. Consider the following
example:
low-pathogenicity samples.23 It seems, on the face of it, that given the extreme
stakes, the AV should collide with the passengers rather than the Hazmat AV.
Consider, then, either of the next two scenarios:
Possible Hazard I: The same as Pandemic, except that this time the AV
determines that, if all the usual safety precautions have been taken, then there’s
still a very small chance that any dangerous materials will leak into the lake
even if the Hazmat AV falls in.
Possible Hazard II: The same as Pandemic, except that this time the AV is
sure that dangerous materials will end up in the lake if it hits the unmanned AV,
but it’s uncertain whether it is a Hazmat AV, or whether the materials it might
carry are dangerous enough to warrant seriously injuring several people.
It’s much less clear what the AV should do when facing this kind of uncertainty.
One possibility is to design it to do a cost-benefit analysis and then to act so as to
maximize expected well-being. But we might also want it to give special priority
to its own passengers, at least, if it has any. And it may be difficult to determine
exactly how it should do the analysis. These are incredibly rare but potentially
high-impact events. When dealing with probabilities so small and consequences
so large, slight differences in its approach could lead to noticeably different
decision-making.
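To see why “slight differences in its approach” matter, consider a bare expected-harm comparison with invented numbers. When the catastrophe’s harm is enormous, tiny shifts in the estimated probability flip the verdict.

```python
# Hypothetical numbers only: an expected-harm comparison for the
# Possible Hazard cases. Harm is in arbitrary units.

def prefer_hitting_hazmat(p_leak, leak_harm, passenger_harm):
    """True if hitting the (possible) Hazmat AV has lower expected harm
    than hitting the passenger vehicle."""
    return p_leak * leak_harm < passenger_harm

# With catastrophe harm at 10 million units and passenger harm at 20:
print(prefer_hitting_hazmat(1e-6, 1e7, 20))  # expected harm 10 < 20
print(prefer_hitting_hazmat(5e-6, 1e7, 20))  # expected harm 50 > 20
```

A four-millionths shift in the probability estimate reverses the decision, which is one reason it is difficult to say exactly how the AV should do the analysis.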
Importantly, the cost of avoiding these incidents could be high enough to deter
self-interested firms from responding to them. In the case of Possible Hazard II, it
is foreseeable that a manufacturer could take the time to develop a contingency to
detect a Hazmat AV with very high confidence and make sure that their vehicles
always, or nearly always, respond appropriately. However, the costs of doing so
could be prohibitively high for the manufacturer. Their cars might be slower in
navigating terrain (to allow more time to notice and respond to Hazmat AVs or
similar kinds of threats); or the additional development cost for a firm might
reduce their competitiveness. In either case, individual manufacturers have few
incentives to respond to Possible Hazard I or II, especially if the probability that
one of their vehicles will encounter a Pandemic case is very low.24
We’ve considered cases in which an AV has to decide whether to crash into a
(potential) Hazmat AV. We can also consider cases in which the crash is unavoid-
able, but in which the AV must decide how to crash. For example:
But rain has made the road slippery and the Hazmat AV has started to skid
out of control. A passenger AV rounds a corner to find the truck skidding to-
ward it perpendicular to the road. The passenger AV can either swerve to the
left or to the right. Both maneuvers are expected to put its own passengers at
the same amount of risk. But if it skids to the left, the truck will likely end up
falling down off the road into the ocean. And if it skids to the right, the truck
will likely end up crashing into the main street of a small town. The waste is
expected to spill out either way. It will be easier to collect, contain, and dis-
pose of the waste if it ends up in town. But if it ends up there, several people
are likely to die from exposure or from drinking contaminated water. If it
ends up in the ocean, no one will die from it directly. But it will devastate the
ecosystem for hundreds of miles, and the town and a much larger area will
suffer economically and from higher rates of illness for years. If the AV is
able to make this kind of assessment of the situation, what should it do? Or,
supposing the Hazmat vehicle has made the assessment, what should it signal
the AV to do?
There could also be cases in which a passenger AV or its passengers become con-
taminated. For example:
We’ve focused on passenger AVs because we’ve been trying to show that we
may eventually want them to be designed to make wide-scope risk distribution
decisions that take more than the well-being of their passengers and other road
users into account. But we have even more reason to want AVs carrying haz-
ardous materials to make good wide-scope risk distribution decisions, and they’d
face more important ones more often. For example, the routes passenger AVs
choose may determine which neighborhoods have more traffic and noise pollu-
tion and higher accident rates. So the route a passenger AV chooses will deter-
mine how some small risks are distributed among a wider group of people than
those who are on the road at the time. An AV transporting hazardous materials
also has to choose which routes to take. So it also determines how similar risks
of small harms are distributed. But in addition, there’s a very small risk of great
harm to all the neighborhoods it passes through.
Conclusion
Notes
1. We set aside what precisely counts as “autonomy.” See Society for Automotive
Engineers, “J3016B: Taxonomy and Definitions for Terms Related to Driving
Automation Systems for On-Road Motor Vehicles—SAE International,” June 15, 2018.
https://www.sae.org/standards/content/j3016_201806/.
2. Barbara H. Fried, “What Does Matter? The Case for Killing the Trolley Problem (or
Letting It Die),” The Philosophical Quarterly 62, no. 248 (July 1, 2012): 505–29. https://
doi.org/10.1111/j.1467-9213.2012.00061.x.
3. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect,” in
Virtues and Vices and Other Essays in Moral Philosophy, 19–32 (New York: Oxford University
Press, 1993).
4. Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich,
Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, “The Moral Machine
Experiment,” Nature 563, no. 7729 (November 2018): 59–64. https://doi.org/10.1038/
s41586-018-0637-6.
5. Sven Nyholm and Jilles Smids, “The Ethics of Accident-Algorithms for Self-Driving
Cars: An Applied Trolley Problem?,” Ethical Theory and Moral Practice 19, no. 5 (July
2016): 1275–89. https://doi.org/10.1007/s10677-016-9745-2; Noah J. Goodall, “Away
from Trolley Problems and Toward Risk Management,” Applied Artificial Intelligence
30, no. 8 (November 2016): 810–21. https://doi.org/10.1080/08839514.2016.1229922.
6. Johannes Himmelreich, “Never Mind the Trolley: The Ethics of Autonomous Vehicles
in Mundane Situations,” Ethical Theory and Moral Practice 21, no. 3 (May 2018): 669–
84. https://doi.org/10.1007/s10677-018-9896-4.
7. E.g., Michael Weisberg, Simulation and Similarity: Using Models to Understand the
World (New York: Oxford University Press, 2013).
8. Geoff Keeling, “Why Trolley Problems Matter for the Ethics of Automated Vehicles,”
Science and Engineering Ethics 26, no. 1 (February 1, 2020): 293–307. https://doi.org/
10.1007/s11948-019-00096-1.
9. Heather M. Roff, “The Folly of Trolleys: Ethical Challenges and Autonomous
Vehicles,” Brookings, December 17, 2018. https://www.brookings.edu/research/the-
folly-of-trolleys-ethical-challenges-and-autonomous-vehicles/.
10. Cf. Judith Jarvis Thomson, “The Trolley Problem,” The Yale Law Journal 94, no. 6
(1985): 1395. https://doi.org/10.2307/796133.
11. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double
Effect,” Oxford Review 5 (1967).
12. Fritz Allhoff, Nicholas Greig Evans, and Adam Henschke, “Not Just Wars: Expansions
and Alternatives to the Just War Tradition,” in The Routledge Handbook of Ethics and
War, edited by Fritz Allhoff, 1–8 (New York: Routledge, 2013).
13. Lara Buchak, “Why High-Risk, Non-Expected-Utility-Maximising Gambles Can Be
Rational and Beneficial: The Case of HIV Cure Studies,” Journal of Medical Ethics 43,
no. 2 (February 1, 2017): 90–95. https://doi.org/10.1136/medethics-2015-103118.
14. A. Bjorndahl, A. J. London, and Kevin J. S. Zollman, “Kantian Decision Making
under Uncertainty: Dignity, Price, and Consistency,” Philosophers Imprint 17, no. 7
(April 2017): 1–22.
15. Seth Lazar and Chad Lee Stronach, “Axiological Absolutism and Risk,” Noûs 53, no. 1
(March 2019): 97–113. https://doi.org/10.1111/nous.12210.
16. Pamela Robinson et al., “Modelling Ethical Algorithms in Autonomous Vehicles
Using Crash Data,” IEEE Transactions on Intelligent Transportation Systems (May
2021). https://doi.org/10.1109/TITS.2021.3072792.
17. And they might also be less predictable, depending on the method of developing an
algorithm and its capacity to change over time. Many if not most original equipment
manufacturers—in the main, standard auto companies—rely on formal methods to
develop their algorithms. These algorithms are predictable in the sense that their pro-
gram is transparent, and while it is possible to not test them adequately, they are in
principle understandable and predictable. Deep learning algorithms, however, and
in particular the development of algorithms through neural nets, provide behavior
that is interpolated from existing data. They can be very sophisticated but are largely
(though not exclusively; see Kiri L. Wagstaff and Jake Lee, “Interpretable Discovery
in Large Image Data Sets,” arXiv:1806.08340 [2018]) opaque in the sense that it
is not possible to know the exact form of the algorithm—they are sometimes called
“black box” algorithms. In the case of neural nets, emergent conditions could result
in an asymptotic, unpredictable response that diverges strongly from human expecta-
tions or the data set.
18. https://www.businessinsider.com/mercedes-benz-self-driving-cars-programmed-
save-driver-2016-10
19. E.g., Charlie Osborne, “Tesla’s Autopilot Takes the Wheel as Driver Suffers Pulmonary
Embolism,” ZDNet. https://www.zdnet.com/article/teslas-autopilot-takes-the-
wheel-as-driver-suffers-pulmonary-embolism/.
20. It might also find itself about to crash into a facility handling hazardous materials, but
we won’t discuss this or other possibilities here.
21. See, e.g., Evans, Lipsitch, and Levinson (2016).
22. Lisa Brown, “Truck Carrying Radioactive Material Found after It Was Stolen in
Mexico,” NACCHO, December 6, 2013. https://www.naccho.org/blog/articles/truck-
carrying-radioactive-material-found-after-it-was-stolen-in-mexico.
23. Centers for Disease Control and Prevention, “Report on the Inadvertent Cross-
Contamination and Shipment of a Laboratory Specimen with Influenza Virus H5N1,”
Atlanta, GA, August 2014. https://www.cdc.gov/labs/pdf/InvestigationCDCH5N1contaminationeventAugust15.pdf.
24. The formal demonstration for these kinds of problem, and their ethical significance,
can be found in Lipsitch, Evans, and Cotton-Barratt (2016).
2
Autonomous Vehicles, the Badness
of Death, and Discrimination
David Černý
Introduction
External justification of introducing AVs to road traffic (EXT). From the eth-
ical point of view, introducing AVs into everyday traffic is justified by the prev-
alence of positive factors over negative ones.
EXT implies that if positive factors outweigh negative ones, we have good (and
even convincing) reasons not only to introduce AVs into traffic but also to strive
to proceed with that introduction as fast as possible. However, the fastest in-
troduction of AVs into traffic is predicated on the nonexistence of contrasting
deontological constraints. Therefore, it is essential to demonstrate that these
possible deontological constraints regarding, for example, fair rules of distrib-
uting harm or the issue of discrimination can be solved in a principled way.
Passing over problems of a technical character, it is necessary to address the
ethical rules by which the use of AVs ought to be governed. With respect to EXT, we
can thus say that we have good reasons to solve the problems of the ethical reg-
ulation of AVs operation, and even that we have good reasons to solve them as
soon as possible.
Contemporary normative ethics offers a whole range of ethical systems that
could be used in connection with AVs. But whichever one of them we choose, it
will still hold that AVs will, albeit rarely, find themselves in dilemmatic situations.
I believe that solving them must meet at least two important requirements:
David Černý, Autonomous Vehicles, the Badness of Death, and Discrimination In: Autonomous Vehicle Ethics.
Edited by: Ryan Jenkins, David Černý, and Tomáš Hříbek, Oxford University Press. © Oxford University Press 2022.
DOI: 10.1093/oso/9780197639191.003.0002
AVs, Death, and Discrimination 21
The first requirement carries greater weight in the context of AVs than it does
in debates over the role of intuitions in ethical thought. This is because AVs are a
modern technology that is not yet present among us. Introducing it will arouse
strong emotions, and if AVs were governed by ethical rules that solve dilemmatic
situations in a way that contradicts moral intuitions, it could easily
happen that people would not want to buy them; but that outcome conflicts
with EXT and its implied requirement of fast introduction of AVs into traffic.
The other requirement is no less important. From the normative point of view,
all human beings are equal; therefore, if AVs solved the ethical problems they
will encounter in ways contradicting this equality, we would have good reasons
to reject the ethical system in which these decisions are grounded. The principle
of equality would be especially gravely violated by any form of discrimination.
But AVs can apparently find themselves in situations in which they will have to
distribute harm among the involved road traffic participants in some way.
Let us imagine, for example, that an AV must decide between a young man and
an older man. Let us further assume that the probability of death is the same in
both cases (I am taking here the prospective view, i.e., that the AV will decide
based on the prospect of consequences based on its best knowledge in the given
situation). How should the AV decide? Randomly? Or perhaps based on age? But
if it decides based on age, will it not be a clear instance of discrimination?
In this paper, I will try to show that distributing harm based on age need not
be a kind of discrimination. First, I will present a general definition of direct dis-
crimination. Then I will present the basic contours of the deprivation conception
of the badness of death. Finally, I will apply all of this to the problem of harm
distribution in the context of autonomous traffic.
What Is Discrimination?
The word discrimination comes from the Latin noun discriminatio, which is de-
rived from the verb discriminare (to divide up, separate). At the level of values,
rejecting discrimination is based on the idea that all human beings are equal in
their freedom and rights, as explicitly stated by the first article of the Universal
Declaration of Human Rights of 1948: “All human beings are born free and equal in
dignity and rights.” This equality and dignity prohibit discrimination, as expressed
in the following article of the same declaration:
22 David Černý
Everyone is entitled to all the rights and freedoms set forth in this Declaration,
without distinction of any kind, such as race, colour, sex, language, religion,
political or other opinion, national or social origin, property, birth or other
status.