
Autonomous Vehicle Ethics
The Trolley Problem and Beyond
Edited by
RYAN JENKINS, DAVID ČERNÝ, AND TOMÁŠ HŘÍBEK
Oxford University Press is a department of the University of Oxford. It furthers the
University’s objective of excellence in research, scholarship, and education by publishing
worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and
certain other countries.
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.
© Oxford University Press 2022
All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means, without the prior permission in
writing of Oxford University Press, or as expressly permitted by law, by license, or under
terms agreed with the appropriate reproduction rights organization. Inquiries concerning
reproduction outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above.

You must not circulate this work in any other form and you must impose this same
condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
Names: Jenkins, Ryan, editor. | Černý, David, editor. |
Hříbek, Tomáš, editor.
Title: Autonomous vehicle ethics : the trolley problem and beyond/
edited by Ryan Jenkins, David Černý, and Tomáš Hříbek.
Description: New York, NY, United States of America :
Oxford University Press, [2022] |
Includes bibliographical references and index.
Identifiers: LCCN 2022000306 (print) | LCCN 2022000307 (ebook) |
ISBN 9780197639191 (hbk) | ISBN 9780197639214 (epub) | ISBN 9780197639221
Subjects: LCSH: Automated vehicles—Moral and ethical aspects. |
Double effect (Ethics)
Classification: LCC TL152.8 .A8754 2022 (print) | LCC TL152.8 (ebook) |
DDC 629.2—dc23/eng/20220315
LC record available at https://lccn.loc.gov/2022000306
LC ebook record available at https://lccn.loc.gov/2022000307
DOI: 10.1093/oso/9780197639191.001.0001
Ryan Jenkins dedicates this book to those injured or killed
in automobile accidents the world over—and those working
to bend the arc of technological progress to minimize the
human suffering that results.
David Černý dedicates this book to his fiancée, Alena, who
gives meaning and joy to his work.
Tomáš Hříbek dedicates the book to all those who are tired
of being drivers and hope to be liberated by AV technology.
Contents

Acknowledgments
Contributors
Introduction

PART I AUTONOMOUS VEHICLES AND TROLLEY PROBLEMS


Introduction by David Černý
1. Ethics and Risk Distribution for Autonomous Vehicles
Nicholas G. Evans
2. Autonomous Vehicles, the Badness of Death, and
Discrimination
David Černý
3. Automated Vehicles and the Ethics of Classification
Geoff Keeling
4. Trolleys and Autonomous Vehicles: New Foundations for the
Ethics of Machine Learning
Jeff Behrends and John Basl
5. The Trolley Problem and the Ethics of Autonomous Vehicles in
the Eyes of the Public: Experimental Evidence
Akira Inoue, Kazumi Shimizu, Daisuke Udagawa, and Yoshiki
Wakamatsu
6. Autonomous Vehicles in Drivers’ School: A Non-Western
Perspective
Soraj Hongladarom and Daniel D. Novotný
7. Autonomous Vehicles and Normative Pluralism
Saul Smilansky
8. Discrimination in Algorithmic Trolley Problems
Derek Leben

PART II ETHICAL ISSUES BEYOND THE TROLLEY PROBLEM


Introduction by Ryan Jenkins
9. Unintended Externalities of Highly Automated Vehicles
Jeffrey K. Gurney
10. The Politics of Self-Driving Cars: Soft Ethics, Hard Law, Big
Business, Social Norms
Ugo Pagallo
11. Autonomous Vehicles and Ethical Settings: Who Should Decide?
Paul Formosa
12. Algorithms of Life and Death: A Utilitarian Approach to the
Ethics of Self-Driving Cars
Stephen Bennett
13. Autonomous Vehicles, Business Ethics, and Risk Distribution in
Hybrid Traffic
Brian Berkey
14. An Epistemic Approach to Cultivating Appropriate Trust in
Autonomous Vehicles
Kendra Chilson
15. How Soon Is Now?: On the Timing and Conditions for Adopting
Widespread Use of Autonomous Vehicles
Leonard Kahn
16. The Ethics of Abuse and Unintended Consequences for
Autonomous Vehicles
Keith Abney

PART III PERSPECTIVES FROM POLITICAL PHILOSOPHY


Introduction by Tomáš Hříbek
17. Distributive Justice, Institutionalism, and Autonomous Vehicles
Patrick Taylor Smith
18. Autonomous Vehicles and the Basic Structure of Society
Veljko Dubljević and William A. Bauer
19. Supply Chains, Work Alternatives, and Autonomous Vehicles
Luke Golemon, Fritz Allhoff, and T. J. Broy
20. Can Autonomous Cars Deliver Equity?
Anne Brown
21. Making Autonomous Vehicle Technologies Matter: Ensuring
Equitable Access and Opportunity
Madhu C. Dutta-Koehler and Jennifer Hatch

PART IV AUTONOMOUS VEHICLE TECHNOLOGY IN THE CITY


Introduction by Tomáš Hříbek
22. Fixing Congestion for Whom? The Distribution of Autonomous
Vehicles’ Effects on Congestion
Carole Turley Voulgaris
23. Fulfilling the Promise of Autonomous Vehicles with a New
Ethics of Transportation
Beaudry Kock and Yolanda Lannquist
24. Ethics, Autonomous Vehicles, and the Future City
Jason Borenstein, John Bucher, and Joseph Herkert
25. The Autonomous Vehicle in Asian Cities: Opportunities for
Gender Equity, Convivial Urban Relations, and Public Safety in
Seoul and Singapore
Jeffrey K. H. Chan and Jiwon Shim
26. Autonomous Vehicles, the Driverless City, and the Pedestrian
City
Tomáš Hříbek

Appendix: Varieties of Trolley Pessimism


Jeff Behrends and John Basl
Index
Acknowledgments

The editors of this collection would collectively like to thank our editors at Oxford University Press, whose stewardship of the manuscript through the process of proposal, review, editing, and publication has been tremendously helpful, in particular Peter Ohlin.
Ryan Jenkins would like to thank Patrick Lin, Keith Abney, and
Zachary Rentz for innumerable enlightening and invigorating
conversations about the ethical and social implications of emerging
technologies. Without them, in fact, his interest in technology might
have never been stoked. His mentors and advocates are too
numerous to list, but among them are Bradley Strawser, Benjamin
Hale, Alastair Norcross, and Duncan Purves. If it is true that we are
a reflection of the people closest to us, then Ryan has been lucky to
have found himself close to this group of insightful, careful, and
indefatigable thinkers. Finally, he’d like to thank his wife, Gina, with
whom he has been grateful to share both his successes and
setbacks.
David Černý would like to thank Saul Smilansky for his
friendliness, kindness, and continuous support. Saul sets a fine
example of philosophical sophistication, exactness, courage to
explore unprecedented routes, and love of wisdom. Special thanks
go to Patrick Lin for his passion for philosophy, love of discussion,
and generosity. Without Patrick, this book would not be possible.
Tomáš Hříbek joins David Černý in thanking Patrick Lin for his
friendship and collegiality, and for his invaluable support of this
project. Tomáš also wishes to extend a big thank-you to Ryan
Jenkins for bearing most of the responsibility for preselecting and
contacting the candidate contributors, and communicating with
them. Without his good rapport with the contributors, the project
would have hardly gotten off the ground. Finally, Tomáš thanks his
colleagues Dan Novotný and Pavel Kalina for their advice and
feedback.
Both David and Tomáš are grateful for the support of grant
project TL01000467 “Ethics of Autonomous Vehicles” of the
Technology Agency of the Czech Republic.
Contributors

Keith Abney is Senior Lecturer in the Philosophy Department and a Senior Fellow at the Ethics + Emerging Sciences Group at California
Polytechnic State University in San Luis Obispo. His research involves
the ethics of emerging technologies, especially space ethics, artificial
intelligence/robot ethics, and bioethics, as well as autonomous
vehicles.
Dr. Fritz Allhoff, JD, PhD, is Professor in the Department of
Philosophy at Western Michigan University, and Community Professor
in the Program in Medical Ethics, Humanities, and Law at the
Western Michigan University Homer Stryker M. D. School of
Medicine. He publishes in ethical theory, applied ethics, and
philosophy of law.
John Basl is an Associate Professor of Philosophy at Northeastern
University and the Associate Director of AI Initiatives at
Northeastern’s Ethics Institute. He works in moral philosophy and
applied ethics, especially on the ethics of emerging technologies and
artificial intelligence.
William A. Bauer is an Associate Teaching Professor in the
Department of Philosophy and Religious Studies at North Carolina
State University. His research addresses problems in the metaphysics
of science and the ethics of technology. His publications include
articles in Erkenntnis, Neuroethics, and Science and Engineering
Ethics.
Jeff Behrends is an Associate Senior Lecturer in Philosophy at
Harvard University and the Director of Ethics and Technology
Initiatives at the Edmond J. Safra Center for Ethics. His research
focuses on the nature and grounds of practical normativity, and the
ethics of developing and deploying computing technology.
Stephen Bennett is a higher degree research student in philosophy
at Macquarie University. His master’s thesis deals with the ethics of
self-driving cars, focusing on the question of how they should be
programmed to act when faced with an unavoidable accident where
harm cannot be avoided.
Brian Berkey is an Assistant Professor in the Legal Studies and
Business Ethics Department in the Wharton School at the University
of Pennsylvania. He works in moral and political philosophy, and he
has published articles on moral demandingness, obligations of
justice, climate change, exploitation, effective altruism, ethical
consumerism, and collective obligations.
Jason Borenstein, PhD, is the Director of Graduate Research Ethics
Programs at the Georgia Institute of Technology. His appointment is
divided between the School of Public Policy and the Office of
Graduate Studies. His teaching and research interests include robot
and artificial intelligence ethics, engineering ethics, research
ethics/RCR, and bioethics.
T. J. Broy is a PhD student in the Philosophy Department at the
University of Connecticut. He works mainly in ethics, philosophy of
art, and philosophy of language with his current research focused on
the intersection of the study of expression in philosophy of language
with various issues in normative ethics.
Dr. Anne Brown is an Assistant Professor in the School of Planning,
Public Policy, and Management at the University of Oregon. She
researches issues of transportation equity, shared mobility, and travel
behavior.
John Bucher, AICP, PMP, is a Senior Planner at Stantec and a
Fellow at Tulane University’s Disaster Resilience Leadership Academy.
His work is focused on building sustainable resilience through
community development, climate adaptation, and hazard mitigation.
Dr. David Černý is a Research Fellow at the Institute of State and
Law of the Czech Academy of Sciences and the Institute of
Computer Science of the Czech Academy of Sciences. He is a
founding member of the Karel Čapek Center for Values in Science
and Technology.
Dr. Jeffrey K. H. Chan is an Assistant Professor in the Humanities,
Arts and Social Sciences cluster at the Singapore University of
Technology and Design. His research focuses on design and planning
ethics, and he is the author of two books, Urban Ethics in the
Anthropocene and Sharing by Design.
Kendra Chilson is a philosophy PhD student at the University of
California, Riverside. She studies the interconnections among
decision theory, game theory, and ethics, and how each of these
fields can and should contribute to the development of artificially
intelligent systems.
Dr. Veljko Dubljević is a University Faculty Scholar and Associate
Professor of Philosophy and Science, Technology & Society at NC
State University and leads the NeuroComputational Ethics Research
Group (https://sites.google.com/view/neuroethics-group/team-
members). He is a recipient of a NSF CAREER award and has
published extensively in neuroethics, neurophilosophy, and ethics of
artificial intelligence.
Dr. Madhu C. Dutta-Koehler is an MIT-educated dreamer,
designer, dancer, and entrepreneur. Dutta-Koehler has been a
professor of architecture and planning for over two decades and
recently founded The Greener Health Benefit Corporation. An award-
winning practitioner, she has lectured worldwide on issues where
human development and climate change collide.
Dr. Nicholas G. Evans is Assistant Professor of Philosophy at the
University of Massachusetts Lowell and a 2020–2023 Greenwall
Foundation Faculty Scholar. He conducts research on the ethics of technology and national security. His new book, The Ethics of
Neuroscience and National Security, was published with Routledge in
May 2021.
Dr. Paul Formosa is an Associate Professor in Philosophy and
Director of the Centre for Agency, Values and Ethics (CAVE) at
Macquarie University. Paul has published widely on topics in moral
and political philosophy, with a focus on ethical issues raised by
technologies such as videogames and artificial intelligence.
Luke Golemon, MA, is a PhD student in the Department of
Philosophy at the University of Arizona. His research is focused
primarily on ethics of all flavors, political philosophy, and philosophy
of science. He pays special attention to their applications to
technology, medicine, feminist theory, and scientific theorizing.
Jeffrey K. Gurney is a partner with Nelson Mullins Riley &
Scarborough, LLP. He is the author of Automated Vehicle Law: Legal
Liability, Regulation, and Data Security, which was published by the
American Bar Association, and numerous publications on automated
vehicles. His practice includes representing companies involved in
deploying automated driving systems.
Jennifer Hatch is the Strategic Advisor for Convergence at the
Center for Sustainable Energy, where she guides strategy for
decarbonization infrastructure. Previously she was a visiting scholar
at the Boston University Urban Planning Department and led the
Transportation and Utility Practice at the BU Institute for Sustainable
Energy. She holds a bachelor’s degree from Wellesley College and a
master’s degree in public policy from the Harvard Kennedy School.
Dr. Joseph Herkert is an Associate Professor Emeritus of Science,
Technology and Society at North Carolina State University in Raleigh.
He studies engineering ethics and the ethics of emerging
technologies. Recent work includes ethics of autonomous vehicles,
lessons learned from the Boeing 737 MAX crashes, and responsible
innovation in biotechnology.
Soraj Hongladarom is Professor of Philosophy and Director of the
Center for Science, Technology, and Society at Chulalongkorn
University.
Dr. Tomáš Hříbek is a Research Fellow at the Institute of
Philosophy of the Czech Academy of Sciences. Together with David
Černý, he is the founder of the Karel Čapek Center for Values in
Science and Technology. He also teaches at several colleges,
including Charles University and Anglo-American University.
Dr. Akira Inoue is a Professor of Political Philosophy in the
Department of Advanced Social and International Studies, Graduate
School of Arts and Sciences, University of Tokyo, Japan. He works in
the areas of distributive justice and democracy. He also works on
experimental research in normative political theory.
Dr. Ryan Jenkins is an Associate Professor of Philosophy, and a
Senior Fellow at the Ethics + Emerging Sciences Group at California
Polytechnic State University in San Luis Obispo. He studies the ethics
of emerging technologies, especially artificial intelligence and robot
ethics. He has published extensively on autonomous vehicles.
Leonard Kahn is the Associate Dean of the College of Arts &
Sciences and an Associate Professor of Philosophy at Loyola
University New Orleans. He is also the 2021–2022 Donald and
Beverly Freeman Fellow at the Stockdale Center for Ethical
Leadership, US Naval Academy.
Dr. Geoff Keeling is an Affiliate Fellow at the Institute for Human-
Centered Artificial Intelligence at Stanford University, an Associate
Fellow at the Leverhulme Centre for the Future of Intelligence at the
University of Cambridge, and a Bioethicist at Google. His research
focuses on ethics, decision theory, and artificial intelligence.
Dr. Beaudry Kock works on mass transit products at Apple, Inc.
He has previously worked in transportation technology at the MBTA,
Ford Motor Company, Daimler, and in numerous startups. His focus is
on making both cities and the mass transportation experience more
pleasant, safe, equitable and sustainable.
Yolanda Lannquist is Head of Research & Advisory at The Future
Society (TFS), a US nonprofit specializing in governance of artificial
intelligence and emerging technologies. She leads artificial
intelligence policy projects with international organizations and is
appointed to the OECD AI Policy Observatory expert group on
implementing trustworthy artificial intelligence. She holds a master’s
degree in public policy from Harvard Kennedy School.
Derek Leben is Associate Teaching Professor of Ethics at the
Tepper School of Business at Carnegie Mellon University. His
research focuses on the ethics of artificial intelligence and
autonomous systems, and he is the author of the book Ethics for
Robots: How to Design a Moral Algorithm (Routledge, 2018).
Daniel D. Novotny received his PhD from State University of New
York at Buffalo and is an Assistant Professor of Philosophy at the
University of South Bohemia. He has published in the area of the
history of philosophy, metaphysics, and applied philosophy.
Ugo Pagallo is a Full Professor of Jurisprudence at the University of
Turin and Faculty Fellow at the CTLS in London. He is a member of
several high-level expert groups of international institutions, such as
the European Commission and the World Health Organization, on the
legal impact of artificial intelligence and other emerging
technologies.
Dr. Jiwon Shim is an Assistant Professor in the Department of
Philosophy at Dongguk University in Seoul. She studies applied
ethics in human enhancement and artificial intelligence and is
interested in how technology affects women, the disabled, and
transgender individuals.
Dr. Kazumi Shimizu is a Professor in the Department of Political
Science and Economics, Waseda University, Japan. His research and
teaching focus on experimental economics, behavioral economics,
and decision theory. His research is not only empirical but also
examines the methodological bases of empirical research.
Saul Smilansky is a Professor of Philosophy, University of Haifa,
Israel. He works on normative and applied ethics, the free will
problem, and meaning in life. He is the author of Free Will and
Illusion (Oxford University Press, 2000), 10 Moral Paradoxes
(Blackwell, 2007), and one hundred papers in philosophical journals
and edited collections.
Dr. Patrick Taylor Smith is Resident Fellow at the Stockdale
Center for Ethical Leadership at the United States Naval Academy.
He was also a Postdoctoral Fellow at the McCoy Center for Ethics in
Society at Stanford University. His published work concerns the
justice of emerging climate and military technologies.
Daisuke Udagawa is an Associate Professor in the Department of
Economics, Hannan University, Japan. His fields of research are
microeconomics, experimental economics, and behavioral
economics. His recent work has focused on human behavior and
social preferences.
Dr. Carole Turley Voulgaris is an Assistant Professor of Urban
Planning at the Harvard University Graduate School of Design. She is
trained as a transportation engineer and as a transportation planner.
Her teaching and research focus on how transportation planning
institutions use data to inform plans and policies.
Dr. Yoshiki Wakamatsu is a Professor at Gakushuin University
Law School, Japan. His research and teaching focus on legal and
political philosophy. Recently, he has published two books about
paternalism in Japanese, one about libertarian paternalism and the
other about J. S. Mill.
Introduction
Ryan Jenkins, David Černý, and
Tomáš Hříbek

“A runaway trolley is speeding down a track . . .” So begins what is perhaps the most fecund thought experiment of the past several
decades since its invention by Philippa Foot in 1967 and subsequent
modification into the version we all know by Judith Thomson in
1976.
The trolley problem enjoys a rare distinction among philosophical
thought experiments: it has entered the common vocabulary. From
its birthplace in academic ethics it has passed through the portal into
the folk consciousness and become a tool used by nonphilosophers—
at least as a touchpoint for thinking about what philosophers do, and
in many cases as a tool they apply to their own
questions about balancing conflicting values, contrasting doing with
allowing, negotiating mutually exclusive obligations, understanding
the distribution of goods, and so on. This is a status enjoyed by very
few other stories woven by philosophers—Plato’s cave comes to
mind, along with perhaps Descartes’s dreaming scenario
(popularized by the Matrix and the “brain in a vat” variation),
Nozick’s experience machine and Singer’s shallow pond, and only a
handful of others.
Many of us in the analytic tradition of moral philosophy who came
of age in the first two decades of the 2000s were raised on a steady
diet of trolley problems—the cottage industry of trolleyology. Its
utility is chiefly due to its protean nature: it lends itself to countless
permutations. And this was before the news broke that the trolley
problem could be applied to autonomous vehicles (AVs). As if by an
alchemical reaction, this confluence of academic philosophy and
emerging technology resulted in an explosion of literature. A quick
search on Google Scholar shows over 8,000 papers that mention the
trolley problem, and over a thousand that contain both the terms
“trolley problem” and “autonomous vehicles.”
But as the honeymoon phase has waned over the last decade or
so, debate has gradually pivoted to whether the trolley problem is a
useful frame for thinking about the behavior of AVs at all. Critics
have suggested, for one, that the designers of AVs will not program
AVs to think through situations like humans do: pulling a lever to
choose option A or B after appraising them and attaching a moral
value or disvalue to each. Others contend that overwhelmingly
negative—and conclusive—legal reasons would prohibit designers
from programming an AV to aim for anyone. Others point out that
the original trolley problem is morally distinct from the situation of
engineers, since engineers do not enjoy certainty about the
consequences of their actions. And so on.
A handful of authors have maintained the usefulness of trolley
problems as a general schema for understanding contrasting or
conflicting values that companies will have to negotiate, for example,
how much space to afford a bicyclist on one side and an oncoming
car on the other side. These are examples of the trolley problem on
a micro scale: imagining a single situation that an AV is engaged in
and comparing it to a trolley problem. These are still questions about
balancing risks and trade-offs, and the trolley problem is still
somewhat useful here. Ultimately, however, very few philosophers
accept anymore that the trolley problem is a perfect analogy for
driverless cars, or that the situations AVs face will resemble the
forced choice of the unlucky bystander in the original thought
experiment. It is safe to say that the academic conversation around
AVs is moving beyond the trolley problem.
If the trolley problem is retained, it is in a diminished role as a
metaphor rather than a literal analogy for thinking about the design
of driverless cars. That is, it is retained as a macro-level metaphor: it
is useful to illustrate other problems with AVs, or the forced choice
that developers and companies will have to confront. For example,
making certain decisions about how to design AVs and where to
deploy them could have disparate impacts on the elderly, the
mobility challenged, the poor, or historically disadvantaged
minorities. Each of these trade-offs can be put into the frame of a
trolley problem where the agent is forced to distribute benefits and
burdens one way or another—and the trolley problem is perhaps
supremely useful among thought experiments for making those
forced choices stark and explicit.
At the same time, predictions about the benefits of autonomous
cars have become more muted. AVs were once hailed for their ability
to all but eliminate automobile accidents—saving roughly a million
lives per year worldwide—to reduce congestion and pollution; to
drastically reduce the cost of insurance or eliminate the need for car
ownership altogether; and more. We now know the truth is more
complicated. Creating a car that drives as reliably as a human—in all
conditions, in unpredictable circumstances, at night and in the snow,
and so on—is a wicked problem for engineering. AVs may save lives, but they may kill people that human drivers wouldn’t have. And while
they may reduce congestion, those benefits will probably only be
temporary, just as new lanes added to a highway inevitably fill right
back up in a few years’ time. While AVs might provide mobility to the
disabled, their benefits will accrue to the wealthiest first, potentially
exacerbating inequalities. In short, all of these questions about the
ethical and social impacts of AVs require vigorous discussion. All of
these questions are beginning to overtake trolleyological questions in
their importance, urgency, and concreteness. And, here, the
methods of philosophy, buttressed and enhanced by engagement
across other empirical disciplines, can still be of tremendous help in
clarifying and systematizing the trade-offs at stake.

The Structure of This Book


This book represents a substantial and purposeful effort—the first
we know of—to move the discussion of moral philosophy and other
academic disciplines beyond the trolley problem to an examination of
the other issues that AVs present. This next generation of
scholarship is directed at the ethical and social implications of AVs
more broadly. There are still urgent questions waiting to be
addressed, for example: how AVs might interact with human drivers
in mixed or “hybrid” traffic environments; how AVs might reshape
our urban landscapes now that we could expect demand for parking
lots or garages to fall dramatically; what unique security or privacy
concerns are raised by AVs as connected devices in the “Internet of
Things”; how the benefits and burdens of this new technology,
including mobility, traffic congestion, and pollution, will be
distributed throughout society; and so on.
This book is an attempt to map the landscape of these next-
generation questions raised by AVs and to suggest preliminary
answers. With input from the disciplines of philosophy, sociology,
economics, urban planning and transportation engineering, policy
analysis, human-robot interaction, business ethics, and, from the
private sphere, computer science, and law, this book explores the
ethical and social implications, very broadly construed, of AVs. In
addition, the book provides a worldwide perspective, as the authors
included represent the United States, the United Kingdom, the Czech
Republic, Israel, Hong Kong, Thailand, Singapore, Italy, Korea, and
Japan.
Most scholars working on AVs now endorse some sort of Trolley
Pessimism, according to which either there are not any relevant
similarities between trolley and real-life road scenarios, or there are
insurmountable technological challenges calling into question the
very possibility of programming AVs to follow a set of ethical rules
provided by programmers. Thus, the common thread running
through all of the contributions in Part I is an effort to go beyond the
trolley problem in an attempt to address the ethical issues raised by
AVs. In aggregate, we take these pieces to present a conclusive case
for abandoning the use of trolley problems on the micro scale.
In Part II, authors are given license to speculate about longer-
term ethical issues, exploring how AVs may reshape human communities and interactions—from distributing risk among drivers
and pedestrians, to anticipating the externalities of AVs. While
forward-looking, these authors remain empirically and philosophically
grounded, applying and extending lessons from emerging research
in psychology, law, business ethics, and elsewhere.
The benefits of new technologies often initially accrue to the best-
off members of a community. We should expect AVs to follow a
similar pattern, since those companies most aggressively pursuing
autonomous driving features—Tesla, Mercedes, Volvo, and so on—
tend to manufacture more expensive cars. Authors in Part III explore
how we can anticipate and mitigate disparities in the effects of AVs,
and how their benefits and burdens might be distributed. These are
the issues that extend beyond ethics, in the narrow sense of the
theory of interpersonal conduct, to political theory, concerned with
the design of a just community.
Several authors in Part III inquire whether AVs are likely to
mitigate, or worsen, the problems of congestion and pollution. To be
sure, these are the problems that are particularly acute in a dense
urban environment. The issues of the impact of the AV technology
on the design of our cities are specifically explored in the chapters of
the final segment of the volume, Part IV. Will the new technology
help create the just city, or will it exacerbate existing inequalities?
Will it further accelerate the urban sprawl or instead promote a new
densification? Are AVs compatible with the classical ideal of the
pedestrian city? These are some of the important questions
addressed in this part of the book.

References
Foot, Philippa. 1967. “The Problem of Abortion and the Doctrine of Double Effect.”
Oxford Review 5: 5–15.
Thomson, Judith Jarvis. 1976. “Killing, Letting Die, and the Trolley Problem.” The
Monist 59, no. 2: 204–17.
PART I
AUTONOMOUS VEHICLES AND TROLLEY PROBLEMS
Introduction by David Černý

Swerve right and kill one passerby or swerve left and kill two (or
more) persons on impact? This seemingly simple question has
occupied the attention of many bright minds for decades. It should
not be seen as a surprise. Trolley-type scenarios, flourishing within
scholarly publications since Philippa Foot’s seminal paper on the
ethics of abortion, seem to bear a close structural similarity with
collision situations that may be encountered by autonomous vehicles
(AVs) on the road. The leading assumption has been that analogical
reasoning might be employed, enabling one to transfer important
moral conclusions from simplified thought experiments to real-life
situations. If, for example, the rightness of one’s choice in trolley-
type scenarios depends on the maximizing strategy (i.e., save the
most lives possible), then the same decision procedure can also be
employed in richer, nonidealized conditions of everyday traffic.
Notwithstanding considerable efforts that came into the
development and use of trolley-type scenarios, there has been a
steadily growing consensus that our ethical reflections should move
beyond these scenarios toward more realistic considerations. There
are still some scholars defending the importance of trolleyology in
the context of AV ethics, but many—maybe the majority—endorse
some sort of Trolley Pessimism, according to which either there are
not any relevant similarities between trolley and real-life road
scenarios, or there are insurmountable technological challenges
calling into question the very possibility of programming AVs to
follow a set of ethical rules provided by programmers.
Thus, the common thread running through all of the contributions
in Part I is the effort to go beyond the trolley problem in an attempt
to address the ethical issues raised by AVs.
We begin this section with a chapter by Nicholas G. Evans and
Heidi Furey. They draw on the existing literature on crash scenarios
involving AVs but go far beyond the traditional focus on the trolley
problem and its applications in autonomous driving. The authors do not intend to eliminate trolley-case scenarios but subsume them under
a more general risk distribution category. This category is more
general in two aspects: First, it removes the simplification
traditionally assumed in the discussion of trolley cases, according to
which our options and outcomes are certain. Second, it takes into
consideration far more types of morally relevant scenarios. The
authors divide decisions regarding risk distribution and AVs into
three categories. Each of these categories gives rise to a different
perspective from which to look at AVs and situations they may
encounter. There are three types of decisions corresponding to each
category: narrow-, medium-, and wide-scope decisions. Each opens
a distinct conceptual space and invites one to ask different questions
about how AVs should distribute risks and how we should regulate
and deploy AVs so that these risks are best distributed among
occupants of AVs, road users, and other members of society.
Next, in Chapter 2, David Černý addresses an admittedly
controversial issue of whether an AV’s decision processes based on
age would always and in all contexts be discriminatory.
Discriminatory behavior is commonly considered unethical and
prohibited by many international human rights documents. Yet it
might come as a surprise that at least in the context of artificial
intelligence (AI) ethics, there are not many attempts at a precise
definition of direct discrimination. Černý starts his chapter by
thoroughly analyzing the definitional marks of discrimination and
arrives at a semiformal definition of it. Successively, he delineates
the main contours of the derivational account of the badness of
death according to which death is bad in virtue of the fact that it
deprives us of all the prudential goods comprised in continued
existence. These two conceptual devices allow him to defend the
main conclusion of his chapter: If an AV chose between two human
targets on the basis of age, its choice would not be an instance of
direct discrimination.
Geoff Keeling, in his sophisticated contribution in Chapter 3, asks
an important question: “How does the moral status of an AV’s act
depend on its prediction of the classification of proximate objects?”
He presents three possible answers—the objective version and two
variants of a subjective view. The line of demarcation between
objective and subjective views depends on whether the evaluation of
the AV’s choices and acts takes into account the AV’s internal
representations of facts provided by external sensors. Keeling opts
for moderate subjectivism, according to which the rightness or
wrongness of the AV’s acts ought to be judged by the AV’s
epistemically justified or reasonable predictions about the morally
relevant facts. The next section of Keeling’s chapter is devoted to
developing a moderate subjectivist view and its application in the
context of mundane road-traffic situations. Keeling’s arguments are
very complex and, for those not comfortable with higher mathematics, challenging to follow. Keeling’s overall aim here
is to find a decision-making procedure which would allow AVs to
determine how much weight should be given to safety depending on
how high the probability is that a perceived object classified as a
pedestrian is, in fact, a pedestrian.
Many authors working in the field of AI ethics were long confident that trolley-type scenarios provide a conceptual tool for describing and analyzing possible choices leading to harm. Recently, however, a consensus has been growing that matters are far from that simple. In Chapter 4, Jeff Behrends
and John Basl take the view of Trolley Pessimists. The negation of
Trolley Pessimism is, of course, Trolley Optimism, which subscribes
to the theses that some possible collisions of AVs are structurally
similar (the authors give a precise definition of structural similarity)
to trolley-type cases and, accordingly, the engineers should work to
program AVs to behave in ways conforming to the moral conclusions
drawn from trolley cases. Behrends and Basl suggest that both Optimists and Pessimists have fallen victim to a failure to
recognize important features of the engineering techniques deployed
in the process of designing AVs’ guiding software. Both authors
endorse Trolley Pessimism and present a novel technological case
against Trolley Optimism. Their complex arguments are based on the
difference between traditional and machine learning algorithms. We
can see traditional algorithms as a set of rules enabling the
transformation of inputs into outputs according to the rules invented
by programmers. However, machine learning algorithms are radically
different in that they generate new algorithmic instructions not
explicitly provided by programmers. Consequently, we cannot expect
them to follow a set of prior established ethical rules incorporated
into their code by programmers. Therefore, engineers developing
software for AVs are not and will not be in a position to program
their vehicles to respond to the particular crash scenarios
encountered on the road in a predetermined and always consistent
manner.
A great deal of discussion in the context of the ethics of AVs is
predicated on the assumption of what can be called normative
monism. Normative monism may take two forms: Either we assume
that among all of the competing normative theories there is only one
that is correct, or we can hold to the view that for each field of
applied ethics there is only one solution. Saul Smilansky, in his highly
original contribution in Chapter 7, questions this assumption. He
considers a scenario, a hostage situation that he calls The Situation,
and demonstrates that many competing and sometimes contrasting
solutions may be invoked. By adopting a pluralist normative
worldview, Smilansky also goes beyond the classical trolley-type
scenarios inviting “either-or” type responses. The combination of
moral and value pluralism applied within the field of AV ethics gives
rise to an open moral world with many permissible possibilities, from
the design ethics to the behavior of self-driving vehicles in possible
crash situations. Smilansky’s normative pluralism may (and as he
believes is likely to) transform into a plurality of AV guiding
algorithms corresponding and responding to differences in cultural
backgrounds and preferences. The solution offered by Smilansky
falls under the umbrella of “Crazy Ethics,” a term coined by the
author to designate ethics which, despite being true, may lead to
counterintuitive consequences. Living in such a pluralistic world
might be hard at first, yet if Smilansky is right, we do not have any
other options available.
Like David Černý, Derek Leben in Chapter 8 also focuses on the
problem of discrimination in the context of AVs but considers it from
a different and more general angle. Leben argues that whether
choices made by algorithms represent unjustifiably discriminatory
behavior crucially depends on the nature of the task these
algorithms are called upon to fulfill. This task-relevance standard of
discrimination involves two components, one conceptual and the
other empirical. The conceptual component brings into focus the
essential task of an algorithm in a specific context. If, as the author
asserts, this task involves making decisions about the distribution of
harm interpreted as the predicted health outcomes (or the likelihood
thereof) of collisions, then some features of the persons involved may turn out to be relevant to the task and others irrelevant. The empirical
component enters into play here; it depends on the answers to the
conceptual questions, and its role consists of determining which
features have an impact on accomplishing the essential function of
the algorithm. Consider, for example, age. If the essential task of AV-
guiding algorithms in the context of the distribution of harm in
collisions is to minimize harm measured as health outcomes or
otherwise (conceptual component) and age can serve as a statistical
predictor of these outcomes (empirical component), then age may
be relevant to the task. It follows from these considerations that
choices based on age may not represent instances of unjustified
discrimination.
For rethinking current approaches to the ethics of AVs, we can
turn to Soraj Hongladarom and Daniel D. Novotný in Chapter 6. The
authors call into question the predominant focus on the ethical
concerns and philosophical traditions of high-income, Western
countries in search of the one and only moral theory to be accepted
globally. Other cultures and traditions, however, often have different
standards of evaluation of what could be an acceptable and
desirable behavior of AVs. The challenge is how to take into account
these other cultures—and also the rich Asian, African, and other
traditions of ethical reflection. Their proposal consists in considering
AVs with their machine-learning systems as if they were human
drivers in a given culture that need to pass a driver’s license test.
Since they are aware that there still need to be some in-built
fundamental norms, values, or virtues to make AVs human-aligned,
they explore the possibility of drawing upon the Buddhist concept of
compassion (karuṇā) for this role.
The trolley-like scenarios, despite some recent criticism mounted
against them, continue to occupy an important place in modern
experimental philosophy. Many philosophers are convinced that
moral intuitions—immediate moral reactions to presented scenarios
—should be treated as robust “data” expressing well-established
social norms. The overall aim of Chapter 5 by Akira Inoue, Kazumi
Shimizu, Daisuke Udagawa, and Yoshiki Wakamatsu is to
experimentally test whether, and to what extent, social norms
identified by a version of the trolley dilemma are robust. To achieve
this aim, the researchers conducted an online empirical survey in
Japan. The experimental results show, among other things, that our
choices and willingness to follow socially established norms may be
heavily influenced by the presence or absence of public scrutiny.
These results are undoubtedly of immense relevance to the ethics of
AVs, and the authors explore their implications in considerable depth. It may be argued that this chapter offers a first
step toward a solution to the so-called social dilemma of AVs.
1
Ethics and Risk Distribution for Autonomous
Vehicles
Nicholas G. Evans

Introduction
Autonomous vehicles (AVs) will be on our roads soon.1 How should
they be programmed to behave? The introduction of AVs will mark
the first time that artificially intelligent systems interact with humans
in the real world on such a large scale—and while travelling at such
high speeds.
Current AV ethics literature concentrates on crash scenarios in
which an AV must decide how to distribute unavoidable harm; for
example, an unoccupied AV must either swerve to the left, killing the
five passengers of a minivan, or swerve to the right, killing a lone
motorcyclist. Scenarios like these have been called “trolley cases”
because they resemble a series of famous thought experiments that
have sparked an enormous body of ethics literature (known,
somewhat derisively, as “trolleyology”).2 In the original case, a
runaway tram (or “trolley”) is about to run over and kill five workers; the driver can choose to steer from one track to another, but in the process will kill one worker on the alternate track.3 What’s
important about these cases is not that AVs are real-world analogs
to trolleys, but that AV navigation poses difficult ethical decisions.
Moreover, AVs, in virtue of being programmable rather than reacting
on human instinct, must be instructed how they ought to act in
these cases (or a decision, arguably equally morally weighty, must
be made to remain silent on what the AV ought to do in this case).
Trolley-based scenarios have been used to test intuitions about the
behavior of AVs, such as when it is permissible to choose (or allow)
a smaller group to be harmed in order to save a larger one, and
more controversially what kinds of people we should prioritize over
others in saving them.4
When people ask how AVs could have anything to do with ethics,
the trolley problem offers a quick and obvious explanation. But
trolley problem–inspired AV ethics has received considerable
criticism. One central line of reasoning is that, in the real world, we
are almost never certain about our options and their outcomes.
Nyholm and Smids, as well as Goodall, have argued that we should
focus on risk management when programming AVs, and they
describe a number of realistic cases involving risk, many of which
are similar to trolley cases but involve only probabilities of harm,
including how close AVs choose to drive to certain types of vehicles
and pedestrians, when they choose to change lanes, and which
vehicles they take pains to avoid crashing into.5 Himmelreich has
argued that trolley-like problems are too rare, and too extreme,
relative to the kinds of ethical issues that are more likely to face AVs
on a day-to-day basis. He argues that we should instead focus more
on mundane driving scenarios, many of which involve risk. In
addition to the kinds of cases Goodall mentions, he draws attention
to the risks associated with the environmental impact of AVs and
with programming AV behavior that will be repeated exactly by every
other AV.6
The discussion of risk and AVs is just beginning. We’re at the
stage where (a) a good case has been made for the importance of
the discussion, and where (b) a smattering of different scenarios and
questions about risk has been posed. One way to approach a difficult
problem like finding a suitable ethical algorithm for AVs, which is
common to both engineering and philosophy,7 is to start with the
simplest or most idealized kinds of cases first. Greater complexity
can be added back into the picture as more progress is made. Hence
the trolley problem—a simple case outlining a clear issue in which
choices about doing harm, or allowing it to happen, are parsed in
the clearest detail.8 What comes next for AV ethics, however?
Our purpose here is not to reject the trolley problem, as others
have done. The trolley problem is an important thought experiment
in the history of philosophy; it serves a very specific purpose. In
point of fact, we believe its purpose is precisely the one it has
served: to force people to acknowledge, and then choose a position
on, an important moral feature that is subject to disagreement. The
point of the trolley problem, put another way, is to cause problems!
But the trolley problem cannot—indeed, no philosophical problem
can—solve a complex problem like the navigation of AVs on its own.
There are other challenges that are philosophically relevant to an
investigation of a complex problem like AVs. The field needs to
evolve, beyond the mere debate about trolleys (and whether that
debate is relevant), to encompass other philosophical issues.
In what follows, we outline three ways to think about this
evolution. We motivate this project first through conceptual,
empirical, and metaphilosophical concerns about the limits of the
trolley problem as it applies to the ethics of AVs. We then turn to
two case studies that demonstrate the challenge ahead. The first is
how AVs should behave when they encounter each other, and where
differences in their algorithmic behavior are morally relevant, and
where each is uncertain as to the other’s algorithm. The second is
considering a wide view of AVs, and how we account for the broader
question of AVs in large, even global transportation systems.

Post Trolley Ethics


What makes the trolley problem so convenient as a tool is that it
contains a series of assumptions that carry directly into the ethics of
AVs. These assumptions, we take it, are as follows:

1. A single isolated actor;
2. Making a decision under conditions of perfect information;
3. That has a well-described series of options available to it.
In the case of the trolley problem, it is a tram or trolley that has
perfect knowledge about the outcomes of its choices, of which there
are two and only two. In the case of the AV, trolley-like problems are
homologous to the trolley problem in terms of their assumptions.
That is:

1. The AV is a single, isolated actor.
2. The AV’s decision is made under conditions of perfect information about the kinds of people it is hitting, how many, and what their outcomes (e.g., injury, death) will be, and the AV has the relevant capacities to understand these inputs (i.e., it is a “level 5” fully autonomous AV).
3. The AV has a well-described series of options available to it.

These assumptions drive the trolley-like problem characterized by, inter alia, the MIT Moral Machine Project.
Three limits arise from this kind of analysis, however. Others have
argued that the determinate and binary set of options in the trolley
problem make this kind of analysis inappropriate under conditions of
risk.9 We think that this, however, misunderstands the trolley
problem itself. Conceptually, the trolley problem identifies a critical
distinction in ethical decision-making around our intentions to bring
about someone’s death, commonly claimed to be between “killing”
and “letting die.”10 This is an important distinction; indeed, it guides
a number of serious, applied ethical decisions. The original applied
problem in which the trolley problem was posed was not AVs, but
abortion.11 The doctrine of double effect, which turns in part on the
distinction the trolley problem picks out, is also an important
doctrine in the ethics of killing in war.12 But no one would say that
epistemic concerns about fetal viability, or about personhood,
undermine the moral problem between intending versus foreseeing
harm. Certainly in the just war literature, uncertainty about
combatant status is an important issue of discussion, but no one
engaged in that discussion would say that our beliefs about the
trolley problem or doctrine of double effect are perforce irrelevant in
light of the presence of uncertainty.
Thus, the conceptual issue is not that the trolley problem is
limited. It was always limited. But that is hardly a weakness, any
more than a diagnostic for breast cancer is weak for not detecting
glaucoma. Different tools do different things, and if there is any
weakness for trolleyology in the AV debate, it is in trying to make
the trolley problem do things it cannot. Rather, the problem that
ethicists need to face can be formulated as a question: “What other
philosophical problems, outside of doing versus allowing harm [or
similar concepts, depending on our reading of the trolley problem],
are a propos when thinking through the ethics of AVs?”
If we care about risk, then, there are frameworks available to us
that would be well suited to the problem of AVs. For example, Lara
Buchak’s work on rational risk aversion has been applied to how we
should conceive of the ethics of risk aversion in HIV/AIDS therapy
trials that require participants to cease their routine antiretroviral
medications.13 Likewise, Adam Bjorndahl and others have discussed
how we should think about the risk of violating human dignity and
make decisions where there is a possibility that some absolutely
important value (like dignity, for some) might be violated, but it is
under a condition of uncertainty.14 Finally, Seth Lazar and Chad
Stronach have shown how we can account for the ethics of risk of
rights violation in armed conflict, including cases where we are not
sure of the kinds of rights those individuals possess.15
A second problem for AVs is empirical, including when we
consider risk. AV decision-making is made under risk, yes, but it may
also be tunable depending on the case we care about. What we
mean by this is that it is not simply the case that there are a number of
discrete but uncertain outcomes, but that there may be an
indeterminate range of options from which the AV might choose.
Consider recent work in AV decision-making involving the following
case:
A Tailgater (TG) is closely following the AV, and the AV is following the Front
Car (FC) in a two-lane two-way road. At the start of the scenario, all cars are
currently moving at the same velocity of 90 kph, consistent with highway
speeds. The scenario starts with FC suddenly braking to a stop. The AV is
responsive enough to stop in time to prevent a collision with FC because the
AV is following at a safe distance. However, though the AV is responsive
enough to avoid a collision with the lead car, TG may not be responsive
enough to avoid crashing into the AV. This scenario is further constrained in
that the AV cannot swerve out of the way (due to oncoming traffic on the left
and a barrier on the right). Intuitively, the AV appears to have two options: it
could slam on the brakes and suffer a severe rear-end collision, or it could
intentionally ram the forward car at a relatively low speed, reducing the speed
of collision of the TG with the AV. Given the AV’s superior ability to measure
speeds and distances, is there another way of managing potential injuries that
may not be available to a human driver?16

The results of this thought experiment are not binary: not just in
terms of the possible injuries that might arise to TG, AV, and FC, but
in terms of the options available to AV. As a parametric model, AV
could choose any combination of velocities and accelerations
available to the vehicle. Modeling on the above vignette gave options
such as a “wake-up call” in which the AV initiated a low-speed collision with TG to encourage them to brake, or, in the case of an unresponsive TG, an “emergency brake” in which the AV initiated a series of low-
speed collisions until those collisions became inelastic and the AV
could use its own braking power to stop both cars.
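To make the parametric point concrete, the following is a minimal, hypothetical sketch in Python of how such options might be explored. It is not the model from the study cited above; the following gap, the decelerations, and the tailgater's reaction time are all illustrative assumptions. The sketch simply compares the tailgater's impact speed into the AV under a harder and a gentler AV braking profile, holding everything else fixed.

# Hypothetical sketch only: not the cited model. All numbers are assumptions.
def impact_speed(v0_kph, gap_m, a_av, a_tg, reaction_s):
    """Relative speed (kph) at which the tailgater (TG) strikes the AV, or 0 if it never does.

    v0_kph     -- initial speed of both vehicles (kph)
    gap_m      -- initial gap between TG and the AV (m)
    a_av, a_tg -- constant braking decelerations of the AV and TG (m/s^2)
    reaction_s -- TG's reaction delay before it starts braking (s)
    """
    v0 = v0_kph / 3.6                      # kph -> m/s
    dt = 0.01                              # simulation time step (s)
    t, gap, v_av, v_tg = 0.0, gap_m, v0, v0
    while v_av > 0 or v_tg > 0:
        v_av = max(0.0, v_av - a_av * dt)  # AV brakes immediately
        if t >= reaction_s:                # TG brakes only after reacting
            v_tg = max(0.0, v_tg - a_tg * dt)
        gap += (v_av - v_tg) * dt          # gap shrinks while TG is faster
        if gap <= 0:
            return (v_tg - v_av) * 3.6     # relative speed at impact, in kph
        t += dt
    return 0.0                             # both stopped without contact

# Two stylized AV choices at 90 kph, with an assumed 20 m gap and 1.2 s tailgater reaction time.
hard = impact_speed(90, gap_m=20, a_av=8.0, a_tg=6.0, reaction_s=1.2)
gentle = impact_speed(90, gap_m=20, a_av=4.0, a_tg=6.0, reaction_s=1.2)
print(f"hard AV braking:   tailgater impact at ~{hard:.0f} kph")
print(f"gentle AV braking: tailgater impact at ~{gentle:.0f} kph")

Under these made-up inputs, harder braking produces a substantial rear impact while gentler braking avoids one, at the price of a longer stopping distance for the AV and, in the scenario above, a greater chance of striking FC. The AV's deceleration is a continuous dial rather than a binary switch, which is precisely what distinguishes such cases from the stylized trolley setup.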
Even in trolley cases for “unavoidable crashes,” there may be
continuous variables, such as how much braking room there is
between a vehicle and pedestrians or other vehicles; the side of the
vehicle that strikes the other object; the object’s reaction times if it
has any; and so on. These are not relevant to the trolley problem as a thought experiment; however, because they may make meaningful differences in the outcomes of these collisions, they may be (though are not always) relevant to an AV’s decisions. These are
empirical concerns, but important ones in knowing what options are
available to an AV, which at least on some accounts is a precondition
for having good beliefs about what the AV ought to do.
Finally, there is a metaphilosophical problem around how we do
the ethics of AVs under conditions of risk. This arises from, but is not totally derivative of, the first two problems above. Making decisions about the ethics of AVs requires knowing what, philosophically, is at stake in decisions around AVs. But it also
requires empirical knowledge of the conditions under which those
AVs will make those decisions. This requires a form of collaboration
that is not common, between philosophers and empirical
researchers. While our previous work provides a model to emulate, it
does not solve larger metaphilosophical questions about how
philosophers should engage with practical and design processes.
These kinds of risks are important because they allow us to
loosen our three assumptions around AVs. We, with sufficient work,
no longer need to make a binary distinction between autonomous
and human-driven cars, and we can accept a range of levels of
autonomy—levels that exist but are typically eschewed in debates
about the ethics of AVs. We can further deal with important temporal
components of the deployment of AVs, from the near-future scenario
in which full autonomy is available only to a handful of cars on the
road, to the potential future in which all or nearly all cars are AVs.
Finally, we can deal with questions about what kinds of information
are necessary, and how decision-making might permissibly proceed
for vehicles operating with different kinds of data.

Moral Uncertainty in AVs


The potential inroads into risk described above have to do with the
real-world conditions under which we do AV ethics. These kinds of risk
include the nature, number, and identity of objects in collisions
(human, animal, inanimate). However, there are other, immediately
important issues that the ethics of AVs ought to address and that
raise philosophical problems of their own.
A preliminary and likely imminent issue is how an AV should
distribute risk in contexts where other AVs have divergent moral
commitments. Human drivers can be uncertain about how other
human drivers will behave. AVs governed by ethical algorithms could
be more predictable than human-driven vehicles,17 especially if the
same ethical algorithm is used across all cars of a particular make.
AVs may be able, in some circumstances, to use this information to
coordinate with other AVs and to decide which ones to help or try to
stop, and so on. But this need not always be the case.
To illustrate, let us first imagine a case where there is no
uncertainty about the drivers’ ethical codes. Let’s simplify further and
focus on two kinds of moral algorithm: “selfish” drivers that prioritize
the drivers’ (and their passengers’) well-being, and “selfless” drivers
that always aim to minimize total harm among all road users.
Suppose that driver A finds herself in an unavoidable crash with a
selfless driver B and a selfish driver C. She can only control which
vehicle she hits, and the driver of the vehicle she hits is likely to be
seriously injured or killed. Let’s further assume each car contains a
single occupant. A selfless, harm-minimizing driver A will hit the
selfish driver, at least if she believes that whichever driver she
spares will go on to face ethical situations of their own. In those
future cases, the selfless driver will act to minimize harm, but the
selfish driver may not.
What if driver A is selfish and faces the same choice? She, too,
can be expected to spare the selfless driver. After all, she might run
into the driver she spares in the future, and she can expect to fare
better in an encounter with a harm minimizer than in one with
another person looking out for herself. It is better for the selfish
driver to be surrounded by selfless drivers, in other words, and so
selfish drivers, too, will pick off other selfish drivers whenever a
choice between selfless and selfish is forced.
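
A toy calculation makes the structure of this reasoning explicit. The
numbers below are simply stipulated for illustration, not drawn from any
model in the literature: whichever driver is hit now, the relevant
difference lies in the expected harm the spared driver will go on to
impose.

# A toy sketch (stipulated numbers) of why a forced choice favors
# sparing the selfless driver: the harm now is the same either way,
# but the expected downstream harm differs with who is spared.

P_FUTURE_DILEMMA = 0.5    # chance the spared driver later faces a forced choice
HARM_IF_MINIMIZER = 1.0   # expected harm a harm-minimizing driver then imposes
HARM_IF_SELFISH = 3.0     # expected harm a selfish driver then imposes
HARM_NOW = 5.0            # harm to whichever driver A hits in the present crash

def expected_total_harm(spared_is_selfless: bool) -> float:
    """Harm now plus expected downstream harm caused by the spared driver."""
    downstream = HARM_IF_MINIMIZER if spared_is_selfless else HARM_IF_SELFISH
    return HARM_NOW + P_FUTURE_DILEMMA * downstream

print("Spare the selfless driver:", expected_total_harm(True))    # 5.5
print("Spare the selfish driver:", expected_total_harm(False))    # 6.5

The same comparison goes through whether A tallies harm to everyone (the
selfless case) or only her own expected exposure in future encounters
(the selfish case); only the interpretation of the harm values changes.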
It’s unlikely that a human driver has ever been in a situation
exactly like this. But because ethical algorithms for AVs may in
principle be easier to discover, AVs may be in a position to make
these kinds of decisions more often. The actual ethical algorithms
we use to govern AVs will likely be complex, but so long as some
AVs follow more “selfish” algorithms (ones that favor the interests of
their owners and passengers)18 and some AVs follow more “selfless”
algorithms (e.g., ones that try to minimize harm):
(a) AVs will have reason to distribute risk from the “selfless” AVs to the “selfish”
ones. That is, regardless of one’s culpability or safety otherwise, and if we
believe that there is some room for reasonable disagreement about one’s
partial duties to oneself, selfish drivers will always be prioritized for crashes.
Thus, even if one is an otherwise very risk-averse but selfish driver, one will
be potentially targeted by other vehicles in cases where a choice is forced.
(b) Some AVs will have reason to deceive other AVs about which ethical
algorithms they are following (or to provide incentives to encourage other AVs
to spare them). That is, selfish drivers will have an incentive, where
possible, to present or code their vehicles as selfless even if they ultimately
behave selfishly. It will behoove drivers, and potentially car
companies, to say one thing about the ethics of their vehicles but to do
quite the opposite.

The second problem here becomes more acute because, even if we have
some kind of standardized signal that takes the ambiguity out of
communication between vehicles, there may still be uncertainty about
the veracity of those signals. Anyone who has spent time on the
Internet will understand the perils of a network that relies on trust
among a large number of anonymous actors. But these actors are now
cars weighing thousands of pounds, traveling at speeds fast enough to
kill humans (individually or in groups).
An AV’s ability to accurately judge how likely it is that other AVs
follow specific ethical algorithms will also be important in cases
where it must coordinate with other AVs to achieve the best
outcome. A harm-minimizing algorithm will favor collective AV
actions like grouping together to act as an enormous brake for an
out-of-control truck, or getting out of the way when an AV
needs to get its driver to an emergency room.19 No single AV could
do this on its own, and it would increase the total risk of harm for a
single AV to attempt it. So harm minimizers will need to broadcast
that they are harm minimizers and will need to determine if other
AVs are, too. If the AVs are connected through wireless
communication, this might be achieved easily. But if they’re not, a
harm-minimizing AV will still have reason to gather evidence about
the other AVs’ algorithms. Either way, this kind of signaling raises
questions about how we tell whose moral commitments are whose in
the AV world, and how we ensure trust and compliance on the road
between these different ethical AVs.
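
When no trusted wireless channel is available, estimating another AV's
algorithm from its observed behavior is, at bottom, an inference
problem. A minimal sketch is below; the prior, the likelihoods, and the
sequence of observations are invented for the illustration rather than
taken from any proposal in the literature.

# A minimal Bayesian sketch (invented numbers) of how a harm-minimizing
# AV might update its credence that a nearby AV also runs a
# harm-minimizing algorithm, based on whether its maneuvers look cooperative.

PRIOR_MINIMIZER = 0.5          # initial credence in "other AV is a harm minimizer"
P_COOP_GIVEN_MINIMIZER = 0.8   # assumed chance a minimizer behaves cooperatively
P_COOP_GIVEN_SELFISH = 0.2     # assumed chance a selfish AV does so

def update(prior: float, observed_cooperative: bool) -> float:
    """One Bayesian update on the harm-minimizer hypothesis."""
    like_min = P_COOP_GIVEN_MINIMIZER if observed_cooperative else 1 - P_COOP_GIVEN_MINIMIZER
    like_sel = P_COOP_GIVEN_SELFISH if observed_cooperative else 1 - P_COOP_GIVEN_SELFISH
    return like_min * prior / (like_min * prior + like_sel * (1 - prior))

credence = PRIOR_MINIMIZER
for cooperative in [True, True, False, True]:   # a short sequence of observed maneuvers
    credence = update(credence, cooperative)
print(f"Credence that the other AV is a harm minimizer: {credence:.2f}")

Whether such inferences could ever be reliable enough to ground the
coordination described above, and who would be accountable when they
fail, is precisely the kind of question that signaling between
differently programmed AVs raises.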

Broad Social Impacts


Our second case concerns how an AV should distribute risk among a wider
segment of society when interacting with hazardous materials.
Consider how an AV might "interact" with hazardous materials. It
might be used to transport them, and it might find itself in a
potential crash scenario with a vehicle transporting them.20 We can
expect there to be extra safety precautions for AVs allowed to
transport hazardous materials. Let’s call an AV allowed to carry
hazardous materials a “Hazmat AV.” Some potential safety
precautions for Hazmat AVs might include the following:

• mandatory and specially trained "drivers" or "monitors," at least at the
beginning (perhaps as an addendum to the US Department of Transportation's
Hazardous Materials Regulations, 49 CFR 173.134, or related regulations in
other jurisdictions);
• tracking mechanisms giving authorities access to the Hazmat AV’s location
and status;
• symbols, markings, or wireless signals to alert other vehicles to be extra
cautious, including those that are machine-interpretable such as a “Hazmat
QR code”; and
• designs allowing the Hazmat AV to make different decisions from ordinary
AVs in dangerous situations (e.g., collisions, weather events, attempted
hijacking).

These possibilities raise questions about how Hazmat AVs should be
designed and regulated. But let's consider a case where an AV
encounters a Hazmat AV and must decide how to distribute risk
across society. Consider the following example:
Pandemic: An unoccupied AV must swerve to the left or right. If it swerves
left, it will probably crash into what it knows to be an unmanned Hazmat AV,
knocking it into a lake. If it swerves right, it will probably crash into a vehicle
containing several passengers. There will be no immediate injuries or deaths
if it hits the unoccupied Hazmat AV, and almost certain deaths and serious
injuries if it hits the other vehicle. So it hits the Hazmat AV. But this causes a
recombinant flu virus21 to be released into the environment, infecting the
surrounding poultry and triggering a pandemic in humans which wipes out 15
percent of the world’s population.

In this case, the AV faces a situation in which the aggregate harm
of permitting one kind of collision diverges strongly from the
immediate implications of that collision. This kind of case is again
rare but plausible, given that hazardous substances are routinely
shipped by road and may be detained, may crash, or may even be stolen.22
Moreover, incredibly hazardous biological materials may be
transported by accident, as in 2014, when it was revealed that the
USDA had received highly pathogenic avian influenza samples that had
contaminated its requested low-pathogenicity samples.23 On the face of
it, it seems that, given the extreme stakes, the AV should collide
with the passenger vehicle rather than the Hazmat AV.
Consider, then, either of the next two scenarios:

Possible Hazard I: The same as Pandemic, except that this time the AV
determines that, if all the usual safety precautions have been taken, then
there’s still a very small chance that any dangerous materials will leak into the
lake even if the Hazmat AV falls in.
Possible Hazard II: The same as Pandemic, except that this time the AV
is sure that dangerous materials will end up in the lake if it hits the unmanned
AV, but it’s uncertain whether it is a Hazmat AV, or whether the materials it
might carry are dangerous enough to warrant seriously injuring several people.

It’s much less clear what the AV should do when facing this kind of
uncertainty. One possibility is to design it to do a cost-benefit
analysis and then to act so as to maximize expected well-being. But
we might also want it to give special priority to its own passengers,
at least if it has any. And it may be difficult to determine exactly
how it should do the analysis. These are incredibly rare but
potentially high-impact events. When dealing with probabilities so
small and consequences so large, slight differences in its approach
could lead to noticeably different decision-making.
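
To see how sensitive such an analysis can be, here is a toy
expected-value sketch in which every figure is invented; nothing below
reflects actual release probabilities or harm estimates.

# A toy expected-value sketch (all figures invented) of a Possible
# Hazard-style choice: hit the (possible) Hazmat AV, risking a
# catastrophic release, or hit the passenger vehicle, causing
# near-certain but bounded harm. Tiny shifts in the release
# probability flip the recommendation.

HARM_PASSENGER_CRASH = 4        # e.g., four serious injuries or deaths
HARM_RELEASE = 1_000_000_000    # stipulated harm of a pandemic-scale release

def expected_harm_of_hitting_hazmat(p_release: float) -> float:
    """Expected harm if the AV hits the (possible) Hazmat AV."""
    return p_release * HARM_RELEASE

for p_release in (1e-11, 1e-10, 1e-9, 1e-8):
    ev = expected_harm_of_hitting_hazmat(p_release)
    choice = "hit the Hazmat AV" if ev < HARM_PASSENGER_CRASH else "hit the passenger vehicle"
    print(f"p(release) = {p_release:.0e}: expected harm {ev:>6.2f} -> {choice}")

On these stipulations, the recommendation flips somewhere between a
one-in-a-billion and a one-in-a-hundred-million chance of release,
which is exactly the range where an AV's estimates are least likely to
be trustworthy.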
Importantly, the cost of avoiding these incidents could be high
enough to deter self-interested firms from responding to them. In
the case of Possible Hazard II, it is foreseeable that a manufacturer
could take the time to develop a contingency to detect a Hazmat AV
with very high confidence and make sure that their vehicles always,
or nearly always, respond appropriately. However, the costs of doing
so could be prohibitively high for the manufacturer. Their cars might
be slower in navigating terrain (to allow more time to notice and
respond to Hazmat AVs or similar kinds of threats); or the additional
development cost for a firm might reduce their competitiveness. In
either case, individual manufacturers have few incentives to respond
to Possible Hazard I or II, especially if the probability that one of their
vehicles will encounter a Pandemic case is very low.24
We’ve considered cases in which an AV has to decide whether to
crash into a (potential) Hazmat AV. We can also consider cases in
which the crash is unavoidable, but in which the AV must decide
how to crash. For example:
Town or Ocean: A Hazmat AV is transporting a large amount of toxic waste
along a road that runs along the edge of the ocean. The AV is a long truck
equipped with symbols and flashing lights to warn other vehicles to keep their
distance. But rain has made the road slippery and the Hazmat AV has started to
skid out of control. A passenger AV rounds a corner to find the truck skidding
toward it perpendicular to the road. The passenger AV can either swerve to
the left or to the right. Both maneuvers are expected to put its own
passengers at the same amount of risk. But if the passenger AV swerves to the
left, the truck will likely end up falling off the road into the ocean. And if
it swerves to the right, the truck will likely end up crashing into the main
street of a small town.
The waste is expected to spill out either way. It will be easier to collect,
contain, and dispose of the waste if it ends up in town. But if it ends up there,
several people are likely to die from exposure or from drinking contaminated
water. If it ends up in the ocean, no one will die from it directly. But it will
devastate the ecosystem for hundreds of miles, and the town and a much
larger area will suffer economically and from higher rates of illness for years.
If the AV is able to make this kind of assessment of the situation, what should
it do? Or, supposing the Hazmat vehicle has made the assessment, what
should it signal the AV to do?

There could also be cases in which a passenger AV or its passengers
become contaminated. For example:

Contaminated AV: A passenger AV arrives at the scene of an accident. Its
passengers get out to try to help, find an unconscious injured person, and
bring the person into the AV to take them to the nearest hospital. (Perhaps
they’ve been on the phone with emergency services and that’s what they’ve
been told to do.) But the AV, meanwhile, has received a message from one of
the vehicles in the crash: a Hazmat AV. The passenger AV learns that the
Hazmat AV was carrying a dangerous virus and that no one should leave the scene of
the crash for any reason. Its passengers instruct it to take them to the
hospital anyway. What should it do?

We’ve focused on passenger AVs because we’ve been trying to show
that we may eventually want them to be designed to make wide-
scope risk distribution decisions that take more than the well-being
of their passengers and other road users into account. But we have
even more reason to want AVs carrying hazardous materials to make
good wide-scope risk distribution decisions, and they’d face more
important ones more often. For example, the routes passenger AVs
choose may determine which neighborhoods have more traffic and
noise pollution and higher accident rates. So the route a passenger
AV chooses will determine how some small risks are distributed
among a wider group of people than those who are on the road at
the time. An AV transporting hazardous materials also has to choose
which routes to take. So it also determines how similar risks of small
harms are distributed. But in addition, there’s a very small risk of
great harm to all the neighborhoods it passes through.
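
As a rough illustration of how route choice distributes these risks,
consider the following sketch; the populations, per-trip risks, and
release probabilities are all stipulated for the example and correspond
to no real data.

# A rough sketch (stipulated numbers) of route choice as wide-scope risk
# distribution: each route is scored by the small everyday risks it
# imposes on the neighborhoods it passes through, plus, for a Hazmat AV,
# a tiny chance of a catastrophic release in each neighborhood.

routes = {
    # route name: list of (neighborhood population, per-trip accident risk)
    "arterial": [(20_000, 1e-6)],
    "residential": [(3_000, 3e-6), (2_000, 3e-6), (2_000, 3e-6)],
}

P_RELEASE = 1e-7        # per-neighborhood hazmat release probability (stipulated)
HARM_PER_PERSON = 1.0   # expected harm per exposed person on release (stipulated)

def route_risk(segments, hazmat: bool) -> float:
    """Total expected harm a single trip imposes along a route."""
    total = 0.0
    for population, accident_risk in segments:
        total += population * accident_risk                   # ordinary traffic and accident risk
        if hazmat:
            total += P_RELEASE * population * HARM_PER_PERSON  # low-probability, high-impact term
    return total

for name, segments in routes.items():
    print(f"{name}: passenger {route_risk(segments, False):.4f}, "
          f"hazmat {route_risk(segments, True):.4f}")

On these stipulated numbers the ranking of the two routes flips between
the passenger AV and the Hazmat AV, which is one way the wide-scope
question differs between the two.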

Conclusion
In this chapter, we describe an evolution in thinking around the
ethics of AVs. We identify current debates about AVs and describe
conceptual, empirical, and metaphilosophical problems that arise
with the current focus on trolley-like cases of risk in AVs. We then
show two cases in which deeper philosophical inquiry into AV
behavior might shed new light on applied problems in the
development and deployment of these technologies. Like trolley-style
problems, however, these are problems that would benefit from
close, interdisciplinary collaboration between philosophers and
empirical researchers to model and examine them in a
range of contexts.

Notes
1. We set aside what precisely counts as “autonomy.” See Society of
Automotive Engineers, “J3016B: Taxonomy and Definitions for Terms Related
to Driving Automation Systems for On-Road Motor Vehicles—SAE
International,” June 15, 2018.
https://www.sae.org/standards/content/j3016_201806/.
2. Barbara H. Fried, “What Does Matter? The Case for Killing the Trolley
Problem (or Letting It Die),” The Philosophical Quarterly 62, no. 248 (July 1,
2012): 505–29. https://doi.org/10.1111/j.1467-9213.2012.00061.x.
3. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double
Effect,” in Virtues and Vices and Other Essays in Moral Philosophy, 19–32
(New York: Oxford University Press, 1993).
4. Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph
Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, “The Moral
Machine Experiment,” Nature 563, no. 7729 (November 2018): 59–64.
https://doi.org/10.1038/s41586-018-0637-6.
5. Sven Nyholm and Jilles Smids, “The Ethics of Accident-Algorithms for Self-
Driving Cars: An Applied Trolley Problem?,” Ethical Theory and Moral Practice
19, no. 5 (July 2016): 1275–89. https://doi.org/10.1007/s10677-016-9745-2;
Noah J. Goodall, “Away from Trolley Problems and Toward Risk Management,”
Applied Artificial Intelligence 30, no. 8 (November 2016): 810–21.
https://doi.org/10.1080/08839514.2016.1229922.
6. Johannes Himmelreich, “Never Mind the Trolley: The Ethics of Autonomous
Vehicles in Mundane Situations,” Ethical Theory and Moral Practice 21, no. 3
(May 2018): 669–84. https://doi.org/10.1007/s10677-018-9896-4.
7. E.g., Michael Weisberg, Simulation and Similarity: Using Models to
Understand the World (New York: Oxford University Press, 2013).
8. Geoff Keeling, “Why Trolley Problems Matter for the Ethics of Automated
Vehicles,” Science and Engineering Ethics 26, no. 1 (February 1, 2020): 293–
307. https://doi.org/10.1007/s11948-019-00096-1.
9. Heather M. Roff, “The Folly of Trolleys: Ethical Challenges and Autonomous
Vehicles,” Brookings, December 17, 2018.
https://www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-
and-autonomous-vehicles/.
10. Cf. Judith Jarvis Thomson, “The Trolley Problem,” The Yale Law Journal 94,
no. 6 (1985): 1395. https://doi.org/10.2307/796133.
11. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double
Effect,” Oxford Review 5 (1967).
12. Fritz Allhoff, Nicholas Greig Evans, and Adam Henschke, “Not Just Wars:
Expansions and Alternatives to the Just War Tradition,” in The Routledge
Handbook of Ethics and War, edited by Fritz Allhoff, 1–8 (New York:
Routledge, 2013).
13. Lara Buchak, “Why High-Risk, Non-Expected-Utility-Maximising Gambles Can
Be Rational and Beneficial: The Case of HIV Cure Studies,” Journal of Medical
Ethics 43, no. 2 (February 1, 2017): 90–95.
https://doi.org/10.1136/medethics-2015-103118.
14. A. Bjorndahl, A. J. London, and Kevin J. S. Zollman, “Kantian Decision Making
under Uncertainty: Dignity, Price, and Consistency,” Philosophers’ Imprint 17,
no. 7 (April 2017): 1–22.
15. Seth Lazar and Chad Lee-Stronach, “Axiological Absolutism and Risk,” Noûs
53, no. 1 (March 2019): 97–113. https://doi.org/10.1111/nous.12210.
16. Pamela Robinson et al., “Modelling Ethical Algorithms in Autonomous Vehicles
Using Crash Data,” IEEE Transactions on Intelligent Transportation Systems
(May 2021), doi:10.1109/TITS.2021.3072792.
17. And they might also be less predictable, depending on the method of
developing an algorithm and its capacity to change over time. Many if not
most original equipment manufacturers—in the main, standard auto
companies—rely on formal methods to develop their algorithms. These
algorithms are predictable in the sense that their program is transparent, and
while it is possible to not test them adequately, they are in principle
understandable and predictable. Deep learning algorithms, however, and in
particular the development of algorithms through neural nets, provide
behavior that is interpolated from existing data. They can be very
sophisticated but are largely (though not exclusively; see Kiri L. Wagstaff and
Jake Lee, “Interpretable Discovery in Large Image Data Sets,”
arXiv:1806.08340 [2018]) opaque in the sense that it is not possible to
know the exact form of the algorithm—they are sometimes called “black box”
algorithms. In the case of neural nets, emergent conditions could result in an
asymptotic, unpredictable response that diverges strongly from human
expectations or the data set.
18. https://www.businessinsider.com/mercedes-benz-self-driving-cars-
programmed-save-driver-2016-10
19. E.g., Charlie Osborne, “Tesla’s Autopilot Takes the Wheel as Driver Suffers
Pulmonary Embolism,” ZDNet. https://www.zdnet.com/article/teslas-autopilot-
takes-the-wheel-as-driver-suffers-pulmonary-embolism/.
20. It might also find itself about to crash into a facility handling hazardous
materials, but we won’t discuss this or other possibilities here.
21. See, e.g., Evans, Lipsitch, and Levinson (2016).
22. Lisa Brown, “Truck Carrying Radioactive Material Found after It Was Stolen in
Mexico,” NACCHO, December 6, 2013.
https://www.naccho.org/blog/articles/truck-carrying-radioactive-material-
found-after-it-was-stolen-in-mexico.
23. Centers for Disease Control and Prevention, “Report on the Inadvertent
Cross-Contamination and Shipment of a Laboratory Specimen with Influenza
Virus H5N1,” Atlanta, GA, August 2014.
https://www.cdc.gov/labs/pdf/InvestigationCDCH5N1contaminationeventAug
ust15.pdf.
24. The formal demonstration of these kinds of problems, and their ethical
significance, can be found in Lipsitch, Evans, and Cotton-Barratt (2016).
2
Autonomous Vehicles, the Badness of Death,
and Discrimination
David Černý

Introduction
While autonomous vehicles (AVs) promise a number of benefits,
introducing them into traffic may also lead to some negative
consequences. I will call the benefits “positive factors” and the
negative consequences “negative factors.” From the ethical point of
view, it is important that the positive factors prevail by far over the
negative ones, as this makes it possible to postulate the following
thesis regarding the external justification of introducing AVs into
road traffic:

External justification of introducing AVs to road traffic (EXT). From
the ethical point of view, introducing AVs into everyday traffic is justified by
the prevalence of positive factors over negative ones.

EXT implies that if positive factors outweigh negative ones, we have
good (and even convincing) reasons not only to introduce AVs into
traffic but also to strive to proceed with that introduction as fast as
possible. However, the fastest introduction of AVs into traffic is
predicated on the nonexistence of countervailing deontological
constraints. Therefore, it is essential to demonstrate that these
possible deontological constraints, regarding, for example, fair rules
for distributing harm or the issue of discrimination, can be resolved in a
principled way. Setting aside problems of a technical character, it is
necessary to address the ethical rules by which using AVs ought to