
Mauga 1

Nathaniel Mauga
CST Writing Lab
18 February 2017
The Modern Trolley Dilemma

Imagine driving down the highway with a passenger. You're following all traffic laws and driving at the speed limit, when suddenly a man appears in the middle of the road. With no time to brake to a stop before him, you have two choices: hit the man head on, or swerve into a wall, killing both you and your passenger. What do you do, and why? Regardless of your choice, someone will end up dying. This morbidly depressing scenario thankfully doesn't happen to most drivers. In fact, chances are, one will never have to make such a decision in the average one million miles one will drive in a lifetime (FHWA, n.d.). Because of this, it's likely, and understandable, that most people don't consider this ethical dilemma of any importance; that is, until you factor in driverless cars. Driverless cars have already driven well over two million miles on public roads as of today. That's already twice the number of miles an average person will drive in a lifetime, and these cars haven't even been sold to the mass public. Considering the colossal number of miles driverless cars will accumulate in the future, there's no doubt a machine will one day grievously need to choose between the two: killing a pedestrian, or sacrificing itself and its passenger. It's important to program these machines ahead of time to deal with every scenario, but how do we prepare for these situations? How do we program a robot to make ethical choices that humans themselves have trouble making? How do we define, in a concrete way, how to choose between two sets of lives? As it turns out, there is debate on this topic right now. This argument will not only shape the industry's future, but will have a huge impact on how humanity perceives, defines, and values human life.

Self-Driving Cars, Right Around the Corner


Mankind has been dreaming of autonomous cars for a while now; in fact, as far back as 1940, author Norman Bel Geddes predicted in his book Magic Motorways that "...these cars of 1960 will have in them devices which will correct the faults of human beings as drivers" (Geddes, 1940, p. 56). Little progress was made toward the technology until the early 2000s, when the US government organized the DARPA Grand Challenges. These trials, issued by the Department of Defense, set out to challenge teams to program and build robot cars to drive in various conditions completely unmanned (DARPA, n.d.). Many teams were inspired by these challenges, and over the course of 10 years, numerous tech and auto companies, including Tesla, Mercedes-Benz, Google, and even Apple, have picked up the technology and have seriously set out to provide driverless cars to the public as early as 2020 (Muoio, 2016). Tesla CEO Elon Musk has even stated that the latest Tesla vehicles have the hardware and software needed to be fully autonomous, and that after many more test miles, the vehicles' full autopilot mode will be released in an over-the-air software update (Lambert, 2016). The same company recently suffered its first, in fact the world's first, self-driving-related fatality. Ohio resident Joshua Brown was killed when his Tesla Model S crashed into a turning truck while on autopilot (Singhvi & Russell, 2016). Although this incident contributes little to the ethical dilemma at hand, it is nevertheless a testament to the widespread use of self-driving technology and the experience the industry is rapidly gaining. It's clear that driverless cars are around the corner from being on the market, and they will very soon have millions of miles of experience on public roads, only adding to the urgency of this ethical issue. It seems that in cases like this, technology has evolved, and is evolving, faster than society: the more advanced technology becomes, the more topics in ethics need to be addressed, and the more the hypothetical needs to be taken seriously.


The Trolley Dilemma

Scientifically referred to as the trolley dilemma, this classic philosophical thought experiment describes the circumstance above. In 1967, British philosopher Philippa Foot proposed this hypothetical scenario, which has since been used in discussions of various ethical paradigms (Crockett, 2016). The thought experiment, put very simply, goes like this: a trolley is lethally headed towards five workers who cannot move in time to avoid it, and you have a choice to pull, or not pull, a lever that would divert the trolley onto another track with only one worker who cannot move in time to avoid it (D'Olimpio, 2016). What do you do, and why? Participants in this thought experiment make decisions based on numerous ethical paradigms. One of the most popular ways participants solve the problem is with utilitarian ethics (Crockett, 2016), that is, by making the choice that does the greatest good for everyone involved. Most participants answer that they would divert the trolley into the one worker to save the greatest number of people, thus resulting in the most good done (Crockett, 2016). Although the problem is answered in other ways, there is usually a clear majority, but even the majority seems to depend on how the dilemma is portrayed. For example, given this slightly modified (and simplified) version, participants drastically change their perspectives: instead of being able to switch the path of the trolley, you are now on a footbridge, and can push a large person in front of the trolley, killing him, to save the five workers (D'Olimpio, 2016). The majority of participants asked what they would do, and why, no longer answer with utilitarian motives. Most lean towards a deontological paradigm based on the moral standard that killing is never okay (D'Olimpio, 2016). Participants take issue with killing one to save five in this seemingly logically equivalent scenario. Why do people make such different choices in situations that are so similar? Regardless of the outcome, people seem to shy away from actions that result in directly killing someone, rather than just letting someone be killed. Simply put, pushing someone off a bridge is murder; diverting a trolley into someone is letting someone die. The answer is now clearly based on perspective, on how close one has to get to the actual act of killing. But how does this tie in to driverless cars? Right or wrong doesn't depend on the outcome, but on how the situation is portrayed, thus further complicating the question of how to program cars to make these decisions. Basically, it doesn't necessarily provide a direction for programmers to go, but tells them where not to go when programming ethics into self-driving cars.


So What? Who Cares?

Well, clearly the issue is important because robots do only what they are programmed to do, and always rely on a set of directions for how to deal with every situation they come in contact with. The stakes are high: these robots aren't dealing with just numbers and math; they deal with human beings in potentially lethal environments. So what are the stakes here? This emerging obstacle, which we will call the modern trolley dilemma, has lasting repercussions that will drastically affect large companies, lawmakers, and the lives of anyone who shares public roads.

Tech and Auto Companies

If the tech and auto companies make cars with decision-making protocols that aren't popular, then obviously they will lose a lot of money. So clearly, they have a lot to gain or lose here. However, these stakeholders not only care about making profits, but have other, uplifting values and motives behind pushing this technology with such vigor. Waymo, the company now in charge of Google's self-driving car project, is focused on making safer, more accessible transportation available for everyone; in fact, their name comes from "a way forward in mobility" (Waymo, n.d.). Tesla's Elon Musk boldly attacked critics of driverless cars in his October press conference, in a statement that seems to show some rather inspiring motives: "if you effectively dissuade people from using an autonomous vehicle, you're killing people" (Lambert, 2016). A few of these companies certainly have an interest in the public good as well as their own bottom line, but regardless, business is business, and the largest stake these companies have to lose is their money.

Not much is out there right now on how these companies are going about ethical programming, and understandably so: one wrong statement could bring huge backlash. As a result, not one company has committed to any set of ethical guidelines for its products. At the tech conference SXSW Interactive 2016, Waymo project lead Chris Urmson was asked about the issue directly: what happens if a car has to make some sort of tragic decision? (SXSW, 2016). Urmson admitted it is the second most asked question in the business, and responded in a few different ways, all avoiding a direct answer (SXSW, 2016). "There isn't a good answer," Urmson admits, and continues, "so the way we approach this question is first we make our cars incredibly good defensive drivers" (SXSW, 2016). He then listed the order of priority in which a car tries to avoid hitting things on the road: first pedestrians and cyclists, second other moving things on the road, and lastly static things on the road (SXSW, 2016). But this doesn't answer the question, and is a clear indicator that companies such as Waymo are afraid to jump in on this ethical debate. Again, why? Companies that engage in such sensitive topics have a lot to lose if their opinion isn't shared by a majority, and as of now, there really isn't one shared majority on this sensitive topic.
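Urmson's stated priorities amount to a simple ranked avoidance list. A minimal, purely illustrative sketch in Python (the category names and the numeric encoding are my own assumptions, not Waymo's actual software):

```python
# Illustrative ranking of obstacles by the avoidance priority Urmson
# described: pedestrians and cyclists first, then other moving things,
# then static objects. All names here are hypothetical, not real code.

AVOIDANCE_PRIORITY = {
    "pedestrian": 0,      # highest priority to avoid
    "cyclist": 0,
    "moving_vehicle": 1,  # other moving things on the road
    "static_object": 2,   # lowest priority to avoid
}

def rank_obstacles(obstacles):
    """Sort detected obstacle kinds so the most-protected come first."""
    return sorted(obstacles, key=lambda kind: AVOIDANCE_PRIORITY[kind])

ranked = rank_obstacles(["static_object", "cyclist", "moving_vehicle"])
# cyclists outrank moving vehicles, which outrank static objects
```

Note that such a ranking only says what the car should try hardest not to hit; it still says nothing about what to do when every available maneuver hits something, which is exactly the gap the question exposed.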

Lawmakers

There is an idea, brought on by the digital revolution and the information age, that technology is outgrowing modern law. This is seen in various forms, including the inadequacy of modern law to protect creators from stolen music and media, to protect citizens from loss of digital privacy, and to monitor currency exchange, specifically over crowdfunding websites. Some shortcomings of old law hinder its ability to protect human rights, while others hinder people's ability to practice effective business. The modern trolley dilemma brings its own challenge to modern law, mainly: who is to blame for fatalities in the case of a driverless car accident? If lawmakers don't put the responsibility for fatal accidents on anyone, then what's the incentive for anyone to do anything about it, and more importantly, how do you protect the right to life for the citizens affected?

As it turns out, this pressure on lawmakers has happened before. In 2013, with the invention of armed military drones, some were beginning to see the future complications of robots put in delicate life-or-death situations. These drones were programmed to fly and carry out missions partially, and even completely, unmanned, and were equipped with the ability to kill. Writer Benjamin Wittes at Lawfare posed the question, "will the world be better off if we try to regulate full lethal autonomy, or would we be wiser... to preclude it?" (Wittes, 2013). He offered two possible outcomes: develop a complex set of internal policies allowing full lethal autonomy, or attempt to ban it altogether (Wittes, 2013). Brian Fung steered the issue toward the topic of liability, asking, "what happens when a drone accidentally shoots a civilian? Is the person responsible the designer of the machine, the person who programmed its software, its commander or some other individual?" (Fung, 2014). The privacy of military war strategies and the process of national regulations have made these kinds of questions hard to get answers for, but the case should be different for driverless cars, shouldn't it? Or at least lawmakers should have a bit of experience when dealing with these types of things, right?


Well, in September of 2016 the Department of Transportation published a 116-page formal guideline document for the rules of driverless cars (Love, 2016). Included were guidelines such as: cars must adapt to local laws, be secure from cyberattacks, and be safe in case of a crash, as paraphrased by Dylan Love (Love, 2016). More importantly, the DoT included this guideline: "self-driving cars need to be able to make ethical considerations while they're out on the road, and act on them" (Love, 2016). The vagueness of this rule has, however, received some criticism. Looked at critically, the guideline doesn't actually do anything to solve the issue (of the modern trolley dilemma) but, almost comically, points a finger at it like a child. Regardless of the guideline's effectiveness, it does prove the issue is being thought about, at the very least, and maybe the DoT has decided to refrain from a specific answer for the same reasons the driverless car companies have. If so, lawmakers stand to lose something similar to what companies do: support. That means they will probably keep their opinion to themselves until a more widely accepted answer circulates and society can come to an agreement. But that is what law should be, right: a generally agreed-upon mandate?

The People

Driverless car companies and lawmakers alike each have their own values and opinions, and even have similar stakes when it comes to the modern trolley dilemma. Neither has much to say when it comes to solutions to the problem, very unlike the general public. In fact, public opinion dominates the discussion on this topic right now, and that's because the public has the most to lose: their lives. Due to the variety of people's backgrounds, values, and opinions, there are many different solutions expressed in public opinion. Overall, three things seem to guide the public's thinking when discussing driverless cars: better transportation, assured privacy, and safety; safety meaning not only for the passenger, but for everyone. But what does that mean, safety for everyone?

Researchers at the Massachusetts Institute of Technology set out to discover just that. Enter the Moral Machine, "a platform for gathering human perspective on moral decisions made by machine intelligence, such as self-driving cars" (Moral Machine, n.d.). The program has users judge what they believe is right. It does this through a series of moral dilemmas involving a driverless car; users choose a side, and then compare their responses to those of thousands of others. The dilemmas range from choosing between killing 2 passengers or 5 pedestrians, to killing 1 passenger or 10 animals. The dilemmas also range from having to take an action, or not take an action, to save more. It boils down to an experiment that finds whether users are more utilitarian or deontological in their decision making when it comes to traffic accidents. Iyad Rahwan, a professor at MIT, stated that the social experiment's results indicated that most people lean towards utilitarian ethics and favor saving the most people, even if it means killing the passenger (TED, 2016). However, during his speech at TEDxCambridge 2016, Rahwan expanded on the results: "people want cars to minimize total loss... problem solved, but there is a little catch, when we asked people whether they would purchase such cars, they said absolutely not" (TED, 2016). "They would like to buy cars that protect them at all costs but they want everybody else to buy cars that minimize harm" (TED, 2016). Rahwan uncovers both sides of the coin here, utilitarian and deontological ethics. It also seems we have two new issues now: a social dilemma about social cooperation, and a paradox. The paradox is this: people want to minimize total harm, but by not buying utilitarian cars, they delay the social adoption of such cars, continuing the 35,000 deaths a year caused by traffic accidents. Clearly the modern trolley dilemma comes with many complications, and the general public has met them head on, expressing their values and opinions along the way.

Utilitarian and Deontological Ethics

So, the modern trolley dilemma is contested between utilitarian and deontological ethical paradigms. Defining these terms is necessary in order to understand how to apply them.

Utilitarianism is the ethical paradigm in which the judgment of an action is determined by its overall utility (Driver, 2009). Utility, in this case, is defined as the overall usefulness of an action's consequences; in classical hedonistic (pleasure-based) utilitarianism, usefulness is simplified to the overall pleasure minus the overall pain (Driver, 2009). The rightness of an action depends on this balance. Utilitarianism is derived from consequentialism, the broader ethical theory that an action's ethical value is determined by its consequences (Armstrong, 2003). Jeremy Bentham is credited as the founder of utilitarianism; he originally created the utilitarian theory in 1789 to get government and citizens to question the laws they were bound by (Driver, 2009). His basic premise was: if a law doesn't do any good, then it isn't any good (Driver, 2009). Other contributors to the theory are John Stuart Mill and Henry Sidgwick, and since its inception, the theory has received criticism and has inspired other forms of utilitarianism. One of these is ideal utilitarianism, which considers innate beauty a justification for action, and questions the true value of pleasure and pain (Driver, 2009).

Deontological ethics stands in direct contrast to consequentialism, and thereby utilitarianism. Where utilitarianism focuses on consequences, deontology explicitly does not. Deontological ethics deals with the assessment of choices, much like utilitarianism, but the justification for such choices is determined by whether the action is "required, permitted, or forbidden" (Alexander & Moore, 2016). For example, an action is determined ethical if it is in accordance with the law, with static moral standards (absolutism), or even with religion (divine command theory) (Alexander & Moore, 2016). Someone who makes choices based on deontological ethics will consider standards like: stealing is always wrong, forgiveness is always right, and killing is always wrong; keyword: always. Although deontological ethics encompasses a few different theories, they all converge upon the ideas of Immanuel Kant, the 18th-century philosopher. Kant's idea of judging one's actions by the intent of the acting agent inspires agent-centered deontology (Rohlf, 2016). Kant's notion that using any one person as a means to an end is wrong influences patient-centered deontology (Rohlf, 2016). And Kant's belief that reason is the basis for all motives, and that reason is universal, warrants contractualist ethics, a form of deontology (Rohlf, 2016). Clearly, Kant is the single most influential philosopher when it comes to deontological ethics.

So, in a nutshell, utilitarianism is concerned with the goodness of a choice's consequences, and deontological ethics is concerned with the moral standards to abide by when making a choice. The question now emerges: which of these frameworks should driverless cars be programmed with?

The Choices

Consider utilitarianism. Given this framework, an action is obligated to do the most good for all people. So if a car with utilitarian ethics is carrying one passenger and is on a path that would kill two pedestrians, the car would swerve into a wall, killing its passenger, given that's the only other option. Not only would this satisfy the people's most popular viewpoint for ethical programming, it would result in the fewest casualties, both positive outcomes. However, this could result in the intentional killing of a person. If the car considers the sacrifice of its passenger beneficial to a situation, and takes a conscious action to support that reasoning, then the action is a form of murder. Of course, there are ways to minimize the unpleasantness of the idea. If buyers of driverless cars were notified of this ethical standard, and if there were laws requiring all citizens to abide by such ethical frameworks in driverless cars, then that would at least soften the necessity of the killing, because users would agree to the protocol prior to any tragedy. Another con is the potential for complications over what is "the most good." For example, in the same tragedy, suppose the passenger is a doctor and the pedestrians are two homeless men; wouldn't the most good be saving the doctor, since he has the potential to save more lives? This simple change to the dilemma is one of many, many mutations that could happen. What if the passenger is an elderly person, and the pedestrians are an expecting mother and her infant child? Even scarier consequences of the utilitarian paradigm can be seen if you consider prejudice: what if this ethical dilemma were set in the 1920s, and the passenger was an African American male or a white female, and the pedestrian was a single white male? How would their lives be prioritized and valued? This could pave the way for disastrous situations, depending on how society, or even government, decides to value the lives of different people.
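The utilitarian rule described above, choosing whichever maneuver loses the fewest lives, can be sketched in a few lines. This is a deliberately toy encoding; the maneuver names and casualty counts are hypothetical illustrations, not a real control system:

```python
# Toy utilitarian decision rule: among the available maneuvers, pick
# the one with the fewest expected fatalities. All names and counts
# here are hypothetical, for illustration only.

def utilitarian_choice(options):
    """options maps maneuver name -> lives lost; return the minimizer."""
    return min(options, key=options.get)

# One passenger versus two pedestrians, as in the scenario above:
scenario = {
    "stay_on_course": 2,    # the two pedestrians die
    "swerve_into_wall": 1,  # the single passenger dies
}
print(utilitarian_choice(scenario))  # prints "swerve_into_wall"
```

Notice that every complication raised above (the doctor, the expecting mother, prejudice) amounts to disputing the numbers fed into this function, which is precisely where the danger lies.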

Now, consider a deontological framework. This ethical structure would require any action to be defined by some intrinsic or natural moral standard. For this particular dilemma, "killing is always wrong" would be the deontological foil to the utilitarian framework. In this case, a car would never deliberately, knowingly, or purposefully take an action that would result in the death of a human being. In the case that the car is accidentally on a path that would end up killing someone, it would not take an action to prevent this if it meant killing another, no matter what. This means that even if the passenger is a thief, or a terminally ill, 99-year-old serial killer, and the pedestrian is the United States President, or the sole knower of the cure for cancer, the car would never weigh the lives involved, only the situation. The car would never kill, regardless of the situation. Another con to this framework is much like that of the utilitarian framework: further complications of definition. What if the chances of death are calculated and compared? Suppose the chance of a pedestrian dying is lower than that of the passenger, but there are more pedestrians at risk; what does the car do? How are those chances calculated? The fact that the answers to questions like these are up for debate makes way for abuse by societies, governments, and programmers alike. That seems like another debate entirely, for another time. Consider this pro, however: if cars were programmed with this deontological guideline, then the adoption of driverless cars would come much faster than if they were programmed with utilitarian ethics. People have shown that they would rather buy such vehicles, meaning far fewer traffic-related deaths, sooner. It could even mean the survival of the driverless car industry as a whole, which could mean an almost limitless number of saved lives. Maybe the industry needs to adopt deontological ethics in its driverless programming for its very survival, though the survival of an industry is no concern of ethical guidelines.
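The deontological rule described above, never take an action that directly kills, can also be sketched as code. Again, this is a toy illustration with hypothetical names, not a real system:

```python
# Toy deontological rule: any maneuver that would itself directly cause
# a death is forbidden, no matter how many lives it might save; if no
# permitted maneuver exists, the car refuses to intervene and stays on
# its current course. Hypothetical encoding, for illustration only.

def deontological_choice(options, default="stay_on_course"):
    """options maps maneuver -> deaths the maneuver directly causes.
    Return a maneuver that kills no one, else do not intervene."""
    permitted = [m for m, deaths in options.items() if deaths == 0]
    return permitted[0] if permitted else default

# Swerving would kill the passenger, so the car does not intervene,
# even though staying on course lets two pedestrians be killed:
print(deontological_choice({"swerve_into_wall": 1}))  # "stay_on_course"
```

The contrast with the utilitarian rule is stark: this function never inspects how many lives each maneuver saves, only whether taking it would itself kill, mirroring the act-versus-omission distinction from the footbridge variant of the trolley dilemma.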

The Answer

Taking all the evidence into account, it seems that there is a "best" answer, despite the fact that there really are multiple, understandable ways of solving this issue. Hence why it's such a tough issue: there is no blatantly wrong side, and no glaringly clear solution.

What's wise is being responsible with human lives and preventing the most deaths. So program cars with utilitarian ethics? Actually, a deontological framework would still be best, because of the paradox mentioned earlier. People have expressed their disinterest in buying utilitarian cars, despite their desire to have utilitarian cars on the road. If driverless cars are programmed to save the most lives, but never sell, people will continue to die by the thousands in traffic-related accidents. How many tragic decisions will have to be made by cars in the future? How many lives a year will be put in question by the modern trolley dilemma? Enough to gamble with the survival of self-driving cars as a whole? No, not at all. With 35,000 lives a year lost to traffic accidents, 98% of them caused by human error, society should be doing everything it can to bring driverless technology to the roads as soon as possible, and right now, that means programming with a deontological ethical paradigm.

References
Alexander, L., & Moore, M. (2016, October 20). Deontological ethics. Retrieved from

Armstrong, S. W. (2003, May 20). Consequentialism. Retrieved from

Crockett, M. (2016, December 12). The trolley problem: would you kill one person to save many

others? Retrieved from



DARPA. (n.d.) Grand Challenge Overview. Retrieved from

D'Olimpio, L. (2016, June 2). The trolley dilemma: would you kill one person to save five? Retrieved from


Driver, J. (2009, March 27). The history of utilitarianism. Retrieved from


Fung, B. (2014, May 12). When driverless cars crash, who's to blame? Retrieved from


Geddes, N. B. (1940). Magic motorways. New York, NY: Random House

Lambert, F. (2016, September 11). Transcript: Elon Musk's press conference about Tesla autopilot under v8.0 update. Retrieved from



Love, D. (2016, October 14). The loneliness of the ethical driverless car. Retrieved from

Muoio, D. (2016, August 18). These 19 companies are racing to put driverless cars on the road

by 2020. Retrieved from


Moral Machine. (n.d.) Retrieved from

Rohlf, M. (2016, January 25). Immanuel Kant. Retrieved from

Singhvi, A., & Russell, K. (2016, July 12). Inside the self-driving Tesla fatal accident. Retrieved



SXSW. (2016, March 12). Google self-driving car project | SXSW interactive 2016 [Video file]. Retrieved from

TEDxCambridge. (2016). The social dilemma of driverless cars | Iyad Rahwan |

TEDxCambridge [Video file]. Retrieved from


U.S. Department of Transportation, Federal Highway Administration. (n.d.). Retrieved from

Waymo. (n.d.). Retrieved from

Wittes, B. (2013, January 14). Tom Malinowski ups the game in Lawfare's discussion of killer robots. Retrieved from