The ethical dilemma of self-driving cars

https://www.youtube.com/watch?v=ixIoDYVfKA0

COMPREHENSION QUESTIONS

1. In the first scenario, which option would best protect the autonomous car and its occupants?
A) The car does not swerve and stays in its lane
B) The car swerves toward the large SUV
C) The car swerves toward the motorcyclist
D) None of the above

2. In the second scenario, which option would follow the design principle to minimize harm?
A) The car swerves toward the motorcyclist with the helmet
B) The car does not swerve and stays in its lane
C) The car swerves toward the motorcyclist without the helmet
D) None of the above

3. If a robot car reacts exactly as a human would in a crash scenario, which would be true?
A) A random-number engine is needed, since some human actions are random
B) The same actions should result in the same legal consequences
C) Impossible--the sensors on a robot car will enable it to avoid crashes
D) None of the above

4. What other ethical dilemmas are mentioned here about autonomous cars?
A) Determining the value of your life versus the lives of others
B) Whether it should take a parking spot away from a human driver who's looking for one
C) Whether advertisers may influence the route-selection of the car
D) None of the above

5. How are these thought experiments like science experiments?
A) They can safely recreate actual scenarios, without any risk of injury
B) They aren't necessarily meant to recreate actual scenarios
C) They're not alike--one involves lab equipment, and the other does not
D) None of the above

VOCABULARY

a thought experiment

to barrel down the highway

avoid a collision

make a decision

swerve left/right

an SUV

to prioritize your safety

to minimize danger

sacrifice sb.'s life

take the middle ground

high passenger safety rating

drive in manual mode

instinctual reaction

deliberate and premeditated decision/homicide

without forethought or malice

reduce fatalities

remove human error

a driving equation

road congestion

harmful emissions

outcomes determined in advance

morally murky decisions

penalize the responsible motorist

act irresponsibly

ethical considerations

underlying design/algorithm

discriminate against sb.

through no fault of their own

conscientiously

DISCUSSION QUESTIONS

1. How should the car be programmed if it encounters an unavoidable accident?

2. Which design principle for crash decisions seems the most ethical to you and why: to minimize
harm, protect the driver over other drivers, treat everyone equally, hand back control to the
driver, follow the law, or something else?

3. If you had to choose between a car that would always save as many lives as possible in an
accident or one that would save you at any cost, which car would you buy?

4. The video seems to assume that the choice between different victims is inevitable. What is your
position on that?

5. What happens if the cars start analysing and factoring in the passengers of the cars and the
particulars of their lives?

6. When an ethical judgement is needed, who should get to decide how robot cars are
programmed: engineers/programmers, manufacturers/company executives, government,
owners/occupants of the vehicles, the general public, philosophers or someone else?

7. For the crash scenarios we considered, recall the "target" you said the robot car should swerve
into. Would your answer change if you or your loved ones were that target, instead of anonymous
strangers? What does that say about ethics?

8. To what extent is a random decision better than a predetermined one to minimize harm?
