
As it is my dream job to be a computer scientist, I will probably encounter a lot of difficult

problems. One of the most difficult problems nowadays is the ethics of autonomous cars.

Responsibility when fully automated vehicles crash is largely uncharted territory, as the
technology is still being developed and the law is trying to keep pace with it.

There are clear risks involved with self-driving cars even with human back-ups inside the
vehicle. In 2018, Elaine Herzberg, 49, was struck and tragically killed in Tempe, Arizona, in
what was the first pedestrian death involving a self-driving car, and there have also been
multiple other accidents.

In the case of the collision that killed Herzberg, the blame was divided between the safety
driver, Uber, the self-driving car, the victim, and the state of Arizona.

In a new study from Columbia University, researchers tackled the problem of liability in a
collision involving a self-driving car. Who is at fault - the driver, the car, the manufacturer, or
someone else? The researchers developed a game-theory model of the interactions between
human drivers, the self-driving car manufacturer, the car itself, and lawmakers. Their goal
was to find the optimal liability rule while ensuring that no party takes advantage of
another.

They found that the human “drivers” of self-driving cars put a good deal of trust in the
“intelligent” car, going so far as to take more risks. Dr. Xuan (Sharon) Di, lead author of the
paper, says, “We found that human drivers may take advantage of this technology by
driving carelessly and taking more risks, because they know that self-driving cars would be
designed to drive more conservatively."
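
To make that moral-hazard finding concrete, here is a minimal sketch in Python, with invented numbers and in no way the researchers' actual model, of why a human might rationally drive more riskily alongside a conservatively programmed car, and how a liability rule changes the incentive:

# Toy illustration (not the Columbia model): if the self-driving car is
# programmed conservatively and the human bears only a small share of the
# accident cost, risky driving becomes the human's best response.
# All numbers below are invented for the sketch.

# Crash probability for each (human behaviour, AV software style) pair.
crash_prob = {
    ("careful", "conservative"): 0.01,
    ("careful", "assertive"):    0.03,
    ("risky",   "conservative"): 0.05,
    ("risky",   "assertive"):    0.20,
}

ACCIDENT_COST = 100.0   # total cost of a crash (arbitrary units)
RISKY_BENEFIT = 3.0     # convenience / time saved by driving riskily

def human_payoff(human, av, liability_share):
    """Benefit of the chosen behaviour minus the human's expected
    share of the accident cost."""
    benefit = RISKY_BENEFIT if human == "risky" else 0.0
    expected_cost = crash_prob[(human, av)] * ACCIDENT_COST
    return benefit - liability_share * expected_cost

def best_response(av, liability_share):
    return max(("careful", "risky"),
               key=lambda h: human_payoff(h, av, liability_share))

for share in (0.1, 0.9):
    print(f"liability share {share}: against a conservative AV "
          f"the human drives '{best_response('conservative', share)}'")

Under these toy numbers a low liability share makes "risky" the human's best response, and raising the share flips it back to "careful" - exactly the kind of incentive an optimal liability rule has to tune.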

Michael Nelson, a risk expert at Eversheds Sutherland in New York, argues that such
assumptions probably won’t wash, legally speaking. “Regretfully, I do think that we’re going
to have to run through the courts, and we’re going to get a very mixed bag because we
always do, and technology always outpaces the law. There’s only so much forethought that
regulation or legislation can provide. Can you imagine the myriad of scenarios that are going
to play out over the next 20 years? I think that the potential threat to getting this
technology out into the field, where it can do good, is the absence of a rigorous risk-transfer
system, with a clarity of law, and a great insurance compensation system in place. I think the
lack of that will impede the introduction of this really important technology, which will save
lives. While we are constantly told by the Silicon Valley geniuses that software will always
and forever make better, safer decisions than a human, it’s clearly not the case. Even the
best software makes mistakes, errors, gets caught in runtime loops. The phone in your
pocket has some of the most sophisticated software ever conceived by man, but you still
occasionally need to reboot it to pick up the wifi signal. Equally, the best facial recognition
software - a relatively simple task of pattern recognition, compared to the immense
complexity of driving a car - is only around 85 per cent accurate and that’s on a really, really
good day.”

In conclusion, Michiel van Ratingen, secretary general of Euro NCAP (the European New Car
Assessment Programme, a voluntary car safety assessment programme), said: “Euro NCAP’s
message is clear - cars, even those with advanced
driver assistance systems, need a vigilant, attentive driver behind the wheel at all times. It is
imperative that state-of-the-art passive and active safety systems remain available in the
background as a vital safety backup.”

Safety experts believe car companies need to take more responsibility for ensuring
consumers don't make mistakes.

"Calling this kind of technology Autopilot… that's very misleading for consumers," says
Matthew Avery of Thatcham Research, a group that tests vehicles on behalf of the UK
insurance industry. He is nonetheless convinced that automation itself has vital safety benefits.

There is already evidence that automatic emergency braking and pedestrian detection
systems are reducing the number of accidents. But more sophisticated systems can take
that process a step further. "What the best systems are doing is integrating lane control,
stopping people veering out of their lane, with braking control and distance control.

"That can really help keep people out of trouble," he says.

------------------ Below is my opinion on “who is to blame” ---------------------------------------------

As things stand right now, the owner is responsible for the actions of his or her property. If
my dog bites you, I am responsible for the actions of my dog. If my lawn
mower slips from my grasp and runs into your car, I am responsible for the actions of my
lawn mower because it's my lawn mower. The theory here is that it's my negligence that
allowed my property to cause injury to others.

In the case of a self-driving car, a court would likely cite my negligence in improperly supervising the
operation of the vehicle. The only way I see Tesla getting roped in is in the event of a
sudden, catastrophic and unpredictable (by the owner) failure causing an accident. If the
battery pack bursts into flames or the sensors all go offline and it flings itself into oncoming
traffic, that's probably on Tesla to some extent, but you'd have to be able to point to the
part that failed and say, "This is the part that caused the accident and the owner/driver
could not have taken reasonable action to prevent it."

Unfortunately, these things will have to be proved out with some legislation and certainly
several court cases to establish a precedent. In my opinion, if the car is completely
unoccupied, I think they'll rule that Tesla Motors was a permissive driver, just as if you'd lent
your car to a friend, and it'll land right back on your personal auto insurance policy.

-------------------------------------------------------------------------------------------------------------------------

This first part is the most important, and it should be the main part of the essay.

The essay should answer all of these questions.

This second part is more of a supplement.


Part 2

------- This part addresses the question “whom should a self-driving car sacrifice in the event
of an accident”, i.e. if it has to choose ----------

In 2016, Rahwan’s team stumbled on an ethical paradox about self-driving cars: in surveys,
people said that they wanted an autonomous vehicle to protect pedestrians even if it meant
sacrificing its passengers but also that they wouldn’t buy self-driving vehicles programmed
to act this way. Curious to see if the prospect of self-driving cars might raise other ethical
conundrums, Rahwan gathered an international team of psychologists, anthropologists
and economists to create the Moral Machine. Moral Machine is an online platform,
developed by Iyad Rahwan's Scalable Cooperation group at the Massachusetts Institute of
Technology, that generates moral dilemmas and collects information on the decisions that
people make between two destructive outcomes. The presented scenarios are often
variations of the trolley problem.

Within 18 months, the online quiz had recorded 40 million
decisions made by people from 233 countries and territories. When the authors analysed
answers from people in the 130 countries with at least 100 respondents, they found that
the nations could be divided into three groups. One contains North America and several
European nations where Christianity has historically been the dominant religion; another
includes countries such as Japan, Indonesia and Pakistan, with strong Confucian or Islamic
traditions. A third group consists of Central and South America, as well as France and former
French colonies. The first group showed a stronger preference for sacrificing older lives to
save younger ones than did the second group, for example.

Azim Shariff, a psychologist at the University of British Columbia in Vancouver, finds this
result interesting because it suggests that the survey really does reveal people’s moral
preferences. “If you assume that places that have a lower level of income inequality have
political policies that favor egalitarianism, this shows that the moral norms that support
those policies are expressed in the way that people play these games.”
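
The aggregation behind such findings can be sketched very simply. The following Python snippet (hypothetical data and field names, not the Moral Machine codebase) shows how individual answers to trolley-style dilemmas could be tallied into a per-country preference for sparing the young:

from collections import defaultdict

# Each record: (country, attribute tested, 1 if the respondent chose to
# spare that group, else 0). The data here is made up for illustration.
responses = [
    ("US", "young", 1), ("US", "young", 1), ("US", "young", 0),
    ("JP", "young", 1), ("JP", "young", 0), ("JP", "young", 0),
]

totals = defaultdict(lambda: [0, 0])   # country -> [spared count, total]
for country, attribute, spared in responses:
    if attribute == "young":
        totals[country][0] += spared
        totals[country][1] += 1

for country, (spared, n) in totals.items():
    print(f"{country}: preference for sparing the young = {spared / n:.2f}")

With 40 million real decisions, the same kind of tally lets the researchers compare countries and cluster them into the three groups described above.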

A driver who veers away from cyclists riding on a curvy mountain road increases her chance
of hitting an oncoming vehicle. If the number of driverless cars on the road increases, so too
will the likelihood that they will be involved in such accidents.

Self-driving cars will save countless lives. Humanity needs them badly: more than 30,000
people die every year in road accidents in the United States alone. Worldwide, it is more
than a million. This is because, it turns out, humans are terrible drivers. The German Federal Statistics
Agency reports that in 2015, 67% of all accidents with injuries to people were caused by
driver misconduct. A 2008 survey by the National Highway Traffic Safety Administration
(NHTSA) even showed that human error played a crucial role in 93% of traffic accidents in
the US. Machines, by contrast, are consistent, calculating, and incapable of getting drunk,
angry, or distracted.
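
As a back-of-the-envelope check (my own arithmetic, not a figure from the cited sources, and it assumes fatalities scale with crash counts, which is a simplification):

# Upper bound on US lives saved per year if autonomous vehicles eliminated
# the 93% of crashes NHTSA attributes to human error.
US_ROAD_DEATHS_PER_YEAR = 30_000   # figure quoted in the text above
HUMAN_ERROR_SHARE = 0.93           # NHTSA 2008 survey, quoted above

print(f"Potential lives saved per year: "
      f"{US_ROAD_DEATHS_PER_YEAR * HUMAN_ERROR_SHARE:,.0f}")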

SOURCES:

https://www.researchgate.net/publication/301293464_The_Social_Dilemma_of_Autonomous_Vehicles

https://www.nature.com/articles/d41586-018-07135-0

https://link.springer.com/chapter/10.1007/978-3-662-48847-8_4

https://newseu.cgtn.com/news/2020-02-21/Who-is-to-blame-when-driverless-cars-crash--OdyCWkR8yc/index.html

https://www.irishtimes.com/business/innovation/who-s-to-blame-when-a-self-driving-car-crashes-1.3902166
