
Who is responsible for ensuring that the decisions made by AI are ethical and just?


A driverless car powered by artificial intelligence strikes an elderly man as he
steps off the curb. The AI had to decide whether to strike the man or risk
swerving into oncoming traffic. Who is held legally responsible for the choice the
AI makes? Here's a less dramatic illustration: to whom can you appeal if AI
software rejects your mortgage application? If these technologies are
going to gain the public's trust, accountability is essential. Government
regulation, which some jurisdictions have already adopted, is one option. Thanks
to Europe's stringent data privacy regulations, people have the right to
know why computers have made decisions about them. Nevertheless, it is still
unclear how that law will work in practice. Companies can also create their own
voluntary standards for "algorithmic transparency" and other AI-related ethical
concerns. We'll see whether a solution materialises that can reassure individuals
that the judgements computers render over their lives are right and fair.

How does accountability work when some AI decisions are opaque, even to their programmers?

It's not always possible to decipher the reasoning behind an AI decision.
Artificial neural networks, which analyse enormous amounts of data on clusters
of powerful processors arranged in a fashion that loosely mimics the
connections between neurons in the human brain, are credited with some of
the largest advances that have put modern AI on the map. Neural networks
"teach" computers how to respond correctly to certain inquiries, with the
digital equivalent of thousands of overlapping synapses firing every
millisecond. Hence, even if you had access to the complete source code that
built the AI, it might not tell you anything useful about the errors or biases
the system has amplified. After all, you have no idea how your own brain
determines that the object that just darted in front of your car is a harmless
plastic bag and not a child on a bike. Even the programmers of the AIs that
will power driverless automobiles are unsure exactly how these systems decide
what to do. They only know that when they build the network in a specific way
and feed it data, they get a specific outcome.
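
To see why the source code reveals so little, here is a minimal sketch in Python: a tiny neural network written from scratch and trained on an invented toy task. Everything below - the task, the architecture, the data - is hypothetical and chosen purely for illustration; no real driverless-car system looks like this. The point is only that, after training, the network's "decision procedure" is nothing but a grid of numbers.

import numpy as np

rng = np.random.default_rng(0)

# Invented toy data: 2 input features, binary label (an arbitrary synthetic rule).
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# One hidden layer of 8 units: already 2*8 + 8 + 8*1 + 1 = 33 learned parameters.
W1 = rng.normal(scale=0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on the binary cross-entropy loss.
lr = 0.5
for step in range(2000):
    h = np.tanh(X @ W1 + b1)        # hidden activations
    p = sigmoid(h @ W2 + b2)        # predicted probability
    grad_logit = (p - y) / len(X)   # gradient of the loss w.r.t. the output logit
    grad_h = (grad_logit @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ grad_logit
    b2 -= lr * grad_logit.sum(axis=0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(axis=0)

# The trained "reasoning" is just these numbers. Reading them off explains
# no individual prediction - and real networks have millions of such weights.
print("Hidden-layer weights:")
print(np.round(W1, 2))
probe = np.array([1.0, 1.0])
p_probe = sigmoid(np.tanh(probe @ W1 + b1) @ W2 + b2)
print("Prediction for input [1.0, 1.0]:", p_probe.item())

Running this prints sixteen unremarkable decimals for the hidden layer; nothing in them says which input mattered or why the probe was classified the way it was. Scale those 33 parameters up to millions and you have the opacity described above.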

Should AI be allowed to kill people?


If the ethical implications of remotely controlled drones worry you, wait until
AI entirely replaces human warriors on the battlefield. Governments are already
considering how and when to use so-called lethal autonomous weapons systems,
which may one day be able to locate and eliminate enemy soldiers without the
assistance of a human. Deadly AI-powered robots would have several advantages
over human soldiers: they can be replaced, they need no sleep and, more
controversially, they will not hesitate to shoot a target once it comes within
range. Yet if an AI-powered robot mistakenly kills a person, who will be held
accountable? Would governments be more likely to start conflicts if they could
send killer robots instead of soldiers? Militaries around the world are likely
to tread gingerly as they investigate potential uses of these systems. Yet they
- and the populations they guard - won't be able to avoid the enormous ethical
and safety problems these technologies present.
