
AI and Product Liability – the issue of proving causation and the standard of proof (the Product Liability Act is fit neither for consumers nor for manufacturers)

The Sale of Goods Act 1930 provides for the formation of contracts under which the seller transfers, or agrees to transfer, the title (ownership) in goods to the buyer for consideration. It is applicable all over India. Under the Act, goods sold from owner to buyer must be sold for a certain price and within a given period of time.
The Indian Contract Act 1872 determines the circumstances in which promises made by the parties to a contract shall be legally binding. The only statute that discusses product liability, and under which the complainant can be the legal heir or legal representative in the case of the death of the consumer, is the Consumer Protection Act 2019 (Section 5(vi)).

Product liability law is highly significant in an era in which Artificial Intelligence is taking over many of the tasks that humans used to perform. We get a speedier and more efficient system for getting tasks done; however, several reported cases show that AI involvement has caused significant harm, indicating that its use is hazardous if not handled carefully. According to the Center for Security and Emerging Technology policy brief “AI Accidents: An Emerging Threat”, even though modern machine learning is the most advanced artificial intelligence tool available, it is so fragile that it can fail in unpredictable ways. There are three basic types of AI failures: robustness failures, specification failures, and assurance failures.

Going into detail about each of them:


First, robustness: a failure of robustness occurs when the system receives abnormal or unexpected inputs that cause it to malfunction.
1. Cancer detector misdiagnoses Black users: a smartphone app lets users point their phone camera at their skin to identify early signs of skin cancer. Millions of Americans download the app and use it to decide whether potentially concerning symptoms warrant a doctor's visit. A few years later, public health researchers detect a sharp upward trend in late-stage skin cancer diagnoses among Black patients, corresponding to thousands of additional diagnoses and hundreds of deaths. An inquiry reveals that the self-screening app was trained and field-tested mainly on data from northern Europe, and is much less accurate at detecting cancers on darker skin tones.
2. Bus ad triggers facial recognition system: To improve safety and boost public trust in
its new driverless iTaxis, IntelliMotor designs the vehicles’ AI-based vision system to
recognize human faces within a short distance of the windshield. If a face is detected
with high certainty, the iTaxi automatically decelerates to minimize harm to the
human. To prove it works, several of the engineers step in front of speeding iTaxis on
the IntelliMotor test range—the cars brake, and the engineers are unharmed.
IntelliMotor pushes a software update with the new facial recognition capability to
all deployed iTaxis. Meanwhile, in several U.S. cities, city buses are plastered with
ads for Bruce Springsteen’s upcoming concert tour. The updated iTaxis identify the
Boss’s printed face as a nearby pedestrian and begin stopping short whenever they
come near buses, quickly causing thousands of collisions across the country.

In this type of failure, examining the inputs or the machinery behind the AI system can reveal what caused that specific outcome.
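To make this concrete, here is a minimal, purely illustrative sketch (synthetic data, a hypothetical make_group helper, and scikit-learn's LogisticRegression as a stand-in model) of how a classifier trained mostly on one population can look excellent in testing yet fail on an under-represented group:

```python
# Illustrative sketch of a robustness failure caused by a skewed training set.
# The data is synthetic: in the majority population the diagnostic signal sits
# in feature 0, while in the under-represented population it sits in feature 1
# (a stand-in for cancers presenting differently on darker skin tones).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, signal_column):
    """Generate toy samples whose label depends on one specific feature."""
    X = rng.normal(size=(n, 3))
    y = (X[:, signal_column] > 0).astype(int)
    return X, y

# Train almost exclusively on the majority population (signal in feature 0).
X_train, y_train = make_group(2000, signal_column=0)
model = LogisticRegression().fit(X_train, y_train)

# Familiar inputs: the model looks excellent.
print("majority-population accuracy:", model.score(*make_group(500, signal_column=0)))

# Unexpected inputs from the other group: accuracy collapses to roughly chance.
print("shifted-population accuracy: ", model.score(*make_group(500, signal_column=1)))
```

Comparing the model's behaviour across input groups in this way is exactly the kind of examination that exposed the skewed training data in the self-screening app scenario.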
Now coming to the second type of failure, specification: machine learning systems implement the instructions their designers provide, for example, “score as many points as possible,” “identify which photos have cats in them,” or “predict which word will occur next in this sentence.” This is accomplished by specifying a rule that captures what the AI system is supposed to do. A failure of specification means the system is trying to achieve something subtly different from what the designer or operator intended, leading to unexpected behaviours or side effects.
An example of this is:
1. Wall of fire: Summer brings wildfires to the Los Angeles area, forcing evacuations
along Interstate 15. One morning, a truck overturns on the freeway, blocking all
northbound lanes. Navigation apps detect low traffic on nearby side roads and
begin redirecting drivers accordingly. Unfortunately, these roads are empty
because the surrounding neighborhoods have been evacuated; the apps’ routing
algorithms do not take fire safety conditions into account. As traffic fills the side
roads, the wind picks up. Wildfire quickly spreads into the evacuated area,
trapping the rerouted vehicles in the flames. (An area has been evacuated due to fire, and the system directed people to that area because it saw no traffic there; it did not take fire safety into account because that instruction was never entered into its code or data.)
Most failures of this kind will be identified in testing and fixed before entering real-world
use.
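A minimal sketch of how such a specification gap can arise is below: the rule the designers actually wrote down scores candidate roads only by traffic, while the hazard information exists but was never made part of the specification. All names and fields here are invented purely for illustration.

```python
# Illustrative sketch of a specification failure: the router optimises exactly
# what was specified (least traffic) and nothing else.
from dataclasses import dataclass

@dataclass
class Road:
    name: str
    traffic_level: float       # 0.0 = empty, 1.0 = gridlock
    in_evacuation_zone: bool   # known to emergency services, ignored by the router

def specified_score(road: Road) -> float:
    """The rule the designers actually provided: prefer the emptiest road."""
    return -road.traffic_level  # nothing here penalises evacuated, burning areas

def intended_score(road: Road) -> float:
    """What the designers meant, but never wrote into the specification."""
    if road.in_evacuation_zone:
        return float("-inf")    # never route drivers into an active evacuation zone
    return -road.traffic_level

roads = [
    Road("Interstate 15 northbound", traffic_level=1.0, in_evacuation_zone=False),
    Road("Evacuated side road", traffic_level=0.05, in_evacuation_zone=True),
]

print("Specified rule sends drivers to:", max(roads, key=specified_score).name)
print("Intended rule sends drivers to: ", max(roads, key=intended_score).name)
```

The system does exactly what it was told; the harm comes from the gap between the specified rule and the intended one.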

The last and most problematic type of failure is assurance: a failure of assurance means the system cannot be adequately monitored or controlled during operation. Machine
learning algorithms do not “reason” like humans, and their inner workings often cannot be
explained in the familiar terms of logic and motivation. This “black box” problem,
sometimes referred to as the problem of AI interpretability or explainability, is currently the
subject of a great deal of academic research. However, practical solutions are still far off,
and in some cases, they may never be found.
An example of this is:
1. AI fails on the high seas: Morsen Shipping Lines installs a new computer
vision system on its tankers. In low-visibility settings, the system can pick out
obstructions and oncoming vessels with superhuman speed and accuracy.
One foggy night, for reasons Morsen’s technical teams are still working to
understand, the vision system on one tanker fails to sound alarms as the ship
approaches semi-submerged debris off the Florida coast. (Normally, a crew
member would be keeping watch as an extra precaution, but since the
computer vision system is so effective, captains have started skipping this
extra precaution from time to time.) Relying on the system, the tanker’s
captain maintains course. The debris tears a gash in the ship’s hull, spilling
carcinogenic chemicals.

Finally, even when a human wants to intervene, it may not be possible. AI systems often
make and execute decisions in microseconds, far faster than any human in the loop can act.
In other cases, the system’s user interface may make intervening difficult. An AI system
might even actively resist being controlled, whether by design or as a strategy “learned” by
the system itself during training.
Ambulance chaos: Faced with a surge of emergency room visits during an unusually bad flu
season, New York City’s hospitals turn to Routr, a machine learning platform. Reading data
from first responders, public health agencies, and member hospitals in real time, Routr
redirects incoming 911 calls from hospitals that could fill up soon to hospitals that are likely
to have enough room. The software is based on AI algorithms that have been “trained” on
terabytes of historical occupancy data, allowing them to identify patterns that no human
could have recognized. Thanks to Routr, during November and December, city hospitals
have beds to spare even as cases skyrocket. However, when the clock turns over to a new
year on January 1, the software inexplicably begins routing calls throughout the city to only
a few hospitals in Queens. By morning, the hospitals are overwhelmed—and in ambulances
outside the hospital entrances, patients are suffering, and in some cases dying, in snarled
traffic. Months later, a state-ordered investigation finds, among other lapses, that human
dispatchers monitoring Routr were aware of the unusual routing pattern on New Year’s Eve
as it unfolded, but they did not intervene. In an interview, one dispatcher explained that
“the system had made weird decisions before that always turned out to be genius...We
didn’t know exactly what was going on, but we just figured the AI knew what it was doing.”

In such a scenario, it is difficult to prove a case because of the presence of a black box in the AI system. Black box AI is any type of artificial intelligence (AI) that is so complex that its decision-making process cannot be explained in a way that humans can easily understand. The EU proposal attempts to address this and to help the consumer prove causation.

As a result, there are plans to require AI system operators to disclose relevant evidence on
a claimant's request in the event that a high-risk system (under the AI Act) is suspected of
causing damage. If such a request is not complied with after a court order, the burden of
proof is reversed, and there will then be a rebuttable presumption that the operator
breached its duty of care. However, even this raises considerable concern, as the manufacturer cannot predict how far the machine learning capability of the AI will go.

https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained

The inability to see how deep learning systems make their decisions is known as the “black box problem,” and it is a big deal for a couple of different reasons. First, this quality makes it difficult to fix deep learning systems when they produce unwanted outcomes.

So what can we do about this black box problem? There are essentially two different
approaches. One is to pump the brakes on the use of deep learning in high-stakes
applications. For example, the European Union is now creating a regulatory framework,
which sorts potential applications into risk categories. This could prohibit the use of deep
learning systems in areas where the potential for harm is high, like finance and criminal
justice, while allowing their use in lower-stakes applications like chatbots, spam filters,
search and video games. The second approach is to find a way to peer into the box.
Rawashdeh says so-called “explainable AI” is still very much an emerging field, but computer
scientists have some interesting ideas about how to make deep learning more transparent,
and thus fixable and accountable. “There are different models for how to do this, but we
essentially need a way to figure out which inputs are causing what,” he says. “It may involve
classical data science methods that look for correlations. Or it may involve bigger neural
nets, or neural nets with side tasks, so we can create data visualizations that would give you
some insight into where the decision came from. Either way, it’s more work, and it’s very
much an unsolved problem right now.”
Machine learning systems often lack any semblance of common sense, can be easily fooled or corrupted, and fail in unexpected and unpredictable ways. It is often difficult or impossible to understand why they act the way they do.
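One simple version of “figuring out which inputs are causing what” is a permutation test: shuffle one input at a time and measure how much the model's accuracy drops. The sketch below uses a synthetic model and synthetic data purely as stand-ins, not any real deployed system.

```python
# Illustrative sketch of a basic, model-agnostic attribution technique
# (permutation importance) for peering into an otherwise opaque model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 2 * X[:, 2] > 0).astype(int)   # only features 0 and 2 actually matter

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = model.score(X, y)

for j in range(X.shape[1]):
    X_shuffled = X.copy()
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])  # destroy feature j's information
    drop = baseline - model.score(X_shuffled, y)
    print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")

# Large drops point to the inputs the model's decisions actually depend on,
# which is one partial, after-the-fact answer to the black box problem.
```

Techniques like this make a model somewhat more inspectable, but as noted above, they remain an active research area rather than a settled solution.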

What does India need?

It will be important for India to define standards around human-machine interaction, including the level of transparency that will be required. Will chatbots need to disclose that
they are chatbots? Will a notice need to be posted that facial recognition technology is used
in a CCTV camera? Will a company need to disclose in terms of service and privacy policies
that data is processed via an AI driven solution? Will there be a distinction if the AI takes the
decision autonomously vs. if the AI played an augmenting role? Presently, the NITI Aayog paper has been silent on these questions.

A better solution is to modify intent and causation tests with a sliding scale based on the
level of AI transparency and human supervision. Specifically, when AI merely serves as part
of a human-driven decision-making process, current notions of intent and causation should,
to some extent, continue to function appropriately, but when AI behaves autonomously,
liability should turn on the degree of the AI’s transparency, the constraints its creators or
users placed on it, and the vigilance used to monitor its conduct.

Consumer expectation test (a test used in the US to prove product liability): a consumer expectations test is a standard used for determining whether a design defect exists in a products liability tort case. The consumer expectation test imposes liability on the seller of a
product if the product is in a defective condition unreasonably dangerous to the consumer.
The standard allows a jury to infer the existence of a defect if a product fails to meet
reasonable expectations of consumers. In a case where there is no evidence (direct or
circumstantial) available to prove exactly what sort of manufacturing flaw existed,
a plaintiff may establish their right to recover by proving that the product did not perform in
keeping with the reasonable expectations of the user. A product falls beneath consumer
expectations when the product fails under conditions concerning which an average
consumer of that product could have fairly definite expectations.
