University of Petroleum and Energy Studies 


School of Law  

SEMINAR ON ARTIFICIAL INTELLIGENCE AND LAW


SEMESTER - VII
BBA. LL.B. (Hons.)
with Specialization in Corporate Laws
BATCH - I

 SEMINAR PAPER

Topic: Artificial Intelligence and Law: Who is liable for damage?

Submitted to: Mr. Himanshu Dhandaria

Submitted by:
Name: Parth Sabharwal
SAP ID: 500070578
Roll No: R760218047


Artificial Intelligence and Law: Who is liable for damage?

Abstract
Technology has advanced swiftly in the modern period, and important applications of Artificial Intelligence (AI) have become an inherent part of human life, resulting in a significant shift in how we live and work. This advance has also created the challenge of determining civil and criminal culpability for damage or loss caused by AI, which operates with varying degrees of autonomy. The fact that neither national nor international law recognizes AI as a legal subject is a major source of concern, since AI cannot be held directly liable for its acts and the resulting damage. Furthermore, there is the difficulty of determining liability itself, which raises the complicated question of whether AI should be accorded legal status. Given the above, it is only natural to ask who is to blame for the harm inflicted by the actions of AI.
Introduction
The entire world has heard of AI by now. It has been hailed as the technology that will transform the planet over the next few decades. It promises self-driving cars, disease mapping, chatbots for customer care, and programs that can think and learn nearly anything on their own; the possibilities are endless. But what does the term "artificial intelligence" truly mean?
There is a simple way to define AI, as well as a more complicated, abstract one. In the simple sense, AI is a sub-discipline of computer science whose purpose is to figure out how to make computers learn on their own, which can be done by imitating "intelligence" as people exhibit it. The term "Artificial Intelligence" thus refers to intelligence that is manufactured artificially by a programmer.
AI can also be defined as the application of technology to replicate human intellect through a
thorough, expansive, and exhaustive process of building and using algorithms in a dynamic
technological environment within computers.1 In simple terms, AI is a cutting-edge
technology that allows a computer to think and act like a human being through the use of bot
programming.
AI is any type of man-made entity capable of using working memory to perform cognitive processes such as abstract reasoning and logical deduction, as well as learning new information on its own.2 Using these cognitive powers, such an entity would also be able to develop long-term plans for the future. In fact, until the programs we construct have true intelligence, this definition will not fully describe AI. Current AI falls well below this criterion; most systems can act autonomously only in a relatively limited domain, limiting their capabilities.
1. OECD, Artificial Intelligence in Society (OECD Publishing, Paris 2019).
2. Jessica Peng, How Human is AI and Should AI Be Granted Rights?, available at: http://blogs.cuit.columbia.edu/jp3864/2018/12/04/how-human-is-ai-and-should-ai-be-granted-rights/ (last visited Aug. 1, 2020); see also OECD, Artificial Intelligence in Society (OECD Publishing, Paris 2019).

Artificial intelligence systems have acquired tremendous traction in this incredibly tech-savvy society over the last decade, with highly technical and sophisticated technology being applied to construct inventive, intelligent, and intellectual AI systems. As a result, the day when these clever bots start producing valuable and interesting content is not far away; spectacular creations have already been made without the use of human intelligence. AI's ability to create and generate knowledge, content, inventions, and technologies has raised a significant issue concerning the obstacles and problems it may create with regard to AI's legal culpability. However, existing legislation in the majority of countries is not sufficient to address liability problems originating from the actions and decisions of an AI system.
Though the problem of AI regulations establishing culpability for damages has yet to be resolved in India, it is a worldwide issue that must be addressed. As the influence of accelerated globalisation shows, the problem of AI is not indigenous or limited to one region and its established legal practices. The lack of a regulatory framework in the field of AI is a concern for the entire world, common law and civil law countries alike. Since AI's operation is currently unregulated and there is no specific legislation that adequately addresses it, existing methods and regulations will have to be used to establish AI's culpability for its acts and for any resulting damage.
Currently, established legal standards provide for adequate compensation to be paid in the event of damage caused by an offender's unlawful act, with the compensation being paid by the offender or by the person accountable for the offender's acts. As a result of this legal predicament, and in light of the fact that no legislation or regulation applies to AI, a serious question arises as to who will bear legal responsibility and offer compensation in the event of harm caused by AI's acts. The purpose of this paper is therefore to examine both national and international law in order to determine the liability of artificial intelligence, as well as whether the existing liability principles that apply to humans could be applied to AI (given that AI software and bots are ultimately created or developed by humans).
Hence, although these concerns may appear somewhat perplexing at first, with the increasing pace of AI breakthroughs it has become imperative that we develop particular legal solutions to these complicated questions. The researcher has therefore also included an in-depth review of the Indian legal framework and how it should be read in order to elucidate answers to the complicated problems raised by the international community over the use of AI. With large corporations like Apple and Salesforce acquiring the Indian companies Tuplejump and MetaMind, respectively, for AI-powered technologies,3 India is slowly but steadily pushing forward in the AI field. Not only that, but India has seen a significant surge in AI start-ups, with growing amounts of money being invested in AI research and development.
One of the most remarkable facts is that the AI startup Sentient attracted a total investment of 143 million dollars in the initial years of its inception.4 There is thus no doubt that, with such a significant rise of AI in the country and with the spectrum of AI advances encompassing human life, it becomes necessary to analyse India's current legal structure in order to establish the various culpability and rule-of-law issues these technologies raise.

3. Salesforce Acquires AI Startup MetaMind, available at: https://fortune.com/2016/04/04/salesforce-metamindacquisition/.
4. The 10 Most Well-Funded Startups Developing Core Artificial Intelligence Tech, available at: https://www.cbinsights.com/research/most-well-funded-artificial-intelligencecompanies/

Determination of liability of Artificial Intelligence


We derive our legal rights and responsibilities from the law. Following the law entails performing obligations and being granted rights.5 As a result, legal personhood for AI raises the question of whether AIs should have legal rights and responsibilities. Although the answer may seem futuristic and progressive, a thorough examination should include a discussion of AI legal personhood, as this would hold AIs accountable for their own actions.6 Criminal culpability for AIs would necessitate legal personhood for AIs, and would be akin to the criminal liability for corporations that some legal systems recognise. Corporate criminal liability is seen as a fiction; it is an interpreted type of liability in which the corporation is held liable for the actions of its employees.7 Unlike companies, AIs would be held accountable for their own actions rather than being blamed for the actions of others. Even though this appears to be a straightforward answer that does not violate the rule of law, it demands a more thorough examination.
When a person commits a crime against another person, he or she will undoubtedly be subject to the criminal laws established in that state. When it comes to artificial intelligence, however, a crime committed against humanity through it may not be classified as a traditional crime, because it was performed using software or a robot that is distinct from the human who created that particular software, programme, or robot. As a result, it is critical to evaluate the criminal responsibility for activities conducted using artificial intelligence: to determine whether artificial intelligence is a legal entity in and of itself, what the legal implications are, and the major obstacles encountered in determining and recognising the actus reus (the guilty act) and mens rea (the mental intention), which are regarded as the fundamental criteria for deciding whether or not a crime has been committed.8

5. John Chipman Gray, The Nature and the Sources of Law (Cambridge University Press 1909) 27-28.
6. Mireille Hildebrandt, 'Criminal Liability and "Smart" Environments' in R.A. Duff and Stuart P. Green (eds), Philosophical Foundations of Criminal Law (OUP 2011) 506-32.
7. Matilda Claussén-Karlsson, Artificial Intelligence and the External Element of the Crime: An Analysis of the Liability Problem, available at: https://www.diva-portal.org/smash/get/diva2:1115160/FULLTEXT01.pdf
8. Hallevy G., The Criminal Liability of Artificial Intelligence Entities, available at: http://ssrn.com/abstract=1564096
The ‘Black Box’ Problem
Computer and smartphone users rely largely on complicated problem-solving algorithms to execute even the most basic activities efficiently and swiftly; Artificial Intelligence has therefore become an important component of their daily lives. It is important both that these algorithms run smoothly and without errors, and that we understand how they work so that we may improve them. However, when we try to figure out how such AI works, we hit a brick wall: it can be impossible to explain what is going on within. This is a big problem, though for now it is largely confined to massive deep learning models and neural networks. Because these systems are made up of complicated algorithms and data sets generated by software rather than by humans, a neural network breaks a problem into millions of pieces and processes them piece by piece to produce a plausible result. Since the human mind does not work in the same way, we have no way of knowing what calculations the neural system is performing or what methods it is using. This phenomenon is known as the "black box" or "explainability" problem, because there is no way to get inside the neural network to view its processing while it solves a problem.9 This not only prevents us from obtaining the deep information needed to update the algorithm and its subsequent calculations, but also creates a slew of trust difficulties with AI and neural systems. As a result, it is claimed that a human will never be able to explain why a sound AI system using such self-generated methodologies or data sets arrived at a given response or made a specific "choice."
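To make the explainability gap concrete, the short sketch below (a hypothetical illustration added for exposition, not drawn from any source cited in this paper) trains a small neural network using Python's scikit-learn library. Every weight of the trained model can be counted and inspected, yet nothing in those numbers amounts to a human-readable reason for any individual prediction, which is precisely the "black box" problem described above.

```python
# A minimal sketch of the "black box" problem, assuming Python with
# scikit-learn installed. The dataset and model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# A synthetic dataset standing in for, say, loan or investment records.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# A small multi-layer neural network trained on that data.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# The model readily produces a decision for a new case...
print("Prediction:", model.predict(X[:1])[0])

# ...and every one of its parameters is fully inspectable:
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print("Inspectable parameters:", n_params)

# Yet no weight, nor any readable combination of weights, states *why* this
# input was classified as it was: the internals are visible but not
# explainable in human terms.
```

In a dispute of the kind discussed below, a court would see the prediction and its consequences, but neither party could point to any line of such a model that "explains" the decision.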
We believe it is critical to remember that any examination of culpability in a civil or criminal action boils down to whether the relevant defendant's acts or omissions were unlawful as a result of the applicable AI framework's choices and conclusions. Were their acts or omissions contract violations, negligence, or criminal offences? It is critical to emphasise that the defendant will always be a legal person, not an AI system. To answer these types of questions, a court will not need to know why the relevant AI framework made the decision that led to the defendant's allegedly unlawful act or omission. We may never be able to figure out why the AI framework made a particular decision; in actuality, however, the AI framework did arrive at that particular choice, which either caused or contributed to the defendant's unlawful act or omission.
Consider, for example, a plaintiff who lost money after following poor investment advice supplied by the defendant's AI system. Under the implied terms of the service contract, the plaintiff may argue that the defendant failed to use reasonable care and skill in discharging his responsibility to provide investment advice. The plaintiff may be able to prove a breach of the duty to use reasonable care without any explanation as to why the robo-adviser gave such bad advice. As a result, courts may be able to find the defendant liable based on the character and quality of the advice itself.10 The court might well have reached the same conclusion whether the advice was provided by a robo-adviser or a human adviser. It is also worth remembering that the human brain has "black box" characteristics of its own (we cannot always explain human behaviour), yet this has not stopped courts from holding defendants accountable in the past.11
It is too early to say whether the "black box" will create a legal obstacle to assessing the liability attaching to AI systems' judgments. In most circumstances, however, the involvement of an AI system with a "black box" will make it more difficult for the plaintiff to prove that the defendant's act or omission resulted in an unlawful act. This is because, under English law, a claimant must show:
i. Factual causation: that the consequence would not have occurred if the defendant had not acted; and
ii. Legal causation: a complete chain of causation between the defendant's actions and the consequence (the defendant's act does not have to be the sole cause of the consequence, but it must have made a significant, or more than minor, contribution to that consequence).12

9. NITI Aayog, "National Strategy for Artificial Intelligence" 85-86 (June 2018).
10. When AI systems cause harm: the application of civil and criminal liability, available at: https://digitalbusiness.law/2019/11/when-ai-systems-cause-harm-the-application-of-civil-and-criminalliability/#page=1
11. Ibid.
12. Ibid.
Neither of these standards requires the claimant to explain why the defendant behaved as he did. Where a defendant's state of mind is crucial to evaluating liability, the applicable law depends on the nature of the violation alleged. In the end, what matters is the defendant's state of mind, not the decision made by an AI framework; it may therefore be possible to show that the defendant had the requisite state of mind without ever explaining why the AI framework settled on a particular decision.
Legal status of AI
Kenji Urada, an engineer at a Kawasaki Heavy Industries plant where a robot was deployed to execute a specific manufacturing job, was among the first people ever to be killed by a robot. The incident occurred while Kenji was repairing the robot: he neglected to shut it down, causing the robot to identify him as an obstacle, after which the robot's powerful hydraulic arm shoved him into an adjacent machine, killing him almost instantly.
Various national laws, both then and now, fail to establish a definite criminal legal framework for situations where robots are implicated in the commission of a specific crime or in injury to an individual. The invention of AI has introduced a number of new elements to the world, and with them the need to cope with a very rapid rate of change. It is therefore critical for states to enact legislation that clarifies the status of incidents and crimes involving robots and artificial intelligence software.13 A useful analogy is the unborn child: there is no formal regulation or statute in the Indian legal system that addresses the unborn child and its rights. Some statutes,14 on the other hand, not only acknowledge and discuss the unborn child, but also define it as a legal person who obtains such rights only after birth.

13. Supra note 8.
14. The Transfer of Property Act, 1882, s. 13, which deals with the transfer of property for the benefit of an unborn person: "Where, on a transfer of property, an interest therein is created for the benefit of a person not in existence at the date of the transfer, subject to a prior interest created by the same transfer, the interest created for the benefit of such person shall not take effect unless it extends to the whole of the remaining interest of the transferor in the property."
The legislation, however, leaves a grey area: there is no mention of the protection afforded to such unborn children or of the obligations owed to them. Similarly, AI systems are still in their infancy, and the Indian legal system has yet to recognise them at all, let alone impose any rights, responsibilities, or liabilities on them, which is a concerning situation. A person's or entity's legal status is inextricably related to its autonomy, and status is bestowed not only on individuals but also on groups, corporations, and organisations.
However, when it comes to artificial intelligence, no legal system has yet acknowledged it as a legal person, except in Saudi Arabia, where a robot named Sophia, an artificially intelligent humanoid, was acknowledged as a citizen of the country with rights and responsibilities, an individual living within the state comparable to human beings. The issue of granting such status to artificial intelligence machines or software turns on whether they are legal persons at all, and whether they can be entrusted with rights and responsibilities that would normally be entrusted to a living person. Even though corporations and companies have the legal status of a separate legal entity, they nonetheless remain liable to their stakeholders for any liability arising from the transactions they enter into. Artificial intelligence, by contrast, even though created by humans, is wholly artificial, independent, and capable of acting in ways that may arise from a malfunction or, even where there is no malfunction, from incorrect programming that leads to the commission of crimes intended by the AI software's inventor. Under no state's law is the criminal responsibility of artificially intelligent robots clear. Violations of law and crimes committed through artificial intelligence are thus left solely to judicial pronouncements, which serve as the primary source of decisions in cases where artificial intelligence is responsible for committing a specific crime (whether or not the creator's instructions were followed in the creation of the artificial intelligence algorithms or robot software).15
Kelsen addressed the attribution of legal personality in his theory of personhood,16 under which conferring legal personhood is only a "technical personification" used to assert rights, obligations, and liabilities. On this view, an entity's legal personhood is, in general, a legal mechanism used to organise that entity's rights and obligations. Under the Hohfeldian analysis of rights,17 every right has, as its legal correlative, a corresponding duty. A jurisprudential examination of these theories raises the question of whether robots' rights and obligations would be properly defined, and properly asserted, by granting robots legal personhood. In some cases granting legal personhood may be beneficial, since the persons involved in manufacturing, programming, or operating the AI system would then bear less liability. It is said that at this point in time, even with technology so advanced, the concept of conferring legal personhood on entities in newer sectors is still being developed and tested, and it may not be essential to grant an AI entity personhood in order to determine liability and render it liable right away. However, with the growing over-reliance on AI systems, a need may arise in the near future to grant them personhood in order to ascertain the precise quantum of culpability for such systems.
Given the lack of a legislative framework and explicit policy guidelines for Artificial Intelligence systems in India, their use or application is bound to raise a number of ethical and legal difficulties. Policy guidelines are therefore required for the corporations involved (AI system creators, developers, manufacturers, and software programmers), and legislation addressing the various ethical and legal requirements could be framed primarily by first establishing what AI systems are as a whole. In light of this, the burden of proof may or may not be transferred from the designers to an AI system that exercises some self-control.18
15. Supra note 8.
16. Kelsen, H., General Theory of Law and State (Harvard University Press, Cambridge, MA, 1945).
17. Heidi M. Hurd and Michael S. Moore, The Hohfeldian Analysis of Rights, 63 TAJJ 295 (2018).
18. Priyanka Majumdar, Bindu Ronald et al., "Artificial Intelligence, Legal Personhood and Determination of Criminal Liability" 6 Journal of Critical Reviews 323 (2019).

Criminal Liability
Gabriel Hallevy, a renowned legal researcher and lawyer, proposed that certain AI systems can meet the essential requirements of criminal liability, including:
 constituting actus reus, an act or omission;
 mens rea, which requires knowledge or information; and
 strict liability offences, which do not require mens rea.
When a criminal act is committed by artificial intelligence robots or software, the broad basis for liability remains what the law requires. Without an actus reus, an individual's criminal liability cannot be established, and in the case of artificial intelligence, the actus reus can be established only if the crime committed through such a mechanism can be ascribed to a human being, thereby satisfying the basic condition, the commission of an act, needed to prove and punish an individual's criminal liability.19
When it comes to mens rea, the prosecutor must show that an act performed through an AI was done deliberately by the user against another person. The highest level of mens rea is knowledge, which may or may not be accompanied by the purpose of the particular user under whose supervision or administration the artificial intelligence robot committed the act. The lowest level of mens rea is criminal negligence or recklessness on the part of the AI system's user, where the risk would have been obvious to a reasonable person in his position. Hallevy presented three legal paradigms to consider when investigating AI-related offences:20
 AI liability for perpetration by another;
 AI liability for natural probable consequences; and
 direct AI liability.

Civil Liability
When software is flawed, or a person suffers injury as a result of using it, the ensuing legal actions usually allege negligence rather than criminal culpability.21
Gerstner22 explains the three components that must be proven in order for a negligence claim
to succeed:
i. The defendant owed the plaintiff a duty of care,
ii. The defendant breached that duty, and
iii. The plaintiff was injured as a result of the breach.
In the instance of the defendant's duty of care, Gerstner points out that there is clearly a duty of care owed by the software or system seller to the consumer; nevertheless, determining the standard of care required is difficult. If the system in question is an "expert system," the degree of care required would be professional at the very least, if not expert.

19. Jomon P. Jose, Legal Liability Issues and Regulation of Artificial Intelligence (AI), dissertation, Post Graduate Diploma in Cyber Laws and Cyber Forensics (Course Code: PGDCLCF), available at: https://www.academia.edu/38665852/Legal_liability_issues_and_regulation_of_Artificial_Intelligence_AI_Dissertation_work_Post_Graduate_Diploma_in_Cyber_Laws_and_Cyber_Forensics_Course_Code_PGDCLCF_Submitted_by_Jomon_P_Jose
20. The Basic Models of Criminal Liability of AI Systems and Outer Circles, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3402527
21. Tuthill, G.S., Legal Liabilities and Expert Systems, AI Expert (1991).
22. Gerstner, M.E., Comment, Liability Issues with Artificial Intelligence Software, 33 Santa Clara L. Rev. 239.

In considering breach of duty, Gerstner suggests a number of scenarios in which an AI system could breach its duty of care, including:
 The developer's failure to detect errors in programme features and functions,
 An inappropriate or insufficient knowledge base,
 Inappropriate or insufficient documentation and notices,
 Failure to maintain an up-to-date knowledge base,
 Error due to erroneous input from the user,
 Placing too much reliance on the output, and
 Misuse of the programme.
Finally, whether AI systems can cause, or be presumed to cause, an injury to a plaintiff as a result of a breach is contested. The crucial question is whether the AI system, like most expert systems, merely offers a solution in a specific case, or whether it, like an autonomous car, takes the decision and acts on it itself. While the former scenario involves at least one external agent, making causation harder to prove, the latter involves no external agent, making causation far simpler to prove.
Indian legal scenario
Under the Indian legislative regime, no specific or separate provision in any legislation addresses liability for acts committed by a user, administrator, or producer through artificial intelligence software or systems. In India, the strict liability doctrine corresponding to the direct liability model proposed by Hallevy is not as developed as in English law, despite India being a common law system. In UK criminal law, the strict liability doctrine has evolved over time by supplementing existing English law with modified and amended provisions, authoritative decisions of the judiciary, and statutory enactments made by Parliament from time to time. By contrast, the exhaustive codification of Indian penal law leaves no scope for the judiciary to go beyond existing statutes.
When we look at the Indian Penal Code (IPC), Chapter IV (General Exceptions) primarily deals with circumstances that negate the presence of such intent. Generally, the definition of an offence includes a reference to malicious intent in order to exclude acts done without such intent. Even where intent is not specified in the definition, it is held on general principles that a malicious intent must be read into the definitions of all strictly criminal offences. The offences defined in the IPC are thus very careful to incorporate mens rea in the definition itself, as well as through the chapter on general exceptions, and together these may be enough to rule out all circumstances in which mens rea cannot be attributed.
When looking at the definitions of offences in the IPC, the following elements are commonly found:
i. A human being,23 to begin with;
ii. A malevolent intent on the part of that human being to produce a certain outcome regarded as harmful to individuals or society;
iii. The act performed; and
iv. The subsequent consequence.
However, there are a few instances where phrases showing intent are absent from the definition of an offence. In such circumstances, either the act's outcome is so destructive to the state or society that it is regarded as just and expedient to punish it regardless of intent, or the act is of such a character that it raises a strong presumption that the actor must have intended it. Sections 121 (waging, attempting to wage, or abetting the waging of war against the Government of India), 124A (sedition), and 359-363 (kidnapping and abduction) are examples of the former, while Section 232 (counterfeiting Indian coins) is an example of the latter. Identifying the mens rea of an AI system would nonetheless be difficult, partly because of the black-box issue, but the doctrine adopted in the provisions above may be useful in determining an AI system's direct culpability. To assess liability for natural and probable consequences or for abetment, the IPC's principles on abetment, which address both the crime and the punishment of those who aid and abet it, may be adequate. Moreover, given the current rate of progress, the legislature has attempted to bridge the gap and give technology more space: the Information Technology (Amendment) Act, 2008 expanded the definition of abetment to include any act or omission carried out through the use of encryption or any other electronic method.
The Information Technology Act, 2000 (hereinafter the IT Act), which attempts to regulate all aspects of modern-day technology, defines "computer" and related terms such as software, but the Internet of Things, data analytics, and AI are not covered by the IT Act, nor are the wrongs that may be committed by humans using these IT media (specifically AI software). Given that the fundamental goal of the Act was to give legal character to digital signatures and electronic records, the Indian legislature did not place a strong emphasis on the scope of responsibility deriving from AI acts or on countermeasures.
There is no specific Indian legislation dealing with the criminal or civil culpability of acts perpetrated through AI. It is important to note that while India is one of the countries moving toward policies that will integrate AI into the entire government system, its legal system is at the same time ignoring the potential negative consequences of future cybercrimes committed using these highly technological and advanced AI systems.24 The only ray of hope left to the Indian legal system for dealing with such activities, by defining the punishment and the criminal or civil accountability for acts perpetrated through AI systems, software, and robots against other people, is the Indian judiciary, which remains in charge of such cases.

23. Supra note 5; s. 10 defines "Man", "Woman": the word "man" denotes a male human being of any age; the word "woman" denotes a female human being of any age; s. 11 defines "Person": the word "person" includes any Company or Association or body of persons, whether incorporated or not.
24. Indian Law is Yet to Transition into the Age of Artificial Intelligence, available at: https://thewire.in/law/indian-law-is-yet-to-transition-into-the-age-of-artificial-intelligence
While no substantial landmark judgment has yet been issued that provides a breakthrough on guidelines for the use of artificial intelligence software or robots to prevent the commission of criminal or civil violations against others, it is expected that the Supreme Court, with the rapid advancement of artificial intelligence, will lay down rules and legal precedents through which the use of artificial intelligence can be efficiently dealt with by defining its criminal and civil liability. AI systems may, through different unethical means such as phishing, hacking, and data theft, inflict hurt or damage on other people.25

25. Ibid.

Conclusion
A future filled with intelligent AI-powered robots and technology may appear frightening at first, but I believe our future is bright, and the possibilities are far greater than we can now grasp. Rather than discussing the potential benefits of AI and how we might utilise it to better ourselves, create an ideal society, and even explore other worlds, I have recently seen experts explaining the perils of AI and painting a picture of a Terminator-style doomsday scenario. This negative perspective is counterproductive, and I believe we should not allow it to stifle AI's growth.
The AI sector has been moving at a breakneck pace. Because they do not want to be left behind, businesses are quick to adopt new technologies. Using machine learning and deep learning, companies can discover trends by analysing increasingly vast data sets, which opens up new opportunities.
These new possibilities bring with them a slew of new ethical issues, including, but not limited to:
i. The legal ramifications of the liability paradox;
ii. Concerns about intellectual property rights in advanced AI programmes that can generate material on their own;
iii. Concerns about privacy and the use of personal data;
iv. Discrimination by AI programmes in the hiring of job candidates;
v. Facial recognition;
vi. The employment of autonomous military weaponry that allows AI to make judgments on its own when it comes to killing someone; and
vii. Self-driving vehicles that will have to decide on their own what to crash into if they malfunction.
Today, AI is not recognised as a legal entity under either national or international law, which means it cannot be held liable for any damage it causes. The principle enshrined in Article 12 of the United Nations Convention on the Use of Electronic Communications in International Contracts, which states that the person at whose behest a system was programmed should ultimately be held liable for any act done or message generated by that system, may therefore be applied to AI liability. In light of the foregoing argument, the direct liability model presented by Hallevy can be used, under which a strict liability standard would govern the AI system's behaviour while leaving other actors unaffected. A new concept of AI-as-Tool could thus be developed and applied, in which strict liability derived from Hallevy's direct liability model is applied to the external agent (a natural or legal person) who instructed the machine to act, in line with Hallevy's perpetration-via-another model, regardless of whether such action by the AI system was planned and envisaged or not. When an AI system is treated as an AI-as-Tool, strict or vicarious culpability for damage caused by the AI system can be applied straightforwardly. However, discharging the burden of proof adequately would be difficult, given how an AI system operates and its basic features, such as independent decision-making. Because AI is a self-learning system, it is nearly impossible for humans to distinguish between damage caused by a product fault and damage produced by an act performed by the AI during its processing.
As a result, the practical applicability of such a liability model depends entirely on legislative intent: how existing legislation is amended, or new legislation introduced, to address the responsibility of AI systems in an era where AI's dominance over human life grows by the day. Most governing bodies appear to be burying their heads in the sand rather than considering AI's future. Business owners, because they have a better knowledge of its possibilities, appear to be more aware of the need for regulation. Countries such as the United States are likely apprehensive about falling behind China, which is investing billions of dollars in AI research and development. However, it is evident that China is a shining example of what not to do when it comes to AI ethics and legislation. We must get our priorities in order if we are to move forward.
