
Dissertation Title: Artificial Intelligence: Determining Legal Responsibility and Liability

Supervisor Name: Laura Edgar


Artificial Intelligence: Determining Legal Responsibility and Liability

TABLE OF CONTENTS

CHAPTER 1: INTRODUCTION 1

CHAPTER 2: CHARACTERISTICS OF AI THAT CHALLENGE THE CURRENT LEGAL SYSTEM 3

2.1 DEFINITION OF ARTIFICIAL INTELLIGENCE 3

2.2 CHARACTERISTICS OF AI THAT CHALLENGE THE CURRENT LEGAL SYSTEM. 5

CHAPTER 3: RESPONSIBILITY AND LIABILITY 9

3.1 PRIVATE LAW 10

3.1.1. Strict Liability and Product Liability 11

3.1.2 Negligence 19

3.1.3 Vicarious Liability 28

3.1.4 Contractual Liability 31

3.2 CRIMINAL LIABILITY 37

CHAPTER 4: CONCLUSION 40


Artificial Intelligence: Determining Legal Responsibility and Liability

“If we want to avoid the injustice of holding men responsible for actions of machines over which

they could not have sufficient control, we must find a way to address the responsibility gap in

moral practice and legislation.”1

Chapter 1: Introduction

The growth of technologies, namely Artificial Intelligence (hereinafter AI), Autonomous Systems, the Internet of Things (IoT), and Robotics, has created new services and products, which in turn provide new and improved opportunities for society and the economy. It may not be immediately obvious, but AI affects almost every person, subtly or not so subtly, by performing tasks that were previously performed solely by humans. These AI technologies are ubiquitous and flexible in their techniques, and they have become important to individual human lives as well as central to industries such as e-commerce, robotics, financial markets, consumer applications, facial recognition, and factory automation.2 With each passing day, AI gains more importance and becomes more heavily involved in our daily lives, and this will only intensify in the near future. AI technologies are affecting markets and industries and are accused of causing problems not only in the employment sector but also for current legal systems. The regulation of any industry is crucial for the smooth functioning of society, and the rapid growth of AI has prompted governments to consider regulating AI systems. Current legal systems are only partially equipped for this task, and over the next 10-20 years the biggest question regulators will face is how to regulate AI systems without stifling innovation.

1 Andreas Mathias, 'The Responsibility Gap – Ascribing Responsibility for the Actions of Learning Automata' (2004) 6 Ethics and Information Technology 175.
2 Woodrow Barfield and Ugo Pagallo, 'Towards a Law of Artificial Intelligence' in Research Handbook on the Law of Artificial Intelligence (Edward Elgar Publishing 2018).


However, in the foreseeable future, the legal issues most likely to arise pertain to the responsibility and liability of AI systems. Questions will be asked about the determination of liability: who will be held liable if an AI system causes harm? Whom should the law hold responsible if an autonomous vehicle causes an accident, or who is liable if an intelligent system used by medical practitioners makes an error? Such questions about harm caused by AI will constantly challenge the current legal system.

Various attempts are being made to regulate AI, such as the Civil Law Rules on Robotics of the European Parliament3 or the Automated and Electric Vehicles Act in the United Kingdom.4 In the last ten years, there have been various debates on how to regulate robots5 and on the scope of criminal6 and civil liability.7 On one hand, there is a concern that stringent regulation will hamper innovation and prevent potential advantages from materializing.8 On the other hand, this innovation constantly challenges the established legal system, and questions regarding liability and responsibility keep arising.

This paper examines whether the current legal systems are able to meet the challenges posed by the constant development of AI technologies, and it analyzes mechanisms to assign liability and responsibility.


3 European Parliament, 'European Civil Law Rules in Robotics' (2019) <http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf> accessed 16 June 2019 (Civil Law Rules on Robotics).
4 UK Automated and Electric Vehicles Bill 2017-19, s 2.
5 Ryan Calo, 'Open Robotics' (2010) 70(3) Maryland Law Review <https://ssrn.com/abstract=1706293> accessed 29 July 2019.
6 Gabriel Hallevy, 'The Criminal Liability for Artificial Intelligence – From Science Fiction to Legal Social Control' (2016) Akron Intellectual Property Law Journal.
7 Samir Chopra and Laurence White, 'Artificial Agents and the Contracting Problem: A Solution via an Agency Analysis' <http://illinoisjltp.com/journal/wp-content/uploads/2013/10/Chopra.pdf> accessed 2 August 2019; Curtis EA Karnow, 'Liability for Distributed Artificial Intelligence' (1996) 4 Berkeley Technology Law Journal 147.
8 Ronald Leenes, Erica Palmerini, Bert-Jaap Koops, Andrea Bertolini, Pericle Salvini and Federica Lucivero, 'Regulatory Challenges of Robotics: Some Guidelines for Addressing Legal and Ethical Issues' (2017) 9(1) Law, Innovation and Technology 1-44 <https://doi.org/10.1080/17579961.2017.1304921> accessed 3 March 2019.


Chapter 2 highlights the difficulty in defining AI and the various challenges that the current legal system faces. Chapter 3 then aims at assigning legal responsibility and affixing liability: it analyzes the salient features of AI, asks whether traditional rules such as extra-contractual liability, contractual liability and criminal liability are adequate, and considers which current legal mechanisms can be adapted to AI technologies.

Chapter 2: Characteristics of AI Challenging the Current Legal System

Machine learning and other AI technologies use mathematical approaches to apply algorithms and learn from the data fed to them by programmers. AI technology evaluates large amounts of data and formulates its own decisions and outcomes. Through this constant exchange of information, these systems are becoming more intelligent, making it extremely difficult for legal experts, scientists and regulatory authorities to define and regulate AI; these challenges can no longer be overlooked. This chapter is divided into two parts: Part 1 will focus on the definition of AI and Part 2 will look at the characteristics of AI that challenge the legal system.

2.1 Definition of Artificial Intelligence



The regulation of AI requires a clear understanding of what the regime is regulating. Unfortunately, there is today no definition of AI that is widely accepted by scientists and lawyers, particularly not one suitable for the purpose of regulation. Because AI is constantly developing, it is difficult to define. This ambiguity has been helpful to the innovators of AI but has made matters inconvenient for regulators. Until now, only humans possessed intelligence that was universally recognized and bound by law; therefore, the definitions of AI are tied to human intelligence. Even the renowned AI professor John McCarthy, who coined the term 'Artificial Intelligence', believes that every definition of AI depends on the definition of human intelligence.9


This is solely because the law is still unaware of what other kinds of intelligence can fall within its ambit.10 In addition, AI technologies include robots, software, programs and any other object required to bring AI systems into the physical world. For the purposes of the current study, the definition given by Bertolini provides a better understanding of AI and robotics. It describes:

[A] Machine, which (i) may be either provided of a physical body, allowing it to interact with the external world, or rather have an intangible nature – such as a software or program, – (ii) which in its functioning is alternatively directly controlled or simply supervised by a human being, or may even act autonomously in order to (iii) perform tasks, which present different degrees of complexity (repetitive or not) and may entail the adoption of not predetermined choices among possible alternatives, yet aimed at attaining a result or provide information for further judgment, as so determined by its user, creator or programmer, (iv) including but not limited to the modification of the external environment, and which in so doing may (v) interact and cooperate with humans in various forms and degrees.11

Based on this definition, it is assumed that AI developers teach AI entities or robots to understand the human intellect and act in an intelligent manner.12 In addition, attempts have been made by the European Parliament13 to define smart autonomous robots in a way that creates agency for AI systems, but these too are uncertain and unclear because of the constant development of the newly regulated entities.


9 John McCarthy, 'What is Artificial Intelligence?' (2007) 15 Stanford University, Computer Science Department <http://ai.stanford.edu/~nilsson/John_McCarthy.pdf> accessed 1 August 2019.
10 ibid.
11 Andrea Bertolini, 'Robots as Products: The Case for a Realistic Analysis of Robotics and Liability Rules' (2013) 5(2) Law Innovation and Technology 214-227.
12 McCarthy (n 9).
13 Civil Law Rules on Robotics (n 3).


While various other attempts have been made by different experts,14 there is still uncertainty surrounding AI technologies, and it is difficult to formulate laws and policies around such an obscure concept. Given the challenges in defining AI and its constant evolution, there is rising concern about whether the current laws can cope and whether regulators can determine liability.

2.2 Characteristics of AI that Challenge the Current Legal System



Several characteristics of AI make it exceptionally difficult to regulate compared with other sources of public risk. According to Ryan Calo,15 embodiment, emergence and social valence are the three main challenges posed by AI entities.

Embodiment refers to the AI technology interacting with the world physically. The entity needs more than a physical body; it also needs a guiding process, namely its algorithm or software. The robot relies on the data and programming fed into it to shift from the virtual to the real and to act physically in the world. Programming is a set of code which, together with the operator's instructions, dictates the complex behavior of robots; hence, two otherwise similar robots may behave differently depending on the code entered into them. Internally, robots combine a great deal of data, and externally, the hardware of a physical system has the capacity to do physical harm. For example, in tort, a drone flying into a neighbour's backyard could be held to be trespassing. The concept of embodiment poses a challenge to tort law, and to product liability in particular, because it creates discrepancies in the definition of product liability.16


14 SJ Russell and P Norvig, Artificial Intelligence: A Modern Approach (2010) 2; JM Balkin, 'The Path of Robotics Law' (2015) 6 Cal L Rev 45, 51.
15 Ryan Calo, 'Robotics and the Lessons of Cyberlaw' (2015) 103 Calif L Rev 513, 514–15.
16 ibid.


The hardware or the robot can be regarded as a product, but the software or programming is considered a service, causing AI technologies to fall outside the ambit of product liability. Hence, the legal challenge is to determine whether AI technology can be governed by product liability or should instead be governed by specific regulations drafted for AI technologies. The software and programming through which AI technologies are trained to act autonomously pose an even more serious challenge to the legal system.

Programming is closely related to the concept of emergence, and one of the unique characteristics of AI programming is the technology's ability to act autonomously.17 AI has already developed to the point where it can perform complex tasks, such as autonomously driving a vehicle or creating an investment portfolio, without the supervision of any human being, and it will develop into much more complex and autonomous behavior in the coming years. Therefore, the main challenge of AI systems today is the concept of foreseeability, or the 'black box' problem of AI. Professor Ryan Calo uses the term emergence instead of autonomy.18

Emergence is based on unpredictability and on how an agent interacts with its environment. It means that the AI system has the capacity to make decisions independently and to implement them in the outside world without any human control. Emergent AI systems also have the ability to learn from their mistakes, which ensures improvement without any outside aid. The decisions such systems make can go beyond human understanding. In addition, AI systems are created to think 'outside the box' and be creative. Such expectations of AI technology make it challenging for humans to anticipate the result and to understand the reasons behind a decision made by the AI.

17 Ignacio N Cofone, 'Servers and Waiters: What Matters in the Law of AI' (2018) 21 Stan Tech L Rev 167.
18 Calo (n 15).


The reason AI can make decisions beyond human understanding is that AI systems are not bound by the laws and regulations that have bound humans for years. The human brain is limited and cannot analyze every piece of information at a speed equivalent to that of a computer-based system. Hence, when preconceived notions do not restrict AI, these systems can make decisions that humans would not even consider. Calo contradicts this, stating that such emergent behavior is not entirely predictable but is not entirely random either; it depends on the AI technology's ability to react to the data input in order to produce different results.19 In the game Connect Four, the AI analyzes potential solutions that would not have been anticipated by a human, but there may also be situations where the AI generates optimal solutions depending on the input of the programmer.20

In addition, the risks posed by autonomy are not limited to foreseeability; they extend to control as well. When machines are programmed to act autonomously, it can become a problem for humans to control them. If an AI technology is built on self-learning and adaptation, it is difficult to regain control once it is lost. Today, AI technologies are already proficient enough to execute commands automatically. In stock market trading, for example, time scales are measured by the AI technology in nanoseconds, which precludes human intervention in real time because humans cannot operate at nanosecond time scales. Thus, even a small error can have a huge impact. The Flash Crash of 2010 showed that the combination of AI programs with the trading industry can have a huge economic impact in a very short time.21 It is therefore essential to define the level of emergence and autonomy, and how far the acts of the AI technology are foreseeable, in order to affix liability. Foreseeability is also an important element in determining causation, as it is the link between the injury and the human liable for that injury.


19 Cofone (n 17).
20 ibid.
21 Matthew U Scherer, 'Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies' (2015) 29 Harv JL & Tech <https://ssrn.com/abstract=2609777> accessed 29 July 2019.


It is easy to determine liability where there is a clear chain of causation. When machines act in an unpredictable manner that was not foreseeable, the decisions produced by such programming cannot be traced back to a human. This challenges the legal system's ability to determine liability, since all systems of law require some level of fault: for instance, the element of mens rea in criminal law and the essential characteristics of the reasonable person in tort law are absent when AI systems act unforeseeably. However, the problem of foreseeability could be set aside if AI technologies could be held liable for their own acts. This gives rise to the third challenge of AI: social valence, or the lack of agency (personhood).22

Social valence theory gives robots the same status as animals or human agents. On this approach, the AI technology itself or its owner can be held responsible for the harmful acts of the AI.23 To assign legal liability under civil law, and more particularly in tort, liability is mainly attributed to a human or to an entity that has the status of a legal person.24 There are constant debates across the world about legal personality for AI entities. Today, the law is not flexible, and the current civil and criminal laws face a growing responsibility gap as AI becomes more independent and unpredictable. The law in the near future is unlikely to be adaptable enough to accept robots under the law of agency; hence, AI systems cannot be held liable for their own actions. This poses a challenge to determining liability.

Despite these unique features of AI, which can unsettle the fundamental legal system, there is still some hope that the current legal system can ensure that the harm caused by AI systems is reduced without stifling innovation.

22 Bert-Jaap Koops, Mireille Hildebrandt and David-Olivier Jaquet-Chiffelle, 'Bridging the Accountability Gap: Rights for New Entities in the Information Society' (2010) 11 Minn JL Sci & Tech 49.
23 Calo (n 15).
24 Omri Rachum-Twaig, 'Whose Robot Is It Anyway?: Liability for Artificial-Intelligence-Based Robots' (2019) 2020 University of Illinois Law Review <https://ssrn.com/abstract=3339230> accessed 5 June 2019.


An impermeable definition of AI is not a sound option because of the constant development of AI technologies. Legal systems will instead have to adopt a working definition that can be updated over time. Any fixed legal definition would be either over- or under-inclusive and would not be sufficient for the purpose of affixing liability or for sound regulation.25 Similarly, the issues of autonomy and foreseeability need to be addressed by adjusting the current laws. Considering that AI poses these challenges and is a unique legal phenomenon, the next chapter examines how current legal systems and prevailing laws from different parts of the world can be used to determine who is responsible when AI causes harm.

Chapter 3: Responsibility and Liability


AI poses’ a challenge to the liability model which is largely based on causation,

foreseeability and control. It is difficult to analyze the strange behavior of AI due to the

complexity and self-learning behavior of AI system, making it even more difficult to determine

liability on a ‘fault based’ or ‘defect based model’. This highlights the most important question

as what is to be done in such situations. As there is no sound answer to this question, the current

study makes an effort to find a system to hold AI responsible and affix liability and hence, to

find a solution for these technical hitches, the laws will have to be strained, flexed and should

accommodate AI.

To assess the liability frameworks of the current legal system, the concept of a sliding scale needs to be understood: it determines the level of responsibility placed on a person by society.


25 Scherer (n 21).


'The current state of the art', that is, the level at which AI systems currently operate, also needs to be put in perspective. As noted above in Chapter 1, AI systems cover a wide range, from AI that makes limited, pre-defined decisions to AI that makes decisions autonomously, in ways untraceable by its programmers. Currently, AI is at a level where, even though it makes a pre-defined or an autonomous decision in response to an external stimulus, it is still controlled by its software developers.26 Professor Ryan Calo27 believes it will take roughly 10-15 years before humans can no longer control a robot. AI systems today have not reached a completely autonomous stage, and it is still largely possible to predict the manner in which they function. To determine liability, this section is therefore divided into two parts: part one discusses the various aspects of private law and part two focuses on criminal law.

3.1 Private Law



The private law obligations related to AI can arise from two sources: contracts and civil wrongs. Contract law is founded on an agreement, whereas civil wrongs arise when there is an infringement of the legal rights of one party. Liability can arise in three different situations:

First: the sale of a product connects parties such as a consumer and a manufacturer.

Second: contracts bind the parties directly.

Third: information is supplied to the user by a computer or AI system.

In the first case, the negligence and strict liability regimes are engaged; in the second, contract law applies; and in the third, negligence applies. In both contract law and tort law, the plaintiff bears the burden of proving a right to compensation from the defendant.


26 John Buyers, Artificial Intelligence: The Practical Legal Issues (Law Brief Publishing 2018).
27 Calo (n 15).


There are numerous categories of civil law, but this study focuses on negligence, strict liability, vicarious liability and contractual liability.

3.1.1. Strict Liability and Product Liability

Strict liability applies where a party is held liable regardless of fault.28 It is a theory under which a person is treated as responsible even without any mistake on their part. In the case of an aircraft mishap, for example, the owner of the aircraft will be liable for any loss or injury caused by the flight, no matter how careful the pilot was. Strict liability is mainly used to ensure that the producer, manufacturer or owner of a product takes precautions and that the victim is properly compensated.

Strict liability has not yet been imposed on the users of machine learning technology or AI, as there are no laws directly on point. One of the main branches of strict liability is product liability,29 and many jurisdictions30 have attempted to bring machine learning technology within its scope. To elaborate, inferences are drawn from the US Restatement of Torts31 and the EU's Product Liability Directive32 to define the product and apply the rules to AI systems. In the UK, under the Consumer Protection Act 1987,33 liability for defective products rests on the producer or the supplier. In order to hold a person liable under the product liability regime, it is necessary to define what the regime means by a product. The definitions under both the EU Directive and the US Restatement are well stated.

28 Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2018) 94.
29 Ryan Abbott, 'The Reasonable Computer: Disrupting the Paradigm of Tort Liability' (2016) 86 Geo Wash L Rev <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2877380##> accessed 20 July 2019.
30 United Kingdom, New Zealand, USA.
31 US Restatement of the Law Third, Torts: Products Liability.
32 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products OJ L 210/29–33 (Council Directive for Products).
33 Implementing Directive 85/374/EEC on Product Liability.


Reading the two definitions together, if there is any defect in the product, or if harm is caused by the product because proper warnings about its reasonable use were not issued to consumers, responsibility falls on the seller or producer of the product. Fault thus attaches to the defective object rather than to the individual, whether the complaint concerns misrepresented instructions or a failure to warn consumers about the reasonable use of the product.

On these definitions, it is fairly easy to apply product liability to AI systems. In the recent fatal Tesla crash,34 the car was running on autopilot technology; the technology failed to recognize a truck trailer crossing the car's path and caused a crash that killed the driver. The failure of the autopilot technology was treated as a defect in the product even though the car itself was otherwise sound; thus, the product liability regime can be applied to autonomous vehicles, and the manufacturer or the supplier will be liable under it. Essentially, product liability is an adequate regime because, however intelligent or autonomous the product is, it is still ultimately manufactured by manufacturers and sold to consumers.35

Applying the strict liability model gives certainty about liability under the regime. In simple terms, it is certain that the producer will compensate the consumer or victim for the injury. From the perspective of the injured party, he does not have to identify the responsible individual from among all the possible parties, such as the software developer or the expert who approved the use of the software; the burden is placed on the supplier or producer to pursue the remaining parties. From the producer's viewpoint, the cost of the calculated risk can be included in the price of the product.


34 Danny Yardon and Dan Tynan, 'Tesla Driver Dies in First Fatal Crash While Using Autopilot' (The Guardian, 2016) <https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk> accessed 29 July 2019.
35 Turner (n 28) 97.


In doing so, the producer can disclose all the risk factors and calculations to the consumer using the technology, for example in the prospectus of the product. This is beneficial to both parties.

In addition to certainty, strict liability encourages the producer to apply the utmost safety controls and precautions while creating or manufacturing the product. By taking precautions, producers become well aware of the risks and can ensure they are mitigated. When an AI functions in an unforeseeable manner, they are the people best placed to understand and control the risk.36

While the product liability regime is useful for determining who is liable, it has various shortcomings that make it difficult to apply to AI systems.37 One shortcoming is the uncertainty over whether the rules' definition of the term 'product' includes services or intangible things.38 The embodiment characteristic of AI makes it difficult to affix liability. For example, an AI system embedded in a car is considered a product, whereas cloud-based AI is not, because the cloud is provided to users as a service rather than a product and therefore falls outside product liability. The product liability regime revolves around products, not services; consequently, it is difficult to apply without first determining whether the AI is a good or a service.39 In addition, digital technologies rely on generating and processing data, and the provision of data to AI systems through the IoT or other digital technologies is considered a service, which again makes product liability difficult to apply.

36 Horst Eidenmüller, 'The Rise of Robots and the Law of Humans' (2017) 27 Oxford Legal Studies Research Paper 8 <https://ssrn.com/abstract=2941001> accessed 9 July 2019.
37 Turner (n 28) 95.
38 Woodrow Barfield, 'Liability for Autonomous and Artificially Intelligent Robots' (2018) <https://doi.org/10.1515/pjbr-2018-0018> accessed 10 June 2019.
39 Bertolini (n 11).


Because of these discrepancies in the definition, there is a gap in responsibility.40 The Supreme Court of Wisconsin41 tried to resolve this gap by holding that strict liability can apply even to an intangible entity such as electricity. Although electricity is intangible, it is produced by people and is distributed and sold to consumers, so even if it resembles a service, the court treated it as a product. The same analogy can be applied to software. Even after this decision, opinions around the world still differ. The EU, through its Commission documents42 and staff working documents,43 is discussing these liability issues arising from technological development. The Commission Staff Working Document on Advancing the Internet of Things highlighted the liability aspects and also took the view that providing data through an IoT system is a service and not a product.

The issues surrounding the definition of a product are not confined to the EU and the US; they arise in other countries as well. In Japan, Fumio Shimpo has stated that the Japanese Product Liability Act is insufficient because it fails to include services and software in its definition of products.44 For that reason, the software used in a robot may be considered a service while the robot itself is a product, and if the fault lies in the software, the strict product liability regime becomes inadequate. The second, and most important, shortcoming of product liability is the reliance it places on foreseeability.



40 Mathias (n 1).
41 Ransome v Wisconsin Electric Power Co 87 Wis 2d 605 (Wis 1979).
42 European Commission, concerning Liability for Defective Products <https://ec.europa.eu/smart-regulation/roadmaps/docs/2016_grow_027_evaluation_defective_products_en.pdf> accessed 11 August 2019.
43 European Commission, Staff Working Document 'Advancing the Internet of Things in Europe' accompanying the Document Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, Digitising European Industry: Reaping the Full Benefits of a Digital Single Market, SWD/2016/0110 final.
44 Fumio Shimpo, 'The Principal Japanese AI and Robot Strategy and Research Towards Establishing Basic Principles' (2018) 3 Journal of Law and Information Systems.


As noted above, there is a difference between AI that simply computes possible results or evaluates rules and AI that solves problems dynamically by learning from data. The foreseeability issues arise with the latter. If an AI program is a black box, the decisions and predictions it makes are produced without the reasoning behind them being accessible.45 Black-box AI essentially means that the functioning of the AI lies outside the understanding, and the foresight, of its creator. The EU Directive is not clear on how to determine liability in the absence of foreseeability.

On the one hand, Article 7(e) of the Directive exempts the producer where 'the state of scientific and technical knowledge at the time when he put the product into circulation was not such as to enable the existence of the defect to be discovered'.46 On this reading, it is clear that a producer will not be held liable for acts of the AI that were unforeseeable.

On the other hand, Article 1 states that '[t]he producer shall be liable for damage caused by a defect in his product.'47 The operation of Article 7 reduces the strictness of Article 1: risks that go beyond scientific understanding should not be imposed on a person who could not reasonably foresee them.

To overcome these challenges, it has been noted that with constant monitoring of the AI during the development and testing stage, the software developer can foresee the harm.48 Even if the AI is created to learn and reach dynamic solutions on its own, there may be recognizable patterns in how the AI system interacts with its environment when determining a solution. Constant monitoring can mitigate the risks, yet there are situations in which the AI was created for a purpose, performed exactly as it was created to, and still caused injury, as in the Tesla case49 discussed previously.

45 Yavar Bathaee, 'The Artificial Intelligence Black Box and the Failure of Intent and Causation' (2018) 31 Harv J L & Tech 88.
46 Council Directive for Products (n 32).
47 ibid.
48 Barfield (n 38).


Even in these situations, the strict liability approach can be applied. To elaborate, the data put into the AI system comes from the software developer, and even if the AI learns on its own, it acts on the data fed into it rather than on any intuition of its own. This approach is further supported by the EU Civil Law Rules on Robotics,50 which reflects the current state of the art: it notes that AI systems have not yet reached a completely autonomous stage and that robots or AI systems therefore cannot function entirely without human support somewhere. With constant monitoring and reporting of the AI technology's behavior, it would be a satisfactory solution to hold the producer responsible, because he is the person who created the program in the first place.51

Nevertheless, the producer can avoid this liability if he issued warnings to consumers and took reasonable care. The producer of a digital technology constantly updates the software even after the product has been placed on the market. As stated above, the software is code that affects the functionality and behavior of the AI technology. There are situations in which the software is updated or patched, either by the producer of the AI technology or by a third party, and this affects the safety of the technology. The new or updated software contains new code, which adds or removes features and can change the risk profile of the AI.52

In situations where a fully automated vehicle causes damage, liability would fall on the driver under the civil law rules and on the manufacturer under the Product Liability Directive. Strict or product liability rests essentially on the principle that the person who created a risk for his own benefit should be liable for any damage arising from that risk.


49 Danny Yardon and Dan Tynan, 'Tesla Driver Dies in First Fatal Crash While Using Autopilot' (The Guardian, 2016) <https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk> accessed 29 July 2019.
50 Civil Law Rules on Robotics (n 3).
51 Barfield (n 38).
52 ibid.


As an exception to this rule, certain national liability schemes53 provide that the owner can avoid liability if he exercised reasonable care and did everything possible to avoid the harm. The same analogy can be applied to AI or robots: even if the owner of the AI used and maintained it appropriately, followed the rules set out by its creators and constantly updated the software, there would still be situations in which the robot acts autonomously and causes damage. Given this autonomy, it would be difficult to hold that person liable for the autonomous behavior of the technology, and the manufacturer or owner could thus avoid liability.

It is not possible to abandon liability under strict liability entirely, since the damage resulting from AI systems still needs to be addressed. Hence, the US judge Curtis Karnow suggests that an insurance scheme is a suitable method for dealing with cases of AI liability.54 The insurance scheme ensures that the injurer, regardless of his knowledge or fault, compensates the injured party for the loss or damage incurred. This approach has already been adopted by the UK in the Automated and Electric Vehicles Act 2018, which puts autonomous vehicles on the same footing as ordinary road vehicles.55 Applying insurance law makes it easier to ascribe liability because the unpredictability of AI is not a problem for insurers. New Zealand has applied its No Fault Accident Compensation regime to AI.56

These approaches are limited and, given the shortcomings just described, cannot be applied to all kinds of AI. Both schemes ensure that the injured party will be compensated, but each faces obstacles.

53 The Consumer Protection Act in the UK.
54 Curtis EA Karnow, 'Liability for Distributed Artificial Intelligence' (1996) 4 Berkeley Technology Law Journal 147.
55 UK Automated and Electric Vehicles Bill 2017-19, s 2.
56 Accident Compensation Act 1972 (NZ). This scheme compensates the victims of an accident regardless of who was at fault. The Accident Compensation Corporation pays the damages; the money for the corporation is raised from the relevant constituencies, mostly through taxes.


With insurance schemes, it must be recognized that insurers would exclude liability where the AI functions in a manner outside its limited intended range, for example where a delivery robot is made to undertake the work of a concierge. If the AI were truly unpredictable, it would therefore be problematic to price the risk of damage. Likewise, New Zealand's compensation scheme is better suited to smaller economies and is limited solely to physical harm.57 In addition to insurance schemes, there is another solution, 'transparency', which will be discussed under negligence.

Given the current state of AI development, insurance schemes are best suited58 to situations such as autonomous vehicles or medical products, and supporters of strict liability argue that AI companies should bear greater responsibility because, through safety protocols and quality assurance, they are better placed to avoid defects. Nevertheless, this would be expensive for consumers, as companies would raise prices to cover the insured risk.59 Even so, it is a better approach, especially in the case of autonomous vehicles, and more countries are therefore discussing the application of this rule to certain AI technologies.

Strict liability for products applies only to a minority of technologies, because the whole purpose of AI is continued learning, and applying stricter liability would hamper innovation.60 Products are fixed in nature, whereas constantly changing AI is more likely to be characterized as a service. The laws pertaining to product liability are uncertain and leave a large responsibility gap. Strict liability is a poor solution, as one of the ways of making such harsh liability work for AI turns on the concepts of agency and personhood.

57 Turner (n 28) 103.
58 Kenneth S Abraham and Robert L Rabin, 'Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era' 105 Va L Rev.
59 Lawrence B Levy and Suzanne Y Bell, 'Software Product Liability: Understanding and Minimizing the Risks' (1989) 5 High Tech LJ 1.
60 Chris Reed, 'How Should We Regulate Artificial Intelligence' (2018) 376 Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences <https://doi.org/10.1098/rsta.2017.0360> accessed 15 June 2019.


In addition, even though strict liability does not require fault and could therefore accommodate the unexpected behavior of AI, foreseeability is still badly needed, and because of AI's black box it is difficult to explain why the AI went wrong. One cannot rely on strict liability without being able to hold the creator or user responsible for the acts committed by the AI. Chris Reed supports this argument and observes that strict liability is not as strict as it appears: the defendant can still invoke foreseeability, and fault remains crucial in determining liability.61 Even the EU rules contradict one another, making the strict liability regime lenient. Despite the complexities of affixing liability under strict liability, it seems that the existing liability patterns might be the more achievable solution for lawmakers, serving the goals of harm correction and bringing the new technologies within the scope of traditional tort law and civil liability rules.62 Strict liability regimes already apply to autonomous vehicles and might extend to medical practice, but they cannot cover other fields of AI such as Big Data technologies. Hence, a better solution is to focus on negligence.63

3.1.2 Negligence

Unlike strict liability, the law of negligence relies on fault. To bring cases within the ambit of negligence, judges examine the way in which people make decisions: they ask whether the decision was made with proper care or whether an unreasonable risk was created.64


61 ibid.
62 Ioannis Revolidis and Alan Dahi, 'The Peculiar Case of the Mushroom Picking Robot: Extra-contractual Liability in Robotics' in Marcelo Corrales, Mark Fenwick and Nikolaus Forgó (eds), Robotics, AI and the Future of Law (Springer 2018) 123.
63 David C Vladeck, 'Machines without Principals: Liability Rules and Artificial Intelligence' (2014) 89 Wash L Rev 117.
64 Andrew D Selbst, 'Negligence and AI's Human Users' (2019) Boston University Law Review <https://ssrn.com/abstract=> accessed 10 June 2019.
65 Donoghue v Stevenson [1932] AC 562.


In the landmark case of Donoghue v Stevenson,65 the court held that the producer of a bottle of ginger beer owed a duty of care to a woman who fell ill after drinking it. The bottle contained a dead snail, and the court decided that it was the responsibility of the producer to compensate the woman, even though there was no direct contract between them. The concept of the duty of care was further developed in Caparo Industries v Dickman and Others,66 where it was held that for a duty of care to arise the harm must be foreseeable, there must be a relationship of proximity connecting the defendant, the claimant and the circumstances, and it must be fair and reasonable to impose the duty.

For negligence to be applicable to AI systems, inferences can be drawn from these cases. Liability in negligence arises when there is a breach of a duty of care, and it is therefore appropriate to apply the principles of negligence to loss suffered by a person as a result of a decision made by an AI. The AI system, or the user of the AI technology, has a duty of care and the ability to foresee the harm.67 For the law of negligence to apply smoothly, courts in the UK require answers to four questions:

1. Was any person under a duty of care to prevent the harm?

2. Was that duty breached?

3. Did the breach of duty cause damage?

4. Was the harm or damage reasonably foreseeable?68

Similar rules apply in other legal systems, including those of France, China and Germany.69 These questions make it clear that liability in negligence arises where there is a duty of care and that duty is breached, resulting in damage to another person.


66 Caparo Industries v Dickman and Others [1990] 2 AC 605.
67 Emily Barwell, 'Legal Liability Options for Artificial Intelligence' <https://www.bpe.co.uk/why-bpe/blog/2018/10/legal-liability-options-for-artificial-intelligence/> accessed 23 July 2019.
68 Curtis EA Karnow, 'The Application of Traditional Tort Theory to Embodied Machine Intelligence' in Ryan Calo, Michael Froomkin and Ian Kerr (eds), Robot Law (Edward Elgar 2015) 53; David G Owen, 'The Five Elements of Negligence' (2007) 35 Hofstra L Rev 1671.
69 Turner (n 28) 84.


Thus, various domains of AI70 are now being regulated through negligence, because it allows courts to answer these questions and to compensate the victim for a breach caused by an AI system.71

Additionally, as AI systems are not given the status of a legal person, they are not responsible for their own actions. Liability in these situations would therefore rest on several people, such as the owner, the manufacturer and the designer who trained and designed the system.72

Negligence appears to be a viable solution,73 and there are various benefits to applying it to AI. One of the main benefits of negligence is its flexibility. The level of precaution that a person is required to take in order to prevent harm differs in each system, and the duty can expand or contract according to the level of precaution required. For example, in the UK, judges may be more lenient towards a risky AI that is beneficial74 to the public than towards a dangerous AI75 with very little public benefit.76 A police officer driving fast and unsafely in pursuit of a criminal is less likely to be liable in negligence than a person driving recklessly for fun. This encourages the owners, designers and operators of AI to take additional safeguards where the AI is likely to cause more harm. This balanced approach between creators and lawmakers through the use of negligence is beneficial for development and innovation while respecting the importance of the law.77


70 Medical malpractice, partially autonomous car accidents and data security.
71 Abbott (n 29).
72 Barwell (n 67).
73 Reed (n 60).
74 AI in medicine.
75 Killer robots.
76 Turner (n 28) 87.
77 ibid.


The second benefit of negligence is that there is no fixed list of persons who can be sued.78 This matters because AI can interact with many people, and the person affected may be someone who is not a party to any contract with the AI's owner, controller or creator. For instance, an AI delivery drone that creates and adapts its own route without human input may come into contact with many people on the way to its destination, and any of them can bring a claim in the event of injury.79

The third benefit of negligence is the possibility of both voluntary and involuntary duties.80 A voluntary duty may arise from a dangerous activity or an intentional act of a person that gives rise to potential liability. The involuntary character of negligence is beneficial because it encourages the subjects of any legal system to be more cautious and considerate towards others. This ensures that the developers of AI owe a duty to the public and are not purely seeking to maximize profit.

For these reasons, the law of negligence appears well suited to courts assessing claims involving AI.81 However, some limitations arise in applying it. One of the main limitations is determining whether the notion of the reasonable person extends to the user of the AI or to the AI itself. The central question in the negligence regime is whether the defendant behaved as a reasonable person would have behaved in the same situation, and this is difficult to establish when the acts are committed by AI technologies. To apply the concept of the reasonable person, one would have to ask what a reasonable human, or a reasonable designer of the AI, would have done in a similar situation.


78 ibid 88.
79 ibid.
80 ibid.
81 Abbott (n 29).

This is not a long-term option, because one of the main challenges of AI is that it can operate using real-time data. Moreover, an AI system may be used for the exact purpose for which it was created and still cause unanticipated harm; hence it is not easy to fix liability on the designer or user.

To overcome these issues, Ryan Abbott proposes the concept of a 'reasonable computer': 'If a manufacturer, operator or designer or retailer can show that an autonomous computer, robot is safer than a reasonable person then the supplier should only be liable in negligence rather than strict liability for harm caused by the autonomous entity.'82 The proposal is that the negligence test should be applied to the computer rather than to a human: instead of asking what a reasonable designer would do, the question becomes what a reasonable computer would do. Though this is a well-developed idea that would shift the regime from strict liability to negligence, it is difficult to apply because, just as every human is different, the understanding and processing of each computer may differ. Most laws and their applications today rely on the way humans operate, and it would be inappropriate simply to transpose them onto artificial objects. Therefore, the concept of the reasonable computer may be applicable, but it is not entirely useful, because it still requires some analysis of why particular decisions were made by the AI technology.

The second limitation of negligence is its reliance on foreseeability and autonomy. The concept of foreseeability is woven into the definition of negligence. When a person's conduct is scrutinized under the law of negligence, two questions arise:

1. Was it foreseeable that the person could be harmed?

2. What kind of damage was foreseeable?

These questions are difficult to answer because of the emergence and unpredictability that AI exhibits. In terms of predictability, there are two kinds of AI systems.

82 ibid.


The first makes decisions that go beyond human understanding but are mainly the result of the information inserted by the programmers; for instance, the Deep Blue chess program made moves that went beyond basic human chess strategy.83 The second comprises AI systems that self-learn and solve problems without the need for human intervention; in reinforcement learning, for example, robots acquire the ability to perform tasks in a given environment. Situations in which the AI performs tasks on its own, without human intervention, pose a challenge to negligence: such acts are unpredictable and make it difficult to affix liability. Because of this autonomy and unpredictability, it is also difficult to establish a link between the human and the series of events leading to the damage.84 For a human tortfeasor to be held liable in negligence, his act must be legally linked to the harm caused,85 and the inexplicable behavior of AI breaks the causal nexus between the user of the AI and the victim of the injury. On the contrary view, the mere decision to use AI should be a sufficient causal basis for bringing a claim against the injurer.

In addition, to affix liability there must be a breach of the duty of care, a failure to act as a reasonable person, and the damage must be foreseeable. In determining fault, there has to be a fair understanding of how far the risk was foreseeable; if that question can be answered, the human decision-maker can be held liable.86

However, the designer can seek to avoid liability by invoking the concept of a superseding cause,87 arguing that the decisions of the AI were unforeseeable because the system learns on its own and is no longer under the designer's control once it has left his care.

83 Bruce Pandolfini, Kasparov and Deep Blue: The Historic Chess Match Between Man and Machine (Fireside 1997) 7–8.
84 Weston Kowert, 'The Foreseeability of Human - Artificial Intelligence Interactions' (2017) 96 Tex L Rev 181.
85 Richard W Wright, 'Causation in Tort Law' (1985) 73 Calif L Rev 1735.
86 Reed (n 60).
87 An intervening force or act that is deemed sufficient to prevent liability for an actor whose tortious conduct was a factual cause of harm.


If an AI system is created with reasonable care by the designer, is sold to the consumer, and after the sale acts in an unpredictable manner, the designer can avoid liability on the grounds that those acts were unpredictable and outside his control.

Against this, Matthew U Scherer88 argues that the designers intended the AI to behave in ways they could not foresee, and that this intention is a sufficient cause even if the specific unforeseen act was not itself intended. According to the current state of the art, AI still has a limited scope, and if legal systems do not hold the designer liable for the unforeseeable harms caused by AI systems, it will be difficult to compensate the victims of those harms. Thus, in litigation such as that concerning the Da Vinci surgical robot,89 the designer or user can be held responsible, because he is best placed to understand the unforeseeable acts and he made the decision to use the product. This is not a permanent solution, because AI is constantly developing and the questions raised by AI's black box will remain difficult to resolve. In the interim, therefore, the courts can approach negligence through two concepts: 'transparency' and 'res ipsa loquitur'.

Transparency is a very practicable solution for determining responsibility under both negligence and strict liability.90 It is closely tied to the concept of explainability. According to the EU Guidelines, transparency includes traceability, explainability and communication.91 Traceability means being able to detect the processes and databases that led the AI to a particular choice.

88 Scherer (n 21).
89 Daniel O'Brien v Intuitive Surgical Inc [2011] United States District Court, ND Illinois, Eastern Division, 10 C 3005.
90 Bathaee (n 45).
91 European Commission, 'Ethics Guidelines for Trustworthy AI' (High-Level Expert Group on AI 2019) <https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai> accessed 10 March 2019 (EU Guidelines on Trustworthy AI).


Explainability means the ability to understand the technical side of the AI's decision-making process92 and the justification for the related human decisions. Communication means being transparent about the respective characteristics of the AI and the human.93

Transparency can be achieved in two ways: ex ante and ex post. Ex ante transparency exists where the decision-making process can be explained before the AI is used; ex post transparency exists where the decision process is unknown at the outset but can be understood by testing the performance of the AI retrospectively in similar surroundings. In determining negligence, courts focus on ex post transparency to decide liability: the evidence gathered through transparency is used to establish the breach, the lack of care and the question of reasonableness.94 The courts can then reason through the matter and hold the right person liable. Although this concept appears to be a viable way of locating responsibility, the appropriate level of transparency differs between industries. For example, where an investor invests through an AI system whose logic and decision-making process is not easy to understand, the duty to inform is more important than it is for a robotic vacuum cleaner used by a consumer. Although transparency thus depends on the industry, it is not easy to obtain. It is also difficult to decide whether the transparency obligation should be imposed on manufacturers, given the lack of real-life examples of AI, and even if it were imposed, it would be harder still to determine which kind of transparency (ex ante or ex post) is required. Applying the transparency obligation is, moreover, expensive and difficult because it conflicts with the manufacturers' interest in protecting their intellectual property.95


92 The General Data Protection Regulation 2016, 2016/679, s 22.
93 DLA Piper <https://blogs.dlapiper.com/iptitaly/2019/06/fintech-who-is-responsible-if-ai-makes-mistakes-when-suggesting-investments/> accessed 1 August 2019.
94 ibid.
95 EU Directive 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure OJ L 157/16 1-18.


Even with these drawbacks, negligence is capable of adapting to new problems over time, as it has proved with respect to other technologies, and it would therefore be practical to limit regulation and use transparency as an interim solution.

In addition, the tort law principle of res ipsa loquitur, 'the thing speaks for itself', can also serve as an interim solution. Both the EU and the US accept this doctrine. Under it, the defendant can rebut the essential elements of negligence by showing that his conduct was not negligent. If the harm in question is unexplainable and uncommon, the doctrine cannot be applied; however, it can be applied where there are multiple consecutive inexplicable failures for which no explanation exists. An example is the Toyota Motor Corporation litigation in the US.96 It was discovered that Toyota's Lexus model accelerated for no reason, even after the driver intervened, and thorough investigation could not identify the failure. The Oklahoma court settled 400 Toyota cases, awarding 3 million dollars in damages, by applying the doctrine of res ipsa loquitur.97 The doctrine can therefore only be applied where there is commonality between the potential plaintiffs; if the incident is rare and isolated, it cannot apply.98 Hence, the two solutions, transparency and res ipsa loquitur, permit lawmakers to adapt negligence and apply it to AI.

Negligence is an ideal liability regime for determining liability for AI-caused harms. Its benefit is that it would ensure that producers and manufacturers take due care and precaution to avoid liability, and it allows regulators to determine liability. However, the basic challenge to negligence is foreseeability, which jurisdictions such as the EU and the US99 are trying to resolve.


96
In re Toyota Motor Corp. Unintended Acceleration Mktg 785 F Supp 2d 925 (C.D Cal 2011).
97
John Buyers, ‘Liability Issues in Autonomous and Semi-Autonomous systems’ (2015)
<https://www.osborneclarke.com/media/filer_public/c9/73/c973bc5c-cef0-4e45-8554-f6f90f396256/itech_law.pdf>
accessed on 15 July 2019.
98
Brandon W Jackson, 'Artificial Intelligence and the Fog of Innovation: A Deep-Dive on Governance and the
Liability of Autonomous Systems' (2019) 35 Santa Clara High Tech. L.J.
99
EPRS| European Parliamentary Research Service, 'Artificial Intelligence Ante Portas: Legal & Ethical Reflections'
(Scientific Foresight Unit (STOA) 2019) 3.

27

Though the existing law of negligence is insufficient and may be a sub-optimal option, the use of negligence together with transparency and res ipsa loquitur can be sufficient for the current scenario. Further, tort law also recognizes the concept of vicarious liability where negligence and strict liability are sub-optimal options.

3.1.3 Vicarious Liability

The present legal system has various mechanisms that can make one person (the principal) responsible for the actions of another (the agent, or here the AI technology). Today the employer and employee relationship revolves around the concept of vicarious liability.100 Vicarious liability can also arise in relationships such as parent and child or teacher and student. The principal is responsible where the agent causes harm to another person. The concept of strict liability can be applied to vicarious liability because the behavior of children or pet animals can often be harmful to others, and their acts are attributed to the parent or owner. For example, if a horse eats all the crops on the neighbor's farm, the owner of the horse would be liable to compensate the neighbor for the damage. Likewise, the same analogy of social valence, mentioned above, is applied to computers.

Considering the employer and employee relationship under vicarious liability, the computer would be treated as the employee and the owner as the employer, making the owner vicariously liable for the acts of the computer.101 For example, if the police force uses a robot for patrolling and the robot causes harm to a civilian, the police force will be vicariously liable for the acts of the patrol robot. The mere use of the robot is enough to invoke agency law, even if the police had not created the robot. Similarly, the UN Convention on the Use of


100
Turner (n 28).
101
Abbott (n 29).

28

Electronic Communications in International Contracts states, “that a person (whether a natural

person or a legal entity) on whose behalf a computer was programmed should ultimately be

responsible for any message generated by the machine."88 Therefore, the liability imposed on the principal arises not from the act itself but from the principal's connection with the wrongdoer. The Civil Law Rules on Robotics support this analogy and state that vicarious liability for robots is similar to strict liability, and that there should be a definite link between the damage suffered and the behaviour of the robot in order to hold the principal liable.102

Taking into consideration the above attempts to identify responsibility and liability, it is reasonable to assume that the owner or principal of the AI would be liable.103 Under the law of agency, the concept of vicarious liability gives legislators an appropriate basis for assigning responsibility for any injury caused by the use of AI technologies.

The main purpose of applying vicarious liability to AI systems is that it strikes a balance between accommodating the independent agency of the AI technology and identifying a known legal person (the principal or owner) who is liable for its acts. Giving the AI technology the status of an agent simplifies the liability issues, making the end user or the licensee of the software responsible for any wrongful acts committed by the agent. This reasoning was applied in In re Ashley Madison Customer Data Sec. Breach Litigation,104 which concerned claims arising from a data breach on the Ashley Madison website that led to a large-scale distribution of user information. It was alleged that 'bots' or 'hosts' used on the website posed as fake women to entice male members to make purchases from the website. So, it was


102
Civil Law Rules on Robotics (n 3).
103
Leon E. Wein, ‘The Responsibility Of Intelligent Artifacts: Toward An Automation Jurisprudence’ (1992) 6
Harv JL & Tech 113.
104
In re Ashley Madison Customer Data Sec. Breach Litigation 148 F Supp 3d 1378 1380 (JPML 2015).

29

held that the agency theory is better suited to AI-based technology or robots than allocating liability to the robot itself.

Despite the benefits of applying vicarious liability to AI, the approach has certain shortcomings that are likely to act as hurdles to its smooth application. As stated, AI has the ability to emerge and learn on its own. AI agents are thus capable of working on their own without any human intervention or authorization. Liability for AI technologies is accordingly limited to a definite set of activities conducted by the agent, so not every act of the AI can be traced back to the owner or user. Under the basic principle of vicarious liability, the principal is liable only for acts of the agent committed during the course of the agency; acts committed outside it cannot be attributed to the principal. The relationship between principal and agent can therefore break down, making it difficult to assign liability. The question that arises is whether the AI agent departs from the principal when it diverts from its assigned tasks and functions on its own. Legislators will have to assess the emergence of AI agents and decide whether agency law is an appropriate basis for affixing liability for harm caused, whether on the agents themselves or on the principal.

Currently, the law of agency is not sufficient to hold AI entities liable for their own acts. This is further supported by Paul Burgess, who argues that the laws today are not well equipped to give robots the status of legal agents.105 Therefore, the human user or owner will be liable for the acts of the AI technology.106

So, according to the definition of vicarious liability, the principal is liable for the acts of


105
Jiahong Chen and Paul Burgess, ‘The Boundaries Of Legal Personhood: How Spontaneous Intelligence Can
Problematize Differences Between Humans, Artificial Intelligence, Companies and Animals’ (2018) 2
https://doi.org/10.1007/s10506-018-9229-x accessed on 10th August 2019.
106
Calo (n 15).

30

the agent within the scope of its employment. Thus, as AI systems learn and become more interactive, the courts will have to adapt to the given circumstances. There is no one-size-fits-all approach with respect to AI technologies, and it will be a long time before they can be treated as legal persons. Vicarious liability is very similar to strict liability; the principal or the user would therefore be liable, without fault of their own, in the same way as for their pets or children. Moreover, the concept of agency can be extended to contracts, creating contractual liability.

3.1.4 Contractual Liability

For determining contractual liability, the definition of a contract is important: contracts are agreements enforceable by law.107 Contractual liability arises when two or more parties enter into a formal agreement that determines who will be legally responsible in case of a breach. The same concept applies to AI and to acts committed by it. The Uniform Commercial Code (UCC) protects parties against damage suffered from the product purchased. There are express warranties created by the seller pertaining to the product; these warranties or promises give information about the product. Alongside express warranties, there are also certain implied warranties with which sellers must comply. The Consumer Rights Act 2015 and the Sale of Goods Act 1979 include implied terms such as quality, fitness and description, which correspond to the buyer's expectations of the product. On a breach of warranty, the buyer has the right to sue for the failure of the product or of its terms, and may claim for both direct and indirect harm.108 Similarly, AI software or hardware developers enter into warranty contracts with buyers and are obligated to adhere to their terms. It is the responsibility of the software developer


107
Section 2(d) of Indian Contracts Act.
108
Michael Callier and Harly Callier, ‘Blame It On The Machine: A Socio-Legal Analysis Of Liability In An AI
World’ (2018) 14 WASH. J.L. TECH. & ARTS 49.

31

to inform consumers about new updates and any change in the level of safety; the developer can be sued in case of a breach.

The benefit of applying contract law to AI is that the parties, having agreed upon the terms and conditions by common consensus, have the freedom to allocate the risks between them. Liability between the parties depends on the agreement they have made, and if the terms clearly provide for indemnification in the event of any harm caused by the AI software, the seller is bound to indemnify the buyer of the AI software. In the case of Volvo, its CEO Håkan Samuelsson announced that the company would accept liability for any autonomous acts of its cars.109 The announcement of a CEO may be deemed a contract according to the principle established in Carlill v Carbolic Smoke Ball Company.110 Hence, liability would rest on the company in case of a breach.

One shortcoming of this approach is that, in situations of breach of warranty, the recovery of damages is significantly low because AI companies limit their risks by including such limits in the terms and conditions, as seen in the Dell laptop warranty. The Dell laptop consumer warranty is restricted to a three-year recovery period, after which no claims, express or implied, can be made. Even with these limitations, it is fairly easy to assign liability under contract.

Another shortcoming of applying contract law to AI systems is that contracts bind only the contracting parties, so a person who is not a party to the contract cannot sue for damages or breach of contract. Liability is limited solely to the parties with whom the


109
Kristen Korosec, ‘Volvo CEO: We Will Accept All Liability When Our Cars Are in Autonomous Mode’ (2015)
<https://fortune.com/2015/10/07/volvo-liability-self-driving-cars/> accessed 29 July 2019.
110
Carlill v Carbolic Smoke Ball Company [1892] EWCA Civ 1 In the particular case, the medical firm had
advertised that if the new drug would not cure the flu, the buyer would receive 100 pounds. This held in courts to be
a serious offer.

32

contracts have been concluded. For instance, if a delivery drone damages the property of a person on its way to its destination, that person is not a party to the contract and hence cannot sue the manufacturer or the designer for the loss incurred.

Until now, the contract considered was between two persons buying and selling AI technologies. There are, however, situations in which the AI enters into contracts by itself through the use of algorithms. There are various automated contractual systems in which artificial agents act on behalf of a principal and conclude contracts. Not all of these contracts involve human input: in some cases the AI enters into contracts entirely on its own. For example, Blockchain technology provides 'self-executing' contracts.111 It can be difficult to tell whether a contract was made by the AI itself or under human influence, and this poses a challenge to the legal system in deciding who will be held liable. There have been discussions about whether such contracts can be considered valid. To clarify this, efforts have been made by the United Nations Convention on the Use of Electronic Communications in International Contracts. Article 12, on the use of automated message systems for contract formation, provides:

A contract formed by the interaction of an automated message system and a natural

person, or by the interaction of automated message systems, shall not be denied validity

or enforceability on the sole ground that no natural person reviewed or intervened in each

of the individual actions carried out by the automated message systems or the resulting

contract.112

According to the U.S. Uniform Computer Information Transactions Act,113 these agents are treated as the principal's mere tools of communication. So, even if the agent concludes the contract,


111
Joshua Fairfield, ‘Smart Contracts, Bitcoin Bots, and Consumer Protection’ (2014) 71 WASH. & LEE L. REV.
ONLINE 35, 40-45.
112
United Nations Convention on the Use of Electronic Communications in International Contracts (23 November
2005) https://uncitral.un.org/en/texts/ecommerce/conventions/electronic_communications accessed 20 June 2019.
113
Uniform Computer Transactions Act (2000) <http://www.ucitaonline.com/ >accessed 3 August 2019.

33

the responsibility lies with the principal, and assigning liability to the principal is a suitable option for AI-based contracts. Even though the current law treats these contracting agents as mere tools working on behalf of the principal, there are various contracts that do not conform to contract law: contracts created by the autonomous acts of algorithms, reflecting the emergence characteristic of AI. Such contracts pose two challenges for applying the current law. First, algorithms create the possibility that the creator of the algorithm does not foresee the results; such black box algorithms produce situations in which the creator is unaware of the decisions made by the algorithm. Second, these algorithms act autonomously and can make decisions that would be illegal. It is important to note that contract law presumes all AI technologies are mere tools and does not grasp the gravity of the situation. Hence, contract law is currently insufficient to impose liability for the unforeseeable decisions made by algorithms.114
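A simplified, hypothetical sketch (in Python) illustrates why such contracts can be unforeseeable to their creator: the developer fixes only a bidding policy, while the specific contract the agent concludes depends on whatever offers arrive at run time and is never individually reviewed by a human. The names and values below are purely illustrative assumptions.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Offer:
        seller: str
        item: str
        price: float

    def contracting_agent(offers: List[Offer], budget: float, target_item: str) -> Optional[Offer]:
        """Accept the cheapest matching offer within budget, with no human review.

        The developer writes only this policy; which counterparty ends up bound
        depends entirely on the offers received at run time.
        """
        matching = [o for o in offers if o.item == target_item and o.price <= budget]
        return min(matching, key=lambda o: o.price) if matching else None

    # The agent concludes a contract the principal never individually approved.
    accepted = contracting_agent(
        [Offer("Seller A", "sensor", 90.0), Offer("Seller B", "sensor", 75.0)],
        budget=100.0,
        target_item="sensor",
    )
    if accepted is not None:
        print(f"Contract concluded with {accepted.seller} at {accepted.price}")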

The self-learning feature of AI poses a challenge to the basic principle of contracts: agreeing upon terms. First, these difficulties arise because the definition of a contract presupposes two parties; as artificial agents are not considered legal persons, there is in effect only one party, either the buyer or the seller.115 Second, there is no guarantee that the AI will enter into a contract on the terms predetermined by the company.116 If the AI makes contracts that differ from the company's terms, it becomes difficult to enforce the contract and to hold the company liable. As a result, a company that uses AI-based contracts has certain set expectations, and when the decisions made by the AI contracts do not correspond to them, the company may seek to avoid liability on the

114
Lauren Henry Scholz, 'Algorithmic Contracts' (2017) 20 STAN TECH L REV 128.
115
Samir Chopra and Laurence White (n 7).
116
Ian R. Kerr, ‘Providing For Autonomous Electronic Devices In The Uniform Electronic Commerce Act’ (1999).

34

grounds that this was not the company's intention and that it is not bound by the contract.117

Current AI technologies have not developed to the point of acting completely autonomously, and even with black box algorithms there are three possible solutions that could help determine liability. First, the company can monitor the real-time data of the contracts and understand why they behave in a particular way. If companies comply with the principle of transparency along the lines proposed by the EU,118 it would be easier to provide all the information that is essential for a user of AI technology. Additionally, if there is a contractual relationship between the consumer of the AI system and the supplier, the supplier should inform the consumer about the limitations and abilities of the AI technology and about the applicable liability rules. Once this transparency is achieved, it would be easier to assess the enforceability of the contracts and to apply contractual liability. Second, by applying agency law, the principal can take out insurance against any uncertain behavior of the algorithm. With this approach, there is a guarantee that the victim will be compensated in case of any injury or harm caused by AI. Third, human approval can be required for each transaction. Where there is human approval, the arrangement falls under the principles of ratification, and every act of the computer or AI technology is ratified as if the principal had already approved it.119
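The third solution, human approval and ratification, can be pictured with a short, hypothetical sketch (in Python): the AI agent only proposes a transaction, and the principal's ratification is required before it is executed. The threshold and field names are illustrative assumptions rather than any legal standard.

    def requires_ratification(transaction: dict, threshold: float = 500.0) -> bool:
        # Illustrative policy: high-value transactions need the principal's approval.
        return transaction["value"] > threshold

    def execute(transaction: dict, principal_approves: bool) -> str:
        # The agent only proposes; ratification by the principal makes the act binding.
        if requires_ratification(transaction) and not principal_approves:
            return "held for review: no ratification from the principal"
        return "executed on behalf of the principal"

    # Example: a high-value transaction is held until the principal ratifies it.
    print(execute({"value": 750.0, "counterparty": "Vendor Ltd"}, principal_approves=False))
    print(execute({"value": 750.0, "counterparty": "Vendor Ltd"}, principal_approves=True))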

Contractual liability applies to manufacturers and retailers who do not meet the contractual standard.120 The liability of AI companies and users is limited by the terms of the contract and the law applicable to those agreements. When there is a breach of warranty, the buyer


117
ibid.
118
EU Guidelines Trustworthy AI. (n 91).
119
Samir Chopra (n 7).
120
Callier (n 108).

35

can sue the seller for damages. The biggest question here concerns contracts created by artificial agents. The various models, such as the UN Convention and the EU directive, consider computer contracts to be mere tools, thereby granting artificial agents only a limited scope of legal agency. Even with regard to the black box problem, there are three main methods that can ensure the determination of liability: transparency, insurance schemes under agency law, and human approval at each stage. Though transparency and human approval are both viable options, the application of agency law is the most suitable because it not only promotes AI technologies but also ensures that someone is held liable.121 It is the better approach because, given the discrepancies in contract law and its enforceability, companies may try to avoid responsibility simply by blaming the unforeseeable aspects of AI contracts. Lastly, contractual liability can be combined with extra-contractual liability, and the responsible party can be sued under both. When applying contractual liability, it is the manufacturer or owner who is held liable in case of any breach. Moreover, in situations involving contracts made by AI agents, the company that uses them or the developer of such technologies will be liable. Inferences from the strict liability regime are drawn, making the seller or principal liable without fault. This depends on the laws made by each country.122

In addition to contractual and extra-contractual liability, the law extends to criminal liability. These regimes may overlap, but criminal liability offers victims an additional recourse to ensure that justice is served. Civil liability relies on the standard of the reasonable person in assessing decisions, whereas criminal liability focuses on the mental intent behind the perpetrator's act. Civil liabilities are primarily monetary, while criminal liability requires a higher degree of fault and is punishment-based. Hence, in certain situations it may be essential to ascribe criminal liability instead of civil liability for acts committed by AI technologies.

3.2 Criminal Liability



A culpable act (actus reus) and the mental intention to commit the act (mens rea) are both essential elements for affixing criminal liability. This study focuses on the ways in which a human can be held criminally liable for the acts of AI systems. Professor Gabriel Hallevy simplifies the assignment of responsibility to humans for the acts of their AI systems through two classifications: 'Acts of an Innocent Agent (Perpetration-via-Another)' and 'Natural Probable Consequence (Vicarious Criminal Liability)'.123

Under the first category, the AI is deemed to be an innocent agent. The AI system is given a status similar to that of a person whose thought process is limited, such as a child or an animal, because, like a child or a domesticated animal, the mens rea of the AI entity is limited. In situations where the AI commits a crime, the owner or the user will be criminally liable for instructing the AI system. By this analogy, these systems are treated as innocent agents; in other words, the AI is used as an intermediary while the perpetrator orchestrates the offence. For instance, if the programmer of an aircraft writes a program to eject the pilot out of the cockpit and the AI system follows the instruction, the programmer will be held liable for the acts of the system. With this approach and the vicarious liability regime, the programmer can be held liable for any criminal acts.

Under the second category, humans are held criminally liable even if the act was a natural


123
Hallevy (n 6).

37

probable consequence of their conduct. If the AI was created with a moral intention and purpose and it still performs a criminal act, the accomplice is held responsible for the act of the AI system. 'Natural or probable consequence' means that collaborators are liable for the crime. This category is based on the ability of the programmer to anticipate a potential threat, and liability is thus grounded in the principle of negligence: the software developer or user should have foreseen the harm and prevented it from being committed. So the user or the software developer will be held liable even where he was unaware of the act, did not intend it, and did not participate in it. Using the same example as above regarding the ejection of the pilot: even if the programmer did not have a specific intent, if the outcome was clearly foreseeable by a reasonable person in the programmer's position, the programmer would be held liable. A further distinction is drawn between AI programs that were specifically programmed for the criminal offence and those that were not; for the latter group, criminal liability is generally based on the strict liability regime or on negligence.

Though criminal liability seems a suitable option, unlike civil liability it involves punishment, making it a far more deterrent response than monetary compensation. If the criminal law assigned responsibility directly to the user or the developer, they would face constant deterrence and would be overcautious, which would hamper innovation. Criminal liability is an effective way to legally ensure that someone is held responsible for the acts of the AI. Yet there are situations in which the AI system learns and develops on its own in ways that are not foreseeable or predicted by the software developer or the user. This opens up the retribution gap.124 Practical concerns weigh less in retributive punishment than moral desert does, and it is more difficult to assign liability in criminal law than in civil law. Professor


124
John Danaher, ‘Robots, Law and the Retribution Gap’ (2016) 18 Ethics and Information Technology, 299.

38

Hallevy suggests one more category and a solution to this, known as the 'direct liability model'.125 In this approach, both the AI system and the programmer or user are criminally liable for the acts of the AI system. For AI to be liable for its acts, it would need to be a legal person, and this characteristic, pertaining to social valence, poses a challenge to criminal liability. Currently AI has not developed to such an extent as to work completely autonomously, and there is always some level of human control. Therefore, with this degree of control over robots, liability is placed on humans and not on AI systems.

Criminal liability requires an act and the mental intent. The act is easily identified; the difficulty lies in identifying the mental intent. If a robot commits an act, the act is clearly visible, but it is difficult to interpret the robot's mental intent. Hallevy's three models give an overview of who would be held responsible, and in all situations it is the software developer or the user.126 There are ongoing debates over whether the developer should be solely responsible, or whether the user, the designer, the manager who appointed the expert, the expert who added the information into the software, or the AI itself should be responsible. The mental intent is allocated to the programmer or developer who created the AI technology. There are various situations in which the AI functions on its own, and accordingly, if AI were given legal personality, it would be held responsible for its own acts.

This is a much-debated yet plausible solution. Given the current state of AI, only a human would be held liable for the act, and the AI system is treated either as an innocent agent or as a child or animal. Just as civil liability can attach to harmful acts caused by AI technologies, criminal liability can be adopted as well. However, given the shortcomings of criminal law and its far more deterrent nature, it should be used with the utmost caution. In


125
Hallevy (n 6).
126
ibid.

39

both criminal law and private law, there is at least one identifiable person who can be held liable if the AI technology causes harm.

Chapter 4: Conclusion

AI has grown tremendously, affecting the daily lives of humans, and presents both a practical and a conceptual challenge to the existing laws. The former evolves from the method in which AI is developed and the basic issues of controlling these autonomous AI systems. The latter pertains mainly to the difficulties of affixing legal liability for any injury caused by such autonomous technologies, and to the difficult conundrum of defining what AI really means. In other words, the emergence and embodiment features of AI challenge regulators. Today, most of the ideas for defining AI revolve around the concept of human intelligence,127 and the major challenge for regulators is AI's constant emergence. The law has evolved, from applying legal rules to transactions by minors to the more recent development of product liability. Regulators are proving that the current legal system is adaptive and creative in affixing liability under both civil and criminal law. The main aim of regulators should be to strike a balance between innovation and regulation and to adapt the current laws for affixing liability.

Various other countries such as the US and Japan are also discussing the legal issues pertaining to AI and considering their perspectives on the responsibility, liability and rights of AI.128 To determine liability, the EU and the US are applying insurance regimes to autonomous


127
McCarthy (n 9).
128
European Commission for liability for emerging digital technologies (n 122).

40

vehicles,129 New Zealand has provided a no-fault compensation scheme, and even Japan is grappling with the application of product liability to defective products.

Legal regulators do not recognize AI as a legal person, which means they have to work within the framework of the current laws and cannot hold the AI personally liable for the damage it causes. Even without granting legal personhood to AI, a number of liability options have been highlighted, such as tort law, contract law and criminal law. Each of these areas of law raises issues of its own when determining liability and affixing responsibility.

The doctrine of product liability is a suitable solution because it affixes liability directly on the producer or manufacturer of the product without fault of his own. However, it struggles with challenges such as foreseeability and autonomy. There is also a constant debate over whether AI is a service or a product: while a robot can be defined as a product, the software or algorithm that controls the robot may be defined as a service. These challenges limit the immediate application of this defect-based doctrine.

Alternatively, the owner of the computer can also be liable under the vicarious liability approach, by treating the AI technology as the employee and the owner as the employer. However, for an entity to generate liability under this approach it should fall within the definition of an agent. This is not possible currently, because for an entity to be deemed an agent it has to be given the status of a legal person, as corporations are. The law currently does not recognize AI systems as agents, and this limits the application of the doctrine.

Similarly, the negligence regime is an appropriate regime and has always been used for

129
UK Automated and Electric Vehicles Bill.

41

new technologies and is easily applicable to ensure liability. The autonomy and foreseeability features of AI, however, affect the smooth application of the law in determining the reasonable person. To overcome this, the concept of res ipsa loquitur can be a solution, holding the computer responsible instead of the user. There are several limitations to this doctrine, and it cannot be invoked for an isolated event.

The determination of liability goes beyond tortious liability and extends to contractual liability. The law of contract is well defined and constantly updates itself for new technologies, and contracts made by electronic agents are enforceable by law. However, although electronic agents are treated as mere tools of the principal, this treatment cannot be applied blindly to AI technologies. AI has the ability to act on its own without human intervention, which challenges contract law because such systems may go beyond the agreed terms and make contracts on new terms that the parties never agreed. Therefore, even though contractual liability can be applied in case of breach of warranty, it has limitations when applied to contracts made by AI systems themselves.

Legal regulators have not limited the determination of liability solely to civil law; they also examine the application of criminal liability to AI systems. Unlike civil liability, criminal liability is more deterrent and requires both an act and the mental intention to commit the offence. With AI, it is very difficult to attribute the mental intention, and the liability therefore falls back on the human. However, owing to the autonomy of AI there may be situations in which the AI acts without the knowledge of the human, which makes it difficult for legal regulators to affix liability under criminal law. Thus, the deterrent nature of criminal liability combined with the autonomy of AI limits the application of criminal law. This could be resolved by giving legal personality to AI technology, which would benefit both criminal liability and vicarious

42

liability. There are constant debates about granting legal personality to AI so as to bring it within the scope of agency law and criminal law, but Ryan Calo believes130 that it will be another 15 to 20 years before AI can act on its own, and for now there is some level of human control.

Looking at the application of an appropriate liability regime to AI as a whole, none of the usual methods fits neatly. However, the European Commission attempts to give a general method for determining liability:

To determine the appropriate liability standard for artificial intelligence software, both

the intended function of the program and the method of selling the software must be

considered. If the function is one that is potentially hazardous (e.g. engineering design,

drug delivery), strict liability should be applied. If the intended function is nonhazardous

(e.g. tax preparation, gardening advice), the method of marketing the software determines

the liability standard. Strict liability should be applied if the software is mass-marketed;

negligence should be applied if the software is a custom program. 131

According to a report by the Legal Affairs Committee of the European Parliament,132 the product liability model is suggested as a perfect solution for the producers of AI systems. As per the European Commission,133 strict liability or product liability is applied to mass-marketed products such as autonomous vehicles or medical equipment.

Accordingly, when software is a custom program, negligence is applicable. However, as noted above, there are various shortcomings in applying negligence, and owing to the lack of foreseeability there may be no link between the act of the AI system and the damage incurred by the


130
Calo (n 15).
131
European Commission for liability for emerging digital technologies (n 122).
132
European Parliament Draft Report, 'With Recommendations To The Commission On Civil Law Rules On
Robotics’ (2017) (2015/2103(INL)).
133
European Commission for liability for emerging digital technologies (n 122).

43

injured party. To resolve this, the European Commission in its Guidelines for Trustworthy AI134 and the commission in Singapore114 discussing artificial intelligence suggest transparency as an essential solution for determining the responsibility of the person.

Transparency means the ability to trace back how an AI system reached an outcome in a given circumstance; it is often used interchangeably with the word explainability. When a human is asked about a decision he has made, it is expected that he will reply by explaining why he made that decision. Applying transparency likewise helps to determine how the outcomes or decisions of an algorithm were reached. It is a viable solution and can be applied in tortious, criminal and contractual liability. In civil liability, transparency will help the court affix liability on the person responsible for the failure in making the decision. The courts would require proof of how the algorithm was flawed or how it reached the decision. AI companies would collect the information needed for transparency; the evidence is gathered by continuously monitoring the acts of the self-learning AI and understanding the reasons for the decisions made. Once this is attained, the courts can affix liability on the person responsible for the act, or on the AI itself. In addition, the law of negligence is capable of adapting to problems over time, as it has proved with respect to other technologies,135 and it would therefore be practical to use transparency as an interim solution for both civil and criminal liability.

Within the current laws, the product liability regime rests liability on the manufacturer or producer, which ensures that the victim is compensated. It also extends to insurance schemes and is widely accepted in situations such as autonomous cars.

Under the negligence regime, liability might shift to the user where the manufacturer has informed the consumer about the risks of using the AI technology. It may also rest on


134
EU Guidelines for Trustworthy AI (n 91).
135
Internet.

44

the operator or designer of the AI system. It is assessed on a case-by-case basis and hence is flexible, and it allows victims who are not a party to the contract to bring claims.

Contractual liability can also be applied in certain situations involving AI technologies. In both situations, a breach of the terms or warranty and contracts made by AI, it is the company's or the manufacturer's responsibility to compensate the victim.136

Similarly, under criminal liability, the software developer who writes the code of the AI system is held responsible, because the data inserted by him is the reason the AI acts in a particular manner; using the models of Professor Hallevy,137 the developer or the user is liable for any harm sustained.

With strict liability, transparency and insurance schemes, the current laws are considered sufficient for determining responsibility and liability, and there is no need for full-fledged regulation of AI. Like the Internet, AI is simply a new technology, and by placing trust in extra-contractual, contractual and criminal liability these technologies can be regulated.138 It is therefore assumed that AI is still nascent, that the current laws are capable of being applied, and that AI should not be strictly legislated for at present. To support this, the French jurist Jean Carbonnier stated, "One should always tremble when legislating".139 Various pioneers hold different opinions: Elon Musk believes that AI needs strict regulation, whereas Bill Gates believes that AI is still at a semi-autonomous stage.140 Therefore, the existing laws address almost all injuries pertaining to AI. While it might be necessary for regulators to enhance liability for AI

136
Jessica Lis, ‘Mom, the Robot Ran over the Dog’ < https://medium.com/in-the-ether/mom-the-robot-ran-over-the-
dog-4881489999e4> accessed 10 August 2019.
137
Hallevy (n 6).
138
Revolidis and Dahi (n 62).
139
Ira Giuffrida and others, ‘A Legal Perspective on the Trials and Tribulations of AI: How Artificial Intelligence,
the Internet of Things, Smart Contracts, and Other Technologies Will Affect the Law’ (2018) 3 Case Western Law
Review 747.
140
Jeremy Straub, ‘Does regulating artificial intelligence save humanity or just stifle innovation?’ (2017) The
Conversation <https://theconversation.com/does-regulating-artificial-intelligence-save-humanity-or-just-stifle-
innovation-85718> accessed 10 August 2019.

45

technologies and their acts as these technologies develop, formulating stringent laws beyond the ones that already exist would be detrimental to the development of these technologies and would restrict the creation of something overwhelmingly beneficial.141


141
ibid.

46

Bibliography

Books

1. Buyers J, Artificial Intelligence - The Practical Legal Issues (Law Briefing Publishing Ltd

2018)

2. Corrales M, M FenwickN Forgó, Robotics, AI And The Future Of Law (Springer Singapore

2018)

3. Barfield W, Pagallo U, Research Handbook On The Law Of Artificial Intelligence (Edward

Elgar Publishing 2018)

4. Revolidis I. and Dahi A., ‘The Peculiar Case of the Mushroom Picking Robot: Extra-

contractual Liability in Robotics’ in Marcelo Corrales, Mark Fenwick Nikolaus Forgó (eds),

Robotics, AI and the Future of Law (Springer 2018)

5. Turner J, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan Publishing

2018)

Cases

6. Ransome v. Wisconsin Electric Power Co [1979] 87 Wis 2d 605

7. Donoghue v. Stevenson [1932] AC 562.

8. Caparo Industries v Dickman and Others [1990] 2 AC 605.

9. Daniel O'Brien v Intuitive Surgical, Inc [2011] United States District Court, ND Illinois, Eastern Division, 10 C 3005.

47

10. In re Toyota Motor Corp. Unintended Acceleration Mktg [2011] (C.D Cal)785 F Supp 2d

925

11. In re Ashley Madison Customer Data Sec. Breach Litigation [2015] JPML 148 F Supp 3d

1378 1380

12. Carlill v Carbolic Smoke Ball Company [1892] EWCA Civ 1.


European Documents and Directives

13. European Commission, 'Ethics Guidelines For Trustworthy AI' (High-Level Expert Group on

AI 2019).

14. European Commission High-Level Expert Group on Artificial Intelligence, 'A Definition Of

AI: Main Capabilities And Disciplines' (European Commission 2019).

15. Communication from the Commission to the European Parliament, the Council, the

European Economic and Social Committee and the Committee of the Regions, 'Building

Trust In Human Centric Artificial Intelligence' (DG Connect 2019).

16. European Commission, and accompanying the document communication from the

Commission to the European Parliament, the European Council, the Council, the European

Economic and Social Committee and the Committee of the Regions Artificial intelligence for

Europe, ‘Liability For Emerging Digital Technologies’ COM (2018) 237 final.

17. European Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws,

regulations and administrative provisions of the Member States concerning liability for

defective products OJ L 210/29–33

18. European Commission Staff Working Document, 'Advancing The Internet Of Things In

Europe accompanying The Document "Communication From The Commission To The

European Parliament, The Council, The European Economic And Social Committee And

48

The Committee Of The Regions: Digitising European Industry - Reaping The Full Benefits

Of A Digital Single Market' COM (2016) 180

19. European Parliament, 'European Civil Law Rules In Robotics' (Directorate-General For

Internal Policies 2019)

<http://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)5713

79_EN.pdf >accessed 16 June 2019.

20. European Parliament Draft Report, 'With Recommendations To The Commission On Civil

Law Rules On Robotics (2015/2103(INL))' (2017).

21. EPRS| European Parliamentary Research Service, 'Artificial Intelligence Ante Portas: Legal

& Ethical Reflections' (Scientific Foresight Unit (STOA) 2019) 3

UN Convention and Other Acts

22. United Nations Convention on the Use of Electronic Communications in International

Contracts (23 November 2005)

https://uncitral.un.org/en/texts/ecommerce/conventions/electronic_communications accessed

20 June 2019.

23. Uniform Computer Transactions Act (2000) http://www.ucitaonline.com/ accessed 3 August

2019.

24. UK Automated and Electric Vehicles Bill 2017-19 s 2.

25. US Restatement of the Law Third. Torts: Products Liability.

26. Accident Compensation Act 1972 (NZ).

27. General Data Protection Regulation 2016/679

49

Articles and Blogs

28. Mathias, Andreas, ‘The Responsibility Gap – Ascribing Responsibility For The Actions Of

Learning Automata’ (2004) 6 Ethics and Information Technology.

29. Abbot, Ryan, ‘The Reasonable Computer: Disrupting the Paradigm of Tort Liability’ (2016)

86 GWashLRev.

30. Scherer, Matthew U, ‘Regulating Artificial Intelligence Systems: Risks, Challenges,

Competencies, and Strategies’ 29, (2015). Harv JL &

Tech <https://ssrn.com/abstract=2609777> accessed 29 July 2019.

31. Reed, Chris and Kennedy, Elizabeth and Silva, Sara, Responsibility, Autonomy and

Accountability: Legal Liability for Machine Learning (2016). Queen Mary School of Law

Legal Studies Research Paper No. 243/2016. <https://ssrn.com/abstract=2853462> accessed

10 March 2019.

32. Karnow, Curtis E., ‘Liability for Distributed Artificial Intelligence’ (1996) 4 Berkeley Technology Law Journal.

33. Vladeck, David C., 'Machines without Principals: Liability Rules and Artificial Intelligence'

(2014) 89 WASH L REV 117.

34. Selbst, Andrew D. ‘Negligence and AI's Human Users’ (2019). Boston University Law

Review.

35. Gerstner E. M., ‘Liability Issues with Artificial Intelligence Software’. (1993) 33 SANTA

CLARA L REV 239.

36. Calo Ryan, ‘Robotics and the Lessons of Cyberlaw’ (2015)103 CALIF LREV 513, 514–15.

37. Callier Michael. and Callier Harly, ‘Blame It On The Machine: A Socio-Legal Analysis Of

Liability In An AI World’ (2018) 14 WASH. J.L. TECH. & ARTS 49.

50

38. Lauren, Scholz,H. 'Algorithmic Contracts' (2017) 20 STAN TECH L REV 128

39. Chopra, S. and White, L., Artificial Agents And The Contracting Problem: A Solution Via

An Agency Analysis. <http://illinoisjltp.com/journal/wp-

content/uploads/2013/10/Chopra.pdf> accessed 2 August 2019.

40. Cerka, P. Grigiene, J. and Sirbikyte, G. (2015) ‘ Liability For Damages Caused By Artificial

Intelligence’<https://is.muni.cz/el/1422/podzim2017/MV735K/um/ai/Cerka_Grigiene_Sirbik

yte_Liability_for_Damages_caused_by_AI.pdf> accessed 9 Jul. 2019.

41. Rachum-Twaig O, ‘Whose Robot Is It Anyway?: Liability For Artificial-Intelligence-Based

Robots’,(2019) .2020 University of Illinois Law Review, ,

<https://ssrn.com/abstract=3339230> Accessed on June 20 2019.

42. Alpalhão Gonçalves M, ‘Liability Arising From The Use Of Artificial Intelligence For The

Purposes Of Medical Diagnosis And Choice Of Treatment: Who Should Be Held Liable In

The Event Of Damage To Health?’ <http://arno.uvt.nl/show.cgi?fid=146408> accessed July

20 2019.

43. Kerr I, ‘Bots, Babes and the Californication of Commerce’ (2003-2004) 1 U OTTAWA L &

TECH J 285.

44. Giuffrida I, Lederer F and Vermerys N, ‘A Legal Perspective on the Trials and Tribulations

of AI: How Artificial Intelligence, the Internet of Things, Smart Contracts, and Other

Technologies Will Affect the Law’ <

https://scholarlycommons.law.case.edu/cgi/viewcontent.cgi?article=4765&context=caselrev>

Accessed 11 August 2019.

51

45. Eidenmueller, Horst G, The Rise of Robots and the Law of Humans (2017) Oxford Legal

Studies Research Paper No. 27/2017. < https://ssrn.com/abstract=2941001> accessed on 9

July 2019.

46. Shimpo Fumio, ‘The Principal Japanese AI and Robot Strategy and Research Towards Establishing Basic Principles’ (2018) 3 Journal of Law and Information Systems.

47. Bathaee Yavar, 'The Artificial Intelligence Black Box and the Failure of Intent and

Causation' (2018) 31 HARV J L & TECH 88.

48. Barewell E., ‘Legal Liability Options for Artificial Intelligence’

<https://www.bpe.co.uk/why-bpe/blog/2018/10/legal-liability-options-for-artificial-

intelligence/> accessed 23 July 2019.

49. Kowert Weston. 'The Foreseeability Of Human - Artificial Intelligence Interactions' (2017)

96 TEX L REV 181.

50. Jackson, Brandon W. 'Artificial Intelligence and the Fog of Innovation: A Deep-Dive on

Governance and the Liability of Autonomous Systems' (2019) 35 Santa Clara High Tech.

L.J.

51. Koops, Bert J and others, 'Bridging the Accountability Gap: Rights for New Entities in the

Information Society' (2010) 11 MINN JL SCI & TECH.

52. Wright, Richard W., ‘Causation in Tort Law’ ((1985) 73 Calif. L. Rev.

53. Owen, David G. ‘The Five Elements of Negligence’ (2007) 35 HOFSTRA L. REV.

54. Danaher John, ‘Robots, Law and the Retribution Gap’ (2016) 18 Ethics and Information Technology, 299-309.

55. DLA Piper https://blogs.dlapiper.com/iptitaly/2019/06/fintech-who-is-responsible-if-ai-

makes-mistakes-when-suggesting-investments/ accessed 1 August 2019.

52

56. Buyers J, ‘Liability Issues in Autonomous and Semi-Autonomous systems’ (2015)

<https://www.osborneclarke.com/media/filer_public/c9/73/c973bc5c-cef0-4e45-8554-

f6f90f396256/itech_law.pdf> accessed on 15 July 2019.

57. Chen J and Burgess P, ‘The Boundaries Of Legal Personhood: How Spontaneous

Intelligence Can Problematize Differences Between Humans, Artificial Intelligence,

Companies And Animals’ (2018) 2 https://doi.org/10.1007/s10506-018-9229-x accessed on

10th August 2019.

58. Yardon, D. and Tynan, D, ‘Tesla driver dies in first fatal crash while using autopilot’ (2016)

The Guardian <https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-

self-driving-car-elon-musk> accessed 29 July 2019.

59. Straub J, ‘Does regulating artificial intelligence save humanity or just stifle innovation?’

(2017) The Conversation <https://theconversation.com/does-regulating-artificial-intelligence-

save-humanity-or-just-stifle-innovation-85718> accessed 10 August 2019.

53
