
AI & SOCIETY

https://doi.org/10.1007/s00146-020-01051-6

OPEN FORUM

From computerised thing to digital being: mission (Im)possible?


Julija Kiršienė1 · Edita Gruodytė1 · Darius Amilevičius2

Received: 7 July 2019 / Accepted: 8 May 2020


© Springer-Verlag London Ltd., part of Springer Nature 2020

Abstract
Artificial intelligence (AI) is one of the main drivers of what has been described as the “Fourth Industrial Revolution”, as
well as the most innovative technology developed to date. It is a pervasive transformative innovation, which needs a new
approach. In 2017, the European Parliament introduced the notion of the “electronic person”, which sparked huge debates
in philosophical, legal, technological, and other academic settings. The issues related to AI should be examined from an
interdisciplinary perspective. In this paper, we examine this legal innovation—that has been proposed by the European
Parliament—from not only legal but also technological points of view. In the first section, we define AI and analyse its main
characteristics. We argue that, from a technical perspective, it appears premature and probably inappropriate to introduce AI
personhood now. In the second section, justifications for the European Parliament’s proposals are explored in contrast with
the opposing arguments that have been presented. As the existing mechanisms of liability could be insufficient in scenarios
where AI systems cause harm, especially when algorithms of AI learn and evolve on their own, there is a need to depart
from traditional liability theories.

Keywords  Artificial intelligence · Electronic personhood · Algorithms · Liability

1 Introduction

Unlike more specialised innovations, artificial intelligence (AI) is becoming a true general-purpose technology. AI is evolving into a utility that is likely to infiltrate every industry and sector of our economy, as well as nearly every aspect of science, society, and culture. It permeates human spaces silently and invisibly. As Arthur C. Clarke states in his Third Law, though, any form of technology that is sufficiently advanced is indistinguishable from magic (Clarke 1962).

On 25 October 2017, the announcement of the robot, named Sophia, with Saudi Arabian citizenship, was a careful piece of marketing to position Saudi Arabia as a major world innovator in technology and computing. We should recognise that granting someone, or something, legal personhood is—and always has been—a highly sensitive political issue. The recent case of Saudi Arabia enrolling Sophia as a citizen is hence unsurprising.1 Indeed, this could turn into the realm of discretionary political power that sometimes devolves into mere sovereign arbitrariness (Atabekov and Yastrebov 2018: 775–77). This reminds us of Suetonius’ “Lives of the Twelve Caesars” (121 AD), in which we find Caligula planning to make his horse, Incitatus, a consul, and “the horse would invite dignitaries to dine with him in a house outfitted with servants there to entertain such events” (Suetonius 1913, 55).
While Saudi Arabia is the first country to grant citizenship to an AI-enabled android, it is not alone in pushing for more rights for robots. In 2017, the European Parliament proposed a set of regulations to govern the use and creation of robotic and artificial intelligence, including the granting of “electronic personhood” to the most advanced machines (European Parliament 2017). But not everyone agrees that this is the best solution, and many experts fear that giving robots the same kind of personhood or citizenship as people will infringe on human rights. In an open letter (https://www.robotics-openletter.eu/), written to the European Commission in 2018, 150 experts in medicine, robotics, AI, and ethics criticised the plans as being “ideological, nonsensical and non-pragmatic”. The letter outlined the belief that “from an ethical and legal perspective, creating a legal personhood for a robot is inappropriate”. The letter also demanded that the EU ensure a legal framework weighted towards “the protection of robots’ users and third parties”, rather than the robots themselves.2 Despite the matter of Sophia’s citizenship being intended as a publicity stunt, the questions that were consequently raised have opened up a huge debate (Pagallo 2018a; Singh 2017; Čerka et al. 2017; Aleksandre 2017) that is a long way from being resolved. This is especially notable in relation to granting legal personhood for AI devices.

1 In this paper we do not discuss the issues of robot rights, but we agree with the opinion of T. Jaynes that there is a question to ask: “how not biological intelligence can gain citizenship in nations without a monarchist system of government or being based within a human subject”. “While not biological intelligence systems may be granted citizenship, this citizenship does not provide many useful protections for the not biological intelligence system—if any are granted at all, which is a subject that has not even been adequately addressed by the nation of Saudi Arabia” (Jaynes 2019, 8).

2 As was mentioned above, we do not discuss the question of robots’ rights, but we agree with the opinion of Dowell that non-biological intelligence does not clearly fit into any extant category of persons; the path forward is uncertain, and legislators are beginning to explore the potential issues (Dowell 2018). However, the foundation is being put down now. E.g., see Gunkel (2018).

* Darius Amilevičius
darius.amilevicius@vdu.lt

Julija Kiršienė
julija.kirsiene@vdu.lt

Edita Gruodytė
edita.gruodyte@vdu.lt

1 Faculty of Law, Vytautas Magnus University, K. Donelaičio g. 58, 44248 Kaunas, Lithuania

2 Faculty of Informatics, Vytautas Magnus University, K. Donelaičio g. 58, 44248 Kaunas, Lithuania
According to the EU Parliament’s proposition, electronic personhood is meant only for the most sophisticated autonomous robots, in cases where robots make smart autonomous decisions or otherwise interact independently. As such, a key issue that warrants serious consideration in the context of the legal status of intelligent AI-based machines is the definition of the term “artificial intelligence”. In the “European Civil Law Rules on Robotics”, the European Parliament appears to consider AIs as being easily identifiable artefacts (European Parliament 2017: sec.AF). In our opinion, however, such a definition is a misleading one. Furthermore, it raises two key questions. Firstly, what level of AI technology must we achieve? Secondly, what qualifies as being “state-of-the-art” technology at the present time? To answer these questions, we must examine not only the working definition of artificial intelligence but also the state of the art of its main functions, like intellectual capacity and autonomy. In this study, we interrogate the scope of the definition of “electronic person” as it is presented in “Civil Law Rules on Robotics”, presenting and elaborating upon the main arguments of the document. The main issue in this regard is that of ascertaining whether the aim of the proposition is to grant an electronic “person” rights and obligations or just to introduce one more possible solution to the question of the legal accountability of robots in torts and contracts.

Analysis of the arguments in opposition to the idea of the electronic “person” shows an asymmetry between the purpose as it is described in “Civil Law Rules on Robotics” and the criticism, which mainly concentrates on granting legal personhood to robots as full subjects of the law, including rights like citizenship, free speech, privacy, property ownership, inheritance, and so on. Hence, we suggest that—at least regarding the most sophisticated autonomous robots—the proposal for electronic personhood has some grounds. The existing liability instruments could be inadequate and ineffective, may not fully reflect the needs of the digital economy, may stifle innovation, and might be unjust in terms of the apportionment of risk in society. From legal and technical perspectives, research has highlighted various issues that relate to the autonomy of AI, such as the “black box problem”, foreseeability, traceability, the establishment of liable or guilty persons, and others, which cannot be addressed effectively by existing liability instruments. We argue that, in the context of the Fourth Industrial Revolution and rapid industrialisation, it would be prudent to ensure that there is a robust legal instrument in place for liability purposes when AI development reaches a certain technological level. Thus, we suggest elaborating the concept of “electronic persons” for the purposes of modelling a new liability instrument.

The definitions of AI—and the related issue of “electronic personhood”—in scholarship are mostly considered from one perspective: ethical, legal, technological, or another. However, the issues related to AI should be examined from an interdisciplinary perspective. In this paper, we examine AI and the legal innovation of electronic personhood—as proposed by the European Parliament—from not only legal but also technological points of view.

2 How has the concept of the “electronic person” been justified in the “Civil Law Rules on Robotics” from a technical point of view?

The term “electronic person” was first used in 1967, in an article that introduced a robot named Shaky (Alexandre 2017: 22). In 2015, the Committee on Legal Affairs (JURI) in the European Parliament established a working group for legal questions relating to the development of robotics and artificial intelligence in the European Union. The resulting European Parliament resolution, “Civil Law Rules on Robotics”, was released on 27 January 2017 (European Parliament 2017). It proposed a set of regulations for governing the use and creation of artificial intelligence and suggested granting “electronic personhood” to the most advanced machines to ensure their accountability. In this resolution, special legal status for AI is stipulated as being one possible legal solution to the problem of establishing the liability of robots, as cited:

creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently (European Parliament 2017: 59 (f)).

Hence, in the EU Parliament’s proposition, electronic personhood is meant only for the most sophisticated autonomous robots, in cases where robots make smart autonomous decisions or otherwise interact independently. However, this document does not provide any definition of the particular criteria, features, or standards that are required for defining the smartness, autonomy, or independence levels of such robots.

In “Civil Rules on Robotics”, the European Parliament appears to consider AI as being easily identifiable artefacts (European Parliament 2017: sec.AF). We argue, though, that to do so is misleading. In this section, we question what qualifies as being the “most sophisticated autonomous” machines, justifying the “electronic person” in a technical context. We will proceed in two steps. First, we will seek to establish a working definition of AI. Secondly, we will present the common technical features of forms of AI, like intelligence, autonomy, and embodiment.

After “Civil Rules on Robotics” was released, the authors of EU documents regarding AI used erroneous or convoluted terminology when providing definitions for and the key characteristics of AI.3 Unfortunately, the definitions of AI (and “smart robot”) that are provided in the nine-page document that was produced by the High-Level Expert Group on AI are complicated and unclear (High-Level Expert Group on Artificial Intelligence 2018). Consequently, we agree with the findings of a study panel that was organized as part of Stanford University’s “One Hundred Year Study on Artificial Intelligence”: “The Study Panel’s consensus is that attempts to regulate ‘AI’ in general would be misguided, since there is no clear definition of AI, and the risks and considerations are very different in different domains” (Stone et al. 2016).

3 (European Commission 2018a, b; European Commission’s High-Level Expert Group on Artificial Intelligence 2018, 2019).

In 1955, John McCarthy coined the term “artificial intelligence”, which he defined as “the science and engineering of making intelligent machines” (McCarthy 2007). Most definitions follow this lead by describing AI as “intelligence exhibited by machines”. Common variants add that AI must demonstrate “human” or “human-like” intelligence. Such definitions assume that intelligence is itself clearly defined; this too, though, is ambiguous.

Several features of AI make it exceptionally difficult to define and to regulate when compared with other human-made objects of the material world. Artificial intelligence (or cognitive computing) is the theory and development of computer systems that can perform tasks that were previously thought to require human intelligence. The term “AI” is exceptionally wide in scope; it is a moving target. Technical AI definitions change as related technological advancements develop further.4 AI is not a single technology. Rather, it is many different technologies that are applied for different functions through various applications. AI is helping computers to achieve extraordinary feats that truly catapult us into the future. It is only deemed to be “artificial” intelligence before its functioning is really understood; after that, it is merely “software”.5 The concept of AI is itself notoriously elusive, with different groups strongly disagreeing over precisely what it entails (Legg and Hutter 2007). This constant shift in the definition and scope of AI is known as the “AI effect”, and it is best described by Tesler’s Theorem: “AI is whatever hasn’t been done yet” (Hofstadter 1999: 601).6

4 For example, the European High-Level Expert Group has launched a nine-page document containing only the definition of AI (European Commission’s High-Level Expert Group on Artificial Intelligence 2018).

5 It must be noted that the term “software” is not defined in EU law; Directive 2009/24/EC provides a definition of “computer program” in its recitals. The two terms are often used interchangeably, but there is a technical difference: a “program” is a set of instructions that tell a computer what to do, while “software” can be made up of more than one program.

6 When our article was ready for publishing, the EC Joint Research Centre released, in February 2020, the technical report “AI Watch. Defining Artificial Intelligence”, in which the JRC also attempts to define the object under consideration. After considering 55 documents that address the AI domain from different perspectives, the authors of this report, not surprisingly, take the definition of AI proposed by the HLEG on AI as the starting point for further development, but also assert that, “considering that the HLEG definition is comprehensive, hence highly technical and detailed, […] the definitions provided by the EC JRC Flagship report on AI (2018) […] are suitable alternatives” (Samoili et al. 2020: 9). But AI Watch also asserts that, “despite the increased interest in AI by the academia, industry and public institutions, there is no standard definition of what AI actually involves … human intelligence is also difficult to define and measure … as a consequence, most definitions found in research, policy or market reports are vague and propose an ideal target rather than a measurable research concept” (Samoili et al. 2020: 7).

We can further distinguish between “narrow AI” and “general AI”. “Narrow AI” is already widely used. “General AI”, by contrast, requires intelligent behaviour that is (at least) as broad, adaptive, and advanced as a human’s across a full range of cognitive tasks (Goertzel and Pennachin 2007). Notably, it is also debatable if—and when—general AI will be achieved.

We agree with the partial definition of AI, as proposed by Zevenbergen et al., that it consists of “human-created digital information technologies and associated hardware that displays intelligent behaviour that comes not purely from the programmer, but also through some other means” (Zevenbergen et al. 2018). Indeed, this definition assumes that we are dealing not only with pure software or computerised things but with some kind of intelligence.

In academic and political circles, there is currently no consensus on the definition of the term “AI”. However, from a technical point of view, the term “AI” is used to describe a variety of technologies that have certain features in common. AI is not a discrete technology; rather, it is a series of technologies, concepts, and approaches, all of which align in our quest to create intelligent machines.

Indeed, in accordance with the EU Parliament’s proposed use of the term “electronic personhood”, we can trace certain technical features that are common to the artefacts in consideration: the candidates for a new kind of legal title. Those common features are intelligence, autonomy, and embodiment.

When attempting to ascertain which features ought to be covered by the term “intelligence”, it should be noted that “intelligence” in itself is hard to define; discussions about doing so often cause controversy. It is debatable as to whether consciousness and rationality are prerequisites for intelligence or vice versa. The MIT Encyclopedia of Cognitive Science states: “An intelligent agent is a device that interacts with its environment in flexible, goal-directed ways, recognizing important states of the environment and acting to achieve desired results” (Rosenschein 1999). In practice, intelligence is a relative attribute and is evaluated in connection with human capabilities. For example, we would not normally consider a rat to be an intelligent creature (implying a comparison to a human), but we would recognise it to be more intelligent than a cockroach. We share Florian’s view that we would consider an agent as being intelligent if it can perform non-trivial, purposeful behaviours that adapt to changes in the environment (Florian 2003). However, the evaluation of behaviour is arbitrarily undertaken by humans; intelligence is, thus, a property that is subjectively assigned. We have no general mathematical theory of intelligence, as all AI technologies are digital.7 Without a general theory of “natural” intelligence to understand its scope—that of general AI—it is a bit like trying to conduct particle physics experiments without having a standard model. The abstraction of biological neurons and insights based on fragments of biological network topology do not constitute a general theory of intelligence. However, we agree with Borden’s opinion that, in actual situations, Turing’s teasing discussion of the imitation game8 was aimed at avoiding endless discussions about definitions of intelligence in the context of AI (Borden 2006).

7 Max Tegmark proposed the hypothesis that our physical reality is a mathematical structure (Tegmark 2014). Steven Pinker argued that intelligence does not come from a special kind of spirit or matter or energy, but from a different commodity, namely information. Information is a correlation between two things that is produced by a lawful process. Correlation is a mathematical and logical concept (Pinker 1998). However, Cathy O’Neil demonstrates the opposite, namely that many algorithms are not inherently fair just because they have a mathematical basis (O’Neil 2016).

8 In 1950, A. Turing introduced his famous test in the paper “Computing Machinery and Intelligence”; the author himself, however, does not call his idea the “Turing test”, but rather the “Imitation Game”. Turing opens with the words: “I propose to consider the question, ‘Can machines think?’”; because “thinking” is difficult to define, he chooses to replace the question with another, which is closely related to it and is expressed in relatively unambiguous words (Turing 1950).

Presently, forms of AI are certainly not conscious or rational; it is often assumed that they possess these characteristics when people are faced with advanced or over-hyped technologies. The biggest problem that we face in the context of present “machine learning” (ML) and “deep learning” (DL) machines is that, although they are capable of learning things, they cannot understand them. The most advanced AI systems are merely products that follow processes that have been devised and defined by intelligent people. AI techniques (natural language processing and ML/DL) enable private and public decision-makers to analyse big datasets to build profiles, which are then used to make decisions in an automated way. Yet, for now, machines cannot make decisions on their own (Atkinson 2016). Atkinson states that there is a fundamental difference between information processing and thinking (Atkinson 2016). Domingos is in agreement, noting that “unfortunately, what we have so far is only a very crude cartoon of how nature learns, good enough for many applications but still a pale shadow of the real thing” (Domingos 2018: 140).
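To make this distinction concrete, consider the following minimal sketch (our own illustration, written for this paper rather than drawn from any system discussed in the cited literature). It trains a single artificial neuron, a perceptron, to separate two kinds of toy messages. Everything the model “learns” is a handful of numeric weights produced by a human-devised update rule; nowhere is there a representation of meaning that could support understanding:

    # A perceptron "learns" to flag toy messages as spam, yet its entire
    # knowledge is a list of numbers nudged by a human-devised rule.
    VOCAB = ["free", "winner", "meeting", "report"]

    def features(text):
        # Crude bag-of-words: count how often each vocabulary word occurs.
        words = text.lower().split()
        return [words.count(w) for w in VOCAB]

    TRAINING = [
        ("free prize for the lucky winner", 1),        # 1 = spam
        ("winner winner claim your free gift", 1),
        ("quarterly report before the meeting", 0),    # 0 = legitimate
        ("meeting notes and project report", 0),
    ]

    weights, bias = [0.0] * len(VOCAB), 0.0
    for _ in range(10):                                # a few passes over the data
        for text, label in TRAINING:
            x = features(text)
            predicted = 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0
            error = label - predicted                  # the whole "learning" step:
            weights = [w + error * v for w, v in zip(weights, x)]
            bias += error                              # nudge numbers, nothing more

    x = features("free winner")
    print("spam" if sum(w * v for w, v in zip(weights, x)) + bias > 0 else "ok")

The classifier may well label new messages correctly, but it manifestly does not understand them: the words are interchangeable tokens, and the decision is an arithmetic threshold tuned by a procedure that people designed.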
The process of trying to define the intertwined terms “autonomy” and “agency” has been no less challenging. The term “autonomous artificial intelligent agent” would be closer to the notion of “personhood” than “electronic agent”9 or “software agent”.10 We do not support the use of the term “software agent” (closely related to “electronic agent”), as it does not cover embodied agents, such as robots, or hardware implementations, such as neural network chips.

9 E.g., for W. Barfield, the term “electronic person” is sufficient to describe a virtual avatar or virtual agent acting only in cyberspace (Barfield 2006), but it is not sufficient for, and does not cover, the full extent of applications of AI-based systems.

10 An artificial agent may be instantiated by an optical, chemical, quantum or, indeed, biological—rather than an electronic—computing process.

Autonomous intelligent agent research is a domain that is situated at the forefront of AI. It has been argued by many authors (e.g., Clark 2017; Menary 2007) that genuine intelligence can emerge only in embodied, situated, cognitive agents. Agency is closely connected to qualities like autonomy and embodiment. Most authors refrain from giving precise definitions of “agent”. In their canonical book on AI, Russell and Norvig assert the following: “The notion of an agent is meant to be a tool for analysing systems, not an absolute characterization that divides the world into agents and non-agents” (Russell and Norvig 2010). We generally consider humans and most other animals to be agents; we consider robots, autonomous systems, and software programs, meanwhile, to be artificial agents. But what really distinguishes an agent from other artificial systems? We agree with the prevailing opinion among scholars that agents can be constructed to mimic or even surpass the cognitive functions of humans.

The most obvious feature of AI that separates it from earlier technologies is the ability to act autonomously. Autonomy is a characteristic that enhances the viability of an agent in a dynamic environment.11 Autonomy requires automaticity and even goes beyond it, implying some degree of adaptability. Automaticity means that the agent possesses at least some mechanisms that allow it to sense the environment in which it is situated and to act upon such information accordingly. Automaticity does not require the intervention of other agents to be executed. However, autonomy is a matter of degrees, not a clear-cut quality (Smithers 1995). Most animals and some robots can be autonomous agents. Pattie Maes defines artificial autonomous agents as “computational systems that inhabit some complex dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed” (Maes 1995: 108–114). In a study in which they seek to draw distinctions between autonomous agents and other systems, Franklin and Graesser provide the following definition: “An autonomous agent is a system situated within a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future” (Franklin and Graesser 1996).

11 Autonomy is one of the main features of AI-based systems from a technological point of view, but it presents serious ontological issues (Chinen 2019). It must be noted that autonomy is presented as one of the main characteristics of AI, IoT and robotics technologies in the EC Report on the safety and liability of AI, IoT and robotics (European Commission 2020).
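These definitions share a structural core that can be stated very compactly: a closed sense–decide–act loop running over time, whose actions feed back into what is sensed next. The sketch below is our own minimal illustration of that core (the thermostat scenario and all names in it are assumptions made for the example, not part of the cited definitions):

    import random

    class Thermostat:
        # A trivially "situated" artificial agent: it senses one aspect of its
        # environment and acts on it, over time, in pursuit of a fixed agenda.
        def __init__(self, target_temp=21.0):
            self.target = target_temp      # the agent's goal
            self.heater_on = False

        def sense(self, room_temp):
            return room_temp               # automaticity: no human in the loop

        def decide(self, temp):
            return temp < self.target      # a rule devised by the designer

        def act(self, heat):
            self.heater_on = heat

    room = 18.0
    agent = Thermostat()
    for step in range(10):                 # the loop "over time"
        agent.act(agent.decide(agent.sense(room)))
        # The action feeds back into what the agent will sense next:
        room += 0.8 if agent.heater_on else -0.3
        room += random.uniform(-0.1, 0.1)  # environmental disturbance
        print(f"step {step}: {room:.1f} C, heater {'on' if agent.heater_on else 'off'}")

On the Franklin and Graesser reading, even this trivial device counts as an autonomous agent, because its actions alter what it will sense in the future, whereas a far more complex program that merely transforms an input file into an output file does not. The example also shows why autonomy is a matter of degree: the thermostat is automatic but barely adaptive, while a learning robot would adjust the decision rule itself.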
Thus, for the reasons that we have described above, “artificial agent” is a more suitable term to use than “artificial intelligence” or “intelligent agent”, as we wish to emphasise the embedded, social, real-world nature of the typical artificial agent (equal to “autonomous system” and “robots” in the EU resolution) rather than merely disembodied intelligence. We must also clarify whether only “autonomous intelligent technologies” that are based in physical forms in an unrestricted physical world are within the scope of the new kind of personhood, or whether all algorithms should be taken into consideration.

Embodiment is another important quality of many autonomous agents.12 It refers to the physical body that interacts with the surrounding environment. This property is important for the expression of cognitive capabilities. Embodiment is closely connected to situatedness: a body is not sufficient for embodiment if it is not also situated in an environment. The body must be adapted to the environment to engage in interactions. According to this interpretation, a robot or a machine that does not perceive its environment, acting according to a predefined plan or remote control, is not considered to be embodied or situated. Not all agents are the same, and we propose that there are at least two types of artificial agents: (1) inhabitants—these are agents that have been paired with real-world hardware that uses geographic location, and include artificial agents in cars, drones, sensors, cameras, mobile phones, or computers; (2) pure software—these are artificial agents that exist in the digital space only (geographical position has no meaning).

12 In this paper we do not question morality issues, but the opposite opinion is expressed by E. Schwitzgebel and M. Garza. They use the notion of a psycho-social view of moral status, and they suggest that it shouldn’t matter to one’s moral status what kind of body one has, except insofar as one’s body influences one’s psychological and social properties. Similarly, it shouldn’t matter to one’s moral status what kind of underlying architecture one has, except insofar as the underlying architecture influences one’s psychological and social properties. Only psychological and social properties are directly relevant to moral status (Schwitzgebel and Garza 2015).

In summarising all of the above, we can confirm that the current generation of AI is a long way away from the intelligent machines of science fiction, despite demonstrating or simulating some kind of intelligence. Modern machine intelligence is a combination of intelligence imitation and self-learning imitation. AI machines are completely lacking with regard to common sense and critical thinking, which are indispensable for evaluating data and information on one’s own. The current wave of advances in artificial intelligence does not actually bring us real intelligence but, instead, a critical component of intelligence, specifically, prediction. Certainly, we cannot prevent political initiatives from turning AI-based systems into persons or citizens. In this regard, we agree with the views of the European Commission that, while the adoption of more technical approaches to the definition of AI is possible, those approaches would be less suitable in view of the fast pace of technological development. The definition of AI must be sufficiently flexible to accommodate technical progress while providing the necessary legal certainty (European Commission 2019). However, while there may be future conditions that justify or even necessitate AI personhood, doing so now appears to be technically premature and is likely to be inappropriate for the following reasons: (1) currently, there is no consensus regarding the definition of the term “AI”; (2) the common technical features of AI, like intelligence, autonomy, and embodiment, cannot be defined: there is no consensus concerning the definitions of these main features, and the present state of the art is far from sophisticated autonomous agents, which could make intelligent autonomous decisions or (inter)act independently as stipulated in the EU proposition. This means that, from a technical point of view, it is unclear which specific entities should be granted the proposed legal status of “electronic person”.

3 How is the concept of an “electronic person” justified in the “Civil Law Rules on Robotics” from a legal point of view?

The authors of the “Civil Law Rules on Robotics” (European Parliament 2017) invited the European Commission to explore, analyse, and consider electronic personhood as a possible legal solution for autonomous robots. This legal solution, along with other alternatives, such as trainer’s responsibility,13 mandatory insurance schemes, and supplementary guaranty funds to compensate victims of accidents (involving driverless cars), should be designed to reduce the public risks that AI presents “without stifling innovation”.14

13 In accordance with the formula suggested in the resolution, liability for damage caused by cognitive computing should be proportional to the actual level of instructions given to the robot and its degree of autonomy, so that the greater a robot’s learning capability or autonomy, and the longer a robot’s training, the greater the responsibility of its trainer should be.

14 The European AI strategy supports an ethical, secure, and cutting-edge AI made in Europe, based on three pillars: (1) increasing public and private investments in AI; (2) preparing for socio-economic changes; and (3) ensuring an appropriate ethical and legal framework (European Commission 2018c: 1).
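The proportionality formula sketched in footnote 13 can be given a concrete, if deliberately simplified, reading. The following sketch is our own illustration of how such a rule might be operationalised; the function, its weighting scheme, and the numbers are assumptions invented for the example and are not found in the resolution itself:

    def trainer_share(autonomy, training_months, max_training_months=24.0):
        # Illustrative reading of the resolution's formula: the greater the
        # robot's autonomy and the longer its training, the larger the share
        # of liability borne by the trainer rather than the producer.
        autonomy = min(max(autonomy, 0.0), 1.0)        # 0 = fully scripted
        training = min(training_months / max_training_months, 1.0)
        return autonomy * training

    damage = 100_000  # EUR, a hypothetical harm caused by the robot
    for autonomy, months in [(0.1, 2), (0.5, 12), (0.9, 24)]:
        s = trainer_share(autonomy, months)
        print(f"autonomy={autonomy:.1f}, training={months:2d} mo: "
              f"trainer pays {damage * s:,.0f}, producer pays {damage * (1 - s):,.0f}")

Any real apportionment rule would, of course, need legally defensible measures of “autonomy” and “training”, which, as we argued in the previous section, do not yet exist; that gap is precisely the technical objection to the proposal.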
Notably, this EU document is not binding; it presents recommendations and, thus, the possibility of legal status is expressed “cautious[ly] and noncommittal[ly]” (Bryson et al. 2017: 276). On the other hand, this statement strongly implies that the door is open for such legal innovation (Bryson et al. 2017: 276). Hence, its authors stress the need to treat AI differently from previous technologies, including “depart[ing] from traditional liability theories” (Kritikos 2019: 3). As in the case of autonomous robots, “they [rules] would not make it possible to identify the party responsible for providing compensation and for requiring that party to make good the damage it has caused” (European Parliament 2017: sec.AF). This problem demonstrates the necessity of further exploration, including the legal regulation of electronic personhood.

Scholars argue that, from the wording of the European Parliament resolution, it is unclear whether “the status of electronic persons” (European Parliament 2017: sec.56) refers to the legal personhood of robots as full subjects of the law or only to their accountability in torts and contracts (Pagallo 2018b).15 However, as evidenced by the cited European Parliament resolution (European Parliament 2017), the main reason behind this proposition is to ensure that persons who are affected by AI can be fully compensated for damages.

15 This includes the possibility to bear fundamental legal rights and duties, like citizenship, free speech, privacy, to sue and be sued, enter contracts, own property, inherit, and so on.

The debate over granting electronic personhood to the most advanced artificially-intelligent machines is a long way from resolution. Opponents of the European Parliament’s resolution on “Civil Law Rules on Robotics” submitted the so-called “Robotics Open Letter” (https://www.robotics-openletter.eu/) to the European Commission in 2018. This letter was written by 150 experts in medicine, robotics, AI, and ethics, who criticised the plans for being “ideological, nonsensical and non-pragmatic”. They argued that it was unnecessary and misleading for the EC to claim that the problem of proving liability for damages could be remedied by introducing the concept of “electronic personhood”. More generally, they feared that giving citizenship or personhood to robots would infringe human rights. The letter outlined the belief that “from an ethical and legal perspective, creating a legal personality for a robot is inappropriate”. The letter also demanded that the EU ensure a legal framework weighted towards “the protection of robots’ users and third parties” rather than the robots themselves. Other scholars have echoed this position. At this point, “smart autonomous robots with rights and duties seems to be at least premature and would also lead to a situation in which no certainty is given to damaged parties as to who should and will, compensate the damage” (Renda 2019: 87).

Another argument against the introduction of legal status for AI is that “there would be inevitably situations where the acts of robots would interfere with the rights of humans and other legal persons” (Bryson et al. 2017: 285). The main problem at this stage of development is that the right balance of rights and obligations cannot be reached because of the difficulty of imposing obligations on AI (Bryson et al. 2017: 285). Consequently, “a legal system, if it will be chosen to confer legal personhood on robots, would need to say specifically which legal rights and obligations went with this designation” (Bryson et al. 2017: 285).

The prospective introduction of electronic personhood has often been compared to the introduction of corporations as legal persons, which led to an increase in the economic power in the hands of accumulated capital and to the growing image and importance of corporations in society (Robson 2010: 113; Gordon 2018).

The proponents of electronic personhood follow the so-called pragmatic approach, where robots are not humans and the term “electronic personality” is introduced for legal convenience: “it is possible to create a legal status, which would only be a ‘tangible symbol’ for the cooperation of all the people creating and using that specific robot” (Beck 2016: 141). The underlying rationale is the legal system’s recognition that such legal status is necessary “when doing so suits its ends” (Bryson et al. 2017: 277). For example, autonomous vehicles have been given the status of legal entities in Nevada in the United States (Selwood 2017: 869). Other scholars also argue that future technologies will be so sophisticated and developed that AI will bear certain similarities to human beings.

Analysis of the arguments against the electronic person shows an asymmetry between the purpose described in the Civil Law Rules on Robotics, which is to consider the electronic person as a possible liability instrument for autonomous artificially intelligent agents, along with other alternatives such as trainer’s responsibility and mandatory insurance schemes, and the criticism, which mainly focuses on the issues of granting legal personhood to robots as full subjects of the law.

3.1 Artificially intelligent agents and challenges for the product liability legal framework

As the main purpose of the proposition is to introduce one more possible solution to the problem of establishing the legal accountability of artificially intelligent agents in torts and contracts, in this section we examine the existing liability framework and how it reflects the needs of the digital economy.

The liability system in the EU functions in accordance with basic principles, such as safety, security, transparency, protection of privacy and data, non-discrimination, fairness, and accountability (Independent High-Level Expert Group on Artificial Intelligence set up by the European Union 2019). Following the principle of precaution, provided by the PETL (Principles of European Tort Law), art. 2:102 (2), the more serious the risk of threat created by the product and the more likely the damage, the greater the level of precaution that must be taken. The factors that are involved in assessing the required standard of care under the PETL are the value of the endangered interest, the magnitude of risk, the likelihood and gravity of injury, the cost of prevention, the search for alternative means of reducing the danger, and the application of proportionality tests when weighing up all interests at stake. In other words, a high level of expertise is expected from those who specialize in dangerous areas. In this regard, foreseeability and proportionality tests are often used (Winiger et al. 2018: 400–402). Also, since its adoption in 1985, the Product Liability Directive has been considered to be one of the cornerstones of European private law, seeking to achieve maximum harmonization of consumer protection in the field (Council Directive 1985; European Parliament and Council Directive 1999). The Directive also ascertained that product liability must be seen in the context of new technologies and risk apportionment. The goal of the Product Liability Directive is to protect individuals and society from the risks that arise from defective products, so that everybody can live in a safe environment (BEUC 2017: 2). Notably, the member states of the European Union are not allowed to provide a higher level of protection to consumers than that which has been set by the Directive.

The concept of strict liability (liability without fault) of producers has traditionally been a very important principle that protects both the interests of consumers and society. The rationale behind this concept is that producers who make profits from dangerous activities should be held accountable if this danger materializes (BEUC 2017: 2). Several arguments speak in favour of attributing liability to producers or to whoever puts a product on the market. The first argument is that of spreading costs. Instead of putting the costs of the accident and the whole burden on the shoulders of a single person (the victim of the accident), there is the option of spreading the costs of unavoidable accidents among the whole community of persons who benefit from the activity, because such risks and expenses are intrinsic to the production and the price of the product. Producers also have the option of self-insuring against liability. Self-insurance and cost-spreading thus alleviate the expenses that are associated with accidents. The second argument relates to injury prevention. By having to bear the accident costs with which an activity is associated, the producer has the incentive to take all possible precautions against a recurrence of the accident. So, producers are responsible for minimising the potential risks to consumers that are associated with a product that has been brought to the market. Legal regulation should encourage manufacturers to become pro-active about protecting customers from damage and improving the ex-ante supervision of their own quality management systems (Winiger et al. 2018: 400–402; Leeuwen and Verbruggen 2015: 912).

The concept of strict liability is closely related to standard of care issues. Normally, the professional standard of care is higher when a certain activity is dangerous. Danger is often measured by the probability of occurrence and the gravity of harm—which, in other words, means that the greater the danger, the higher the necessary degree of care. For example, one of the first technologies to put legal regulation to the test was that of autonomous vehicles. In many ways, autonomous vehicles are better at performing precarious activities that are currently performed by humans (such as driving), because machines never fall asleep, they never drive drunk, they never get distracted by a text message or conversation, they never drink coffee or eat, and they do not get angry or drowsy. Bearing in mind these human deficiencies, it would be no surprise if the mass introduction of autonomous vehicles considerably lessened the rate of accidents (Vladeck 2014: 126). In a world where driverless cars or drones will have the capacity to “sense-think-act” at their own wills and with their own plans (Vladeck 2014: 122), the legal system will have to address the standard of care issues (Solum 2017: 241–42) of such autonomously-thinking machines. Adhering to the current rationale, should we—in terms of product liability rules—apply a lower standard of care to autonomously-thinking machines just because we expect them to be so technologically advanced that they should fail much less than humans? (Vladeck 2014: 131–41).

Following the European consumer organization report:

safety standards and safety expectations must be tandem criteria that encourage producers to place only safe products on the market. Therefore, there must be a smooth interplay between the Product Liability Directive and the Product Safety Directive, considering both curative and preventive considerations (BEUC 2017: 4).

An important role is played by public supervisory agencies, who are responsible for the overall supervision of the market and who can take products off the market if they do not comply with European legislation.16

16 For example, pacemakers are active implants and as such come within the scope of the Active Medical Devices Directive. To be able to place such devices on the European market, manufacturers must go through a conformity assessment procedure. This procedure is undertaken by a certification body. After the products have been placed on the market, notified bodies continue their supervising role and have to carry out regular inspections (Leeuwen and Verbruggen 2015: 912–913).

However, the current rules on liability for defective products in the context of new technologies have turned out to be ineffective in achieving their objectives. For example, without any obvious reason, software is not covered by the Directive as being a product, which is legally defined as covering “movables” (Article 2). Another problem lies with the definition of “damage” (Art. 9), which currently focuses on the destruction of another item of property, excluding damage to the digital environment. Also, consumers’ organizations pose another problem: how do we identify the liable person when the same product is made by several producers and contributors? There should be joint liability of professionals in the product supply chain: “Since the consumer has the onus of [the] burden of proof, the victim will have otherwise no possibility of recourse under the current Directive” (BEUC 2017: 4). Could it be that the existing liability instruments are inadequate and ineffective, do not reflect the needs of the digital, participatory economy, stifle innovation, and are unjust in terms of risk apportionment in society?

Researchers argue that the current product liability legal framework is likely to become inadequate as commercially available AI machines become more sophisticated and autonomous (Asaro 2012: 242). As aforementioned, where human involvement in the decision-making processes of AI is obvious, there is no need to re-examine legal regulations (Vladeck 2014: 120). The companies that currently manufacture devices with AI are already subject to a well-developed doctrine of product liability (Asaro 2012: 170–76). So, all harms that could potentially be caused by AI technologies are treated in the same way as those of any other technological products (e.g., toys, cars, or weapons). As a matter of fact, most accidents occur due to inevitable errors in the design, programming, and production of such machines. Because of this, possible failures are usually categorised into design defects, manufacturing defects, information defects, and failures to instruct (Vladeck 2014: 127–41; European Parliament 2017: 242).

Danger is usually the source of strict liability regulatory regimes in many legal systems—or, at least, in the case of more-strict liability that is, for example, based on the reversal of the burden of proof that is normally set (Winiger et al. 2018: 403–5). In this context, one more issue that has been discussed by scholars is: why should we apply a strict liability regime in AI cases if developed AI devices are likely to be less dangerous than the products that they replace? Strict liability rules are an exemption to the fault-based liability regime. They are based on risk theory, i.e., the notion that the activity or device being used is dangerous. Some scholars argue that applying this theory to AI devices might be problematic because they are likely to be far less hazardous or risky than the products that they replace (Vladeck 2014: 146). However, others argue that there are strong policy reasons for establishing an insurance-based strict liability regime for cases that relate to intelligent devices.17 One reason is the aforementioned traceability and opacity problem. The injured person should not bear the loss when the causal failure is hardly explicable (European Parliament 2017: 242). Furthermore, as it has been shown that as the “complexity of such products rises geometrically, the cost of litigating products liability cases would increase exponentially”, strict liability regimes would then spare enormous transaction costs—that is, if parties decided to litigate. According to insurance-based theory, the manufacturers are in a better economic position to absorb the costs of loss through pricing decisions, spreading the burden of loss. These arguments conform to the notions of compensatory justice and the apportionment of risk in society (Vladeck 2014: 146–48). Also, a predictable liability regime would be likely to spur on innovation to a much greater extent than an “uncertain fault-based liability system” (Vladeck 2014: 147). Eventually, no matter how sophisticated and autonomous in terms of decision-making an AI machine may be, even if it is capable of displaying independent initiative and formulating plans, some scholars suggest that it is still an instrument of other entities and has no attributed legal personhood (European Parliament 2017: 242; Vladeck 2014: 121).

17 Some legal liabilities cannot be met by insurance (Solum 2017: 1245).

For example, in one of the recent judgements of the Court of Justice of the European Union, in the case of Boston Scientific Medizintechnik, the Court extended the interpretation of the concepts of defect and damage in the Product Liability Directive. In the case of implantable medical devices, the Court stated that a product is defective if it does not provide the level of safety that a person is entitled to expect, even if the risk was not caused by the use of the product but was found in the “abnormal potential” of the products to cause damage. The Court specifically referred to the sixth recital of the Product Liability Directive, which states that the assessment of whether a product is defective “must be carried out having regard to the reasonable expectations of the public at large”. The Court then concluded that products that have a potential defect can be classified as being defective—there is no need to prove that a specific product has a defect (Leeuwen and Verbruggen 2015: 899–915). Still, it is necessary to prove that the products had “a significantly higher than normal risk of failure”. However, it is unclear to what extent the judgment is relevant for other consumer goods, like “potentially” defective AI technologies. Could it be that AI products have an abnormal potential for damage to users that is comparable to that of the pacemakers that were manufactured by Boston Scientific Medizintechnik?

AI technology uses cognitive algorithms, which are not merely programmed to perform specific tasks but also to learn and further develop themselves in interaction with their environments. Given the complexity of the design, construction, and programming of AI devices, a central legal issue is how to track the reasons behind all past actions (and omissions) (Vladeck 2014: 141–43; World Commission on the Ethics of Scientific Knowledge and Technology 2017: 6–7). This is ethically and legally crucial because “a robot’s decision paths must be re-constructible for the purposes of litigation and dispute resolution” (Riek and Howard 2014: 6).
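What “re-constructible decision paths” would demand of developers can be made concrete with a small sketch. The wrapper below is our own illustration (the field names and the stand-in model are assumptions made for the example, not requirements stated in any of the cited documents); it records, for every autonomous decision, the data a court or expert would later need in order to replay that decision:

    import hashlib, json, time

    class AuditedModel:
        # Wraps a decision-making model so that every decision leaves a
        # tamper-evident trace that can be re-examined in a dispute.
        def __init__(self, model, model_version, log_path="decisions.log"):
            self.model = model
            self.version = model_version
            self.log_path = log_path

        def decide(self, inputs):
            output = self.model(inputs)
            record = {
                "timestamp": time.time(),
                "model_version": self.version,   # which logic/weights were live
                "inputs": inputs,                # what the system sensed
                "output": output,                # what it decided
            }
            # A hash of the record makes later alteration detectable.
            record["digest"] = hashlib.sha256(
                json.dumps(record, sort_keys=True).encode()).hexdigest()
            with open(self.log_path, "a") as log:
                log.write(json.dumps(record) + "\n")
            return output

    # Usage with a stand-in braking "model" for an autonomous vehicle
    # (True means: apply the brakes):
    brake = AuditedModel(lambda x: x["obstacle_m"] < 10, model_version="v1.3")
    brake.decide({"obstacle_m": 7.2, "speed_kmh": 48})

Even this logging discipline, however, records only which inputs produced which output; with self-learning models it still cannot explain why the internal parameters mapped the one to the other. That residue is the “black box” at the heart of the opacity problem.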
Hence, there is an obvious tension between the “traceability” requirement (considering the problem of epistemic opacity) and the development of robots that have a high level of autonomy in decision-making and advanced learning capabilities. For example, if “the software was developed using open source code (with a vague number of developers behind it), it would be quite difficult to identify the developer. The situation will become more complicated if the AI is self-aware, capable of self-improvement, self-preservation, creativity, strives to obtain necessary resources, etc.” (Radutniy 2017: 242). Who will be accountable, then? Is the person who built the AI responsible, or does accountability lie with the person who provided the data, the one who validated the data, or the company operating it, etc.?

So, liability issues will arise due to these traceability and opacity problems. Court rulings to develop legal precedents take place on a case-by-case basis, which is a long and expensive process. At the same time, injured persons should not bear losses when the causal failure is hardly explicable (European Parliament 2017). Furthermore, as the “complexity of such products rises geometrically, the cost of litigating products’ liability cases would increase exponentially” (Vladeck 2014: 147).

Also, the lines between the responsibilities of manufacturers/developers and the responsibilities of users are blurring (Asaro 2012: 174; European Parliament 2017: 242). For example, if a manufacturer offered different versions of AI algorithms, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions? In this regard, it would be difficult to hold the manufacturer responsible for any creative—even if dangerous—uses of their products. Cars and weapons are also very dangerous consumer products, but users still tend to be liable for most of the harms that they have caused, because the use of those potentially dangerous products places an additional burden of responsibility on the user (Asaro 2012: 170–74). Hence, the increasing autonomy of smart devices poses an additional question: who exactly should bear ethical and legal responsibility for the behaviour of autonomous (with learning abilities) machines? Currently, there typically seems to be a “shared” or “distributed” responsibility between robot designers, engineers, programmers, manufacturers, investors, sellers, and users, because none of these agents can be identified as the ultimate source of action (World Commission on the Ethics of Scientific Knowledge and Technology 2017: 42). Scholars argue that this solution tends to dilute the notion of responsibility altogether: if everybody has a part in the total responsibility, no one is fully responsible (World Commission on the Ethics of Scientific Knowledge and Technology 2017: 4).

On the other hand, most European jurisdictions already have rules that determine that any damage that has been caused by the concurrent actions of several tortfeasors generally results in collective liability through the concept of “joint and several” liability, or solidary liability. This concept prevents actors who have contributed to the damage from avoiding liability and prevents complex accidents from fractioning, in this sense offering better protection for the victim (Winiger et al. 2018). However, it is hardly the most efficient model in terms of spurring innovation, nor the most just in terms of risk apportionment in society.
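The mechanics that make solidary liability victim-friendly, and at the same time economically blunt, are easy to state concretely. The sketch below is our own simplified illustration (the actors, shares, and figures are invented for the example; it follows the general concept described above rather than the rules of any particular jurisdiction):

    def recover(damage, contributions, insolvent):
        # Joint and several (solidary) liability, simplified: the victim is
        # made whole by the solvent tortfeasors, who absorb among themselves
        # the shares of any insolvent contributors.
        solvent = {k: v for k, v in contributions.items() if k not in insolvent}
        total = sum(solvent.values())
        return {k: damage * v / total for k, v in solvent.items()}

    # A 90,000 EUR harm jointly caused along an AI supply chain:
    contributions = {"designer": 0.2, "data_provider": 0.3, "operator": 0.5}
    print(recover(90_000, contributions, insolvent={"data_provider"}))
    # {'designer': 25714.28..., 'operator': 64285.71...}

The victim-protection effect is visible (full compensation despite an insolvent contributor), and so is the inefficiency noted above: an actor's final burden depends less on its own precautions than on who else happens to be solvent.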
So, granting subjectivity to robots would lead to “changing perceptions of appropriate allocation of risks…including change of attitudes” (Robson 2010: 118–119). “Conferring ‘personhood’ on these machines would resolve the agency question; the machines would become principals in their own right, and along with new legal status would come new legal burdens, including the burden of self-insurance” (Vladeck 2014: 150).

Analysis from legal and technical points of view foregrounds the issues that are related to the autonomy of AI, like the “black box problem”, foreseeability, traceability, the establishment of liable or guilty persons, and others—ones which have not been addressed adequately by current product liability legal frameworks.

3.2 How can the concept of the “electronic person” work for modelling new liability instruments?

In recent EU documents (European Commission 2019, 2020) there has been a tendency to depart from the idea of electronic personality. However, it should be noted that the authors of these documents are not fully consistent. Despite the long list of threats and gaps18 in the reports, their conclusion is that current safety and liability regulation seems adequate (European Commission 2019: 36, 2020: 11). We agree that, for now, taking into consideration the state of the art of AI, which we analysed above, this is probably true. But will the adequacy and completeness of liability regimes still be unquestionable in 10 years or even less?

18 For example, Union product safety legislation does not address human oversight in the context of self-learning AI, or the risks to safety derived from faulty data (p. 8), or the increasing risks derived from the opacity of self-learning systems. Given the fragmentation of liability regimes and the significant differences between the tort laws of the Member States, the outcome of cases will often be different depending on which jurisdiction applies; damage caused by self-learning algorithms on financial markets often remains uncompensated, because some legal systems do not provide tort law protection of such interests at all (European Commission 2020: 19). Emerging digital technologies make it difficult to apply fault-based liability rules, due to the lack of well-established models of the proper functioning of these technologies and the possibility of their developing as a result of learning without direct human control (European Commission 2020: 23).

As the Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics points out, these emerging technologies share such characteristics as complexity,19 opacity20 and openness.21 If the goal of European regulation is trustworthy, human-centric AI, the liability issues of emerging digital technologies should first be considered from the potential victim’s position, taking into consideration the changing technological, economic and legal ecosystems. Due to the complexity of products, different connected devices, software and services, long supply chains, later updates and upgrades, huge amounts of self-learning data for AI, contributory negligence of injured persons, etc., it will be very difficult for victims to identify the liable person.22 Furthermore, because of the so-called black-box effect, understanding autonomous AI decisions and the data behind them will require the cooperation of the potentially liable party (which will not be easy to identify in the long technological chain) and technical expertise that could be prohibitively costly for the victim.23 Reports also emphasize that the “allocation of the cost when damage occurs may be unfair or inefficient under the current rules” (European Commission 2020: 17). Taking into consideration the differences in legal liability frameworks between Member States, although the market for technologies is increasingly global, “the cost of proving all necessary conditions as required under national laws may be unduly burdensome, economically prohibitive and discourage victims from claiming compensation” (European Commission 2020: 14). Furthermore, experts point out that “for most technological ecosystems, however, no specific liability regimes exist.… The more complex these ecosystems become with emerging digital technologies, the more … difficult it becomes to apply liability frameworks” (European Commission 2019: 17).

19 Complexity is reflected in the plurality of economic operators involved in the supply chain, the multiplicity of components, software and services, as well as interconnectivity with other devices (European Commission 2020: 2).

20 For the ex-post mechanism of enforcement, it is decisive that humans be able to understand how the algorithmic decisions of the artificially intelligent agent have been reached. Self-learning artificially intelligent agents will be able to take decisions that deviate from what was initially intended by the producers. The producer’s control may be limited if the product’s operation requires software or data provided by third parties or collected from the environment, and depends on self-learning processes and personalizing settings chosen by the user. This dilutes the traditional role of a producer, when a multitude of actors contribute to the design, functioning and use of the AI product/system (European Commission 2020: 28).

21 AI products are open to updates, upgrades and self-learning data after placement on the market.

22 The more complex the interplay of various factors that either jointly or separately contributed to the damage, and the more crucial links in the chain of events are within the defendant’s control, the more difficult it will be for the victim to succeed in establishing causation (European Commission 2020: 20).

23 The more complex the circumstances leading to the victim’s harm are, the harder it is to identify relevant evidence. It can be difficult and costly to identify a bug in long and complicated software code. Examining the process leading to a specific result (how the input data led to the output data) may be difficult, very time-consuming and expensive (European Commission 2019: 24).
13
AI & SOCIETY

Taking into consideration the differences between the legal liability frameworks of the Member States (although the market for these technologies is increasingly global), "the cost of proving all necessary conditions as required under national laws may be unduly burdensome, economically prohibitive and discourage victims from claiming compensation" (European Commission 2020: 14). Furthermore, experts point out that "for most technological ecosystems, however, no specific liability regimes exist.… The more complex these ecosystems become with emerging digital technologies, the more … difficult it becomes to apply liability frameworks" (European Commission 2019: 17).

It is obvious that without clear and predictable legal regulation, the goal of making Europe a world leader in AI (European Commission 2020: 1) with "the promise of making the world a safer, fairer, more productive, more convenient place" (European Commission 2019: 11) remains merely a nice declaration. Moreover, national non-harmonised regimes24 can lead to further fragmentation of legal liability frameworks, increasing both the costs of introducing innovative AI technologies and the insurance costs for producers. Thus, the EU has probably departed from the idea of electronic personhood as a possible liability instrument too early.

24  Only the strict liability of producers for defective products is harmonized at EU level by the Product Liability Directive; all other regimes are regulated by the Member States themselves (European Commission 2019: 3).

The 2019 European Commission report on liability for artificial intelligence and other emerging digital technologies admits that "giving robots or AI a legal personality would not require including all rights natural persons, or companies, have" (European Commission 2019: 38), so it is clear that there is no obstacle in this regard, even if the main criticism of electronic personhood concentrates on ethical and human rights issues.

The main arguments against electronic personality from the liability point of view are: (1) that it "would not be practically useful, since civil liability is property liability, requiring its bearer to have assets" (European Commission 2019: 38); and (2) that "there is no need to give a legal personality to emerging digital technologies", as harm caused by them is generally attributable to natural or legal persons (European Commission 2019: 38). The concept of vicarious liability is invoked to argue that operators of machines, computers, robots or similar technologies should instead be strictly liable for those operations, by analogy with the basis of vicarious liability (European Commission 2019: 25).

However, these arguments are not very convincing. Indeed, the same "no assets" argument could equally be applied to most limited liability companies, especially small and medium-sized enterprises. As a matter of fact, the company as a person is a legal fiction, and limited liability is the pivotal feature that makes the company an attractive legal instrument for doing business. Legal requirements for company assets (capital) vary depending on the jurisdiction and the type of company; yet even where damage can be attributed to the company as a legal person, there is no reimbursement guarantee for victims. On the other hand, although there is quite an elaborate doctrine of lifting the corporate veil, the courts rarely apply it.

Electronic personality as a liability instrument could be coupled with mandatory insurance and strict liability schemes. This would not only ensure compensation irrespective of fault in the long technological chain of products and services, but would also reduce litigation costs for victims and put a cap on liability. The synergy of the three instruments (electronic personhood, strict liability and mandatory insurance) would greatly contribute to legal certainty and, accordingly, to the trust of customers and investors. Of course, an economic analysis of the interplay of these three instruments would be helpful in identifying the cheapest cost avoider and the cheapest taker of insurance, which will most likely be the producer (who puts the product into circulation). Also, from the analysis of autonomous artificial agents, it is clear that it may be challenging to identify a benchmark for electronic personality. One potential benchmark might be the point at which the operation of artificial autonomous agents is safer and less likely to cause damage to others than the activity of human actors. We therefore argue that the idea of electronic personality should be further elaborated as a possible instrument for liability purposes.
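To make the interplay of the three instruments concrete, the following minimal sketch models how a claim against an insured electronic person might be settled. It is written in Python purely for illustration; the names (ElectronicPerson, settle_claim) and the figures are our assumptions, not elements of any cited report or existing scheme.

```python
from dataclasses import dataclass


@dataclass
class ElectronicPerson:
    """A hypothetical registered AI agent backed by mandatory insurance."""
    registry_id: str
    insurance_cap: float      # liability cap = mandatory insurance coverage
    paid_out: float = 0.0     # compensation already drawn from the policy


def settle_claim(agent: ElectronicPerson, proven_damage: float) -> float:
    """Strict liability: the victim proves damage and attribution, never fault.

    The insurer pays up to the remaining coverage, so producers and
    insurers face a calculable maximum exposure (the cap).
    """
    remaining_cover = agent.insurance_cap - agent.paid_out
    compensation = min(proven_damage, remaining_cover)
    agent.paid_out += compensation
    return compensation


# Illustration with invented figures: two successive claims against one agent.
care_robot = ElectronicPerson("EP-0001", insurance_cap=100_000.0)
print(settle_claim(care_robot, 40_000.0))  # 40000.0 - paid with no fault inquiry
print(settle_claim(care_robot, 80_000.0))  # 60000.0 - capped by remaining cover
```

On such a model, the victim's burden shrinks to proving the damage and its attribution to the registered agent; identifying a negligent actor along the long technological chain becomes a matter for the insurer's recourse claims rather than for the victim.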
Relevant scholarship and legal documents in this field emphasize that AI is a pervasive, transformative innovation that needs a new approach. We agree that AI, as the most innovative technology developed thus far, is one of the main drivers of the Fourth Industrial Revolution and should be treated differently from the technologies that preceded it. In an open letter from the Future of Life Institute, signed by more than 8000 scholars, leaders, and experts, it has been argued that the impact of AI on society is likely to lead to greater research investments in the field; the letter calls for expanded research to reap the benefits of AI while avoiding potential pitfalls (Future of Life Institute; Russell et al. 2015: 105–114). AI is pervasive because other innovations are increasingly built on AI technology, endowed with human-like skills such as learning, speech recognition, automated reasoning, sensing, interaction, problem-solving, and creativity (World Commission on the Ethics of Scientific Knowledge and Technology 2017: 4). From autonomous vehicles to self-learning personal assistants, the application scenarios are overwhelmingly vast, but so are the possible ethical, societal, and legal implications.
One possible solution to the debate over whether AI should be granted electronic personhood is to take into account the existing levels of AI and the justification for its rapid development. The issue is primarily one of liability. In response, we propose the following compromise: modified liability mechanisms for harm caused by autonomous AI. The idea of sophisticated AI machines as agents would perfectly match a strict, insurance-based liability regime (Vladeck 2014). There should be a correlation between the degree of AI autonomy and the applicable liability regime. In the case of deterministic AI devices with basic or no autonomy, the applicable regime is strict liability based on the well-known product liability doctrine. In the case of cognitive AI devices with high levels of autonomy, the regime is also strict, but novel regulation of limited liability schemes for autonomous artificially intelligent agents, based on mandatory insurance, should be considered. The proposed liability instrument would partly call into question the traditional, well-developed doctrine of product liability (Asaro 2012: 170–176), as well as negligence and the fault-based liability system, which relies on a chain of causation. However, the proposition would conform to notions of compensatory justice and the apportionment of risk in society (Vladeck 2014: 146–148). A predictable liability regime of this kind would also spur innovation far more than an "uncertain fault-based liability system" (Vladeck 2014: 147). Of course, this scheme should be examined against other alternatives, such as product liability alone, trainer's responsibility, mandatory insurance alone, a supplementary guaranty fund to compensate victims of accidents, etc.
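The proposed correlation between the degree of autonomy and the applicable liability regime can be summarised in a short sketch. The autonomy levels and their labels below are our illustrative assumptions rather than categories taken from the cited documents.

```python
from enum import Enum


class Autonomy(Enum):
    NONE = 0       # deterministic device with fixed behaviour
    BASIC = 1      # limited adaptivity, decisions remain fully traceable
    COGNITIVE = 2  # self-learning agent whose decisions may deviate from design


def liability_regime(autonomy: Autonomy) -> str:
    """Map the degree of AI autonomy to the applicable liability regime."""
    if autonomy in (Autonomy.NONE, Autonomy.BASIC):
        # Deterministic devices: the classic strict product liability doctrine.
        return "strict product liability of the producer"
    # Highly autonomous cognitive agents: liability remains strict, but is
    # channelled through a limited-liability electronic person backed by
    # mandatory insurance (capped exposure, no fault inquiry).
    return ("strict liability of the agent as a limited-liability "
            "electronic person, backed by mandatory insurance")


for level in Autonomy:
    print(f"{level.name}: {liability_regime(level)}")
```

The point of the mapping is that liability stays strict at every level of autonomy; what changes is only the channel through which it is enforced and insured.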
4 Conclusions

It is evident that the main reason behind the proposal of electronic personhood in the "Civil Law Rules on Robotics" is to address scenarios in which AI systems might cause harm, especially where AI algorithms learn and evolve of their own accord. It is argued that there is a need to review existing liability mechanisms due to the problems of foreseeability, traceability, and the establishment of liable or guilty persons. However, there are also very strong arguments that such decisions are premature, or simply acts of marketing—in other words, political whims.

In the "Civil Law Rules on Robotics", the European Parliament appears to treat AIs as easily identifiable entities, but this is a misleading position. The definition of AI is not obvious at all. Features of AI such as intelligence and autonomy make these technologies exceptionally difficult to regulate compared to other human-made objects in the material world. The current generation of AI completely lacks common sense, rationality, consciousness, and critical thinking, all of which are indispensable for evaluating data and information on one's own. The current wave of advances in AI does not actually bring us intelligence that can act autonomously and safely. So, from a technical point of view, it appears premature and, most likely, inappropriate to introduce electronic personhood at the present time, because (1) the scope of AI, whether as a concept or as an artefact, is unclear; (2) it is not clear what kind of intelligence we are creating under the notion of "AI"; and (3) it is an almost completely opaque technology.

In recent EU documents there has been a tendency to depart from the idea of electronic personality; however, the arguments offered are not very convincing. We agree that for now, taking into consideration the state of the art of AI, the current safety and liability regulation seems robust and reliable; very soon, however, the adequacy and completeness of liability regimes will become highly questionable.

The adoption of the concept of the "electronic person" could provide an apt legal instrument for liability purposes once AI development reaches a certain technological level. AI, as the most innovative technology developed to date and one of the main drivers of the Fourth Industrial Revolution, should be treated differently from previous technologies; there is a need to depart from traditional liability theories. As one possible solution, we propose the consideration of limited liability schemes for autonomous artificially intelligent agents, based on mandatory insurance. Mandatory insurance together with strict liability would not only ensure compensation irrespective of fault in the long technological chain of products and services, but would also reduce litigation costs for the victim and put a cap on liability. The synergy of the three instruments (electronic personhood, strict liability and mandatory insurance) would greatly contribute to legal certainty and, accordingly, to the trust of customers and investors.

Funding  This research is funded by the European Social Fund according to the activity "Improvement of Researchers' Qualification by Implementing World-Class R&D Projects" of Measure No. 09.3.3-LMT-K-712. Special acknowledgment is dedicated to Prof. J. Gordon (the chief researcher of this project) for his valuable ideas and contribution to this research.

References

Aleksandre FM (2017) The legal status of artificially intelligent robots. Personhood, taxation, and control. Dissertation, University of Tilburg
Asaro PM (2012) A body to kick, but still no soul to damn: legal perspectives on robotics. In: Lin P, Abney K, Bekey GA (eds) Robot ethics: the ethical and social implications of robotics, intelligent robotics, and autonomous agents. MIT Press, Cambridge, pp 169–186
Atabekov A, Yastrebov O (2018) Legal status of artificial intelligence across countries: legislation on the move. Eur Res Stud J 21(4):773–782
Atkinson R (2016) "It's going to kill us!" and other myths about the future of artificial intelligence. NCSSS J 21(1):8–11
Barfield W (2006) Intellectual property rights in virtual environments: considering the rights of owners, programmers and virtual avatars. Akron Law Rev 39(3):649–700

Beck S (2016) Intelligent agents and criminal law—negligence, diffusion of liability, and electronic personhood. Robot Auton Syst 86:138–143
BEUC (2017) The consumer voice in Europe. Review of product liability rules. BEUC Position Paper. https://www.beuc.eu/publications/beuc-x-2017-039_csc_review_of_product_liability_rules.pdf. Accessed 7 Jan 2020
Boden MA (2006) Mind as machine: a history of cognitive science. Oxford University Press, Oxford
Bryson JJ, Diamantis ME, Grant TD (2017) Of, for, and by the people: the legal lacuna of synthetic persons. Artif Intell Law 25:273–291
Čerka P, Grigienė J, Sirbikytė G (2017) Is it possible to grant legal personality to artificial intelligence software systems? Comput Law Secur Rev. https://doi.org/10.1016/j.clsr.2017.03.022
Chinen M (2019) Law and autonomous machines. Edward Elgar, Cheltenham
Clarke A (1962) Hazards of prophecy: the failure of imagination. In: Profiles of the future: an enquiry into the limits of the possible. Gollancz, London
Clark E (2017) Embodied, situated, and distributed cognition. In: A companion to cognitive science. Wiley
Council Directive (1985) On the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, 85/374/EEC (OJ L 210, 7.8.1985)
Domingos P (2018) The master algorithm: how the quest for the ultimate learning machine will remake our world. Basic Books, New York
Dowell R (2018) Fundamental protections for non-biological intelligences (or: how we learn to stop worrying and love our robot brethren). Minn J Law Sci Technol 19:305–335
European Commission (2018a) Staff Working Document on liability for emerging digital technologies accompanying the document Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions "Artificial Intelligence for Europe". SWD (2018) 137 final
European Commission (2018b) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe. COM (2018) 237 final
European Commission (2018c) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions—Coordinated Plan on Artificial Intelligence. COM (2018) 795 final
European Commission (2019) Liability for artificial intelligence and other emerging digital technologies. Report from the Expert Group on Liability and New Technologies—New Technologies Formation, European Union. https://ec.europa.eu/transparency/regexpert/index.cfm?do=groupDetail.groupMeetingDoc&docid=36608. Accessed 27 Dec 2019
European Commission (2020) Report on the safety and liability implications of artificial intelligence, the Internet of Things and robotics. https://ec.europa.eu/info/sites/info/files/report-safety-liability-artificial-intelligence-feb2020_en_1.pdf. Accessed 30 Mar 2020
European Commission's High-Level Expert Group on Artificial Intelligence (2018) A definition of AI: main capabilities and scientific disciplines. Brussels. https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines. Accessed 17 June 2019
European Commission's High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI. Brussels. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 20 June 2019
European Parliament (2017) Civil rules on robotics. European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/TA-8-2017-0051_EN.pdf. Accessed 15 June 2019
European Parliament and Council Directive (1999) Amending Council Directive 85/374/EEC on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, 1999/34/EC (OJ L 141, 4.6.1999)
European Union (2019) Independent high-level expert group on artificial intelligence set up. Ethics guidelines for trustworthy AI. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai. Accessed 22 June 2019
Florian R (2003) Autonomous artificial intelligent agents. Center for Cognitive and Neural Studies. https://coneural.org/reports/Coneural-03-01.pdf. Accessed 02 Feb 2020
Franklin S, Graesser A (1996) Is it an agent, or just a program? A taxonomy for autonomous agents. In: Proceedings of the third international workshop on agent theories, architectures, and languages. Springer
Goertzel B, Pennachin C (eds) (2007) Artificial general intelligence. Springer, Berlin
Gordon JS (2018) What do we owe to intelligent robots? AI Soc. https://doi.org/10.1007/s00146-018-0844-6
Gunkel DJ (2018) Robot rights. The MIT Press, Cambridge
Hofstadter D (1999) Gödel, Escher, Bach: an eternal golden braid. Basic Books, New York
Jaynes TL (2019) Legal personhood for artificial intelligence: citizenship as the exception to the rule. AI Soc. https://doi.org/10.1007/s00146-019-00897-9
Kritikos M (2019) Artificial intelligence ante portas: legal and ethical reflections. Briefing, European Parliamentary Research Service. https://www.europarl.europa.eu/at-your-service/files/be-heard/religious-and-non-confessional-dialogue/events/en-20190319-artificial-intelligence-ante-portas.pdf. Accessed 22 June 2019
Leeuwen VB, Verbruggen P (2015) Resuscitating EU product liability law? Eur Rev Private Law 23(5):899–915
Legg S, Hutter M (2007) A collection of definitions of intelligence. In: Proceedings of the conference on advances in artificial general intelligence. IOS, Amsterdam
Maes P (1995) Artificial life meets entertainment: lifelike autonomous agents. Commun ACM 38:108–114
McCarthy J (2007) What is artificial intelligence? https://www-formal.stanford.edu/jmc/whatisai.pdf. Accessed 12 June 2019
Menary R (2007) Cognitive integration. Palgrave Macmillan, UK
O'Neil C (2016) Weapons of math destruction. Crown Publishers, New York
Pagallo U (2018a) Vital, Sophia, and Co.—the quest for the legal personhood of robots. Information 9:1–11
Pagallo U (2018b) Apples, oranges, robots: four misunderstandings in today's debate on the legal status of AI systems. Philos Trans R Soc A. https://doi.org/10.1098/rsta.2018.0168
Pinker S (1998) How the mind works. Penguin Press, London
Radutniy OE (2017) Criminal liability of the artificial intelligence. Probl Legal 138:132–141
Renda A (2019) Artificial intelligence: ethics, governance, and policy challenges. Report of a CEPS Task Force. Centre for European Policy Studies (CEPS), Brussels
Riek LD, Howard D (2014) A code of ethics for the human–robot interaction profession. We Robot. https://robots.law.miami.edu/2014/wp-content/uploads/2014/03/a-code-of-ethics-for-the-human-robot-interaction-profession-riek-howard.pdf. Accessed 14 June 2019

Robson RA (2010) Crime and punishment: rehabilitating retribution as a justification for organizational criminal liability. Am Bus Law J 47(1):109–144
Rosenschein SJ (1999) Intelligent agent architecture. In: Wilson RA, Keil F (eds) The MIT encyclopedia of the cognitive sciences. MIT Press, Cambridge
Russell S, Dewey D, Tegmark M (2015) Research priorities for robust and beneficial artificial intelligence. AI Mag 36(4):105–114
Russell S, Norvig P (2010) Artificial intelligence: a modern approach, 3rd edn. Pearson Prentice Hall, Upper Saddle River
Samoili S, Lopez Cobo M, Gomez Gutierrez E, De Prato G, Martinez-Plumed F, Delipetrev B (2020) AI WATCH. Defining artificial intelligence. Publications Office of the European Union, Luxembourg. https://doi.org/10.2760/382730. JRC118163. https://ec.europa.eu/jrc/en/publication/ai-watch-defining-artificial-intelligence. Accessed 02 Apr 2020
Schwitzgebel E, Mara G (2015) A defense of the rights of artificial intelligences. Midwest Stud Philos 30:98–119
Selwood M (2017) The road to autonomy. San Diego Law Rev 54:829–873
Singh S (2017) Attribution of legal personhood to artificially intelligent beings. Bharati Law Rev 54:194–201
Smithers T (1995) Are autonomous agents information processing systems? In: Steels L, Brooks R (eds) The artificial life route to artificial intelligence: building embodied, situated agents. Lawrence Erlbaum Associates, Hillsdale
Solum B (2017) Legal personhood for artificial intelligences. N C Law Rev 70(4):1231–1287
Stone P, Brooks R, Brynjolfsson E, Calo R, Etzioni O, Hager G, Hirschberg J, Kalyanakrishnan S, Kamar E, Kraus S, Leyton-Brown K, Parkes D, Press W, Saxenian AL, Shah J, Tambe M, Teller A (2016) Artificial intelligence and life in 2030. One hundred year study on artificial intelligence: report of the 2015–2016 Study Panel. Stanford University, Stanford, CA. https://ai100.stanford.edu/2016-report. Accessed 17 July 2019
Suetonius (1913) De vita Caesarum: Caligula, 55. https://penelope.uchicago.edu/Thayer/E/Roman/Texts/Suetonius/12Caesars/Caligula*.html#55. Accessed 03 Mar 2020
Tegmark M (2014) Our mathematical universe. Random House LLC, New York
Turing A (1950) Computing machinery and intelligence. Mind 59(236):433–460. https://doi.org/10.1093/mind/LIX.236.433
Vladeck DC (2014) Machines without principals: liability rules and artificial intelligence. Wash Law Rev 89(1):117–150
Winiger B, Karner E, Oliphant K (2018) Essential cases on misconduct. Digest of European tort law. De Gruyter, Berlin
World Commission on the Ethics of Scientific Knowledge and Technology (2017) Report of COMEST on robotics ethics. https://unesdoc.unesco.org/ark:/48223/pf0000253952. Accessed 20 July 2019
Zevenbergen B, Finlayson M, Kortz M, Pagallo U, Borg JS, Zapušek T (2018) Appropriateness and feasibility of legal personhood for AI systems. In: International conference on robot ethics and standards (ICRES 2018). https://users.cs.fiu.edu/~markaf/doc/w16.zevenbergen.2018.procicres.3.x_camera.pdf. Accessed 14 June 2019

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.