Telecommunications Policy
journal homepage: http://www.elsevier.com/locate/telpol
ARTICLE INFO

Keywords: Artificial intelligence; Ethics; Law

ABSTRACT

AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects. Frequently, the two are mixed and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with law. Both establish models of social behaviour, but they are different in scope and nature. The juridical analysis is based on a non-formalistic scientific methodology. This means that it is necessary to consider the nature and characteristics of AI as a preliminary step to the definition of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence and the question of the unitary or diverse nature of AI. From that theoretical and practical basis, the study of the legal system is carried out by examining its foundations, the governance model and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the principal legal framework for the regulation of AI.
1. Introduction
For some time now, Artificial Intelligence (AI) has been the focus of an extensive and worthwhile debate on the international scene and in most countries worldwide. Concern about AI involves States and international organizations, as well as other non-State actors from academia, corporations, enterprises or industry, and civil society.1 This debate encompasses its technological, economic
and socio-political aspects, as well as the ethical and legal issues raised by AI.
The ethical and legal aspects of AI have been the subject of numerous academic studies. Most of them deal with specific or particular aspects, such as its use for medical purposes or lethal autonomous weapons systems, to take two very different examples.
This predominant trend has two consequences. On the one hand, there are specific issues that do not receive the same attention, like
the energy cost involved in data analysis,2 its environmental effects or the use of this technology for terrorist purposes.3 On the other hand, and this is more worrying, there is relatively limited doctrinal research on the overall panorama of the ethical or legal problems posed by AI. As a result, a sectorial and fragmented perspective prevails over an integral and holistic overview. There is a lack of a general and global approach to the legal4 and ethical5 aspects of AI. Winfield goes further along this same line of reasoning by stating that "we lack a general (mathematical) theory of intelligence" (Winfield, 2019, p. 11).

https://doi.org/10.1016/j.telpol.2020.101937
Received 30 April 2019; Received in revised form 10 February 2020; Accepted 12 February 2020
Available online 25 February 2020
0308-5961/© 2020 Elsevier Ltd. All rights reserved.
M. Robles Carrillo, Telecommunications Policy 44 (2020) 101937
The aim of this work is to contribute to the AI debate by promoting and providing a more general and not specific analysis. In this
approach, the study of doctrine and practice reveals two main problems: the confusion between ethical and legal aspects and a certain
disregard of law. Both problems require an analysis and a solution. The first precludes a real understanding of the role and function of
ethics and law within the framework of the AI, which is the necessary preliminary step to the definition of their corresponding
principles. The second reflects a lack of appreciation of the role of law as an instrument of social and political order. Law is necessary in
respect of any matter or reality simply because it establishes rules of social behaviour necessary for the coexistence of people in society.
Law cannot be ignored, nor can it be confused with ethics. Both are parameters of social behaviour necessary in any field or context
and, in particular, in areas of significant complexity such as AI.
In the first place, the ethical debate on AI (Section 2) is particularly complicated because of this confusion between ethical and legal
principles, but also because of its underlying and incorrect conception of ethics. It is therefore necessary to clarify the concept of ethics
(Section 2.1) before differentiating it from the law (Section 2.2). Then, once their differences have been identified, the juridical
analysis (Section 3) is addressed by first explaining the methodological paradigm. Making law is not the same as thinking about law. Law
is a domain of scientific knowledge in which jurists, as in this case, apply a methodology for a better understanding and implementation of its foundations, contents and objectives (Section 3.1). The non-formalistic methodological approach explained in this section justifies the need to analyse the nature and characteristics of AI as an object of regulation (Section 3.2). Finally, on
that theoretical and practical basis, which supports the need for a regulation from the perspective of International Law, this paper
proposes a legal framework for AI with an institutional and a normative component (Section 3.3). The paper finishes with a series of
conclusions on the complex relationship between ethics and law and on the need to make progress in the legal construction of AI.
The importance of ethical principles in AI is generally recognized in the institutional framework, in the scientific community and in
society in general (Boddington, 2017). The number6 and variety7 of proposals submitted in this regard by public or private institutions
are, however, difficult to encompass and not always very comprehensible. The ethical debate poses two main problems: a conceptual
problem regarding the idea and content of ethics (Section 2.1); and a functional problem concerning its relationship to, and differentiation from, law (Section 2.2).
Ethics is a philosophical discipline that studies good and evil and their relationship to morality and human behaviour. Ethics is an
idea, a framework or a model of thought and action, a unique concept in abstract terms, but with a variable scope and content. The
reason is that the concepts of good or evil, the idea of morality, and models of human behaviour are not permanent, rigid, or static, but
evolve over time and through space.
Historically, there has not been only one single ethics, nor has it always and at all times had the same relevance and function in the
development of different human beings and different societies, cultures and civilizations. The ethical parameters of today’s society are
not the same as the principles elaborated in the classical Greek or Roman worlds. The ethical principles of today’s European society are
not exactly the same as those prevailing in the Asian, American, African or Islamic world.8 The coexistence of ethics, morality and
religion, the relationship between the individual and the community or respect for ancestors or nature, for instance, receive different
4 As Alžběta Krausová noted, "A considerable amount of research has been conducted until now in order to describe various aspects of the relationship between AI and law. However, the knowledge on AI and law is fragmented in various papers, specialized books, reports, opinions, notes, comments etc. Mostly only individual aspects or problems are being tackled. The overall description providing a bigger picture of the discipline in a succinct paper is missing" (Krausová, 2017). Richard Collins highlights the double danger of this type of analysis for the law: the de-formalisation and fragmentation of the legal system (Collins, 2019). McGregor argues, specifically, that "is needed to situate the demands for technological or algorithmic accountability within a wider accountability framework of governance choice" (McGregor, 2019, p. 1085).
5 Andrej Dameski also points out that "there is a clear need for the establishment of a comprehensive ethical framework in regards of AI" (Dameski, 2018).
6 According to Floridi, "there are currently more than 70 recommendations, published in the last 2 years, just about the ethics of AI". The author makes an interesting analysis of ethical practices (Floridi, 2019).
7 Regarding the variety of proposals, Dameski identifies the main ethical issues in the field of AI: Moral entities; Consciousness; Universalism vs. anthropocentrism; Aliveness/'Being'; Personhood and legal personhood; Agency, autonomy; Complexity and moral uncertainty; Rights; Values; Virtues (and vices); Accountability and responsibility; Opacity and transparency; Utility; Trust; Morally-burdened effects (Dameski, 2018). Some of these issues also raise legal problems. However, the methodological approach to their study, as well as the results of the analysis, are different in each case.
8 In the Islamic world, AI and robotics are "merely modifications and adjustments of materials that were already created by Allah, in order to improve human life … This is because Islam discourages the creation of things that resemble the original creation of God, unless with good or strong justification" (Ikram & Kepli, 2018, pp. 177–179).
9 https://www.baai.ac.cn/blog/beijing-ai-principles.
10 https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment.
11 For a complete overview of the most relevant proposals and the common principles among them, see Fjeld et al., 2019.
12 Boddington identifies three main theories: "Consequentialist theories, which broadly claim that the right action is the one that brings about the best consequences. This is most commonly held as some form of utilitarianism, which aims to bring about the greatest balance of happiness over unhappiness, or pleasure over pain, for the largest number of people. Deontological theories, which claim that what matters is whether an action is of the right kind, that is, whether it is in accordance with some general overarching principle, or with a set of principles, such as 'do not take innocent life', 'do not lie', and so on. Virtue ethics, which focuses on the character of the ideal moral agent, and describes the range of different virtues such an agent has, and, broadly, claims that the right thing to do in any given situation is to do what the fully virtuous person would" (Boddington, 2017, p. 8).
13 Floridi collects 47 principles from documents of different authorship and classifies them according to four core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. The authors add a fifth, explicability, understood as incorporating both intelligibility and accountability (Floridi et al., 2018, p. 696).
14 This is the case of the Ethical Charter on the Use of Artificial Intelligence in Judicial Systems adopted by the European Commission for the Efficiency of Justice (CEPEJ) during its 31st Plenary meeting (Strasbourg, 3–4 December 2018). These principles are: Respect for fundamental rights; Non-discrimination; Quality and Security; Transparency, Impartiality and Fairness; and the Principle "under user control". Available at: https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment.
15 The Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects adopts a very specific report on "Ethics and autonomous weapon systems: An ethical basis for human control?". Available at: https://www.unog.ch/80256EDD006B8954/(httpAssets)/20092911F6495FA7C125830E003F9A5B/$file/CCW_GGE.1_2018_3_final.pdf.
bring a result quite different from that achieved from a social,16 philosophical or ideological point of view.17 On the other hand, the
approach from a hierarchical model is different from that resulting from a multi-stakeholder process as demonstrated by the IEEE
Global Initiative on Ethics of Autonomous and Intelligent Systems18 or the Asilomar AI Principles19; 5) There are also significant
differences in the discussion when the proposals come from a State20, an international organization21 or a non-institutional framework22; 6) And, last but not least, an important problem is the existence of a plurality and variety of proposals coming even from the
same subjects acting in parallel fora. This intensive work indicates an appreciable interest in the ethical dimension of AI. But it also generates some confusion, reduces the transparency of the debate itself and, at times, entails duplication of work.
A paradigmatic example of that situation can be found in the European Union (EU). In addition to the normative proposals, there
are four main working forums on ethics and AI. First, the European Group on Ethics in Science and New Technologies (EGE) is an
independent advisory body of the President of the European Commission.23 In its “Statement on Ethics of Artificial Intelligence”, the
EGE proposes a set of 9 basic principles.24 Second, the AI4People’s project has surveyed the aforementioned EGE principles as well as
36 other ethical principles put forward to date and subsumed them under 4 overarching general principles.25 Third, the High-Level
Expert Group on Artificial Intelligence (AI HLEG), appointed by the European Commission, has published a draft26 and, then, on 8
April 2019, the “Ethics Guidelines for Trustworthy AI”.27 Finally, the European AI Alliance, steered by the AI HLEG, is the European
Union's multi-stakeholder platform on AI. There is not only a variety, and perhaps a duplication, of forums. There are also appreciable
16 The UNI Global Union, based in Switzerland, represents more than 20 million workers from over 150 countries in the fastest growing sectors in the world. This organization adopts 10 Principles for Ethical AI: 1. Demand That AI Systems Are Transparent; 2. Equip AI Systems With an "Ethical Black Box"; 3. Make AI Serve People and Planet; 4. Adopt a Human-In-Command Approach; 5. Ensure a Genderless, Unbiased AI; 6. Share the Benefits of AI Systems; 7. Secure a Just Transition and Ensuring Support for Fundamental Freedoms and Rights; 8. Establish Global Governance Mechanisms; 9. Ban the Attribution of Responsibility to Robots; 10. Ban AI Arms Race. Available at: http://www.thefutureworldofwork.org/opinions/10-principles-for-ethical-ai/.
17 The ADM Manifesto sets out particular principles: 1) Algorithmic decision making (ADM) is a fact of life today; it will be a much bigger fact of life tomorrow. It carries enormous dangers; it holds enormous promise. The fact that most ADM procedures are black boxes to the people affected by them is not a law of nature. It must end; 2) ADM is never neutral; 3) The creator of ADM is responsible for its results. ADM is created not only by its designer; 4) ADM has to be intelligible in order to be held accountable to democratic control; 5) Democratic societies have the duty to achieve intelligibility of ADM with a mix of technologies, regulation, and suitable oversight institutions; 6) We have to decide how much of our freedom we allow ADM to preempt. Available at: https://algorithmwatch.org/en/the-adm-manifesto/.
18 The IEEE defends these principles: Human Rights; Well-being; Data Agency; Effectiveness; Transparency; Accountability; Awareness of misuse; and Competence (IEEE, 2019).
19 The Asilomar Principles (2017) are divided into specific categories. In general terms, the goal of AI research should be to create not undirected intelligence, but beneficial intelligence. Research Funding, Science-Policy Link, Research Culture and Race Avoidance are the main tools to realize this idea. The ethics and values are: Safety, Failure Transparency, Judicial Transparency, Responsibility, Value Alignment, Human Values, Personal Privacy, Liberty and Privacy, Shared Benefit, Shared Prosperity, Human Control, Non-subversion and AI Arms Race. The longer-term issues concern Capability Caution, Importance, Risks, Recursive Self-Improvement and Common Good. Available at: https://futureoflife.org/ai-principles/.
20 The Japanese Society for Artificial Intelligence Ethical Guidelines (JSAI) establish the following principles: Contribution to humanity; Abidance of laws and regulations; Respect for the privacy of others; Fairness; Security; Act with integrity; Accountability and Social Responsibility; Communication with society and self-development; Abidance of ethics guidelines by AI. Available at: http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf.
21 UNESCO is actively working on AI (https://en.unesco.org/news/participants-global-unesco-conference-artificial-intelligence-urge-rights-based-governance-ai). The COMEST Working Group adopts the Report of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) on Robotics Ethics. The report identifies the following relevant ethical principles and values: Human Dignity; Value of Autonomy; Value of Privacy; Do not Harm Principle; Principle of Responsibility; Value of Beneficence; Value of Justice. Available at: http://www.unesco.org/new/en/social-and-human-sciences/themes/comest/.
22 The Toronto Declaration is an example in this case. Available at: https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf.
23 The EGE (2018) is a multi-disciplinary body, which advises on all aspects of policies and legislation where ethical, societal and fundamental rights dimensions intersect with the development of science and new technologies (https://ec.europa.eu/info/research-and-innovation/strategy/support-policy-making/scientific-support-eu-policies/european-group-ethics-science-and-new-technologies-ege_en).
24 https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.
25 These include: Beneficence (defined as 'do good'); Non-maleficence (defined as 'do no harm'); Autonomy (defined as 'respect for self-determination and choice of individuals'); and Justice (defined as 'fair and equitable treatment for all') (https://www.eismd.eu/ai4people-europes-first-global-forum-ai-ethics-launches-at-the-european-parliament/).
26 https://ec.europa.eu/knowledge4policy/publication/draft-ethics-guidelines-trustworthy-ai_en (European Commission, 2018a,b).
27 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (AI HLEG, 2018).
28 The "Draft Ethics Guidelines for Trustworthy AI" had two main basic purposes: a human-centric approach to AI and a trustworthy AI. That specifically means that: 1) AI should respect fundamental rights, applicable regulation and core principles and values, ensuring an "ethical purpose"; and 2) AI should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm. The originality of this proposal was twofold: first, the document does not aim to provide a list of core values and principles for AI, but rather to offer guidance on their concrete implementation; and second, such guidance is provided in three layers of abstraction, namely, fundamental rights, principles and values, and the assessment list intended to guarantee the achievement of a trustworthy AI. The project clearly points out that the guidelines are not intended as a substitute for any form of policymaking or regulation. For their part, the "Ethics Guidelines for Trustworthy AI" finally adopted introduce a clearer and more coherent overall approach, which is more political than technical. There are four ethical principles identified as ethical imperatives: Respect for human autonomy; Prevention of harm; Fairness; and Explicability. These guidelines are not an official document and are not legally binding.
29 A comparative study can be found in the report Artificial Intelligence: how knowledge is created, transferred, and used. Trends in China, Europe and the United States (https://www.elsevier.com/research-intelligence/resource-library/ai-report).
30 https://www.baai.ac.cn/blog/beijing-ai-principles.
31 https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-excerpts-chinas-white-paper-artificial-intelligence-standardization/.
32 http://most.gov.cn/kjbgz/201906/t20190617_147107.htm.
33 https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/.
34 In fact, the State of Washington is debating a proposal for an act relating to artificial intelligence-enabled profiling with important ethical aspects. According to this proposal, these practices not only threaten the fundamental rights and privileges of people but menace the foundation and supporting institutions of a free democratic state (http://lawfilesext.leg.wa.gov/biennium/2019-20/Pdf/Bills/House%20Bills/2644.pdf).
35 In 2019, the US Department of Defense adopted its Recommendations on the Ethical Use of AI, according to which the use of AI systems must be responsible, equitable, traceable, reliable and governable (https://admin.govexec.com/media/dib_ai_principles_-_supporting_document_-_embargoed_copy_(oct_2019).pdf).
36 The rapid expansion in the use of applications in developing countries is often due to the ineffectiveness of public services or basic private services such as banking. Such motivations are not the same as those existing among the population of developed countries, where it is not a question of covering basic needs in the strict sense.
impose ethical principles. Commitment to ethics cannot be turned into the right to decide what is ethical or what ethics means to the
rest of the world (Crawford et al., 2019, p. 21).
In these circumstances, it is essential to avoid a selective or exclusionary debate, limited to countries with the level of scientific and technological development required by AI or resulting, directly or indirectly, in the exclusion of others. It is also essential to accept
that there is no single, unique and universal ethical code and that ethical principles cannot and should not be imposed. Adherence to
ethical principles is voluntary (Bostrom & Yudkowsky, 2011). Respect for legal rules is not, however, voluntary, but mandatory. Law
also plays a role in harmonizing and balancing different ethical conceptions. Indeed, the “Ethics Guidelines for Trustworthy AI”
identify three main components to ensure a trustworthy AI: Lawful AI, Ethical AI and Robust AI. According to the Guidelines, each of these
components is necessary but not sufficient in itself. To a large extent, the ethical principles designed for AI support and/or reproduce
legal norms and principles. But identity or similarity of content should not lead to confusion between the ethical and the legal approaches to AI. Confusion is a functional problem.
As Boddington explains, historically there is a strong and complex relationship between ethics and law (Boddington, 2017, p. 25).
However, the widespread confusion between ethical and legal principles in the AI field (Wagner, 2018, p. 2) is a worrying phenomenon
for two main reasons. First, it reveals a disturbing lack of knowledge of both disciplines. Secondly, that confusion is used to defend the
need for ethical principles and to exclude legal rules, as if they were equal or interchangeable.
In fact, there is widespread agreement on the need to endow AI with ethical principles. There is not the same concern or consensus
on the importance of legal rules. The reasons for this situation vary. Sometimes it is argued that only ethical principles are important or necessary. Sometimes an intention can be discerned to reinforce the ethical component in order to minimize or exclude legal requirements. At times, the willingness to organize ethical aspects appears as an alternative to the difficulty of managing legal aspects. As Wagner notes, "ethics is seen as the 'easy' or 'soft' option which can help structure and give
meaning to existing self-regulatory initiatives. In this world, ‘ethics’ is the new ‘industry self-regulation’” (Wagner, 2018, p. 1).
However, even the coincidence/similarity between some ethical and legal principles does not mean coincidence/similarity as to their
nature, scope and application.37 There is no obligation to comply with ethics and there is no responsibility for non-compliance,
whereas there are both in the legal area.
Indeed, there are significant differences between an ethical principle and a legal regulation. First, legal standards are mandatory
(Boddington, 2017, p. 25). Second, legal norms can be common and uniform because they arise from the agreement between States in
the case of International Law or from a legitimate legislative process in domestic law (Scherer, 2017, p. 379). Third, legal regulation
addresses and reflects the political, social and economic aspects of AI that can sometimes be unfortunately more relevant than ethical
ones. Finally, compliance with the rules is guaranteed both legally and judicially. The ethical component of AI is fundamental. But it is
neither the decisive nor the definitive one for two main reasons (Floridi et al., 2018, p. 694). First, ethics is not an obligatory mandate.
It is assumed on a voluntary basis by a particular subject or community (Boddington, 2017, p. 8). Second, as discussed, there is not one
single or a universal uniform ethical code, although there are many common or shared ethical concepts and principles (Bostrom &
Yudkowsky, 2011, p. 13). In fact, the ethical principles present in the main debates on AI do not represent the entire international
community or its different civilizations, societies, ideologies or cultures.38 As is well known, scientific and technological progress has
widened the so-called digital divide. With AI, the phenomenon reaches a greater quantitative and qualitative dimension. The
discriminatory biases of AI have manifested themselves in many fields to the point of being identified as a reproduction of western male
thought.39
Ethics is necessary, even indispensable, but not sufficient to meet the challenge of AI. Ethics is especially needed when regulation is lacking.40 Law, however, is essential. Law implies a binding legal commitment, including for instance those ethical contents that are common and/or shared and therefore reach the status of obligatory norms. However, not all ethical concepts have a legal translation.
37 Moreover, as Wagner noted, "[i]n a world in which ethics-washing and ethics-shopping are seemingly becoming increasingly common, it is important to have common criteria based on which the quality of commitments made can be evaluated. If not, there is a considerable danger such frameworks become arbitrary, optional or meaningless rather than substantive, effective and rigorous ways to design technologies. When ethics are seen as an alternative to regulation or as a substitute for fundamental rights, both ethics, rights and technology suffer" (Wagner, 2018, p. 6).
38 According to the European Ethics Guidelines, "Ethics as a field of study is centuries old and centres on questions like 'what is a good action', 'what is right', and in some instances 'what is the good life'. AI Ethics is a sub-field of applied ethics and technology, and focuses on the ethical issues raised by the design, development, implementation and use of AI. The goal of AI ethics is to identify how AI can advance or raise concerns to the good life of individuals, whether this be in terms of quality of life, mental autonomy or freedom to live in a democratic society. It concerns itself with issues of diversity and inclusion (with regards to training data and the ends to which AI serves) as well as issues of distributive justice (who will benefit from AI and who will not)" (https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai, p. 2).
39 "It is a critical time to be addressing the diversity crisis in AI, because we now see diversity itself being weaponized. Over the past year and a half, evidence of systemic discrimination and harassment at tech companies and conference spaces has entered the public debate, much of it exposed by worker-led initiatives and whistle-blowers. This growing awareness, accompanied by demands for inclusion and equity, has led to some change, but there has also been resistance, especially among those implicitly privileged by the status quo" (Myers et al., 2019, p. 28).
40 Actually, "especially when technology is rapidly advancing, the law might not be able to keep up, and professional bodies and others considering ethical aspects of that technology might well lobby for appropriate changes to the law. It may be possible to amend codes of ethics issued by professional bodies more flexibly and more rapidly than national, and especially international, laws" (Boddington, 2017, p. 25).
Not all ethical principles can evolve into legal rules, nor do they have the nature or sufficient consensus to become legal norms.41 It is
true that there are concepts and principles that are both ethical and legal. It is also true that there are common, general and potentially
universal ethical concepts. Nevertheless, in any event, the functions of ethics and law are quite different.
Law is often a misunderstood world. The relationship between law and justice is sometimes complex, but the balance is almost
always negative for law.42 The perception of law as a set of mandates, limitations and prohibitions, in a negative sense, prevails over its
conception as a necessary instrument to organize society and coexistence among people. Law does not always lead to a just solution,
but without law coexistence in society would hardly be possible. From the oldest communities and throughout the history of humanity,
there have always been rules to order human behaviour in society (Wagner, 2018, p. 5). Ethical principles serve this purpose but lack
both the enforceability of legal standards and the necessary mechanisms to ensure compliance.
Law is also a complicated world, neither very accessible nor widely known. Legal language is complex. The terms used do not always
coincide with their colloquial meaning. The processes and normative techniques are sometimes poorly understood. The origin, basis
and relationships between norms are sometimes unintelligible. The function of legal science is to explain what law is, its nature and
foundations, its mechanisms and guarantees, as well as its loopholes and shortcomings. Like other fields of scientific knowledge, there
are different methodological approaches to law and, in essence, a “legal logic” (Walton, 2005).
Concerning the relationship between law and AI, Nicolas Petit explains that “two dominant routes have been followed. The first is
legalistic. It consists in starting from the legal system, and proceed by drawing lists of legal fields or issues affected by AIs and robots:
liability, privacy, cyber security, etc. The second is technological. The point here is to envision legal issues from the bottom-up
standpoint of each class of technological application: driverless cars, social robots, exoskeletons, etc”. In his opinion, “the legalistic
approach is driven by teleological question" whereas "the technology approach is more ontological" (Petit, 2019). Alžběta Krausová prefers "the approach of legal scholars to artificial intelligence rather than the technical approach of computer scientists to law" (Krausová, 2017). Actually, law is not always understood and approached as an object of scientific knowledge. A scientific methodology is not always applied to its study.43
Legal science has given rise to different currents of thought or scientific schools (Smith et al., 1995). There are many different
theoretical approaches to International Law (Bianchi, 2016). Nevertheless, in functional terms, there are two main trends: formalist
(Allen, 2001) and non-formalist. Formal knowledge of the law is a model of ascertainment based on the status and value of rules in
general legal theory and in the theory of sources of international law. Law is a structured body of positive principles and norms.
According to the non-formalist approach, however, law is more than just a set of principles and norms. Law is an instrument for the
organization of society. Norms change through time and space in order to accommodate themselves to social and human evolution.
Law is the expression of that historical evolution, as well as of the specific social and political reality. Because of that, the usefulness
and the effectiveness of law depend on its ability to adapt itself to the reality it is intended to regulate. Notwithstanding the precedents,
from a juridical point of view, AI is a new and different reality. Knowing this reality, its nature and features, is the basic starting point to
address its regulation.
According to Nuria Oliver, AI has some specific characteristics: 1) Mainstreaming and invisibility explain why, generally, there is
not a clear social and political consciousness about the existence, scope and importance of AI; 2) Complexity, scalability and constant
updating serve to realize that AI has led to a reality that is not easily understandable, nor rationalizable through norms or principles,
because its complexity is constantly growing in exponential terms; and 3) The ability to predict poses a major dilemma (Oliver, 2018).
If the ability to predict led to more just and objective outcomes, there would in fact be no need for ethical or legal standards. However, if that ability has not been sufficiently or generally established, part of AI’s usefulness, functioning and purposes could be questioned. Basically, there is not enough social or political awareness of AI. It is an increasingly complex and difficult issue to regulate, one that even calls into question the need for such regulation. It is a difficult starting point.
The analysis of practice and scientific doctrine on AI reveals two main problems: the association of the concepts of AI and humanity
(Section 3.2.1); and the discussion about the unitary idea of AI (Section 3.2.2).
41 For example, how can the principle of non-maleficence be legally translated?
42 The debate on the relationship between justice and law is a classic issue within legal science. Evidence demonstrates that sometimes the
application of the law can lead to an unfair outcome. As in domestic law, in International Law, Article 38 of the Statute of the International Court of
Justice provides that the Court may judge according to criteria of equity rather than by applying the law. This so-called “contra legem” equity serves
as an alternative to cases in which the application of the law may lead to an unfair result. A long time ago, in his monograph A Protest against Law
Taxes, Jeremy Bentham wrote “Justice is the security which the law provides us with, or professes to provide us with”.
43 The difference between a legal perspective and a scientific legal perspective can be easily appreciated. The first concerns making law. The second
implies thinking about law.
M. Robles Carrillo Telecommunications Policy 44 (2020) 101937
44 There is scientific evidence that animals can be intelligent.
45 Some time ago, Harry G. Frankfurt identified “consciousness” as the constitutive component of the concept of person, alongside its corporeal
characteristics (Frankfurt, 1971).
46 There is an interesting study about the law as a computable number in the sense described by Alan Turing (Huws & Finnis, 2017).
47 Solum argues that “AI cannot possess consciousness” (Solum, 1992, p. 1264). In his opinion, organic brains may be the only objects that are
actually capable of generating consciousness. Moreover, AIs cannot possess intentionality, feelings, interests or free will (Solum, 1992, pp.
1265–1272).
48 Nilsson affirms: “There is no possibility that computers will ever equal or replace the mind except in those limited functional applications that do
involve data processing and procedural thinking. The possibility is ruled out in principle, because the metaphysical assumptions that underlie the
effort are false” (Nilsson, 2010, p. 397).
generalist, like we humans” (Winfield, 2019, p. 7). Surden argues that “AI is neither magic nor is it intelligent in the human-cognitive
sense of the word. Rather, today’s AI technology is able to produce intelligent results without intelligence by harnessing patterns, rules,
and heuristic proxies that allow it to make useful decisions in certain, narrow contexts” (Surden, 2019, p. 1337).
Instead of appealing to the concept of “intelligence”, Luke Muehlhauser and Louie Helm prefer the idea of optimization power. In
their opinion, “AI researchers working to improve machine intelligence do not mean that super-intelligent machines will exhibit, for
example, increased modesty or honesty. Rather, AI researchers’ concepts of machine intelligence converge on the idea of optimal goal
fulfilment in a wide variety of environments, what we might call “optimization power”. In addition, this optimization concept is not
anthropomorphic and can be applied to any agent: human, animal, machine or otherwise. They use the term “machine super-optimizer” in place of “machine super-intelligence” (Muehlhauser & Helm, 2012, pp. 3–4). Even the idea of singularity could be interpreted as supreme optimization rather than as the human overcoming and transcending biology (Kurzweil, 2017).49 Along with the previous arguments, there is a little explored but convincing one. Winfield affirms that “the processes and mechanisms of biological
and artificial evolution are profoundly different (…), but there is an ineluctable truth: artificial evolution still has an energy cost.
Virtual creatures, evolved in a virtual world, have a real energy cost” (Winfield, 2019, p. 4).
In addition to doctrinal arguments, in the EU, the AI HLEG prefers a definition based on the concept of rationality (AI HLEG, 2019).
The AI HLEG states that “since intelligence (both in machines and in humans) is a vague concept, although it has been studied at length
by psychologists, biologists, and neuroscientists, AI researchers use mostly the notion of rationality. This refers to the ability to choose
the best action to take in order to achieve a certain goal given certain criteria to be optimized and the available resources”50. The AI
HLEG acknowledges that rationality is not the only ingredient in the concept of intelligence, but it is a significant part of it, possibly,
the most significant in the field of AI.
The discussion around AI and human intelligence is still open. It is not the only one. An issue related to the former is the definition
of the nature of AI as an object of knowledge. AI is often addressed as a unitary and homogeneous whole. However, under that
umbrella, there are very diverse devices and processes.
49 Winfield notes that “The singularity is basically the idea that as soon as artificial intelligence exceeds human intelligence then everything
changes. There are two central planks to the singularity hypothesis: one is the idea that as soon as we succeed in building AI as smart as humans then
it rapidly re-invents itself to be even smarter, starting a chain reaction of smarter-AI inventing even-smarter-AI until even the smartest humans
cannot possibly comprehend how the super-intelligent AI works. The other is that the future of humanity becomes unpredictable and, in some sense,
out-of-control from the moment of the singularity onwards” (Winfield, 2019, p. 6).
50 https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
51 According to the COMEST Report: “Gibilisco distinguishes five generations of robots according to their respective capabilities. The first
generation of robots (before 1980) was mechanical, stationary, precise, fast, physically rugged, based on servomechanisms, but without external
sensors and artificial intelligence. The second generation (1980–1990), thanks to the microcomputer control, was programmable, involved vision
systems, as well as tactile, position and pressure sensors. The third generation (mid-1990s and after) became mobile and autonomous, able to
recognize and synthesize speech, incorporated navigation systems or teleoperated, and artificial intelligence. He further argues that the fourth and
fifth generations are speculative robots of the future able, for example, to reproduce, acquire various human characteristics such as a sense of
humour” (COMEST, 2017, p. 12).
52 According to the Oxford Dictionary, AI is defined as “The theory and development of computer systems able to perform tasks normally requiring
human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.
53 The possibility of making medical diagnoses much faster than humans on the basis of data analysis could be one such case.
54 The medical or health applications are not comparable to its use for war purposes, even if the operational performance is similar or identical.
Utilities and objectives are decisive. One topic that can generate diversity of opinion is the use of robotics and AI devices for sexual purposes (Frank
& Nyholm, 2017).
A) The Functional Approach. The historical and conceptual presence of two main schools of thought -the Symbolic-Logical Approach
(top-down) and the Data-Driven Approach (bottom-up)- allows for a classification based on a functional criterion. The Symbolic
Approach advocates the development of AI on the basis of a predefined set of logical rules and principles. The Data-Driven
Approach considers that AI should be constructed on the basis of observation and experience, that is, data. This functional
classification is intrinsically and objectively important not only in technological terms.
The dilemma between the logic of principles and the reality of data has relevant juridical implications. Firstly, the regulation of AI devices has to be different because the two models are very distinct in their conception and operation.55 Secondly,
the cause and grounds of any wrongful acts could also be different, as well as the procedure for identifying them in each case. By
their very nature, unlawful acts arising from the application of a logical principle and those arising from the observation and
analysis of data are not comparable.56 In addition, the historical evolution of these schools of thought has not been homogeneous.
Big data may have favoured the bottom-up model but without excluding the other. The simple existence of these two models
implies that legal debate on AI has to consider each of these two functional categories in a differentiated way. To be effective, the
rules in each case will have to be different.
B) The Nature and Formal Approach. AI can be classified into categories according to its nature and/or representations. Davenport
and Kirby differentiate three types of automation.57 Lydia Kostopoulos distinguishes three types/mediums: intangible, tangible
and embedded. According to the author, “Intangible AI does not have a physical form, instead it can be communicated through a
sound, a notification on a device, and/or invisible computation”. By contrast, Tangible AI is embodied in a physical form with which humans can interact. Finally, Embedded AI is AI fused with our brain through either an invasive or non-invasive mechanism (Kostopoulos, 2018). This classification is interesting in terms of public or general acceptance of AI. The first type is largely accepted because of its very invisibility. The last can be appreciated for its potential in practical terms. Paradoxically, Tangible AI could be more contested without necessarily being more intrusive than the others. These different typologies also require juridical analysis, both separately and as a whole, keeping in mind the different human reactions to each AI device with a view to its legal regulation.
C) The Teleological Approach. On the basis of this criterion which points to their potential results, the doctrine distinguishes be
tween strong and weak AI (Russell & Norvig, 2016) or, more precisely, between systems with specific AI, systems with general
AI and systems with superintelligence (Oliver, 2018). There is also a typology with four categories: Relieve; Split up; Replace;
and Augment.58 The legal solution can be neither unique nor uniform because each of these modalities requires a specific treatment.
D) The Autonomy Approach. A distinction could be made between two main models according to their degree of autonomy in the
learning process: Machine Learning and Deep Learning. The essential difference lies in their ability to learn but, above all, in the
consequences of the autonomy of learning. It is not merely a technical matter. AI autonomy raises both the need to clearly
identify its legal status and the problem of defining the scope and nature of its relationship with the human being (Hage, 2017,
p. 255).
Human control over AI and responsibility for its actions are the subject of an intense debate. The focus of this debate should be
redirected to a greater extent towards the real question raised by this phenomenon: What is AI? Is it a thing? Is it a different
category of person? Could it be a juridical person or a non-human person? Is it a tertium genus between a thing and a person?
(Bryson, Diamantis, & Grant, 2017, p. 273). There is no single, general answer, because there is not just one AI.
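The juridical contrast drawn in the functional approach above can be made concrete with a minimal sketch. It is entirely illustrative and not drawn from the article: the credit-screening scenario, the function names and the toy learning rule are assumptions chosen only to show why a wrongful outcome is traceable to an explicit principle in the top-down model but only implicit in the data in the bottom-up model.

```python
# Illustrative sketch of the two functional models discussed in the text.
# All names and the scenario are hypothetical.

# Top-down (symbolic): behaviour follows an explicit, inspectable rule,
# so an unlawful outcome can be traced back to a stated principle.
def symbolic_credit_check(income: float, debt: float) -> bool:
    # Hypothetical rule, part of the system's design.
    return debt / max(income, 1.0) < 0.4

# Bottom-up (data-driven): behaviour is induced from observed examples,
# so the "rule" exists only implicitly in the fitted parameters.
def fit_threshold(samples):
    """Learn a debt-ratio cutoff from labelled past decisions."""
    approved = [ratio for ratio, ok in samples if ok]
    denied = [ratio for ratio, ok in samples if not ok]
    # Midpoint between the two groups' means: a minimal stand-in
    # for statistical learning from data.
    return (sum(approved) / len(approved) + sum(denied) / len(denied)) / 2

# Hypothetical decision history: (debt ratio, approved?)
history = [(0.1, True), (0.2, True), (0.3, True), (0.6, False), (0.8, False)]
learned_cutoff = fit_threshold(history)

print(symbolic_credit_check(50_000, 10_000))  # rule is explicit in code
print(learned_cutoff)                          # rule emerges from the data
```

The contrast matters legally: in the first function the norm is a design choice that can be compared against legal principles; in the second, any review must examine the training data and the induction procedure, since no explicit principle exists to audit.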
Each of those methodological approaches shows the variety of devices and situations covered by the concept of AI. A formalistic
legal approach may aim at a single treatment of all of them. From a non-formalistic approach to law, AI regulation must take into
account the existence of those different modalities, with their specific regulatory requirements and problems, as well as the need to define AI in a way that is more realistic and understandable to all citizens, such as optimization, rather than situating it around the concept of humanity.
AI is a diverse and complex reality. Not only is it developed and deployed in different ways and with different objectives; in certain aspects, it seeks to emulate, even surpass, human intelligence. As Chesterman explains, “the rule of law is the epitome of anthropocentrism: humans are the primary subject and object of norms that are created, interpreted, and enforced by humans” (Chesterman, 2019, p. 38). AI could change that paradigm.
55 In the top-down model, the principles may form part of the design of the AI itself and problems may arise in its implementation. In the bottom-up
system, the form of incorporating ethical or legal principles from the base and in the processes of data collection and processing must be sought.
56 In the top-down system, the comparison between logical principles and subsequent actions will be the way to determine their legality or
illegality. The hierarchical technique acts as an instrument to correct this incompatibility. In the bottom-up model, there is neither a reference
framework such as that of principles nor a technique for the identification and solution of possible problems that may arise as a consequence of
actions derived from the observation and analysis of data.
57 According to the authors, in the first, machines replaced human muscle in some manual tasks—think of factories and farm machinery. In the
second, clerical and knowledge workers were relieved of routine work such as data entry. The third era brings the automation of intelligence—the
computerization of tasks previously thought to require human judgment (Davenport & Kirby, 2016).
58 https://www2.deloitte.com/content/dam/insights/us/articles/3832_AI-augmented-government/DUP_AI-augmented-government.pdf.
3.3.1. Foundations
Law is a system of organization of social life which has been designed for the physical world and for the human being. The human
being has been located at the core of that system through the concept and the right to human dignity. The physical world has been
organised politically on the basis of the idea of the State as a sovereign entity.
As a consequence, there are two models of society: international and national. The first has a horizontal and decentralized structure
because it is a society of sovereign States. The second has a vertical and hierarchical structure because it is a society of individuals
submitted to the power of the State in which they are located or of which they are nationals. As they are different social and political
models, law is also different in each case. International society is regulated by International Law, which is a legal system based on the
consensus or agreement between States. Internal societies are regulated by the different domestic laws. However, consistent and
increasing internationalization and globalization have led to a reshaping of this juridical architecture. States are increasingly relying
on International Law to regulate matters that were previously domestic. International norms are then integrated and guaranteed within the framework of domestic law (Krieger, Nolte, & Zimmermann, 2019). As a result, the scope of the norms is broader and their content more homogeneous. The rationale behind this practice is logical. Virtually any human activity has an international origin or projection, while domestic law cannot extend beyond the territory of the State itself.
Universalization is a phenomenon that cannot be ignored. AI devices tend to be inherently transnational (CEPEJ, 2018, p. 60) and
potentially universal in scope, function, and nature. Alžběta Krausová argues that: “As the development of artificial intelligence is a global phenomenon that has worldwide social and economic effects, new international laws should be adopted” (Krausová, 2017). Castel and Castel even support the definition of super-AI as “a common heritage of mankind and not something to be appropriated and developed by any individual State or natural or juridical person” (Castel & Castel, 2016, p. 12). International organizations and
forums clearly support international regulation.59
International law is the legal order that must necessarily regulate AI for three main reasons. The first is precisely the general and
universal scope of AI. It is a technical and practical matter. The question lies in determining whether it is possible to territorially
demarcate, country by country, the use of AI in order to identify the applicable law, e.g. in data analysis or in the supply of services. This is not only difficult but arguably impossible. The second reason relates to a matter of legal economy. Should an AI patent, for example, be registered in all countries where it can be used, or would it be better to register it in accordance with a common international standard accepted by all those States?60 The recognition of author’s rights to an AI for an article produced autonomously has taken place in China.
Chinese court took this decision because it considered that the structure of the article was correct, logical and with a certain originality.
By contrast, the European Patent Office refused two European patent applications in which an AI system was designated as inventor.61 A
patchwork system of State-by-State legislation is not the best solution in an interconnected and globalized world. The third reason is
the protection and legal certainty of users, creators, producers or manufacturers, i.e. the security of all the stakeholders involved or
affected by the AI. A general regulation, common and comprehensible to all, is an immeasurably better solution than partial and
fragmented provisions according to the legislation of each State. If each of them has its own legislation, the question remains of
determining the applicable rule in each case. It could be that of the country of the user or consumer, that of the place of manufacture or
production or that of the country of the author or creator of the AI device, for example, or even all of them. Legal uncertainty is obvious
and unnecessary.
It is true that each country has the capacity to adopt its own rules. However, domestic law is not the first choice for AI for three main
reasons: 1) Domestic law cannot be legitimately and effectively imposed outside the territory of the State because it lacks competence;
2) All domestic legal systems recognize the pre-eminence of International Law and also establish procedures for its integration or
conversion into domestic law; and 3) Domestic law is fragmented into different branches, each with its own methodology, content and processes. The civil,62 criminal,63 labour64 or, especially, constitutional law65 aspects, among others, are relevant. International Law
can manage those different areas as well as the diversity of legal systems (Oskamp & Lauritsen, 2002).66 AI needs primarily a general
59 As will be seen below, the G-20, the OECD or the EU call for international regulation.
60 The World Intellectual Property Organization has recently been working on this topic. The WIPO Technology Trends 2019 is devoted to AI. Its study shows “that most patent applications have a commercial, application focus, as they refer to an AI functional application or are combined with an AI application field. This report identifies 20 fields/industry sectors that patent documents refer to, ranging from entertainment to education to banking, indicating that sectors across the board are exploring the application of AI technologies” (https://www.wipo.int/publications/en/details.jsp?id=4386).
61 https://www.epo.org/law-practice/case-law-appeals/recent.html.
62 The recognition of legal personhood to AI is one of the most important and controversial issues in this field, as are intellectual property and
liability (Petit, 2017).
63 The most interesting topic in this matter is criminal liability for acts committed by AI systems (Lagioia & Sartor, 2019).
64 In this area of law, many issues are of concern, ranging from the spectre of job losses as a result of AI to the need to establish specific AI
employment rights and/or obligations such as the right to leave or social security contributions.
65 The main topic is the protection of human rights in view of the challenges posed by AI.
66 National and local regulations are different. For instance, technically, the Anglo-Saxon legal model gives a role to the jurisprudence (Ashley,
2002, p. 163) that in the European system only corresponds to the law. Materially, the content and purpose of the rules also differs even between
countries with common traditions (Hage, 2000). The scope of freedom of expression under the U.S. First Amendment is broader than in the European framework, where limits are set on that right deriving, for example, from the legislator’s willingness to combat the apology of genocide. Nor is
the responsibility regime the same in the various juridical systems (Lehmann, Breuker, & Brouwer, 2004, p. 279).
67 An example to explain this statement can be found in intellectual property law. In many countries, the protection of this right is done through
administrative, civil and criminal procedures. Each of them fulfills its function. Separately, each one transfers a partial vision of that legal regime. It
is necessary to analyse the whole in order to understand the concept and its regulations.
68 There are many different international proposals and measures, of both public and private authorship (Castel & Castel, 2016).
69 The principles are: Inclusive growth, sustainable development and well-being; Human-centered values and fairness; Transparency and
explainability; Robustness, security and safety; and Accountability (https://www.oecd.org/going-digital/ai/principles/).
70 These recommendations are: 1) investing in AI research and development; 2) fostering a digital ecosystem for AI; 3) shaping an enabling policy
environment for AI; 4) building human capacity and preparing for labour market transformation; and 5) international co-operation for trustworthy
AI (https://www.oecd.org/going-digital/ai/principles/).
71 https://g20trade-digital.go.jp/dl/Ministerial_Statement_on_Trade_and_Digital_Economy.pdf.
72 The EU activity was compiled in March 2019 in the publication A survey of the European Union’s artificial intelligence ecosystem. It outlines the EU’s high-level strategy and vision for AI, before looking at three crucial components that the EU will need to implement this vision: funding, talent, and
collaboration (https://ec.europa.eu/jrc/communities/en/node/1286/document/survey-european-union%92s-artificial-intelligence-ecosystem).
73 https://ec.europa.eu/digital-single-market/en/news/coordinated-plan-artificial-intelligence.
74 According to the Communication, there are seven key requirements that AI applications should respect to be considered trustworthy: Human
agency and oversight; Technical robustness and safety; Privacy and Data Governance; Transparency; Diversity, non-discrimination and fairness;
Societal and environmental well-being; and Accountability (Communication from the Commission to the European Parliament, the Council, the
European Economic and Social Committee and the Committee of the Regions “Building Trust in Human-Centric Artificial Intelligence”, COM (2019)
168 final, Brussels, 8.4.2019, p. 3).
75 Ibidem, p. 1.
76 Ibidem, pp. 3–4.
77 https://ec.europa.eu/jrc/communities/en/node/1286/document/eu-declaration-cooperation-artificial-intelligence.
78 The Council of Europe is also very active in the field of AI, in particular from a human rights perspective (https://www.coe.int/en/web/artificial-intelligence).
79 The recommendation has been adopted by the 36 Member States of the OECD and six other countries: Argentina, Brazil, Colombia, Costa Rica,
Peru and Romania.
make, but it is necessary to establish a universal governance model and a general normative framework for AI.
80 The structure of this IO could envisage four basic components: (1) An assembly of States, IOs concerned and a representation of the multi-stakeholder community with a different and appropriate legal status in each case; (2) An executive body, the council, elected by the assembly
with representation of all its members according to the nature and status of each of them; (3) An international court for ensuring respect for the rules
and the settlement of any disputes that might arise; (4) An administrative body exercising the functions of secretariat. Along with these main organs,
it would be possible to include consultative committees specialized in the diverse aspects and interests present in the AI guaranteeing a plural and
interdisciplinary participation.
81 The Members of the United Nations are already collaborating in research and development through the International Space Station, the Human Genome Project and the Large Hadron Collider (Castel & Castel, 2016, p. 11).
82 There are some examples of operational bodies within international organizations that, through various channels and techniques, seek to overcome these functional or even structural differences. This is the case of the Enterprise of the International Seabed Authority, which includes the transfer of technology among its main attributions; of Article IV.2 of the Treaty on the Non-Proliferation of Nuclear Weapons, according to which “All the Parties to the Treaty undertake to facilitate, and have the right to participate in, the fullest possible exchange of equipment, materials and scientific and technological information for the peaceful uses of nuclear energy. Parties to the Treaty in a position to do so shall also co-operate in contributing alone or together with other States or international organizations to the further development of the applications of nuclear energy for peaceful purposes, especially in the territories of non-nuclear-weapon States Party to the Treaty, with due consideration for the needs of the developing areas of the world”; and of the Coordinated Research Activities of the International Atomic Energy Agency (IAEA).
83 It is a broad and complicated issue, and at times there is some confusion about it. There are those who limit themselves to defending the application of the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction,
allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the
previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws. Actually, AI is a
cross-cutting phenomenon that requires not only the establishment of specific standards but also the rethinking of the feasibility and effectiveness of
pre-existing rules. An interesting and comprehensive study about the problems and challenges posed by AI can be found in the report published by
the European Commission Artificial Intelligence. A European Perspective (https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-
research-reports/artificial-intelligence-european-perspective).
84 On 26 June 2019, in the Council of Europe, the Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence adopted the “Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems”, in which threats and risks to human rights are thoroughly analysed (MSI-AUT (2018)06rev1).
85 https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf.
exactly the same position.86 The second point is the definition of the legal status of AI devices. The third is the question of the relationship between humans and AI. These issues are a priority for three main reasons: 1) The rapid and continuous technological progress requires prioritizing the analysis of juridical solutions on basic general and constitutional aspects; 2) AI and the technologies of the so-called Fourth Industrial Revolution call into question basic existential principles of humanity and society (Rouhiainen, 2018, p. 36); and 3) The coexistence of and relationships between humans and AI devices are the main legal, social and political challenge (Frank & Nyholm, 2017).
Although the questions posed by AI are numerous and significant, it is important to highlight that there is no legal vacuum.87
Firstly, there are legal rules and principles of an imperative nature which apply generally to all human and social activity including the
development of AI. The principle of the prohibition of the use of force in the context of international relations or the right to life and the
integrity of the person are clear examples in this regard.88 Secondly, there are mandatory rules and principles that can be applied to AI
through the principle of analogy.89 The regulations on consumer protection or liability for defective products can be extrapolated
analogously to the sphere of the AI. The question of responsibility arises repeatedly in connection with the so-called “many hands
problem”, posed by the fact that the development and operation of AI systems typically entails contributions from multiple individuals,
organizations and machine components (Yeung, 2019, p. 11). Actually, all legal systems have principles and procedures to demand
responsibility. Ultimately, according to Yeung, “the fundamental principle of reciprocity applies: those who deploy and reap the
benefits of these advanced digital technologies (including AI) in the provision of services (from which they derive profit) must be
responsible for their adverse consequences” (Yeung, 2019, p. 14). Thirdly, there are rules and principles which may need to be revised
to take account of the unique characteristics of AI. Data protection regulations should be revisited to be effective in the different
scenarios of the massive use of data implied by the AI (Wachter & Mittelstadt, 2019). It is also the case concerning the circulation of
autonomous vehicles prohibited by the Vienna Convention on Urban Traffic of 1968 (Palmerini, 2017, p. 69). Finally, AI may require
the formulation of new rules. The principle of explicability or explanation can be a good example in this sense (Goodman & Flaxman,
2017).
In the end, there is an important and solid legal acquis on which to base the changes, adaptations or ex novo normative creations required by AI. Not every new phenomenon demands new normative bodies. But a scientific and technological advance that is necessarily changing human and social behaviour may require an adaptation of existing norms, or the creation of specific rules, if the law in force proves insufficient or inefficient.
4. Conclusions
The AI debate has produced different theories and lines of thought, ranging from the utopia of a perfect world to the dystopia of a dehumanized world. From the utopian (idealistic) to the dystopian (disruptive) future, there is a wide variety of conceptions and interpretations of this phenomenon (Oliver Ramírez, 2018, p. 34). That is not unusual; on the contrary. In this case, however, it presents two basic problems: the lack of a minimum socio-political consensus, and the absence of a global interdisciplinary analysis (Surden, 2019, p. 1310).
There is neither a basic social consciousness nor a sufficiently solid political will to address the challenge of AI. There is no common language and no single methodology regarding its uses, skills and objectives. AI can have a positive or a negative impact, or both simultaneously, for different audiences or from different perspectives. There is no uniform or unanimous assessment of its advantages and/or disadvantages, or of how to manage them. In general terms, the debate sits between resistance to the change implied by AI and the sublimation of that change. Whether by ignoring it or by magnifying it, lack of knowledge and misconceptions about this phenomenon are too widespread and genuinely worrying. To some extent, that is understandable, as AI raises diverse and complex doubts, concerns and problems.
AI research is being developed at different public and private levels: in large and small enterprises, corporations, academic institutions, organizations and States. The open source model is a process of knowledge sharing open to anyone interested in AI. Large corporations such as Google, Amazon, Microsoft, IBM, Apple or Nvidia offer platforms, applications and tools that provide users with knowledge, skills and learning mechanisms for the development of AI. This modus operandi has a positive, even democratizing, effect (Rouhiainen, 2018, p. 261), but also potential negative effects. In fact, the functionalities of AI can be classified into several generic categories: beneficial use; useful use; lawful use; perverse use; and illicit use, which, in turn, may take criminal, terrorist or military forms. Ethical principles may provide direction towards a positive use of AI, but they do not have the capacity to prevent, repress and sanction negative uses. That is the function of law. Moreover, the legal discourse has to go further also because it includes the social,
86 https://fra.europa.eu/en/publication/2019/data-quality-and-artificial-intelligence-mitigating-bias-and-error-protect.
87 According to the Ethics Guidelines for Trustworthy AI, "it should be noted that no legal vacuum currently exists, as Europe already has regulation in place that applies to AI" (AI HLEG, 2019). The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence of the Council of Europe includes a list of the pre-existing rules applicable to AI in its Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems (MSI-AUT (2018)06rev1).
88 For instance, Member States of the Council of Europe are bound by the European Convention on Human Rights and the other treaties they have concluded in relation to any area of action, including AI. This implies that they must guarantee the rights and comply with the obligations contained therein also with respect to AI.
89 Richard Collins highlights the importance of analogical reasoning in gaining an understanding of the nature of modern international law (Collins, 2019).
Acknowledgment
This work has been partially supported by the Spanish Government (MINECO) and FEDER (European Union) funds, through project TIN2017-83494-R.
References
Allen, R. J. (2001). Artificial intelligence and the evidentiary process: The challenges of formalism and computation. Artificial Intelligence and Law, 9, 99–114.
Aradau, C., & Blanke, T. (2017). Governing others: Anomaly and the algorithmic subject of security. European Journal of International Security, 3(1), 1–21.
Asay, C. D. (2019). Artificial stupidity. William and Mary Law Review, 61, 1–56.
Ashley, K. D. (2002). An AI Model of case-based legal argument from a jurisprudential viewpoint. Artificial Intelligence and Law, 10, 163–218.
Asilomar AI principles.(2017). Available at: https://futureoflife.org/ai-principles/.
Bianchi, A. (2016). International law theories: An inquiry into different ways of thinking. Oxford Scholarship Online.
Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Oxford: Springer.
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. Available at: https://nickbostrom.com/ethics/artificial-intelligence.pdf.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.
Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: The legal vacuum of synthetic persons. Artificial Intelligence and Law, 25, 273–291.
Castel, J.-G., & Castel, M. E. (2016). The road to artificial super-intelligence: Has international law a role to play? Canadian Journal of Law and Technology, 14(1), 1–15.
CEPEJ. (2018). Ethical charter on the use of artificial intelligence in judicial systems. Available at: https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-
the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment.
Chesterman, S. (2019). Artificial intelligence and the problem of autonomy. NUS Law Working Paper 2019/06. National University of Singapore.
Collins, R. (2019). Two idea(l)s of the international rule of law. Global Constitutionalism, 8(2), 191–226.
COMEST Working Group. (2017). Report of world commission on the ethics of scientific knowledge and technology (COMEST) on robotics ethics. Available at: http://www.
unesco.org/new/en/social-and-human-sciences/themes/comest/.
Crawford, K., et al. (2019). AI now report. AI Now Institute.
Dameski, A. (2018). A comprehensive ethical framework for AI entities: Foundations. In Artificial General Intelligence (pp. 42–51).
Davenport, T., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. Harper Business.
European Commission. (2018a). Communication from the commission to the European parliament, the European council, the council, the European economic and social
committee and the committee of the Regions. Brussels: Artificial Intelligence for Europe, COM (2018) 237 final, 25.4.2018.
European Commission. (2018b). Communication from the commission to the European parliament, the European council, the council, the European economic and social
committee and the committee of the Regions. Brussels: Coordinated Plan on Artificial Intelligence, COM (2018) 795 final, 7.12.2018.
European Commission. (2019). Communication from the commission to the European parliament, the European council, the council, the European economic and social
committee and the committee of the Regions. Brussels: Building Trust in Human-Centric Artificial Intelligence, COM (2019) 168 final, 8.4.2019.
European Group on Ethics in Science and New Technologies (EGE). (2018). Statement on ethics of artificial intelligence. Available at: https://ec.europa.eu/info/news/
ethics-artificial-intelligence-statement-ege-released-2018-apr-24_en.
Fjeld, J., et al. (2019). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center, Harvard University.
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 185–193.
Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations.
Minds and Machines, 28(4), 689–707.
Frankfurt, H. G. (1971). Freedom of the will and the concept of person. The Journal of Philosophy, 68(1), 5–20.
Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25, 305–323.
Goodman, B., & Flaxman, S. (2017). European regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 1–9.
Hage, J. (2000). Dialectical models in artificial intelligence and law. Artificial Intelligence and Law, 8, 137–172.
Hage, J. (2017). Theoretical foundations for the responsibility of autonomous agents. Artificial Intelligence and Law, 25, 255–271.
Hawkins, J. (2017). What intelligent machines need to learn from the neocortex. Available at: https://spectrum.org/computing/software/what-intelligent-machines-
need-to-learn-from-the-neocortex.
Herrera Triguero, F. (2014). Inteligencia artificial, inteligencia computacional y big data. Universidad de Jaén.
High-Level Expert Group on Artificial Intelligence (AI HLEG). (2018). Draft ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/knowledge4policy/
publication/draft-ethics-guidelines-trustworthy-ai_en.
High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/futurium/en/ai-alliance-
consultation.
Horowitz, M., et al. (2018). Strategic competition in an Era of artificial intelligence. Washington: Center for a New American Security.
Huws, C. F., & Finnis. (2017). On computable numbers with an application to the Alan Turing problem. Artificial Intelligence and Law, 25, 181–203.
IEEE. (2019). Ethically aligned design. Available at: https://ethicsinaction.ieee.org/.
Ikram, N. A. H. S., & Kepli, M. Y. Z. (2018). Establishing legal rights and liabilities for artificial intelligence. International Islamic University of Malaysia Law Journal, 26(1), 177–178.
Japanese Society for Artificial Intelligence. (2017). Artificial intelligence ethical guidelines. Available at: http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-
Guidelines-1.pdf.
Kostopoulos, L. (2018). The emerging artificial intelligence wellness landscape: Opportunities and areas of ethical debate. California western school of law “AI ethics
symposium”. Available at: https://medium.com/@lkcyber/the-emerging-artificial-intelligence-wellness-landscape-802caf9638de.
Krausová, A. (2017). Intersections between law and artificial intelligence. International Journal of Computer, 27(1), 55–68.
Krieger, H., Nolte, G., & Zimmermann, A. (2019). The international rule of law: Rise or decline? Oxford University Press.
Kurzweil, R. (2017). La singularidad está cerca. Berlin: Lola Books GBR.
Lagioia, F., & Sartor, G. (2019). AI systems under criminal law: A legal analysis and a regulatory perspective. Philosophy & Technology, 1–33.
Lehmann, J., Breuker, J., & Brouwer, B. (2004). Causation in AI and law. Artificial Intelligence and Law, 12, 279–315.
McCarthy, J., & Hayes, P. (1981). Some philosophical problems from the standpoint of artificial intelligence. Readings in artificial intelligence. Available at: https://www.
sciencedirect.com/science/article/pii/B9780934613033500337.
McGregor, L. (2019). Accountability for governance choices in artificial intelligence. European Journal of International Law, 29(4), 1079–1085.
McKinsey Global Institute. (2019). Notes from the AI frontier: Tackling Europe's gap in digital and AI. Discussion Paper. McKinsey & Company.
Muehlhauser, L., & Helm, L. (2012). Intelligence explosion and machine ethics. In A. Eden, J. Søraker, J. H. Moor, & E. Steinhart (Eds.), Singularity hypotheses: A scientific and philosophical assessment. Berlin: Springer.
Myers West, S., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute.
Nilsson, N. J. (2010). The quest for artificial intelligence. A history of ideas and achievements. Available at: https://ai.stanford.edu/~nilsson/QAI/qai.pdf.
Oliver Ramírez, N. (2018). Inteligencia artificial: Ficción, realidad y … sueños. Available at: http://www.raing.es/es/publicaciones/discursos-de-ingresos/inteligencia-artificial-ficci-n-realidad-y-sue-os.
Oskamp, A., & Lauritsen, M. (2002). AI in law practice? So far, not much. Artificial Intelligence and Law, 10, 227–236.
Palmerini, E. (2017). Robótica y derecho: Sugerencias, confluencias, evoluciones en el marco de una investigación europea. Revista de Derecho Privado, 32, 53–97.
Penrose, R. (2012). Las sombras de la mente: Hacia una comprensión científica de la consciencia. Barcelona: Crítica.
Petit, N. (2017). Law and regulation of artificial intelligence: Conceptual framework and normative implications. Working Paper. Available at: https://www.researchgate.
net/publication/332850407_Law_and_Regulation_of_Artificial_Intelligence_and_Robots_-_Conceptual_Framework_and_Normative_Implications.
Renda, A. (2019). Artificial Intelligence. Ethics, governance and policy challenges. Brussels: Centre for European Policy Studies.
Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1–16.
Roberts, H., et al. (2019). The Chinese approach to artificial intelligence: An analysis of policy and regulation. https://doi.org/10.2139/ssrn.3469784. Available at SSRN: https://ssrn.com/abstract=3469784.
Rouhiainen, L. (2018). Inteligencia artificial. Alienta Editorial.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence. A modern approach. Pearson. Available at: http://thuvien.thanglong.edu.vn:8081/dspace/bitstream/DHTL_
123456789/4010/1/CS503-2.pdf.
Scherer, M. U. (2017). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law and Technology, 29(2),
354–400.
Searle, J. R. (1980). Minds, brains, and programs. Behavioural and Brain Sciences, 3, 417–457.
Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing and able? House of Lords.
Smith, B., & Browne, C. A. (2019). Tools and weapons: The promise and the peril of the digital age. Penguin Press.
Smith, J. C., et al. (1995). Artificial intelligence and legal discourse: The flex law legal text management system. Artificial Intelligence and Law, 3, 55–95.
Solum, L. B. (1992). Legal personhood for artificial intelligences. North Carolina Law Review, 70(4), 1230–1287.
Surden, H. (2019). Artificial intelligence and law: An overview. Georgia State University Law Review, 35(4), 1304–1337.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460. Available at: https://www.csee.umbc.edu/courses/471/papers/turing.pdf.
UNI Global Union. (2018). 10 principles for ethical AI. Available at: http://www.thefutureworldofwork.org/opinions/10-principles-for-ethical-ai/.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 1,
1–130.
Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping? In M. Hildebrandt (Ed.), Being Profiled: Cogitas ergo sum. Amsterdam University Press.
Walton, D. (2005). Argumentation methods for artificial intelligence in law. Winnipeg: Springer.
West, D. M. (2018). The role of corporations in addressing AI’s ethical dilemmas. Available at: https://www.brookings.edu/research/how-to-address-ai-ethical-
dilemmas/.
Winfield, A. (2019). On the simulation (and energy costs) of human intelligence, the singularity and simulationism. In A. Adamatzky, & V. Kendon (Eds.), From
astrophysics to unconventional computation. Emergence, complexity and computation (Vol. 35). Cham: Springer.
Yeung, K. (2019). Responsibility and AI. Council of Europe Study DGI(2019)05.