Telecommunications Policy 44 (2020) 101937


Artificial intelligence: From ethics to law


Margarita Robles Carrillo
Member of the Network Engineering & Security Group, University of Granada, Spain

Keywords: Artificial intelligence; Ethics; Law

A B S T R A C T

AI is the subject of a wide-ranging debate in which there is a growing concern about its ethical and legal aspects. Frequently, the two are mixed and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with law. Both establish models of social behaviour, but they are different in scope and nature. The juridical analysis is based on a non-formalistic scientific methodology. This means that it is necessary to consider the nature and characteristics of AI as a preliminary step to the definition of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence and the question of the unitary or diverse nature of AI. From that theoretical and practical basis, the study of the legal system is carried out by examining its foundations, the governance model and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the principal legal framework for the regulation of AI.

1. Introduction

For some time now, Artificial Intelligence (AI) has been the focus of an extensive and worthwhile debate on the international scene and in most countries worldwide. Concern about AI involves States and international organizations, as well as other non-State actors from academia, corporations, enterprises or industry, and civil society.1 This debate encompasses its technological, economic and socio-political aspects, as well as the ethical and legal issues raised by AI.
The ethical and legal aspects of AI have been the subject of numerous academic studies. Most of them deal with specific or particular aspects, such as the use of AI for medical purposes or lethal autonomous weapons systems, to take two very different examples. This predominant trend has two consequences. On the one hand, there are specific issues that do not receive the same attention, such as the energy cost involved in data analysis,2 its environmental effects or the use of this technology for terrorist purposes.3 On the other hand, and this is more worrying, there is relatively limited doctrinal research on the overall panorama of the ethical or legal problems posed by AI. As a result, a sectoral and fragmented perspective prevails over an integral and holistic overview. There is a lack of a general and global approach to the legal4 and ethical5 aspects of AI. Winfield goes further in this same line of reasoning by stating that “we lack a general (mathematical) theory of intelligence” (Winfield, 2019, p. 11).

E-mail address: mrobles@ugr.es.


1. The main areas of interest can be found in the AI Now Report 2018 (https://ainowinstitute.org/AI_Now_2018_Report.pdf).
2. Winfield performs an innovative analysis of the economic cost of AI by assuming that “the energy cost would be colossal; so great perhaps as to rule out the evolutionary approach altogether” (Winfield, 2019, p. 4). Not much literature deals with the subject from this perspective.
3. Clark D. Asay argues that scholars have examined a number of important IP-related questions. Despite this attention, “crucial questions remain. (…) artificial stupidity, rather than true general artificial intelligence, will continue as the norm” (Asay, 2019, p. 4).

https://doi.org/10.1016/j.telpol.2020.101937
Received 30 April 2019; Received in revised form 10 February 2020; Accepted 12 February 2020
Available online 25 February 2020
0308-5961/© 2020 Elsevier Ltd. All rights reserved.

The aim of this work is to contribute to the AI debate by promoting and providing a more general, rather than specific, analysis. In this approach, the study of doctrine and practice reveals two main problems: the confusion between ethical and legal aspects and a certain disregard of law. Both problems require analysis and a solution. The first precludes a real understanding of the role and function of ethics and law within the framework of AI, which is the necessary preliminary step to the definition of their corresponding principles. The second reflects a lack of appreciation of the role of law as an instrument of social and political order. Law is necessary in respect of any matter or reality simply because it establishes the rules of social behaviour necessary for the coexistence of people in society. Law cannot be ignored, nor can it be confused with ethics. Both are parameters of social behaviour necessary in any field or context and, in particular, in areas of significant complexity such as AI.
In the first place, the ethical debate on AI (Section 2) is particularly complicated because of this confusion between ethical and legal principles, but also because of an underlying and incorrect conception of ethics. It is therefore necessary to clarify the concept of ethics (Section 2.1) before differentiating it from the law (Section 2.2). Then, once their differences have been identified, the juridical analysis (Section 3) is addressed by first explaining the methodological paradigm. Making law is not the same as thinking about law. Law is a domain of scientific knowledge in which jurists, as in this case, apply a methodology for a better understanding and implementation of its foundations, contents and objectives (Section 3.1). The non-formalistic methodological approach explained in this section justifies the need to analyse the nature and characteristics of AI as an object of regulation (Section 3.2). Finally, on that theoretical and practical basis, which supports the need for regulation from the perspective of International Law, this paper proposes a legal framework for AI with an institutional and a normative component (Section 3.3). The paper finishes with a series of conclusions on the complex relationship between ethics and law and on the need to make progress in the legal construction of AI.

2. The ethical debate

The importance of ethical principles in AI is generally recognized in the institutional framework, in the scientific community and in society in general (Boddington, 2017). The number6 and variety7 of proposals submitted in this regard by public or private institutions are, however, difficult to encompass and not always very comprehensible. The ethical debate poses two main problems: a conceptual problem regarding the idea and content of ethics (Section 2.1); and a functional problem concerning its relationship with, and differentiation from, law (Section 2.2).

2.1. The conceptual problem

Ethics is a philosophical discipline that studies good and evil and their relationship to morality and human behaviour. Ethics is an
idea, a framework or a model of thought and action, a unique concept in abstract terms, but with a variable scope and content. The
reason is that the concepts of good or evil, the idea of morality, and models of human behaviour are not permanent, rigid, or static, but
evolve over time and through space.
Historically, there has not been one single ethics, nor has it always and at all times had the same relevance and function in the development of different human beings and different societies, cultures and civilizations. The ethical parameters of today’s society are not the same as the principles elaborated in the classical Greek or Roman worlds. The ethical principles of today’s European society are not exactly the same as those prevailing in the Asian, American, African or Islamic world.8 The coexistence of ethics, morality and religion, the relationship between the individual and the community, or respect for ancestors or nature, for instance, receive different responses at different historical moments.

4. As Alžběta Krausová noted, “A considerable amount of research has been conducted until now in order to describe various aspects of the relationship between AI and law. However, the knowledge on AI and law is fragmented in various papers, specialized books, reports, opinions, notes, comments etc. Mostly only individual aspects or problems are being tackled. The overall description providing a bigger picture of the discipline in a succinct paper is missing” (Krausová, 2017). Richard Collins highlights the double danger of this type of analysis for the law: the de-formalisation and fragmentation of the legal system (Collins, 2019). McGregor argues, specifically, that an analysis “is needed to situate the demands for technological or algorithmic accountability within a wider accountability framework of governance choice” (McGregor, 2019, p. 1085).
5. Andrej Dameski also points out that “there is a clear need for the establishment of a comprehensive ethical framework in regards of AI” (Dameski, 2018).
6. According to Floridi, “there are currently more than 70 recommendations, published in the last 2 years, just about the ethics of AI”. The author makes an interesting analysis of ethical practices (Floridi, 2019).
7. Regarding the variety of proposals, Dameski identifies the main ethical issues in the field of AI: Moral entities; Consciousness; Universalism vs. anthropocentrism; Aliveness/‘Being’; Personhood and legal personhood; Agency, autonomy; Complexity and moral uncertainty; Rights; Values; Virtues (and vices); Accountability and responsibility; Opacity and transparency; Utility; Trust; Morally-burdened effects (Dameski, 2018). Some of these issues also raise legal problems. However, the methodological approach to their study, as well as the results of the analysis, are different in each case.
8. In the Islamic world, AI and robotics are “merely modifications and adjustments of materials that were already created by Allah, in order to improve human life … This is because Islam discourages the creation of things that resemble the original creation of God, unless with good or strong justification” (Ikram & Kepli, 2018, pp. 177–179).




Geographically, there has not been one single universal ethical code either. It is well known that certain ethical concepts have been exported, even imposed, on other countries and territories, sometimes conquered or colonized. Such an operation has usually had limited success or utility. Adherence to ethical principles is voluntary, personal or communitarian. It relies on convictions, not on impositions. Moreover, the legitimacy of the enforcement of ethical principles has long been questioned with moral, political and even legal arguments. The recurrent topicality of the debates on neo-colonialism or on the universality of human rights, though they are different themes, exemplifies the rejection of the imposition of Western-style social, political or ethical models around the world. This is an important fact to keep in mind. The development of AI is taking place among developed countries. These countries, their institutions and companies, have also monopolized the ethical debate until now.
Materially, ethical principles cannot be applied in a mechanical manner, equally or in a similar way, in all fields and on all subjects. Applying ethical rules to the development of commercial activity or the statute of corporations is not the same as establishing them in relation to persons or their rights.
In short, neither historically, geographically nor materially is it possible to defend the existence of a single, homogeneous or universal ethical code, although there may be some common principles. Indeed, some common and shared values can be identified by carrying out a brief comparison between the Beijing AI Principles,9 the European Ethical Charter on the Use of AI in Judicial Systems and their environment10 and the principles developed by the White House Office of Science and Technology Policy (OSTP). Despite the fact that they come from different agencies and contexts, these initiatives share basic principles such as public trust in AI, fairness and non-discrimination, diversity and inclusiveness, disclosure and transparency, explainability, accountability and responsibility, safety and security or risk control, among others. All ethical proposals include, in particular, respect for basic human rights, including dignity, equality and non-discrimination.11 But, beyond these and some other principles and common elements, there are different ethical conceptions and principles depending on traditions, cultures, ideologies, systems and countries. In the end, if the expression “ethics” in itself is universal, the content of “the ethical” evolves and includes variable and flexible standards in accordance with the evolution of times and societies. The ethical debate on AI should be approached starting from this premise.
Ethical proposals on AI come from diverse types of actors, from States and international organizations to non-governmental organizations, academia (Renda, 2019), enterprises and corporations (West, 2018), individuals and civil society (Boddington, 2017). The content of these proposals coincides in some basic aspects. It differs in many others for two main reasons: their different authorship and the absence of a single or common methodological approach.
Indeed, the ethical debate on AI is conditioned from various perspectives. The following should be highlighted: 1) The existence of different methodological approaches may, and does, lead to different results12; 2) A predefined and specified analytical framework can also determine the development of the debate and limit its scope13; 3) The proposals are either general or specific to certain aspects of AI.14 However, specialization in a particular aspect of AI can distort the global discussion15; 4) The priorities of the ethical debate also change according to the authors and their preferences. On the one hand, a technical, scientific, economic or political perspective may bring a result quite different from that achieved from a social,16 philosophical or ideological point of view.17 On the other hand, the approach from a hierarchical model is different from that resulting from a multi-stakeholder process, as demonstrated by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems18 or the Asilomar AI Principles19; 5) There are also significant differences in the discussion when the proposals come from a State20, an international organization21 or a non-institutional framework22; 6) And, last but not least, an important problem is the existence of a plurality and variety of proposals coming even from the same subjects acting in parallel fora. This intensive work indicates an appreciable interest in the ethical dimension of AI. But it also generates some confusion, reduces the transparency of the debate itself and, at times, entails a duplication of work.

9. https://www.baai.ac.cn/blog/beijing-ai-principles.
10. https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment.
11. For a complete overview of the most relevant proposals and the common principles among them, see Fjeld et al., 2019.
12. Boddington identifies three main theories: “Consequentialist theories, which broadly claim that the right action is the one that brings about the best consequences. This is most commonly held as some form of utilitarianism, which aims to bring about the greatest balance of happiness over unhappiness, or pleasure over pain, for the largest number of people. Deontological theories, which claim that what matters is whether an action is of the right kind, that is, whether it is in accordance with some general overarching principle, or with a set of principles, such as ‘do not take innocent life’, ‘do not lie’, and so on. Virtue ethics, which focuses on the character of the ideal moral agent, and describes the range of different virtues such an agent has, and, broadly, claims that the right thing to do in any given situation is to do what the fully virtuous person would” (Boddington, 2017, p. 8).
13. Floridi collects 47 principles from documents of different authorship and classifies them according to four core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. The authors add another one: explicability, understood as incorporating both intelligibility and accountability (Floridi et al., 2018, p. 696).
14. This is the case of the Ethical Charter on the Use of Artificial Intelligence in Judicial Systems adopted by the European Commission for the Efficiency of Justice (CEPEJ) during its 31st Plenary meeting (Strasbourg, 3–4 December 2018). These principles are: Respect for fundamental rights; Non-discrimination; Quality and security; Transparency, impartiality and fairness; and the principle “under user control”. Available at: https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment.
15. The Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects adopted a very specific report on “Ethics and autonomous weapon systems: An ethical basis for human control?”. Available at: https://www.unog.ch/80256EDD006B8954/(httpAssets)/20092911F6495FA7C125830E003F9A5B/$file/CCW_GGE.1_2018_3_final.pdf.


A paradigmatic example of that situation can be found in the European Union (EU). In addition to the normative proposals, there are four main working forums on ethics and AI. First, the European Group on Ethics in Science and New Technologies (EGE) is an independent advisory body of the President of the European Commission.23 In its “Statement on Ethics of Artificial Intelligence”, the EGE proposes a set of 9 basic principles.24 Second, the AI4People’s project has surveyed the aforementioned EGE principles, as well as 36 other ethical principles put forward to date, and subsumed them under 4 overarching general principles.25 Third, the High-Level Expert Group on Artificial Intelligence (AI HLEG), appointed by the European Commission, published a draft26 and then, on 8 April 2019, the “Ethics Guidelines for Trustworthy AI”.27 Finally, the European AI Alliance, steered by the AI HLEG, is the European Union’s multi-stakeholder platform on AI. There is not only a variety, and perhaps a duplicity, of forums; there are also appreciable changes in their proposals.28

16. The UNI Global Union, based in Switzerland, represents more than 20 million workers from over 150 countries in the fastest growing sectors in the world. This organization adopts 10 Principles for Ethical AI: 1. Demand That AI Systems Are Transparent; 2. Equip AI Systems With an “Ethical Black Box”; 3. Make AI Serve People and Planet; 4. Adopt a Human-In-Command Approach; 5. Ensure a Genderless, Unbiased AI; 6. Share the Benefits of AI Systems; 7. Secure a Just Transition and Ensure Support for Fundamental Freedoms and Rights; 8. Establish Global Governance Mechanisms; 9. Ban the Attribution of Responsibility to Robots; 10. Ban AI Arms Race. Available at: http://www.thefutureworldofwork.org/opinions/10-principles-for-ethical-ai/.
17. The ADM Manifesto sets out particular principles: 1) Algorithmic decision making (ADM) is a fact of life today; it will be a much bigger fact of life tomorrow. It carries enormous dangers; it holds enormous promise. The fact that most ADM procedures are black boxes to the people affected by them is not a law of nature. It must end; 2) ADM is never neutral; 3) The creator of ADM is responsible for its results. ADM is created not only by its designer; 4) ADM has to be intelligible in order to be held accountable to democratic control; 5) Democratic societies have the duty to achieve intelligibility of ADM with a mix of technologies, regulation, and suitable oversight institutions; 6) We have to decide how much of our freedom we allow ADM to preempt. Available at: https://algorithmwatch.org/en/the-adm-manifesto/.
18. The IEEE defends these principles: Human Rights; Well-being; Data Agency; Effectiveness; Transparency; Accountability; Awareness of misuse; and Competence (IEEE, 2019).
19. The Asilomar Principles (2017) are divided into specific categories. In general terms, the goal of AI research should be to create not undirected intelligence, but beneficial intelligence. Research Funding, Science-Policy Link, Research Culture and Race Avoidance are the main tools to realize this idea. The ethics and values are: Safety, Failure Transparency, Judicial Transparency, Responsibility, Value Alignment, Human Values, Personal Privacy, Liberty and Privacy, Shared Benefit, Shared Prosperity, Human Control, Non-subversion and AI Arms Race. The longer-term issues concern Capability Caution, Importance, Risks, Recursive Self-Improvement and Common Good. Available at: https://futureoflife.org/ai-principles/.
20. The Japanese Society for Artificial Intelligence Ethical Guidelines (JSAI) establish the following principles: Contribution to humanity; Abidance of laws and regulations; Respect for the privacy of others; Fairness; Security; Act with integrity; Accountability and social responsibility; Communication with society and self-development; Abidance of ethics guidelines by AI. Available at: http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf.
21. UNESCO is actively working on AI (https://en.unesco.org/news/participants-global-unesco-conference-artificial-intelligence-urge-rights-based-governance-ai). The COMEST Working Group adopted the Report of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) on Robotics Ethics. The report identifies the following relevant ethical principles and values: Human Dignity; Value of Autonomy; Value of Privacy; Do not Harm Principle; Principle of Responsibility; Value of Beneficence; Value of Justice. Available at: http://www.unesco.org/new/en/social-and-human-sciences/themes/comest/.
22. The Toronto Declaration is an example in this case. Available at: https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf.
23. The EGE (2018) is a multi-disciplinary body, which advises on all aspects of policies and legislation where ethical, societal and fundamental rights dimensions intersect with the development of science and new technologies (https://ec.europa.eu/info/research-and-innovation/strategy/support-policy-making/scientific-support-eu-policies/european-group-ethics-science-and-new-technologies-ege_en).
24. https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf.
25. These include: Beneficence (defined as ‘do good’); Non-maleficence (defined as ‘do no harm’); Autonomy (defined as ‘respect for self-determination and choice of individuals’); and Justice (defined as ‘fair and equitable treatment for all’) (https://www.eismd.eu/ai4people-europes-first-global-forum-ai-ethics-launches-at-the-european-parliament/).
26. https://ec.europa.eu/knowledge4policy/publication/draft-ethics-guidelines-trustworthy-ai_en (European Commission, 2018a,b).
27. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai (AI HLEG, 2018).




This overview reveals that there are many ethical proposals and that they do not always coincide; they are neither uniform nor unanimous, even in an organization like the EU, with its advanced level of cohesion and despite the leadership exercised by the Commission. If this happens within the European context (McKinsey Global Institute, 2019), outside it, the United States and China exemplify their own significant views of AI from a political, economic and social perspective, supported by their own ethical assessments.29
Four priorities define Chinese policy: international competition, economic development, social governance and moral governance. The doctrine warns that “the risks of implementing AI for governance stem from the intertwining of the material aspects of social governance with surveillance and moral control” (Roberts et al., 2019, p. 11). Although the Beijing AI Principles include some ethical values that are common or similar to those of Europe and the West, their interpretation and implementation may not necessarily coincide with them in each and every one of their aspects. Those principles are structured in three categories: Research and Development, Use of AI and Governance. The most significant ethical values, such as “Do Good”, “For Humanity”, “Be Responsible”, “Open and Share”, “Be Diverse and Inclusive” or “Be Ethical”, are included in the section dedicated to Research and Development, while the section devoted to Governance covers principles of a technical or operational nature such as “Optimizing Employment”, “Adaptation and Moderation”, “Subdivision and Implementation” or “Long-term Planning”.30 In 2018, the White Paper on Artificial Intelligence Standardization highlighted the principles of human interest and privacy.31 Finally, on 17 June 2019, the National New Generation Artificial Intelligence Governance Committee released the New Generation AI Governance Principles – Developing Responsible AI, which include values such as fairness and justice, harmony and friendship, respect for privacy, open collaboration and agile governance. According to this document, “Further research and prediction of potential risks of more advanced AI will be done in the future to ensure that AI will always develop in a human-friendly direction”.32 Curiously enough, in an article on strategic competition in an era of artificial intelligence, some authors have argued that China’s AI strategy reflects the key principles of the Obama administration report (Horowitz et al., 2018, p. 10).
The current US Administration has not followed a uniform criterion: it has evolved from a deregulatory policy, in which the ethical aspects were more prominent, to a regulatory approach in which norms acquire greater significance. Particularly relevant is the 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence.33 This process has coincided with statements by some of the heads of large technology companies, such as Microsoft (Smith & Browne, 2019) or Google, calling for greater regulation of AI.34 According to the 2019 Executive Order, two main principles guide its AI policy: first, to “foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values”; and, second, American leadership in AI to maintain the economic and national security of the United States and to shape the global evolution of AI.35 As can be seen, the debate on the ethical aspects of AI is neither neutral nor exactly focused on ethics. The defence of national values also involves a struggle for global leadership and supremacy.
From a totally different perspective, many other countries lack the level of technological and economic development necessary to participate in this debate from the same position and with the same possibilities of success and influence as those above. Discussion forums are mostly hosted or supported by countries and agencies in the developed world. However, it is essential not to overlook that AI is not perceived and experienced equally in different countries and societies.36 The ethical debate seems to be dominated only by a minority and not by the majority of the members of the international society. And it just does not seem ethical to impose ethical principles.

28. The “Draft Ethics Guidelines for Trustworthy AI” had two main basic purposes: a human-centric approach to AI and a trustworthy AI. That specifically means that: 1) AI should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose”; and 2) AI should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm. The originality of this proposal was twofold: first, the document did not aim to provide a list of core values and principles for AI, but rather to offer guidance on their concrete implementation; and second, such guidance was provided in three layers of abstraction, namely, fundamental rights, principles and values, and the assessment list intended to guarantee the achievement of a trustworthy AI. The project clearly points out that the guidelines are not intended as a substitute for any form of policymaking or regulation. For their part, the “Ethics Guidelines for Trustworthy AI” finally adopted introduce a clearer and more coherent overall approach, which is more political than technical. There are four ethical principles identified as ethical imperatives: Respect for human autonomy; Prevention of harm; Fairness; and Explicability. These guidelines are not an official document and are not legally binding.
29. A comparative study can be found in the report Artificial Intelligence: how knowledge is created, transferred, and used. Trends in China, Europe and the United States (https://www.elsevier.com/research-intelligence/resource-library/ai-report).
30. https://www.baai.ac.cn/blog/beijing-ai-principles.
31. https://www.newamerica.org/cybersecurity-initiative/digichina/blog/translation-excerpts-chinas-white-paper-artificial-intelligence-standardization/.
32. http://most.gov.cn/kjbgz/201906/t20190617_147107.htm.
33. https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/.
34. In fact, the State of Washington is debating a proposal for an act relating to artificial intelligence-enabled profiling with important ethical aspects. According to this proposal, such practices not only threaten the fundamental rights and privileges of people but also menace the foundation and supporting institutions of a free democratic state (http://lawfilesext.leg.wa.gov/biennium/2019-20/Pdf/Bills/House%20Bills/2644.pdf).
35. In 2019, the US Department of Defense adopted its Recommendations on the Ethical Use of AI, according to which the use of AI systems must be responsible, equitable, traceable, reliable and governable (https://admin.govexec.com/media/dib_ai_principles_-_supporting_document_-_embargoed_copy_(oct_2019).pdf).
36. The rapid expansion in the use of applications in developing countries is often due to the ineffectiveness of public services or of basic private services such as banking. Such motivations are not the same as those existing among the population of developed countries, where it is not a question of covering basic needs in the strict sense.


Commitment to ethics cannot be turned into the right to decide what is ethical or what ethics means to the rest of the world (Crawford et al., 2019, p. 21).
In these circumstances, it is essential to avoid a selective or exclusionary debate, limited to countries with the level of scientific and technological development required by AI or resulting, directly or indirectly, in the exclusion of others. It is also essential to accept that there is no single, unique and universal ethical code and that ethical principles cannot and should not be imposed. Adherence to ethical principles is voluntary (Bostrom & Yudkowsky, 2011). Respect for legal rules is not, however, voluntary, but mandatory. Law also plays a role in harmonizing and balancing different ethical conceptions. Indeed, the “Ethics Guidelines for Trustworthy AI” identify the three main components needed to ensure a trustworthy AI: Lawful AI, Ethical AI and Robust AI. According to the Guidelines, each of these components is necessary but not sufficient in itself. To a large extent, the ethical principles designed for AI support and/or reproduce legal norms and principles. But identity or similarity of contents should not lead to confusion between the ethical and the legal approaches to AI. Confusion is a functional problem.

2.2. The functional problem

As Boddington explains, historically there is a strong and complex relationship between ethics and law (Boddington, 2017, p. 25). However, the widespread confusion between ethical and legal principles in the AI field (Wagner, 2018, p. 2) is a worrying phenomenon for two main reasons. First, it reveals a disturbing lack of knowledge of both disciplines. Secondly, that confusion is used to defend the need for ethical principles and to exclude legal rules, as if they were equal or interchangeable.
In fact, there is widespread agreement on the need to endow AI with ethical principles. There is not the same concern or consensus on the importance of legal rules. The reasons for such a situation are varied. Sometimes the argument is made that only ethical principles are important or necessary. Sometimes an intention can be discerned to reinforce the ethical component in order to minimize or exclude legal requirements. At times, a willingness to organize the ethical aspects can be perceived as an alternative to the difficulty of managing the legal ones. As Wagner notes, “ethics is seen as the ‘easy’ or ‘soft’ option which can help structure and give meaning to existing self-regulatory initiatives. In this world, ‘ethics’ is the new ‘industry self-regulation’” (Wagner, 2018, p. 1). However, even the coincidence or similarity between some ethical and legal principles does not mean coincidence or similarity as to their nature, scope and application.37 There is no obligation to comply with ethics and there is no responsibility for non-compliance, whereas both exist in the legal sphere.
Indeed, there are significant differences between an ethical principle and a legal regulation. First, legal standards are mandatory (Boddington, 2017, p. 25). Second, legal norms can be common and uniform because they arise from the agreement between States in the case of International Law or from a legitimate legislative process in domestic law (Scherer, 2017, p. 379). Third, legal regulation addresses and reflects the political, social and economic aspects of AI, which can sometimes, unfortunately, be more relevant than the ethical ones. Finally, compliance with the rules is guaranteed both legally and judicially. The ethical component of AI is fundamental. But it is neither the decisive nor the definitive one, for two main reasons (Floridi et al., 2018, p. 694). First, ethics is not an obligatory mandate. It is assumed on a voluntary basis by a particular subject or community (Boddington, 2017, p. 8). Second, as discussed, there is no single or universally uniform ethical code, although there are many common or shared ethical concepts and principles (Bostrom & Yudkowsky, 2011, p. 13). In fact, the ethical principles present in the main debates on AI do not represent the entire international community or its different civilizations, societies, ideologies or cultures.38 As is well known, scientific and technological progress has widened the so-called digital divide. With AI, the phenomenon reaches a greater quantitative and qualitative dimension. The discriminatory biases of AI have manifested themselves in many fields, to the point of being identified as a reproduction of Western male thought.39
Ethics is necessary, even indispensable, but not sufficient to meet the challenge of AI. Ethics is especially needed when regulation is lacking.40 Law, however, is essential. Law implies a binding legal commitment, including, for instance, those ethical contents that are common and/or shared and therefore reach the status of obligatory norms. However, not all ethical concepts have a legal translation.

37. Moreover, as Wagner noted, in “a world in which ethics-washing and ethics-shopping are seemingly becoming increasingly common, it is important to have common criteria based on which the quality of commitments made can be evaluated. If not, there is a considerable danger such frameworks become arbitrary, optional or meaningless rather than substantive, effective and rigorous ways to design technologies. When ethics are seen as an alternative to regulation or as a substitute for fundamental rights, both ethics, rights and technology suffer” (Wagner, 2018, p. 6).
38. According to the European Ethics Guidelines, “Ethics as a field of study is centuries old and centres on questions like ‘what is a good’ action, ‘what is right’, and in some instances ‘what is the good life’. AI Ethics is a sub-field of applied ethics and technology, and focuses on the ethical issues raised by the design, development, implementation and use of AI. The goal of AI ethics is to identify how AI can advance or raise concerns to the good life of individuals, whether this be in terms of quality of life, mental autonomy or freedom to live in a democratic society. It concerns itself with issues of diversity and inclusion (with regards to training data and the ends to which AI serves) as well as issues of distributive justice (who will benefit from AI and who will not)” (https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai, p. 2).
39. “It is a critical time to be addressing the diversity crisis in AI, because we now see diversity itself being weaponized. Over the past year and a half, evidence of systemic discrimination and harassment at tech companies and conference spaces has entered the public debate, much of it exposed by worker-led initiatives and whistle-blowers. This growing awareness, accompanied by demands for inclusion and equity, has led to some change, but there has also been resistance, especially among those implicitly privileged by the status quo” (Myers et al., 2019, p. 28).
40. Actually, “especially when technology is rapidly advancing, the law might not be able to keep up, and professional bodies and others considering ethical aspects of that technology might well lobby for appropriate changes to the law. It may be possible to amend codes of ethics issued by professional bodies more flexibly and more rapidly than national, and especially international, laws” (Boddington, 2017, p. 25).


Not all ethical principles can evolve into legal rules, nor do they have the nature or sufficient consensus to become legal norms.41 It is
true that there are concepts and principles that are both ethical and legal. It is also true that there are common, general and potentially
universal ethical concepts. Nevertheless, in any event, the functions of ethics and law are quite different.

3. The juridical analysis

Law is often a misunderstood world. The relationship between law and justice is sometimes complex, but the balance is almost always negative for law.42 The perception of law as a set of mandates, limitations and prohibitions, in a negative sense, prevails over its conception as a necessary instrument for organizing society and coexistence among people. Law does not always lead to a just solution, but without law coexistence in society would hardly be possible. From the oldest communities and throughout the history of humanity, there have always been rules to order human behaviour in society (Wagner, 2018, p. 5). Ethical principles serve this purpose but lack both the enforceability of legal standards and the mechanisms necessary to ensure compliance.
Law is also a complicated world, neither very accessible nor widely understood. Legal language is complex. The terms used do not always coincide with their colloquial meaning. The processes and normative techniques are sometimes poorly understood. The origin, basis and relationships between norms are sometimes unintelligible. The function of legal science is to explain what law is, its nature and foundations, its mechanisms and guarantees, as well as its loopholes and shortcomings. As in other fields of scientific knowledge, there are different methodological approaches to law and, in essence, a “legal logic” (Walton, 2005).

3.1. Scientific approach

Concerning the relationship between law and AI, Nicolas Petit explains that “two dominant routes have been followed. The first is legalistic. It consists in starting from the legal system, and proceed by drawing lists of legal fields or issues affected by AIs and robots: liability, privacy, cyber security, etc. The second is technological. The point here is to envision legal issues from the bottom-up standpoint of each class of technological application: driverless cars, social robots, exoskeletons, etc”. In his opinion, “the legalistic approach is driven by teleological question” whereas “the technology approach is more ontological” (Petit, 2019). Alžběta Krausová prefers “the approach of legal scholars to artificial intelligence rather than the technical approach of computer scientists to law” (Krausová, 2017). Actually, law is not always understood and approached as an object of scientific knowledge. A scientific methodology is not always applied to its study.43
Legal science has given rise to different currents of thought or scientific schools (Smith et al., 1995). There are many different theoretical approaches to International Law (Bianchi, 2016). In spite of this, in functional terms, there are two main trends: formalist (Allen, 2001) and non-formalist. Formal knowledge of the law is a model of ascertainment based on the status and value of rules in general legal theory and in the theory of the sources of international law. Law is a structured body of positive principles and norms. According to the non-formalist approach, however, law is more than just a set of principles and norms. Law is an instrument for the organization of society. Norms change through time and space in order to accommodate themselves to social and human evolution. Law is the expression of that historical evolution, as well as of a specific social and political reality. Because of that, the usefulness and the effectiveness of law depend on its ability to adapt itself to the reality it is intended to regulate. Notwithstanding the precedents, from a juridical point of view, AI is a new and different reality. Knowing this reality, its nature and features, is the basic starting point for addressing its regulation.

3.2. Nature and features of the AI

According to Nuria Oliver, AI has some specific characteristics: 1) Mainstreaming and invisibility mean that, generally, there is no clear social and political consciousness about the existence, scope and importance of AI; 2) Complexity, scalability and constant updating mean that AI has led to a reality that is not easily understandable, nor easily rationalizable through norms or principles, because its complexity is constantly growing in exponential terms; and 3) The ability to predict poses a major dilemma (Oliver, 2018). If the ability to predict led to more just and objective outcomes, there would actually be no need for ethical or legal standards. However, if that ability has not been sufficiently or generally established, part of AI’s usefulness, functioning and purposes could be questioned. Basically, there is not enough social or political awareness of AI. It is an increasingly complex and difficult issue to regulate, one that even challenges the need for such regulation. It is a difficult starting point.
The analysis of practice and scientific doctrine on AI reveals two main problems: the association of the concepts of AI and humanity
(Section 3.2.1); and the discussion about the unitary idea of AI (Section 3.2.2).

41. For example, how can the principle of non-maleficence be legally translated?
42. The debate on the relationship between justice and law is a classic issue within legal science. Evidence demonstrates that sometimes the application of the law can lead to an unfair outcome. As in domestic law, in International Law, Article 38 of the Statute of the International Court of Justice provides that the Court may judge according to criteria of equity rather than by applying the law. This so-called “contra legem” equity serves as an alternative in cases in which the application of the law may lead to an unfair result. A long time ago, in his monograph A Protest against Law Taxes, Jeremy Bentham wrote: “Justice is the security which the law provides us with, or professes to provide us with”.
43. The difference between a legal perspective and a scientific legal perspective can be easily appreciated. The first concerns making law. The second implies thinking about law.


3.2.1. AI versus human intelligence


The construct of AI has been developed through the association of two categories: “intelligence” and “humanity”. Three observations challenge this operation. First, intelligence is not only a human quality, or a quality unique to the human being.44 Second, the “human” condition is not determined solely or principally by intelligence (Boddington, 2017, p. 86).45 Finally, there is not a single human or natural intelligence but different modalities of intelligence (Herrera, 2014). According to Winfield, “Humans have several different kinds of intelligence – all of which combine to make us human” (Winfield, 2019, p. 2). Therefore, the definition of intelligence as “artificial”, as opposed to “human”, is not really a significant or determining element. Nor can the condition of “intelligent” be attributed to all the devices commonly included under AI.
There is no widely accepted definition of artificial intelligence (Select Committee on Artificial Intelligence, 2018, p. 13). AI has multiple and diverse manifestations, ranging from purely mechanical devices, which could hardly be qualified as intelligent, to devices designed to create super-intelligent systems. Being automatic or mechanical does not mean being intelligent. There are two questions: first, what are we talking about when we talk about intelligence? And, second, can intelligence be defended as the common characteristic of all those artificial devices generally included under that denomination? Moreover, the term “artificial” has greater consequences than those derived from its use to describe work carried out in a non-natural way. Some of them lead to a process of idealisation of AI or, on the contrary, to its simplification.
The expression “Artificial Intelligence” has not developed without controversy. The Dartmouth Conference, in 1956, is the reference for the use of the term. This proposal, defended by McCarthy in order to underline the connection of AI with logic (McCarthy & Hayes, 1981, pp. 431–450), prevailed over the concept of “cybernetics” put forward by Norbert Wiener (Nilsson, 2010).
The paternity of AI is generally attributed to Alan Turing. In his article “Computing Machinery and Intelligence”, published in 1950 (Turing, 1950), he established the so-called “Turing Test”, which aims to determine the intelligence of an artificial device. The test is passed when an external third party is unable to distinguish whether the answers to their questions come from a machine or from a human. The “Imitation Game” has generally been taken as an irrefutable demonstration that a machine is able to think and act like a human. The usefulness of this test has been appreciated but also questioned.46
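The structure of the test can be stated schematically. The following is a minimal sketch of the protocol only, of our own making (all function names and the trivial stand-ins are placeholders, not part of Turing's formulation): an interrogator questions two hidden respondents and must identify the machine, which "passes" a round when the identification fails.

import random

def imitation_game(ask, human_answer, machine_answer, judge, n_questions=3):
    # Hide the machine behind one of two anonymous labels.
    labels = ["A", "B"]
    random.shuffle(labels)
    machine_label = labels[0]
    answerers = {machine_label: machine_answer, labels[1]: human_answer}

    # The interrogator only ever sees the labels and the answers.
    transcript = []
    for i in range(n_questions):
        question = ask(i)
        answers = {label: f(question) for label, f in answerers.items()}
        transcript.append((question, answers))

    guess = judge(transcript)        # the interrogator names "A" or "B"
    return guess != machine_label    # the machine "passes" if the guess is wrong

# Trivial stand-ins, only to make the sketch executable.
questions = ["What is 2 + 2?", "Do you like music?", "Describe a sunset."]
human_answer = lambda q: "a thoughtful human reply"
machine_answer = lambda q: "a thoughtful human reply"   # a perfect imitation
judge = lambda transcript: random.choice(["A", "B"])    # judging reduced to guessing
print(imitation_game(lambda i: questions[i], human_answer, machine_answer, judge))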
The Chinese Room Argument, published in 1980 by the philosopher John R. Searle, introduced serious doubts about the functionality of this test (Searle, 1980). The core idea is that the machine simulates understanding. It cannot be compared to the human mind because it lacks understanding (Solum, 1992, p. 1267; Nilsson, 2010, p. 381). The experiment emphasizes the fact that computers merely use syntactic rules to manipulate symbol strings. But they have no understanding of meaning or semantics (Nilsson, 2010, p. 387). They manipulate syntactic items that have nothing to do with the semantic comprehension of the processed contents. This argument is further developed by Roger Penrose, a mathematical physicist, in his research devoted to a scientific understanding of consciousness. In his monograph Shadows of the Mind, Penrose states that the external effects of consciousness cannot be correctly simulated by a computer (Penrose, 2012, p. 29). The problem of consciousness is an important part of the ethical and legal discourse (Solum, 1992, p. 1265).
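Searle's point can be made concrete with a deliberately trivial sketch of our own (not Searle's): a program that returns fluent-looking Chinese answers by matching input strings against a stored rule book, while no representation of meaning exists anywhere in the system.

# A toy "Chinese Room": answers are produced by looking up symbol strings
# in a rule book; the mapping would work equally well on arbitrary tokens.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather?" -> "The weather is nice."
}

def room(symbols: str) -> str:
    # A purely syntactic step: string comparison and retrieval.
    # Nothing in this function "understands" Chinese.
    return RULE_BOOK.get(symbols, "对不起，我不明白。")   # "Sorry, I do not understand."

print(room("你好吗？"))   # fluent-looking output, zero comprehension

However simple, the sketch preserves the structure of Searle's room: syntactically correct outputs are produced without any semantic content being represented or processed.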
Generally speaking, the idea that AI devices have understanding abilities or comprehension skills is becoming widespread. There are several and varied examples, from smartphones and virtual assistants to quantum computing. Computational Linguistics and Cognitive Computing are specializations devoted to designing devices able to understand and emulate the functioning of the human mind. However, with increasing frequency and various arguments, the doctrine rejects the comparison between human and artificial intelligence that has been encouraged by constant scientific and technological advances.
According to Jeff Hawkins, “Although machine-learning techniques such as deep neural networks have recently made impressive
gains, they are still a world away from being intelligent, from being able to understand and act in the world the way that we do. The
only example of intelligence, of the ability to learn from the world, to plan and to execute, is the brain” (Hawkins, 2017). Hawkins
argues that machines will not become intelligent unless they incorporate certain features of the human brain and, in particular, the
following three: learning by rewiring, sparse distributed representations, and sensorimotor integration (Hawkins, 2017). These three
fundamental attributes of the neocortex will be cornerstones of machine intelligence. In his opinion, future thinking machines can
ignore many aspects of biology, but not these three.
Lawrence B. Solum47 and Nils J. Nilsson48 reach a similar conclusion with arguments of a different nature, as do other recognized AI experts. Brooks notes that only slow progress was made over this time in demonstrating isolated aspects of intelligence (Brooks, 1991, p. 139). Winfield states that “the human ability to learn, then generalise that learning and apply it to completely different problems, is fundamental and remains an elusive goal for robotics and AI. This is called Artificial General Intelligence, which remains as controversial as it is unsolved” (Winfield, 2019, p. 3). According to the author, “A human-equivalent AI would need to be a generalist, like we humans” (Winfield, 2019, p. 7).

44. There is scientific evidence that animals can be intelligent.
45. Some time ago, Harry G. Frankfurt identified “consciousness” as the constitutive component of the concept of person, alongside its corporal characteristics (Frankfurt, 1971).
46. There is an interesting study about the law as a computable number in the sense described by Alan Turing (Huws & Finnis, 2017).
47. Solum argues that “AI cannot possess consciousness” (Solum, 1992, p. 1264). In his opinion, organic brains may be the only objects that are actually capable of generating consciousness. Moreover, AIs cannot possess intentionality, feelings, interests or free will (Solum, 1992, pp. 1265–1272).
48. Nilsson affirms: “There is no possibility that computers will ever equal or replace the mind except in those limited functional applications that do involve data processing and procedural thinking. The possibility is ruled out in principle, because the metaphysical assumptions that underlie the effort are false” (Nilsson, 2010, p. 397).


Surden argues that “AI is neither magic nor is it intelligent in the human-cognitive sense of the word. Rather, today’s AI technology is able to produce intelligent results without intelligence by harnessing patterns, rules, and heuristic proxies that allow it to make useful decisions in certain, narrow contexts” (Surden, 2019, p. 1337).
Instead of appealing to the concept of “intelligence”, Luke Muehlhauser and Louie Helm prefer the idea of optimization power. In their opinion, “AI researchers working to improve machine intelligence do not mean that super-intelligent machines will exhibit, for example, increased modesty or honesty. Rather, AI researchers’ concepts of machine intelligence converge on the idea of optimal goal fulfilment in a wide variety of environments”, what we might call “optimization power”. In addition, this optimization concept is not anthropomorphic and can be applied to any agent: human, animal, machine or otherwise. They use the term “machine super-optimizer” in place of “machine super-intelligence” (Muehlhauser & Helm, 2012, pp. 3–4). Even the idea of singularity could be interpreted as supreme optimization instead of as the human transcending biology (Kurzweil, 2017).49 Along with the previous ones, there is a little explored but convincing argument. Winfield affirms that “the processes and mechanisms of biological and artificial evolution are profoundly different (…), but there is an ineluctable truth: artificial evolution still has an energy cost. Virtual creatures, evolved in a virtual world, have a real energy cost” (Winfield, 2019, p. 4).
In addition to doctrinal arguments, in the EU, the AI HLEG prefers a definition based on the concept of rationality (AI HLEG, 2019). The AI HLEG states that “since intelligence (both in machines and in humans) is a vague concept, although it has been studied at length by psychologists, biologists, and neuroscientists, AI researchers use mostly the notion of rationality. This refers to the ability to choose the best action to take in order to achieve a certain goal given certain criteria to be optimized and the available resources”.50 The AI HLEG acknowledges that rationality is not the only ingredient in the concept of intelligence, but it is a significant part of it, possibly the most significant in the field of AI.
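That definition can be transcribed almost literally into code. The following minimal sketch is our own illustration (names, figures and types are invented for the example): it selects the best feasible action given a criterion to be optimized and the available resources, in the sense of the AI HLEG definition and of Muehlhauser and Helm's "optimization power".

from typing import Callable, Iterable, TypeVar

Action = TypeVar("Action")

def rational_choice(actions: Iterable[Action],
                    score: Callable[[Action], float],   # criterion to be optimized
                    cost: Callable[[Action], float],    # resource consumption
                    budget: float) -> Action:           # available resources
    # Keep only the actions affordable within the available resources...
    feasible = [a for a in actions if cost(a) <= budget]
    if not feasible:
        raise ValueError("no feasible action within the resource budget")
    # ...and choose the one that best achieves the goal.
    return max(feasible, key=score)

# Illustrative use: "b" scores best but exceeds the budget, so "c" is chosen.
score = {"a": 0.2, "b": 0.9, "c": 0.7}.get
cost = {"a": 1.0, "b": 5.0, "c": 2.0}.get
print(rational_choice(["a", "b", "c"], score, cost, budget=3.0))   # -> c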
The discussion around AI and human intelligence is still open. It is not the only one. A related issue is the definition of the nature of AI as an object of knowledge. AI is often addressed as a unitary and homogeneous whole. However, under that umbrella, there are very diverse devices and processes.

3.2.2. An AI or several AIs


The lack of public awareness about AI is mainly due to a lack of knowledge. Mythology, culture, religion, literature and science fiction have led to an anthropomorphic view of AI (Oliver, 2018, p. 11; Muehlhauser & Helm, 2012, p. 4) and, in that way, sometimes to a confusion between robotics and AI.51 This popularly rooted conception has two negative effects. On the one hand, it exacerbates positive or negative reactions to AI. On the other, it conditions and hinders the acceptance of AI as a present reality and not only as a futuristic one. Most people are not aware that AI has been present in everyday life for a long time and in a natural way. Most people do not realize how, why and to what extent social, economic or political activity currently depends on the existence of AI devices. Actually, there is not one unique AI, nor is its main manifestation the anthropomorphic one, although this may be the most striking in terms of popularity (Select Committee on Artificial Intelligence, 2018, p. 22).
AI is a global and abstract concept that encompasses numerous and diverse modalities and manifestations (CEPEJ, 2018, p. 31).52 The common core is difficult to define even in scientific terms. But, to a greater or lesser extent, the element common to all of them is their association and/or identification with human or rational qualities. Scientific and technological progress has led to the development of numerous AI devices with the ability to see, hear and understand, trying to emulate humans, doing just as humans do, or even acting at a higher and faster level than humans.53
The lack of public awareness and knowledge about AI must be overcome by explaining its reality, taking into account its different typologies. They are not all the same, nor do they all serve the same purposes.54 They cannot be explained or treated equally. This important point is not receiving the necessary attention. The analysis of AI on the basis of its different types would allow a better understanding of the phenomenon as a whole. Actually, AI has been classified in different ways.

49
Winfield notes that “The singularity is basically the idea that as soon as artificial intelligence exceeds human intelligence then everything
changes. There are two central planks to the singularity hypothesis: one is the idea that as soon as we succeed in building AI as smart as humans then
it rapidly re-invents itself to be even smarter, starting a chain reaction of smarter-AI inventing even-smarter-AI until even the smartest humans
cannot possibly comprehend how the super-intelligent AI works. The other is that the future of humanity becomes unpredictable and, in some sense,
out-of-control from the moment of the singularity onwards” (Winfield, 2019, p. 6).
50
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
51
According to the COMEST Report: “Gibilisco distinguishes five generations of robots according to their respective capabilities. The first
generation of robots (before 1980) was mechanical, stationary, precise, fast, physically rugged, based on servomechanisms, but without external
sensors and artificial intelligence. The second generation (1980–1990), thanks to the microcomputer control, was programmable, involved vision
systems, as well as tactile, position and pressure sensors. The third generation (mid-1990s and after) became mobile and autonomous, able to
recognize and synthesize speech, incorporated navigation systems or teleoperated, and artificial intelligence. He further argues that the fourth and
fifth generations are speculative robots of the future able, for example, to reproduce, acquire various human characteristics such as a sense of
humour” (COMEST, 2017, p. 12).
52
According to the Oxford Dictionary, AI is defined as “The theory and development of computer systems able to perform tasks normally requiring
human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages”.
53
The possibility of making medical diagnoses much faster than humans on the basis of data analysis could be one such case.
54
The medical or health applications are not comparable to its use for war purposes, even if the operational performance is similar or identical.
Utilities and objectives are decisive. One topic that can generate diversity of opinion is the use of robotics and AI devices for sexual purposes (Frank
& Nyholm, 2017).


A) The Functional Approach. The historical and conceptual presence of two main schools of thought, the Symbolic-Logical Approach (top-down) and the Data-Driven Approach (bottom-up), allows for a classification based on a functional criterion. The Symbolic Approach defends the development of AI on the basis of a predefined set of logical rules and principles. The Data-Driven Approach considers that AI should be constructed on the basis of observation and experience, that is, data (a minimal code sketch after this classification illustrates the contrast). This functional classification is intrinsically and objectively important, and not only in technological terms.
The dilemma between the logic of principles and the reality of data has relevant juridical implications. Firstly, the regulation of AI devices has to be different because the two models are very distinct in their conception and operation.55 Secondly, the cause and grounds of any wrongful acts could also be different, as well as the procedure for identifying them in each case. By their very nature, unlawful acts arising from the application of a logical principle and those arising from the observation and analysis of data are not comparable.56 In addition, the historical evolution of these schools of thought has not been homogeneous. Big data may have favoured the bottom-up model, but without excluding the other. The mere existence of these two models implies that the legal debate on AI has to consider each of these two functional categories in a differentiated way. To be effective, the rules in each case will have to be different.
B) The Nature and Formal Approach. AI can be classified into categories according to its nature and/or representation. Davenport and Kirby differentiate three types of automation.57 Lydia Kostopoulos distinguishes three types/mediums: intangible, tangible and embedded. According to the author, “Intangible AI does not have a physical form, instead it can be communicated through a sound, a notification on a device, and/or invisible computation”. Tangible AI, by contrast, is embodied in a physical form with which humans can interact. Finally, Embedded AI is fused with our brain through either an invasive or a non-invasive mechanism (Kostopoulos, 2018). This classification is interesting in terms of the public or general acceptance of AI. The first type is largely accepted because of its very invisibility. The last can be appreciated for its potential in practical terms. Paradoxically, Tangible AI may be the most questioned without necessarily being more intrusive than the others. These different typologies also require a juridical analysis, both separately and as a whole, that keeps in mind the different human reactions to each AI device with a view to its legal regulation.
C) The Teleological Approach. On the basis of this criterion, which points to potential results, the doctrine distinguishes between strong and weak AI (Russell & Norvig, 2016) or, more precisely, between systems with specific AI, systems with general AI and systems with superintelligence (Oliver, 2018). There is also a typology with four categories: Relieve; Split up; Replace; and Augment.58 The legal solution can be neither unique nor uniform because each of these modalities requires a specific treatment.
D) The Autonomy Approach. A distinction could be made between two main models according to their degree of autonomy in the learning process: Machine Learning and Deep Learning. The essential difference lies in their ability to learn but, above all, in the consequences of the autonomy of learning. It is not merely a technical matter. AI autonomy raises both the need to clearly identify its legal status and the problem of defining the scope and nature of its relationship with the human being (Hage, 2017, p. 255).
Human control over AI and responsibility for its actions are the subject of an intense debate. The focus of this debate should be redirected towards the real question raised by this phenomenon: What is AI? Is it a thing? Is it a different category of person? Could it be a juridical person or a non-human person? Is it a tertium genus between a thing and a person? (Bryson, Diamantis, & Grant, 2017, p. 273). There is no single, general answer, because there is not only one AI.
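To make the functional distinction in category A concrete, the following sketch, a simplified illustration rather than a description of any real system, contrasts a top-down device, whose behaviour follows from a predefined rule that can be audited directly, with a bottom-up device, whose behaviour is induced from data. The task (flagging a transaction), the thresholds and the training data are hypothetical, and scikit-learn is used only as a convenient stand-in for any data-driven learner:

```python
# Hypothetical task: deciding whether to flag a financial transaction.
from sklearn.tree import DecisionTreeClassifier

# Top-down (symbolic): behaviour follows from a predefined logical rule,
# so its lawfulness can be audited by inspecting the rule itself.
def flag_symbolic(amount, is_foreign):
    return amount > 10_000 or (is_foreign and amount > 3_000)

# Bottom-up (data-driven): behaviour is induced from observed examples,
# so auditing it requires examining the data and the learned parameters.
examples = [[500, 0], [12_000, 0], [4_000, 1], [200, 1]]  # [amount, foreign]
labels = [0, 1, 1, 0]                                     # past decisions
model = DecisionTreeClassifier().fit(examples, labels)

print(flag_symbolic(4_000, True))      # True: the rule says so
print(model.predict([[4_000, 1]])[0])  # 1: the training data said so
```

The contrast illustrates the juridical point made in category A and in notes 55 and 56: in the first model an unlawful outcome can be traced back to an explicit principle, whereas in the second it can only be traced back to the processes of data collection and processing.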

Each of these methodological approaches shows the variety of devices and situations covered by the concept of AI. A formalistic legal approach may aim at a single treatment for all of them. From a non-formalistic approach to law, AI regulation must take into account the existence of these different modalities, with their specific regulatory requirements and problems, as well as the need to define AI in a way that is more realistic and understandable to all citizens, around a concept such as optimization rather than around the concept of humanity.

3.3. Legal framework

AI is a diverse and complex reality. It is developed, and acts, in different ways and with different objectives; moreover, in certain respects it seeks to emulate, even surpass, human intelligence. As Chesterman explains, “the rule of law is the epitome of anthropocentrism: humans are the primary subject and object of norms that are created, interpreted, and enforced by humans” (Chesterman, 2019, p. 38). AI could change that paradigm.

55
In the top-down model, the principles may form part of the design of the AI itself, and problems may arise in its implementation. In the bottom-up system, a way must be found to incorporate ethical or legal principles from the base, in the processes of data collection and processing.
56
In the top-down system, the comparison between logical principles and subsequent actions will be the way to determine their legality or illegality. The hierarchical technique acts as an instrument to correct any incompatibility. In the bottom-up model, there is neither a reference framework such as that of principles nor a technique for identifying and solving the problems that may arise as a consequence of actions derived from the observation and analysis of data.
57
According to the authors, in the first, machines replaced human muscle in some manual tasks—think of factories and farm machinery. In the
second, clerical and knowledge workers were relieved of routine work such as data entry. The third era brings the automation of intelligence—the
computerization of tasks previously thought to require human judgment (Davenport & Kirby, 2016).
58
https://www2.deloitte.com/content/dam/insights/us/articles/3832_AI-augmented-government/DUP_AI-augmented-government.pdf.


3.3.1. Foundations
Law is a system of organization of social life designed for the physical world and for the human being. The human being has been located at the core of that system through the concept of, and the right to, human dignity. The physical world has been organised politically on the basis of the idea of the State as a sovereign entity.
As a consequence, there are two models of society: international and national. The first has a horizontal and decentralized structure because it is a society of sovereign States. The second has a vertical and hierarchical structure because it is a society of individuals submitted to the power of the State in which they are located or of which they are nationals. As they are different social and political models, law is also different in each case. International society is regulated by International Law, which is a legal system based on the consensus or agreement between States. Internal societies are regulated by the different domestic laws. However, consistent and increasing internationalization and globalization have led to a reshaping of this juridical architecture. States are increasingly relying on International Law to regulate matters that were previously domestic. International norms are then integrated and guaranteed within the framework of domestic law (Krieger, Nolte, & Zimmermann, 2019). As a result, the scope of the norms is broader and their content more homogeneous. The rationale behind this practice is logical. Virtually any human activity has an international origin or projection, while domestic law cannot extend beyond the territory of the State itself.
Universalization is a phenomenon that cannot be ignored. AI devices tend to be inherently transnational (CEPEJ, 2018, p. 60) and potentially universal in scope, function and nature. Alžběta Krausová argues that: “As the development of artificial intelligence is a global phenomenon that has worldwide social and economic effects, new international laws should be adopted” (Krausová, 2017). Castel and Castel even support the definition of super-AI as “a common heritage of mankind and not something to be appropriated and developed by any individual State or natural or juridical person” (Castel & Castel, 2016, p. 12). International organizations and forums clearly support international regulation.59
International Law is the legal order that must necessarily regulate AI, for three main reasons. The first is precisely the general and universal scope of AI. It is a technical and practical matter. The question lies in determining whether it is possible to demarcate territorially, country by country, the use of AI in order to identify the applicable law, e.g. in data analysis or in the supply of services. This is not only difficult, but impossible. The second reason relates to a matter of legal economy. Should an AI patent, for example, be registered in all the countries where it can be used, or would it be better to register it in accordance with a common international standard accepted by all those States?60 The recognition of authors’ rights over an article produced autonomously by an AI has taken place in China. The Chinese court took this decision because it considered that the structure of the article was correct, logical and showed a certain originality. By contrast, the European Patent Office refused two European patent applications in which an AI system was designated as inventor.61 A patchwork system of State-by-State legislation is not the best solution in an interconnected and globalized world. The third reason is the protection and legal certainty of users, creators, producers and manufacturers, i.e. the security of all the stakeholders involved in or affected by AI. A general regulation, common and comprehensible to all, is an immeasurably better solution than partial and fragmented provisions according to the legislation of each State. If each State has its own legislation, the question remains of determining the applicable rule in each case. It could be that of the country of the user or consumer, that of the place of manufacture or production or that of the country of the author or creator of the AI device, for example, or even all of them. The resulting legal uncertainty is obvious and unnecessary.
It is true that each country has the capacity to adopt its own rules. However, domestic law is not the first choice for AI, for three main reasons: 1) Domestic law cannot be legitimately and effectively imposed outside the territory of the State because it lacks competence; 2) All domestic legal systems recognize the pre-eminence of International Law and also establish procedures for its integration or conversion into domestic law; and 3) Domestic law is fragmented into different branches, each with its own methodology, content and processes. The civil,62 criminal,63 labour64 and, especially, constitutional law65 aspects, among others, are relevant. International Law can manage those different areas as well as the diversity of legal systems (Oskamp & Lauritsen, 2002).66 AI primarily needs a general

59
As will be seen below, the G-20, the OECD and the EU call for international regulation.
60
The World Intellectual Property Organization has recently been working on this topic. The WIPO Technology Trends 2019 report is devoted to AI. Its study shows “that most patent applications have a commercial, application focus, as they refer to an AI functional application or are combined with an AI application field. This report identifies 20 fields/industry sectors that patent documents refer to, ranging from entertainment to education to banking, indicating that sectors across the board are exploring the application of AI technologies” (https://www.wipo.int/publications/en/details.jsp?id=4386).
61
https://www.epo.org/law-practice/case-law-appeals/recent.html.
62
The recognition of legal personhood for AI is one of the most important and controversial issues in this field, as are intellectual property and liability (Petit, 2017).
63
The most interesting topic in this matter is criminal liability for acts committed by AI systems (Lagioia & Sartor, 2019).
64
In this area of law, many issues are of concern, ranging from the spectre of job losses as a result of AI to the need to establish specific AI
employment rights and/or obligations such as the right to leave or social security contributions.
65
The main topic is the protection of human rights in view of the challenges posed by AI.
66
National and local regulations differ. For instance, technically, the Anglo-Saxon legal model gives case law a role (Ashley, 2002, p. 163) that in the European system corresponds only to legislation. Materially, the content and purpose of the rules also differ, even between countries with common traditions (Hage, 2000). The scope of freedom of expression under the U.S. First Amendment is broader than in the European framework, where limits are set on that right, derived, for example, from the legislator’s will to combat the condoning of genocide. Nor is the liability regime the same in the various juridical systems (Lehmann, Breuker, & Brouwer, 2004, p. 279).


and comprehensive approach.67


International Law is the main normative choice. Nevertheless, it has two characteristics that limit its scope and content for the regulation of any matter, including AI. First, it is a legal system created to regulate relations of a public nature between States, which has over time extended to broader spheres, including interpersonal and private aspects, without losing its original interstate nature and functions. Matters such as AI do not easily adjust themselves to an interstate dialogue (given the divergence of criteria and situations among countries), nor do they respond to that classic model of regulation. AI can hardly be addressed effectively in an exclusively interstate framework. Multi-stakeholder community involvement in this process is critical.
Second, International Law is based on the consensus or agreement between States. But even a basic compromise is difficult to achieve when political or economic interests differ. It is not just a problem of lack of commitment or political will to reach a compromise. Not infrequently, agreement is difficult because the issue itself does not facilitate the adoption of commitments. AI is a genuinely complex domain of regulation.
Despite that, there are important regulatory developments in the field of AI.68 No in-depth study is possible in this paper. Nevertheless, two different examples support this assertion and illustrate the complexity of the task. First, on 22 May 2019, the OECD Ministerial Council adopted its “Recommendation on Artificial Intelligence”. It includes two sections. The first sets out five complementary principles relevant to all stakeholders.69 The second concerns national policies and international co-operation for trustworthy AI, with recommendations to OECD Members and to non-Members that have adhered to it.70 It provides the first intergovernmental standard for AI policies and a foundation on which to conduct further analysis and develop tools to support governments in their implementation efforts. On 9 June 2019, the G-20 Ministers endorsed the Principles for responsible stewardship of trustworthy AI drawn from the OECD Recommendation.71
The second example is the EU, a completely different organization.72 In December 2018, the European Commission and the Member States published a “Coordinated Action Plan on the Development of AI”.73 The Commission also launched the “Communication on Building Trust in Human-Centric Artificial Intelligence” on 8 April 2019.74 The European AI strategy and the coordinated plan make clear that “trust is a prerequisite to ensure a human-centric approach to AI: AI is not an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being”.75 The Commission points out that the “Ethics Guidelines on Artificial Intelligence” adopted by the AI HLEG “are non-binding and as such do not create any new legal obligations. However, many existing (and often use- or domain-specific) provisions of Union law of course already reflect one or several of these key requirements, for example safety, personal data protection, privacy or environmental protection rules”.76 This is very important. Even if there were no specific provisions, there is no legal vacuum. There are standards directly or analogically applicable to AI. In addition, the Member States of the EU signed the EU Declaration on Cooperation on Artificial Intelligence on 10 April 2018.77 These two examples are illustrative. The OECD and the EU are different organizations.78 The former has more members but fewer powers, and its acts are not binding per se but serve as guidelines followed by States in their internal order.79 The EU is an integration organization with extensive competences, but its powers concerning AI are still limited. What the OECD defines as “principles”, the EU defines as “requirements”. Transparency and accountability are the only common features. The OECD principle of “Robustness, security and safety” is limited to “Technical robustness and safety” in the EU. The requirement of “Privacy and Data Governance” is exclusive to the EU. Beyond that, the two lists do not coincide. The nuances and differences are important and also evidence a certain disagreement. Compromises are not easy to

67
An example to explain this statement can be found in intellectual property law. In many countries, the protection of this right is pursued through administrative, civil and criminal procedures. Each of them fulfils its function. Separately, each one conveys a partial vision of that legal regime. It is necessary to analyse the whole in order to understand the concept and its regulation.
68
There are many different international proposals and measures both from public and private authorship (Castel & Castel, 2016).
69
The principles are: Inclusive growth, sustainable development and well-being; Human-centered values and fairness; Transparency and
explainability; Robustness, security and safety; and Accountability (https://www.oecd.org/going-digital/ai/principles/).
70
These recommendations are: 1) investing in AI research and development; 2) fostering a digital ecosystem for AI; 3) shaping an enabling policy
environment for AI; 4) building human capacity and preparing for labour market transformation; and 5) international co-operation for trustworthy
AI (https://www.oecd.org/going-digital/ai/principles/).
71
https://g20trade-digital.go.jp/dl/Ministerial_Statement_on_Trade_and_Digital_Economy.pdf.
72
The EU activity has been compiled in March 2019 in the publication A survey of the European Union’s artificial intelligence ecosystem. It outlines the
EU’s high-level strategy and vision for AI, before looking at three crucial components the EU will need to implement this vision: funding, talent, and
collaboration (https://ec.europa.eu/jrc/communities/en/node/1286/document/survey-european-union%92s-artificial-intelligence-ecosystem).
73
https://ec.europa.eu/digital-single-market/en/news/coordinated-plan-artificial-intelligence.
74
According to the Communication, there are seven key requirements that AI applications should respect to be considered trustworthy: Human
agency and oversight; Technical robustness and safety; Privacy and Data Governance; Transparency; Diversity, non-discrimination and fairness;
Societal and environmental well-being; and Accountability (Communication from the Commission to the European Parliament, the Council, the
European Economic and Social Committee and the Committee of the Regions “Building Trust in Human-Centric Artificial Intelligence”, COM (2019)
168 final, Brussels, 8.4.2019, p. 3).
75
Ibidem, p. 1.
76
Ibidem, pp. 3–4.
77
https://ec.europa.eu/jrc/communities/en/node/1286/document/eu-declaration-cooperation-artificial-intelligence.
78
The Council of Europe is also very active in the field of AI, in particular from a human rights perspective (https://www.coe.int/en/
web/artificial-intelligence).
79
The recommendation has been adopted by the 36 Member States of the OECD and six other countries: Argentina, Brazil, Colombia, Costa Rica,
Peru and Romania.


make, but it is necessary to establish a universal governance model and a general normative framework for AI.

3.3.2. Governance model


The ideal governance model for AI would be the creation of a universal international organization (IO) based on an international treaty. This treaty should establish rights and obligations, as well as clear commitments concerning the use and development of AI. The IO should have a basic structure that meets the dual requirement of being acceptable to the countries that remain the principal actors of international society and of being flexible and innovative in light of the subject matter. States, as subjects of International Law, must be the main architects of that organization. However, the organization should adapt to the specific conditions imposed by the management of AI.
Firstly, it could not be a classical organization composed only of States. In fact, it should guarantee the association of the multi-stakeholder community with the decision-making and implementation processes, for two reasons: one, because of that community’s value and importance in the development of AI; and two, because of the transnational and intersubjective nature of this phenomenon.
Secondly, the organization should have a basic structure with an assembly, a council, a secretariat and a court that guarantees legality.80 It is also important to establish three specific organs devoted to: 1) Representation of the multi-stakeholder community; 2) Research and Development;81 and 3) Technology Transfer, especially to countries with lower levels of AI technology development.82
From a realistic perspective, the functioning of this proposed IO may be seriously hampered by the enormous differences between States. In addition to the usual divergences of a political or economic nature, the technological disparities between them can be an important obstacle to cooperation. States defend their own advances in AI not only for political, social and economic reasons but also in terms of security (Aradau & Blanke, 2017). States are also aware of the need to overcome the growing imbalance implied by their technological differences. There is an interest in the economic benefits of technology transfer and AI-related trade. There is also some awareness of the global threats and risks posed by structural inequality in AI development. The function of this IO would be to articulate cooperation between States, mainly for the adoption of a specific normative framework in the field of AI.

3.3.3. Regulatory bases


AI raises many questions and challenges from a juridical point of view.83 There are, however, three basic, general and transversal aspects that must be analysed first and foremost. The first is the protection of the basic rights and freedoms of individuals, to the extent that they may be highly affected by AI development (Risse, 2019).84 Human rights, which rest on more established legal interpretations and practice at the universal and national levels, should replace ethics as the dominant framework for debate (Crawford et al., 2019, p. 21). Also moving away from the discourse on ethics and AI, the Toronto Declaration aims to draw attention to the relevant and well-established framework of international human rights.85 The European Union Agency for Fundamental Rights defends
80
The structure of this IO could envisage four basic components: (1) An assembly of States, IOs concerned and a representation of the multi-
stakeholder community with a different and appropriate legal status in each case; (2) An executive body, the council, elected by the assembly
with representation of all its members according to the nature and status of each of them; (3) An international court for ensuring respect for the rules
and the settlement of any disputes that might arise; (4) An administrative body exercising the functions of secretariat. Along with these main organs,
it would be possible to include consultative committees specialized in the diverse aspects and interests present in the AI guaranteeing a plural and
interdisciplinary participation.
81
The Members of the United Nations are already collaborating in research and development through the International Space Station, the Human Genome Project and the Large Hadron Collider (Castel & Castel, 2016, p. 11).
82
There are some examples of operational bodies within international organizations that, through various channels and techniques, seek to overcome these functional or even structural differences. This is the case of the Enterprise of the International Seabed Authority, which includes the transfer of technology among its main attributions. The same idea is considered in Article IV.2 of the Treaty on the Non-Proliferation of Nuclear Weapons, according to which “All the Parties to the Treaty undertake to facilitate, and have the right to participate in, the fullest possible exchange of equipment, materials and scientific and technological information for the peaceful uses of nuclear energy. Parties to the Treaty in a position to do so shall also co-operate in contributing alone or together with other States or international organizations to the further development of the applications of nuclear energy for peaceful purposes, especially in the territories of non-nuclear-weapon States Party to the Treaty, with due consideration for the needs of the developing areas of the world”, and in the Coordinated Research Activities of the International Atomic Energy Agency (IAEA).
83
It’s a broad and complicated issue. At times, there is some confusion about this topic. There are those who limit themselves to defending the
application the “three laws of robotics” that the writer Isaac Asimov introduced in 1942: A robot may not injure a human being or, through inaction,
allow a human being to come to harm; a robot must obey the orders given it by human beings, except when such orders would conflict with the
previous law; and a robot must protect its own existence as long as such protection does not conflict with the previous two laws. Actually, AI is a
cross-cutting phenomenon that requires not only the establishment of specific standards but also the rethinking of the feasibility and effectiveness of
pre-existing rules. An interesting and comprehensive study about the problems and challenges posed by AI can be found in the report published by
the European Commission Artificial Intelligence. A European Perspective (https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-
research-reports/artificial-intelligence-european-perspective).
84
On 26 June 2019, within the Council of Europe, the Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence adopted the “Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems”. Threats and risks to human rights are thoroughly analysed (MSI-AUT (2018)06rev1).
85
https://www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf.


exactly the same position.86 The second point is the definition of the legal status of AI devices. The third is the question of the relationship between humans and AI. These issues are a priority for three main reasons: 1) rapid and continuous technological progress requires prioritizing the analysis of juridical solutions to basic general and constitutional questions; 2) AI and the technologies of the so-called Fourth Industrial Revolution call into question basic existential principles of humanity and society (Rouhiainen, 2018, p. 36); and 3) the coexistence of, and relationships between, humans and AI devices are the main legal, social and political challenge (Frank & Nyholm, 2017).
Although the questions posed by AI are numerous and significant, it is important to highlight that there is no legal vacuum.87 Firstly, there are legal rules and principles of an imperative nature which apply generally to all human and social activity, including the development of AI. The principle of the prohibition of the use of force in international relations or the right to life and to the integrity of the person are clear examples in this regard.88 Secondly, there are mandatory rules and principles that can be applied to AI through the principle of analogy.89 The regulations on consumer protection or liability for defective products can be extrapolated analogously to the sphere of AI. The question of responsibility arises repeatedly in connection with the so-called “many hands problem”, posed by the fact that the development and operation of AI systems typically entails contributions from multiple individuals, organizations and machine components (Yeung, 2019, p. 11). Actually, all legal systems have principles and procedures to demand responsibility. Ultimately, according to Yeung, “the fundamental principle of reciprocity applies: those who deploy and reap the benefits of these advanced digital technologies (including AI) in the provision of services (from which they derive profit) must be responsible for their adverse consequences” (Yeung, 2019, p. 14). Thirdly, there are rules and principles which may need to be revised to take account of the unique characteristics of AI. Data protection regulations should be revisited to remain effective in the different scenarios of massive data use implied by AI (Wachter & Mittelstadt, 2019). This is also the case for the circulation of autonomous vehicles, prohibited by the Vienna Convention on Road Traffic of 1968 (Palmerini, 2017, p. 69). Finally, AI may require the formulation of new rules. The principle of explicability or explanation is a good example in this sense (Goodman & Flaxman, 2017).
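What such a principle of explicability might require in operational terms can be suggested with a minimal sketch: an automated decision is accompanied by the contribution of each input to the outcome. The features, weights and applicant data below are hypothetical, and the simple linear decomposition is only one possible technical reading; the actual content of any explanation duty would be for the legislator to define:

```python
# Hypothetical credit-scoring model: the decision score is a weighted sum
# of the applicant's features, so each feature's contribution is explicit.
weights = {"income": 0.6, "debt": -0.8, "account_age": 0.3}
applicant = {"income": 1.2, "debt": 0.5, "account_age": 2.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# A rudimentary "explanation": each input's share of the decision,
# ordered by the size of its effect.
print(f"decision score: {score:+.2f}")
for feature, contrib in sorted(contributions.items(),
                               key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature}: {contrib:+.2f}")
```

For opaque, non-linear models the technical difficulty of producing such a decomposition grows considerably, which is precisely why explicability may need to be formulated as a legal requirement rather than assumed as a by-product of the technology.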
In the end, there is an important and solid legal acquis on which to base the changes, adaptations or ex novo normative creations required by AI. Not every new phenomenon demands new normative bodies. But a scientific and technological advance that is necessarily changing human and social behaviour may require the adaptation of existing norms or the creation of specific rules if the law in force proves insufficient or inefficient.

4. Conclusions

The AI debate has led to different theories and lines of thought, ranging from the utopia of a perfect world to the dystopia of a dehumanized world. From the utopian (idealistic) to the dystopian (disruptive) future, there is a wide variety of conceptions and interpretations of this phenomenon (Oliver Ramírez, 2018, p. 34). That is not unusual; on the contrary. In this case, however, it presents two basic problems: the lack of a minimum socio-political consensus, and the absence of a global interdisciplinary analysis (Surden, 2019, p. 1310).
There is neither a basic social consciousness nor a sufficient and solid political will to address the challenge of AI. There is no common language, no single methodology concerning its uses, skills and objectives. AI can have a positive or negative impact, or both simultaneously, for different audiences or from different perspectives. There is no uniform or unanimous assessment of its advantages and/or disadvantages or of how to manage them. In general terms, the debate sits between resistance to the change implied by AI and the sublimation of that change. Whether by ignoring it or by magnifying it, lack of knowledge and misconceptions about this phenomenon are too widespread and genuinely worrying. To some extent, that is understandable, as AI raises diverse and complex doubts, concerns and problems.
AI research is being developed at different public and private levels, in large and small enterprises, corporations, academic institutions, organizations and States. The open source model is a process of knowledge sharing for anyone interested in AI. Large corporations such as Google, Amazon, Microsoft, IBM, Apple or Nvidia offer platforms, applications and tools that provide users with knowledge, skills and learning mechanisms for the development of AI. This modus operandi has a positive, even democratizing, effect (Rouhiainen, 2018, p. 261), but also potential negative effects. In fact, the functionalities of AI can be classified into several generic categories: beneficial use; useful use; lawful use; perverse use; and illicit use, which, in turn, may be criminal, terrorist or militaristic. Ethical principles may point the direction towards a positive use of AI, but they do not have the capacity to prevent, repress and sanction negative uses. That is the function of law. Moreover, the legal discourse has to go further also because it includes the social,

86
https://fra.europa.eu/en/publication/2019/data-quality-and-artificial-intelligence-mitigating-bias-and-error-protect.
87
According to the Ethics Guidelines for Trustworthy AI, “it should be noted that no legal vacuum currently exists, as Europe already has regulation in place that applies to AI” (AI HLEG, 2019). The Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence of the Council of Europe includes a list of the pre-existing rules applicable to AI in its Draft Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems (MSI-AUT (2018)06rev1).
88
For instance, Member States of the Council of Europe are bound by the European Convention on Human Rights and the other treaties they have concluded in relation to any area of action, including AI. This implies that they must guarantee the rights and comply with the obligations contained therein also with respect to AI.
89
Richard Collins highlights the importance of analogical reasoning in gaining an understanding of the nature of modern international law
(Collins, 2019).


political and economic dimensions of AI.


In this paper, AI is analysed on the basis of three methodological assumptions: 1) a clear distinction between the functions of ethics and law; 2) a non-formalist approach to law; and 3) the international and global nature of AI.
As seen in the previous sections, AI has led to a wide and solid debate on its ethical aspects. Ethics plays an essential role, but ethical concepts and principles vary in time and space and among the different subjects involved. For that reason, the ethical debate must be open and inclusive, never exclusive or selective. Moreover, the role of ethics is different from that of law. Law is mandatory and has legal and jurisdictional mechanisms to ensure its enforcement.
From a non-formalist approach, law is an instrument for the organization of social life. The effectiveness of norms depends on their capacity to adapt and respond to the characteristics of the reality they are called upon to regulate. The legal discourse has to recognize the significance of AI from at least a double perspective: on the one hand, the relationship between artificial and human intelligence, an issue in which both the aspirations and the fears raised by AI converge; and, on the other, the existence of various AI modalities that require specific legal treatment due to the differences between them. It is fundamental to analyse AI from a non-formalistic legal approach able to identify these different typologies and to organize AI in a way that is understandable and acceptable to citizens.
Legal rules can be adopted in the internal framework and in the international sphere but, due to the scope and nature of this phenomenon, international regulation is unavoidable. AI must be regulated by International Law with a universal or general vocation (Castel & Castel, 2016, p. 13). Nevertheless, universal legal norms can coexist and be fully compatible with regional norms in those cases in which consensus cannot be reached at the general level or when greater progress is desired at the regional or inter-regional level. It is particularly important to establish cooperation structures.
The creation of an IO could be the way to manage the necessary cooperation between States. It cannot be a classic model. It needs to be an open and inclusive organization, able to respond to the needs and expectations of the different States in spite of their disparities, and able also to associate the multi-stakeholder community, which is key to the development and functioning of AI. It also needs to be a specialized and proactive IO devoted to research, technical development and technology transfer, as well as to the adoption of specific regulatory standards. Normatively, there is no legal vacuum. There are principles of an imperative nature that apply obligatorily to AI. There are norms that can be applied analogically. There are rules that can be adapted to AI. But there are also aspects that require ex novo regulation responding to the uniqueness of AI.
The challenges posed by AI must be approached from an interdisciplinary perspective. Technology, ethics and law are unavoidable components in this debate. It is not easy to combine these dimensions of AI, but only a serious and continuous commitment to mutual understanding between them will allow us to respond to these challenges.

Acknowledgment

This work has been partially supported by Spanish Government-MINECO and FEDER, European Union, Spain funds, through
project TIN 2017-83494-R.

References

Allen, R. J. (2001). Artificial intelligence and the evidentiary process: The challenges of formalism and computation. Artificial Intelligence and Law, 9, 99–114.
Aradau, C., & Blanke, T. (2017). Governing others: Anomaly and the algorithmic subject of security. European Journal of International Security, 3(1), 1–21.
Asay, C. D. (2019). Artificial stupidity. William and Mary Law Review, 61, 1–56.
Ashley, K. D. (2002). An AI Model of case-based legal argument from a jurisprudential viewpoint. Artificial Intelligence and Law, 10, 163–218.
Asilomar AI Principles. (2017). Available at: https://futureoflife.org/ai-principles/.
Bianchi, A. (2016). International law theories: An inquiry into different ways of thinking. Oxford Scholarship Online.
Boddington, P. (2017). Towards a code of ethics for artificial intelligence. Oxford: Springer.
Bostrom, N., & Yudkowsky, E. (2011). The ethics of artificial intelligence. Available at: https://nickbostrom.com/ethics/artificial-intelligence.pdf.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139–159.
Bryson, J. J., Diamantis, M. E., & Grant, T. D. (2017). Of, for, and by the people: The legal lacunae of synthetic persons. Artificial Intelligence and Law, 25, 273–291.
Castel, J.-G., & Castel, M. E. (2016). The road to artificial super-intelligence: Has international law a role to play? Canadian Journal of Law and Technology, 14(1), 1–15.
CEPEJ. (2018). Ethical charter on the use of artificial intelligence in judicial systems. Available at: https://www.coe.int/en/web/cepej/cepej-european-ethical-charter-on-
the-use-of-artificial-intelligence-ai-in-judicial-systems-and-their-environment.
Chesterman, S. (2019). Artificial intelligence and the problem of autonomy. NUS Law Working Paper 2019/06. National University of Singapore.
Collins, R. (2019). Two idea(l)s of the international rule of law. Global Constitutionalism, 8(2), 191–226.
COMEST Working Group. (2017). Report of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) on robotics ethics. Available at: http://www.unesco.org/new/en/social-and-human-sciences/themes/comest/.
Crawford, K., et al. (2019). AI Now Report. AI Now Institute.
Dameski, A. (2018). A comprehensive ethical framework for AI entities: Foundations. Artificial General Intelligence, 42–51.
Davenport, T., & Kirby, J. (2016). Only humans need apply: Winners and losers in the age of smart machines. Harper Business.
European Commission. (2018a). Communication from the commission to the European parliament, the European council, the council, the European economic and social
committee and the committee of the Regions. Brussels: Artificial Intelligence for Europe, COM (2018) 237 final, 25.4.2018.
European Commission. (2018b). Communication from the commission to the European parliament, the European council, the council, the European economic and social
committee and the committee of the Regions. Brussels: Coordinated Plan on Artificial Intelligence, COM (2018) 795 final, 7.12.2018.
European Commission. (2019). Communication from the commission to the European parliament, the European council, the council, the European economic and social
committee and the committee of the Regions. Brussels: Building Trust in Human-Centric Artificial Intelligence, COM (2019) 168 final, 8.4.2019.
European Group on Ethics in Science and New Technologies (EGE). (2018). Statement on ethics of artificial intelligence. Available at: https://ec.europa.eu/info/news/
ethics-artificial-intelligence-statement-ege-released-2018-apr-24_en.
Fjeld, J., et al. (2019). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center, Harvard University.
Floridi, L. (2019). Translating principles into practices of digital ethics: Five risks of being unethical. Philosophy & Technology, 185–193.


Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations.
Minds and Machines, 28(4), 689–707.
Frankfurt, H. G. (1971). Freedom of the will and the concept of a person. The Journal of Philosophy, 68(1), 5–20.
Frank, L., & Nyholm, S. (2017). Robot sex and consent: Is consent to sex between a robot and a human conceivable, possible, and desirable? Artificial Intelligence and Law, 25, 305–323.
Goodman, B., & Flaxman, S. (2017). European regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3), 1–9.
Hage, J. (2000). Dialectical models in artificial intelligence and law. Artificial Intelligence and Law, 8, 137–172.
Hage, J. (2017). Theoretical foundations for the responsibility of autonomous agents. Artificial Intelligence and Law, 25, 255–271.
Hawkins, J. (2017). What intelligent machines need to learn from the neocortex. Available at: https://spectrum.org/computing/software/what-intelligent-machines-
need-to-learn-from-the-neocortex.
Herrera Triguero, F. (2014). Inteligencia artificial, inteligencia computacional y big data. Universidad de Jaén.
High-Level Expert Group on Artificial Intelligence (AI HLEG). (2018). Draft ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/knowledge4policy/
publication/draft-ethics-guidelines-trustworthy-ai_en.
High-Level Expert Group on Artificial Intelligence (AI HLEG). (2019). Ethics guidelines for trustworthy AI. Available at: https://ec.europa.eu/futurium/en/ai-alliance-
consultation.
Horowitz, M., et al. (2018). Strategic competition in an Era of artificial intelligence. Washington: Center for a New American Security.
Huws, C. F., & Finnis. (2017). On computable numbers with an application to the Alan Turing problem. Artificial Intelligence and Law, 25, 181–203.
IEEE. (2019). Ethically aligned design. Available at: https://ethicsinaction.ieee.org/.
Ikram, N. A. H. S., & Kepli, M. Y. Z. (2018). Establishing legal rights and liabilities for artificial intelligence. International Islamic University of Malaysia Law Journal, 26(1), 177–178.
Japanese Society for Artificial Intelligence. (2017). Artificial intelligence ethical guidelines. Available at: http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-
Guidelines-1.pdf.
Kostopoulos, L. (2018). The emerging artificial intelligence wellness landscape: Opportunities and areas of ethical debate. California western school of law “AI ethics
symposium”. Available at: https://medium.com/@lkcyber/the-emerging-artificial-intelligence-wellness-landscape-802caf9638de.
Krausová, A. (2017). Intersections between law and artificial intelligence. International Journal of Computer, 27(1), 55–68.
Krieger, H., Nolte, G., & Zimmermann, A. (2019). The international rule of law. Rise or decline. Oxford University Press.
Kurzweil, R. (2017). La singularidad está cerca. Berlin: Lola Books GBR.
Lagioia, F., & Sartor, G. (2019). AI systems under criminal law: A legal analysis and a regulatory perspective. Philosophy & Technology, 1–33.
Lehmann, J., Breuker, J., & Brouwer, B. (2004). Causation in AI and law. Artificial Intelligence and Law, 12, 279–315.
McCarthy, J., & Hayes, P. (1981). Some philosophical problems from the standpoint of artificial intelligence. Readings in artificial intelligence. Available at: https://www.
sciencedirect.com/science/article/pii/B9780934613033500337.
McGregor, L. (2019). Accountability for governance choices in artificial intelligence. European Journal of International Law, 29(4), 1079–1085.
McKinsey Global Institute. (2019). Notes from the AI frontier: Tackling Europe's gap in digital and AI. Discussion Paper. McKinsey & Company.
Muehlhauser, L., & Helm, L. (2012). Intelligence explosion and machine ethics. In A. Eden, J. Søraker, J. H. Moor, & E. Steinhart (Eds.), Singularity hypotheses: A scientific and philosophical assessment. Berlin: Springer.
Myers West, S., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute.
Nilsson, N. J. (2010). The quest for artificial intelligence. A history of ideas and achievements. Available at: https://ai.stanford.edu/~nilsson/QAI/qai.pdf.
Oliver Ramírez, N. (2018). Inteligencia artificial: Ficción, realidad y… sueños. Available at: http://www.raing.es/es/publicaciones/discursos-de-ingresos/inteligencia-artificial-ficci-n-realidad-y-sue-os.
Oskamp, A., & Lauritsen, M. (2002). AI in law practice? So far, not much. Artificial Intelligence and Law, 10, 227–236.
Palmerini, E. (2017). Robótica y derecho: Sugerencias, confluencias, evoluciones en el marco de una investigación europea. Revista de Derecho Privado, 32, 53–97.
Penrose, R. (2012). Las sombras de la mente: Hacia una comprensión científica de la consciencia. Barcelona: Crítica.
Petit, N. (2017). Law and regulation of artificial intelligence: Conceptual framework and normative implications. Working Paper. Available at: https://www.researchgate.
net/publication/332850407_Law_and_Regulation_of_Artificial_Intelligence_and_Robots_-_Conceptual_Framework_and_Normative_Implications.
Renda, A. (2019). Artificial Intelligence. Ethics, governance and policy challenges. Brussels: Centre for European Policy Studies.
Risse, M. (2019). Human rights and artificial intelligence: An urgently needed agenda. Human Rights Quarterly, 41(1), 1–16.
Roberts, H., et al. (2019). The Chinese approach to artificial intelligence: An analysis of policy and regulation. Available at SSRN: https://ssrn.com/abstract=3469784. https://doi.org/10.2139/ssrn.3469784.
Rouhiainen, L. (2018). Inteligencia artificial. Alienta Editorial.
Russell, S. J., & Norvig, P. (2016). Artificial intelligence. A modern approach. Pearson. Available at: http://thuvien.thanglong.edu.vn:8081/dspace/bitstream/DHTL_
123456789/4010/1/CS503-2.pdf.
Scherer, M. U. (2017). Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harvard Journal of Law and Technology, 29(2),
354–400.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457.
Select Committee on Artificial Intelligence. (2018). AI in the UK: Ready, willing and able? House of Lords.
Smith, B., & Browne, C. A. (2019). Tools and weapons: The promise and the peril of the digital age. Penguin Press.
Smith, J. C., et al. (1995). Artificial intelligence and legal discourse: The flex law legal text management system. Artificial Intelligence and Law, 3, 55–95.
Solum, L. B. (1992). Legal personhood for artificial intelligences. North Carolina Law Review, 70(4), 1230–1287.
Surden, H. (2019). Artificial intelligence and law: An overview. Georgia State University Law Review, 35(4), 1304–1337.
Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460. Available at: https://www.csee.umbc.edu/courses/471/papers/turing.pdf.
UNI Global Union. (2018). 10 principles for ethical AI. Available at: http://www.thefutureworldofwork.org/opinions/10-principles-for-ethical-ai/.
Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI. Columbia Business Law Review, 1,
1–130.
Wagner, B. (2018). Ethics as an escape from regulation: From ethics-washing to ethics-shopping? In M. Hildebrandt (Ed.), Being Profiled: Cogitas ergo sum. Amsterdam University Press.
Walton, D. (2005). Argumentation methods for artificial intelligence in law. Winnipeg: Springer.
West, D. M. (2018). The role of corporations in addressing AI’s ethical dilemmas. Available at: https://www.brookings.edu/research/how-to-address-ai-ethical-
dilemmas/.
Winfield, A. (2019). On the simulation (and energy costs) of human intelligence, the singularity and simulationism. In A. Adamatzky, & V. Kendon (Eds.), From
astrophysics to unconventional computation. Emergence, complexity and computation (Vol. 35). Cham: Springer.
Yeung, K. (2019). Responsibility and AI. Council of Europe Study DGI (2019)5.

