
Routledge Research in the Law of Emerging Technologies

REGULATING ARTIFICIAL
INTELLIGENCE IN INDUSTRY
Edited by
Damian M. Bielicki
Regulating Artificial Intelligence
in Industry

Artificial intelligence (AI) has augmented human activities and unlocked opportunities for many sectors of the economy. It is used for data management
and analysis, decision-making, and many other aspects. As with most rapidly
advancing technologies, the law is often playing catch-up, so the study of
how the law interacts with AI is more critical now than ever before. This
book provides a detailed qualitative exploration into regulatory aspects of AI
in industry.
Offering a unique focus on current practice and existing trends in a wide range
of industries where AI plays an increasingly important role, this book contains
legal and technical analysis performed by 15 researchers and practitioners from
different institutions around the world to provide an overview of how AI is
being used and regulated across a wide range of sectors, including aviation,
energy, government, healthcare, legal, maritime, military, music, and others.
It addresses a broad range of aspects, including privacy, liability, transparency,
justice, and others, from the perspective of different jurisdictions.
Including a discussion on the role of AI in industry during the COVID-19
pandemic, the chapters also offer a set of recommendations for optimal regulatory
interventions. Therefore, this book will be of interest to academics, students, and
practitioners interested in the technological and regulatory aspects of AI.

Damian M. Bielicki is Senior Lecturer in Law and Director of the Law &
Technology Research Group at Kingston University London. He is also a
lecturer in Space Law and Cyber Law at Birkbeck University of London. He
is a Senior Fellow of the Higher Education Academy, a member of the
International Law Association, and a member of the International Institute of
Space Law.
Routledge Research in the Law of Emerging Technologies

Biometrics, Surveillance and the Law


Societies of Restricted Access, Discipline and Control
Sara M. Smyth

Artificial Intelligence, Healthcare, and the Law


Regulating Automation in Personal Care
Eduard Fosch-Villaronga

Health Data Privacy under the GDPR


Big Data Challenges and Regulatory Responses
Edited by Maria Tzanou

Regulating Artificial Intelligence


Binary Ethics and the Law
Dominika Ewa Harasimiuk and Tomasz Braun

Cryptocurrencies and the Regulatory Challenge


Allan C. Hutchinson

Regulating Artificial Intelligence in Industry


Edited by Damian M. Bielicki
Regulating Artificial
Intelligence in Industry

Edited by Damian M. Bielicki


First published 2022
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
605 Third Avenue, New York, NY 10158
Routledge is an imprint of the Taylor & Francis Group, an informa business
© 2022 selection and editorial matter, Damian M. Bielicki; individual chapters, the
contributors
The right of Damian M. Bielicki to be identified as the author of the editorial
material, and of the authors for their individual chapters, has been asserted in
accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised
in any form or by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying and recording, or in any information
storage or retrieval system, without permission in writing from the publishers.
Trademark notice: Product or corporate names may be trademarks or registered
trademarks, and are used only for identification and explanation without intent to
infringe.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
Names: Bielicki, Damian M., 1983- editor.
Title: Regulating artificial intelligence in industry / edited by Damian M.
Bielicki.
Description: Abingdon, Oxon [UK]; New York, NY: Routledge, 2021. |
Series: Routledge research in the law of emerging technologies |
Includes bibliographical references and index.
Identifiers: LCCN 2021031305 (print) | LCCN 2021031306 (ebook) | ISBN
9780367774622 (hardback) | ISBN 9781032159652 (paperback) | ISBN
9781003246503 (ebook)
Subjects: LCSH: Artificial intelligence--Law and legislation. |
Industries--Technological innovations. | Privacy, Right of. | Data
protection--Law and legislation. | Artificial intelligence--Law and
legislation--European Union countries.
Classification: LCC K564.C6 R449 2021 (print) | LCC K564.C6 (ebook) | DDC
343.09/99--dc23
LC record available at https://lccn.loc.gov/2021031305
LC ebook record available at https://lccn.loc.gov/2021031306
ISBN: 978-0-367-77462-2 (hbk)
ISBN: 978-1-032-15965-2 (pbk)
ISBN: 978-1-003-24650-3 (ebk)
DOI: 10.4324/9781003246503
Typeset in Bembo
by Deanta Global Publishing Services, Chennai, India
Contents

Contributors vii
Preface x
List of acronyms and abbreviations xii

PART I
Horizontal AI applications 1

1 Artificial intelligence and its regulation in the European Union 3


GAURI SINHA AND RUPERT DUNBAR

2 The impact of facial recognition technology empowered by artificial intelligence on the right to privacy 21
NATALIA MENÉNDEZ GONZÁLEZ

3 The malicious use of artificial intelligence against government and political institutions in the psychological area 36
EVGENY PASHENTSEV AND DARYA BAZARKINA

4 Leveraging artificial intelligence in citizenship by investment programmes 53
JEIEL D. JOSEPH

5 Artificial intelligence application in advance healthcare decision-making: Potentials, challenges and regulatory safeguards 66
HUI YUN CHAN

6 Artificial intelligence in the legal profession 83


DAMIAN M. BIELICKI
PART II
Vertical AI applications 97

7 Artificial intelligence: An earthquake in the copyright protection of digital music 99
LUO LI

8 Artificial intelligence and risk preparedness in the aviation industry 114
KINGA KOLASA-SOKOŁOWSKA

9 Autonomous AI, smart seaports, and supply chain management: Challenges and Risks 127
MITJA KOVAČ

10 Artificial intelligence and climate-energy policies of the EU and Japan 138
MACIEJ M. SOKOŁOWSKI

11 The regulation of militarised artificial intelligence: Protecting civilians through legal reviews of new weapons and precautions 156
TSVETELINA VAN BENTHEM

12 The use of Artificial Intelligence in armed conflicts – implications for state responsibility 176
DOMINIKA IWAN

13 The problematisation of human control over lethal autonomous weapons: A case study of the US Department of State 190
MIKOLAJ FIRLEJ

Summary 213
Bibliography 215
Index 234
Contributors

Darya Yu Bazarkina is a Leading Researcher in the Department of European Integration Research at the Institute of Europe of the Russian Academy of
Sciences in Moscow, Russia. She is an author of more than 100 publications
on the communication aspects of counter-terrorist activity, including the
problem of countering the malicious use of artificial intelligence in terror-
ists’ psychological operations.
Damian M. Bielicki is a Senior Lecturer in Law and Director of the Law
& Technology Research Group at Kingston University London, UK. Dr
Bielicki specialises in law and technology. In addition to his post at Kingston
University, he is also Lecturer in Space Law and Cyber Law at Birkbeck
University of London, UK. He is a Senior Fellow of the Higher Education
Academy, a member of the International Law Association, and a member of
the International Institute of Space Law.
Hui Yun Chan is a Senior Lecturer in Law at the University of Huddersfield,
UK. Dr Chan’s research interests are broadly in the field of health law,
particularly in relation to end-of-life, health governance and recently the
legal and ethical questions arising from innovative medical technologies.
She is the author of Advance Directives: Rethinking Regulation, Autonomy and
Healthcare Decision-Making (Springer, 2018).
Rupert Dunbar is a Senior Lecturer at Kingston University London, UK. Dr
Dunbar’s research concerns the relationship between international and domes-
tic law (especially the EU), debating and testing notions of justice, and legal
certainty in domestic courts’ case law. He also has an interest in the EU inter-
nal market and the accommodation of new initiatives/technologies within it.
Mikołaj Firlej is a DPhil student in Socio-Legal Studies at the Faculty of Law,
University of Oxford, UK. In the past, Mikołaj has helped to establish a new
academic centre at Oxford focused on the impact of technologies on global
affairs. Earlier, Mikołaj earned both an MPP and an MPhil in Socio-Legal
Studies from the University of Oxford and an MA from the University of
Warsaw, Poland. His research interests include political philosophy, tech-
nology policy, and security studies.
Dominika Iwan is a Lecturer in Law at the University of Silesia in Katowice,
Poland. Dr Iwan’s research is on the impact of neural networks on interna-
tional human rights law and international humanitarian law. Moreover, she
is currently leading a research project on the ‘Prohibition of Discrimination
in Algorithmic Decision-Making’, awarded funding by the National Science
Centre of Poland. Previously she worked with the Polish Commissioner for
Human Rights and interned with the Organisation for Aid to Refugees in
Prague, Czech Republic.
Jeiel D. Joseph is a PhD candidate at the Centre for Financial and Corporate
Integrity at Coventry University, UK. Jeiel’s research focuses on the appli-
cation of proven artificial intelligence techniques, such as machine learning,
automated facial recognition, language processing, and blockchain technol-
ogy, in Citizenship by Investment Programmes for mitigation of corporate
and immigration risks, such as money laundering, tax evasion, and border
security.
Kinga Kolasa-Sokołowska is an aviation insurance professional. She comes
from a varied background that includes legal, broking, and academic experi-
ence. She is Client Advisor, Aviation & Aerospace, Asia, at Marsh Broker
Japan and Lecturer at Rikkyo University in Tokyo, Japan.
Mitja Kovač is a Professor of Civil and Commercial Law at the University
of Ljubljana, Slovenia. Professor Kovač's research is in comparative
contract law and economics, new institutional economics, consumer pro-
tection, contract theory, and competition law and economics. His most
recent publication was on collective action and European public policy
during the COVID-19 pandemic.
Luo Li is an Assistant Professor in Law at Coventry Law School, UK. Dr Li's
research focuses on digital transformation and advanced technologies,
including the application of artificial intelligence in creative industries and
its implications for intellectual property law and regulations. Prior to her
academic post, she worked at the Law and Legislative Advice Division of
the World Intellectual Property Organization.
Natalia Menéndez González is a PhD researcher at the Law Department at
the European University Institute in Florence, Italy. Natalia’s research con-
cerns the legal implications of facial recognition technology (FRT) empow-
ered by AI. She is concurrently working on the use of FRT by Facebook
and the privacy impact of FRT usage during the COVID-19 health emer-
gency. Natalia has previously worked at different law firms in Spain, and is
a member of the Lucena Bar Association.
Evgeny Pashentsev is a Leading Researcher at the Diplomatic Academy
of the Ministry of Foreign Affairs of the Russian Federation in Moscow,
Russia. He is an author of more than 150 publications on international
security, AI, and psychological warfare issues.
Gauri Sinha is a Lecturer in Law at Royal Holloway, University of London,
UK. Gauri’s research concerns financial crime, cybercrime, financial regula-
tion, and corporate accountability. She consults on financial crime, and her
recent project with the Government of Jamaica resulted in amendments to
the domestic legislation around financial sanctions and asset freezing. She
previously worked at PricewaterhouseCoopers as the Subject Matter Expert
in financial crime and at the World Bank as a financial crime specialist.
Maciej M. Sokołowski (PhD, DSc) is an Assistant Professor at the Faculty
of Law and Administration, University of Warsaw, Poland, who has been a
long-term scholar at Japanese universities (Keio, Todai, Meiji). Dr hab.
Sokołowski’s expertise is in energy law. He is the author of over 70 papers
and reports on the energy sector. He is a fellow at the Sustainability College
Bruges in Belgium, as well as a member of the SI Network for Future
Global Leaders, the Polish Electricity Association, the Australian Network
for Japanese Law, and the Japan Association of EU Studies. He was awarded
the Swiss Government Excellence scholarship, the Swedish Institute Visby
Program scholarship, the municipal government of Shanghai scholarship,
and the scholarship for outstanding young scientists offered by the
Minister of Education and Science of Poland.
Tsvetelina Van Benthem is a DPhil candidate in Public International Law at
Merton College, University of Oxford, UK. She is also a research officer at the
Oxford Institute for Ethics, Law and Armed Conflict, working on the interna-
tional law protections against cyber operations targeting the healthcare sector.
Preface

Artificial intelligence (AI) has augmented human activities and unlocked opportunities in many sectors of the economy. It is used for data management
and analysis, decision-making, and many other aspects. As with most rapidly
advancing technologies, the law is often playing catch-up, so the study of how
the law interacts with AI is more critical now than ever before. Therefore, the
aim of this book is to provide a detailed qualitative exploration into the legal
and regulatory aspects of AI in industry.
Professor John McCarthy is credited by many with coining the term ‘artifi-
cial intelligence’ at a Dartmouth College workshop in 1956.1 He described the
process of creating AI as ‘that of making a machine behave in ways that would
be called intelligent if a human were so behaving’.2 This term has been through
a significant evolution since then, and yet there is a lack of consensus as to what
AI is. There are dozens of definitions and typologies of AI in different contexts.
Textbook definitions vary and consider different qualities, standards, and tech-
nicalities. It can be argued that it is futile to create a ‘universal’ definition of
this term, particularly because it would first require a proper definition of the
term ‘intelligence’, which has many connotations.3 Moreover, there are many
different types and branches of AI, depending on their levels of sophistication
and autonomy. However, in its broadest sense, AI can be defined as the ability
of a computer system to deal with ambiguity4 and to provide automation of intelligent

1 For the evolution of the term and some of the technologies see, for example: W. Barfield and U.
Pagallo, Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018), 7–38; M. Corrales,
M. Fenwick, N. Forgo (ed.), Robotics, AI and the Future of Law (Springer 2018), 1–7.
2 J. Kaplan, Artificial Intelligence: What Everyone Needs to Know (OUP 2016), 1.
3 For instance, Legg and Hutter identified over 70 definitions of this term and its variations. See: S. Legg
and M. Hutter, 'A Collection of Definitions of Intelligence' (arXiv, Technical Report IDSIA-07-07, 15
June 2007) <https://arxiv.org/pdf/0706.3639.pdf> accessed 27 May 2021.
4 This definition is based on Microsoft’s publication in which it is further specified that AI is ‘making
predictions using previously gathered data, and learning from errors in those predictions in order to
generate newer, more accurate predictions about how to behave in the future'. See T. van Kraay, 'What
is Artificial Intelligence?’ (Microsoft Azure, 9 August 2018) <https://azure.microsoft.com/en-gb/blog/
what-is-artificial-intelligence/#:~:text=%E2%80%9CThe%20ability%20of%20a%20digital,displayed
%20by%20humans.%E2%80%9D%20%E2%80%93%20Wikipedia> accessed 27 May 2021.
behaviour.5 This definition seems particularly appropriate to this book in that it
focuses on what AI systems are doing nowadays in the different sectors covered
in this publication.
This book is divided into two main parts. Part I includes sectors with hori-
zontal AI applications, and Part II concerns sectors with vertical AI applica-
tions. Accordingly, Part I focuses on AI applications that are generally more
acquirable because the technology can fit into different disciplines and fulfil
a variety of similar needs. The sectors that use it face similar types of legal
problems, including data protection, transparency, explainability, accountabil-
ity, and others. On the other hand, Part II focuses on AI that is applied to a
specific problem in a specific industry and is highly optimised for that industry.
A vertical AI is therefore designed to solve very targeted needs in a particular
sector.
Any work that sought to cover all the sectors using AI would run to many
volumes. Therefore, this book is not intended to be comprehensive, and the
industries covered in this book do not encompass every industry currently
using AI. Nevertheless, it provides a solid overview of how AI is being used
and regulated across a wide range of sectors, including aviation, energy, gov-
ernment, healthcare, legal, maritime, military, music, security, supply chain,
and others. Wherever possible, it includes the impact of the COVID-19 pan-
demic on the use of AI in industry and on corresponding regulatory matters.
Finally, it offers a set of recommendations for optimal regulatory interventions.
This book collects the efforts of a diverse group of scholars and practitioners,
each of whom is credited in the list of contributors.
Damian M. Bielicki

5 The second part of this definition comes from the publication by Luger and Stubblefield who defined
this term as ‘the branch of computer science that is concerned with the automation of intelligent
behaviour’. See: G.F. Luger and W.A. Stubblefield, Artificial Intelligence: Structures and Strategies for Com-
plex Problem Solving (6th ed., Pearson 2008), 1.
List of acronyms and abbreviations

AEVA Automated and Electric Vehicles Act 2018 (UK)


AFR Automated Facial Recognition
AGI Artificial General Intelligence
AI Artificial Intelligence
APA US Administrative Procedure Act
ARSIWA Articles on Responsibility of States for Internationally
Wrongful Acts
ATM Air Traffic Management
AWS Autonomous Weapons Systems
BIPA Biometric Information Privacy Act
BSB UK Bar Standards Board
CAC Cyberspace Administration of China
CBI Citizenship by Investment
CCBE Council of Bars and Law Societies of Europe
CCW Certain Conventional Weapons
CDPA Copyright, Designs and Patents Act 1988 (UK)
CEPEJ European Commission for the Efficiency of Justice
CIU Citizen by Investment Unit
COMPAS Correctional Offender Management Profiling for
Alternative Sanctions
CRT Civil Resolution Tribunal
DoD U.S. Department of Defense
DPA Data Protection Authorities
DSB US Defense Science Board
EASA European Union Aviation Safety Agency
ECHR European Convention on Human Rights
EDPB European Data Protection Board
ENISA European Network and Information Security Agency
EU European Union
EUROCONTROL European Organisation for the Safety of Air
Navigation
EUROPOL European Union Agency for Law Enforcement
Cooperation
FCA US Foreign Claims Act
FERMA Federation of European Risk Management
Associations
FRT Facial Recognition Technology
GDPR General Data Protection Regulation
GGE Group of Governmental Experts
HART Harm Assessment Risk Tool
IATA International Air Transport Association
ICAO International Civil Aviation Organisation
ICJ International Court of Justice
ICRC International Committee of the Red Cross
ICTY International Criminal Tribunal for the Former
Yugoslavia
IEEE Institute of Electrical and Electronics Engineers
Standards Association
IHL International Humanitarian Law
IoT Internet of Things
IPS International psychological security
ISO International Organisation for Standardisation
ISPS International Ship and Port Facility Security
IT Information Technology
LAWS Lethal Autonomous Weapons Systems
LOAC Law of Armed Conflict
LRASM Long-Range Anti-Ship Missile
MOST Ministry of Science and Technology of the People’s
Republic of China
MUAI Malicious Use of Artificial Intelligence
NATO North Atlantic Treaty Organization
NHS UK National Health Service
OECD Organisation for Economic Co-operation and
Development
PS Psychological security
PSA Public Safety Assessment
PTRA Pre-Trial Risk Assessment Instrument
PV Photovoltaic
SARP Standards and Recommended Practices
SIPRI Stockholm International Peace Research Institute
SRA UK Solicitors Regulation Authority
STOA European Parliament’s Science and Technology
Options Assessment
SUPACE Supreme Court (of India) Portal for Assistance in
Courts Efficiency
UAV Unmanned aerial vehicle
UN United Nations
USAF US Air Force
WIPO World Intellectual Property Organisation
PART I

Horizontal AI applications
1 Artificial intelligence and its
regulation in the European Union
Gauri Sinha and Rupert Dunbar

Background
There is an acknowledgement that the European Union (EU) is behind key
competitors in the race to attract, create and nurture AI companies and invest-
ment.1 On this basis, the European Commission has published a White Paper,
and a public consultation process on this has now been undertaken.2 The plan
is for legislative proposals to be published imminently.3
With some uncertainty concerning the precise future regulation of AI in
the EU, this chapter seeks nonetheless to establish from the White Paper and
consultation process what indication there is for companies on key points of
concern: transparency, explainability and accountability. It contextualises these
concepts and highlights the challenges inherent in them. These, of course, are
challenges common to all jurisdictions.
Problems more specific to the EU also remain, not least an underlying
scepticism concerning AI across not just the EU citizenry but also public
authorities and business, with 90% of the consultation respondents expressing
concern that AI may breach fundamental rights.4 Two further developments
add to the complexity: the European Parliament has published its own positions
on AI, and national self-interest has surfaced both in the consultation and in
the course of the COVID-19 pandemic, providing a far from unified picture.
The reality is that regulation on the ground is likely to be years away.
Given that action through legislation (‘positive harmonisation’) is far from
immediate, it is important to draw attention to the Court of Justice of the
European Union’s probable role in developing this area through its case law

1 White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, COM (2020)
65 Final (Brussels, 19 February 2020), 4.
2 Public Consultation on AI White Paper: Final Report, European Commission, DG for Communica-
tions Networks, Content and Technology (November 2020).
3 European Parliament (Press Releases), ‘Parliament leads the way on first set of EU rules for Arti-
ficial Intelligence’ (20 October 2020) <https://www.europarl.europa.eu/news/en/press-room
/20201016IPR89544/parliament-leads-the-way-on-first-set-of-eu-rules-for-artificial-intelligence>
accessed 27 May 2021.
4 Public Consultation on AI (n 2), 7.

DOI: 10.4324/9781003246503-2
(so-called ‘negative harmonisation’). Overall, companies will be encouraged
by the Court’s mood music, even if the specific tune is difficult to identify at
this stage.
Ultimately, concerning the future of regulation for AI in the EU, it must
be said that whilst there is ambition to achieve much in the field, the collec-
tive vision and philosophy for what AI can do, and how it can benefit society,
remains fragmented.

Introduction
The White Paper defines AI as ‘a collection of technologies that combine
data, algorithms and computing power’.5 Its applications are multiple and the
Commission recognises that, in a positive light, it could improve healthcare,
help combat climate change, increase the efficiency of production and improve
security (amongst others).6 But in order to achieve these ambitions it needs to
overcome certain challenges, including the fact that citizens are anxious con-
cerning AI’s capacity to do them harm, both intentionally and unintentionally,
and that businesses are seeking legal certainty. A unified way forward is sought.
How can this be achieved? The answer is not straightforward and depends
on a number of subjective factors that have their roots in trust and ethics. For
the EU there is an increasing realisation that making decisions quickly or more
efficiently is not what defines the success of AI. What is more important is how
the benefits are realised. Are we ready to leave decisions that impact human
lives in the hands of machines? Perhaps the answer to this ‘readiness’ lies some-
where in the corridors of trust and ethics, and it is to these concepts that the
chapter now turns.
In April 2019, the European Commission High Level Expert Group on AI
adopted the Ethics Guidelines for Trustworthy AI, stressing that human beings
will only be able to confidently and fully reap the benefits of AI if they can trust
the technology.7 The broad principle that AI initiatives should not be realised
if they entail compromising ethics is abundantly clear.8 Compliance with
ethics, however, raises complexities of its own that merit discussion. The 'creative'
interpretation of ethical principles should not be used as a shield to achieve
‘box-ticking’ compliance where corporations only seem to be complying.9
The relationship between ethics and law adds a further layer of intricacy. As
highlighted by the European Data Protection Supervisor, ethics in the EU is

5 White Paper (n 1), 2. See also A Definition of AI: Main Capabilities and Disciplines, High-Level Expert
Group on Artificial Intelligence (Independent), European Commission B-1049 (Brussels, 8 April 2019).
6 White Paper (n 1), 1.
7 Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence (Independent),
European Commission B-1049 (Brussels, 8 April 2019).
8 S.Tsakiridi,‘AI ethics in the post-GDPR world: Part 1’ (2020) P. & D.P. 20(6), 13–15.
9 Compliance in several other areas, for example, financial crime laws, has been reduced to ‘box-ticking’
where organisations seem to be complying to avoid regulatory action.
not conceived as an alternative to compliance with the law, but as the under-
pinning values that protect human dignity and freedom.10
What follows in this chapter is an attempt to unravel the complexities
behind three ethical principles that form the cornerstone of an effective AI
model—transparency, explainability and accountability.

Transparency
In the context of AI, transparency indicates the capability to describe, inspect
and reproduce the mechanisms through which AI systems make decisions.
Transparency, however, is closely linked to trust, which is also about being
explicit and open about choices and decisions concerning data sources and
development processes and stakeholders.11 In essence, it would mean having
a complete view of the system on three levels.12 The first is the implementation
level, at which the AI model acts on the data that it is fed to produce a
known output, including the technical principles of the model and the associated
parameters. The Commission considers this the standard 'white-box'
model, in contrast to the ‘black-box’ model where this principle is unknown.
Second, at the specification level, all the information that resulted in the imple-
mentation, such as objectives, tasks and relevant datasets, should be open and
known. The third level is interpretability, which represents the understanding
of the underlying mechanisms of the model. For example, what are the logi-
cal principles behind the processing of data and what is the rationale behind a
certain output? The Commission believes that these questions are the hardest
to answer and the third level of transparency is not achieved in current AI sys-
tems.13 This is particularly complex as the third level of transparency is closely
linked to fairness in decisions, often impacting human lives. Fairness itself is a
subjective and contextual concept, influenced by multiple social, cultural and
legal factors.
With the level of subjectivity present, it is crucial to remember that AI
models are not designed by machines and potentially reflect the biases and
prejudices of the designers choosing the features, metrics and structures of
a model.14 The concept of trust also needs further clarification to navigate
through the effectiveness of AI. Trust is primarily used to speak about the trust
or distrust of individuals and institutions who are responsible for developing,
deploying or using AI. However, trust could also be directed at those who are

10 European Data Protection Supervisor, 'Ethics' <https://edps.europa.eu/data-protection/our-work/


ethics_en> accessed 27 May 2021.
11 V. Dignum, Responsible Artificial Intelligence, in Artificial Intelligence: Foundations, Theory, and Algorithms
(Springer 2019), 54.
12 R. Harmon, H. Junklewitz, I. Sanchez, 'Robustness and Explainability of Artificial Intelligence' EUR
30040 EN (2020) EU Science Hub, 11.
13 ibid., 12.
14 Tsakiridi (n 8), 13–15.
responsible for regulating AI, who are overseeing its use or providing condi-
tions for its development. Having all of these stakeholders involved in decisions
that impact human beings is perhaps the fairest way to build trust, but this may
not be possible due to the number of stakeholders or the complexity of the AI
systems. The outcome then becomes almost as important as the process—but
are we able to ensure that AI systems are transparent to the extent they can be?
Opacity in machine learning, the so-called ‘black-box’ algorithm, is often
mentioned as one of the primary hurdles to achieving transparency in AI.15
The primary aim of AI algorithms is to identify patterns or similarities in a
particular dataset. As such, the correlations that a particular dataset presents
(for example, the correlation between geographic area and race) are also learnt
by the algorithm. There is a risk that these correlations will lead to bias and
unfair outcomes due to preconceived human notions. For example, there have
been many issues regarding facial recognition models trained on limited datasets
reflecting the bias of model developers, and unfair outcomes of loan decision
models that have historically used biased datasets to determine availability of
credit.16
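
To make the proxy problem concrete, the following minimal sketch (in Python, using synthetic data; the feature names, group variable, thresholds and figures are hypothetical and purely illustrative, not drawn from any system discussed in this chapter) shows how a model that never sees a protected attribute can still reproduce its effect through a correlated feature such as a postcode indicator learnt from historically biased data.

```python
import numpy as np

# Synthetic, purely illustrative data: 'group' is a protected attribute that the
# model never sees; 'postcode_area' is correlated with it (a proxy feature).
rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                                     # protected attribute (hidden from model)
postcode_area = np.where(rng.random(n) < 0.8, group, 1 - group)   # 80% correlated proxy
income = rng.normal(30_000 + 5_000 * group, 8_000, n)             # historical disparity

# Historical outcomes encode the disparity: repayment depends on income.
repaid = (income + rng.normal(0, 5_000, n)) > 32_000

# Train a simple logistic regression on (income, postcode_area) only.
X = np.column_stack([income / 10_000, postcode_area, np.ones(n)])
w = np.zeros(3)
for _ in range(2_000):                                            # plain gradient descent
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - repaid) / n

approved = (1 / (1 + np.exp(-X @ w))) > 0.5
for g in (0, 1):
    print(f"approval rate, group {g}: {approved[group == g].mean():.2%}")
# Despite never seeing 'group', the model's approval rates differ between the
# groups, because the postcode proxy and the historically biased outcomes carry
# that signal - precisely the learned correlation described above.
```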
It follows that bias-free decision-making should be the outcome of a trans-
parent AI model. The decision-making process should be open and fair, for
example, rejected loan applications should be able to list objective reasons why
an applicant was rejected. This could be yearly income or the amount of sav-
ings, but not subjective reasons such as race, gender or postcode. Transparent
objective criteria would also result in a fairer process for challenging AI deci-
sions, as AI users will be in a better position to understand unexpected AI deci-
sions. To achieve the fairest outcome, it should be possible to demonstrate that
both the design and implementation processes that have gone into a particular
decision are ethically fair and trustworthy.17
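
One way to operationalise the transparent, objective criteria described above is to return 'reason codes' listing which requirements an applicant did not meet. The sketch below is a minimal illustration in Python; the criteria, thresholds and field names are hypothetical examples rather than those of any real credit model.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str            # objective, non-protected criterion
    threshold: float
    description: str

# Hypothetical, illustrative criteria only; race, gender and postcode are deliberately absent.
CRITERIA = [
    Criterion("yearly_income", 25_000, "yearly income of at least 25,000"),
    Criterion("savings", 5_000, "savings of at least 5,000"),
    Criterion("years_employed", 2, "at least 2 years in current employment"),
]

def decide_with_reasons(applicant: dict) -> tuple[bool, list[str]]:
    """Approve only if every criterion is met; otherwise list the unmet criteria."""
    unmet = [c.description for c in CRITERIA if applicant.get(c.name, 0) < c.threshold]
    return (len(unmet) == 0, unmet)

approved, reasons = decide_with_reasons(
    {"yearly_income": 22_000, "savings": 9_000, "years_employed": 3})
print("approved" if approved else "rejected; unmet criteria: " + "; ".join(reasons))
```

Because the listed reasons are objective and non-protected, an applicant has a concrete basis on which to understand and, if necessary, challenge the decision.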
Labelling an AI model as effective without achieving the level of transpar-
ency discussed above carries with it an inherent risk at the user level. It fosters
‘ignorance’, as the users are not aware of their actions when they are letting AI
make decisions. This could lead to an overreliance on AI, which potentially
may not align with societal ideas of morality.18 For AI to succeed, the object of
trust needs to be made clearer, along with identifying various trust relationships
with different parties.19

15 Dignum (n 11), 59.


16 R. Schmelzer, 'Towards a More Transparent AI' (Forbes, 23 May 2020)
<https://www.forbes.com/sites/cognitiveworld/2020/05/23/towards-a-more-transparent-ai>
accessed 27 May 2021.
17 S. Tsakiridi, 'AI Ethics in the Post-GDPR World: Part 2' (2020) P. & D.P. 20(7), 6–10, 6.
18 When the ‘driver’ of a self-driving car, for example, uses the car but is completely ignorant about how
the car makes decisions, any misjudgements by the car may lead to the perception of immorality on
the part of the driver.
19 M. Sutrop, 'Should we trust Artificial Intelligence?' (2019) TRAMES 23(73/68) 4, 499–522, 512.
Accordingly, the White Paper acknowledges challenges in verifying com-
pliance, and contemplates placing some onus on companies to diligently retain
training data (and explain why such data was selected), to document the pro-
gramming of the algorithm and to record any relevant processes/techniques
when designing or testing the system.20 The idea is not just that this could
create a benchmark against which to judge compliance, but also that it could
mitigate risk and improve the process of AI design by integrating better prac-
tices in transparency from the outset. This would still fall short of the third level
of transparency, indicated above.
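
In practical terms, the record-keeping contemplated by the White Paper could take the form of structured documentation retained alongside the model. The sketch below is a hypothetical illustration in Python; the fields and values are invented solely to mirror the items mentioned above (retained training data and the rationale for selecting it, the programming of the algorithm, and the design and testing processes), not a prescribed format.

```python
import json
from datetime import date

# Hypothetical documentation record mirroring the items the White Paper mentions.
model_record = {
    "model_name": "loan-screening-prototype",          # illustrative name
    "version": "0.1",
    "training_data": {
        "source": "internal loan applications, 2015-2020",
        "selection_rationale": "most recent complete records; protected attributes removed",
        "retention_location": "s3://example-bucket/training-snapshots/2021-05/",  # placeholder path
    },
    "algorithm": {
        "type": "logistic regression",
        "features": ["yearly_income", "savings", "years_employed"],
        "hyperparameters": {"learning_rate": 0.1, "iterations": 2000},
    },
    "design_and_testing": [
        "bias audit comparing approval rates across demographic groups",
        "hold-out accuracy evaluation",
        "sign-off by model risk committee",
    ],
    "recorded_on": date.today().isoformat(),
}

with open("model_record.json", "w") as f:
    json.dump(model_record, f, indent=2)   # retained as a benchmark against which to judge compliance
```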

Explainability
Whilst transparency is about understanding how an AI model makes its deci-
sions, explainability goes a step further. It adds the need for justification, which
in the AI context would require an explanation as to why a particular decision
was reached. For example, if a loan application is rejected, a consumer may
rightly ask for the reasons or justification behind the rejection. Resolving these
issues definitively is especially pressing as there is ongoing academic debate
concerning the current extent of legal requirements under the General Data
Protection Regulation ('GDPR') in this regard.21
Explainability has also been linked to responsibility, in that AI experts should
know what they are doing, and should be able and willing to communicate,
explain and give reasons for the impact that AI models are having on human
and non-human subjects. Reading between the lines, morality, again, is closely
linked, including the obligation to gain greater awareness of unintended conse-
quences and the moral significance of what the models do. This awareness has
been seen as an essential requirement for the effective working of AI.22
Explainability could take various forms, the most obvious being the justifi-
cation of key features that come into play in AI decisions. Conversely, explain-
ing the requirements that are not met may also help in explaining a decision,
particularly in customer-facing scenarios such as loan applications, where it
might be explained why the customer has not met certain criteria to be success-
ful.23 Regardless of the justification provided, the risk of subjectivity reflecting
human biases still creeps in. It is the Commission’s view that bias and dis-
crimination are inherent risks of any societal or economic activity that involves
human decision-making. However, AI amplifies this risk by impacting a large

20 White Paper (n 1), 18–19.


21 See M. Brkan and G. Bonnet, 'Legal and Technical Feasibility of the GDPR's Quest for
Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas' (2020)
11(1) EJRR, 18–50.
22 M. Coeckelbergh, 'Artificial Intelligence, Responsibility Attribution, and a Relational Justification
of Explainability' (2020) 26 Science and Engineering Ethics 2051–2068, 2066.
23 Harmon, Junklewitz, Sanchez (n 12), 13.
group of people who may be directly affected by such discrimination and bias.
This is particularly true when AI ‘learns’ from the data that is fed into it.24
How, then, do we ensure that an AI model is fair in its decision-making
process? The answer is not a convincing one; however, GDPR offers some
respite, at least with respect to data processing.25 GDPR has been specifically
put in place to ensure the protection of personal data in the context of rapid
technological developments. It provides that personal data should be processed
fairly, which implies an analysis of whether such processing is discriminatory,
detrimental or misleading to the impacted individuals.26 This means that the
AI controller should consider the likely impact of its use of AI on a recur-
ring basis and continuously reassess it, bringing us back to the significance of
explainability.27
GDPR upholds the principles of transparency and explainability by includ-
ing the data subject’s rights to information and access to personal data.28 This
means that the developers of AI systems have a legal duty to build in safeguards
that guarantee the upholding of data protection principles. However, what
constitutes personal data is not exhaustively defined, which may make it difficult to
determine the extent to which AI can be used for data processing purposes.29 If
AI ‘learns’ a pattern of data processing that falls outside the realm of GDPR, it
runs the risk of discrimination and bias, as mentioned previously. This makes
it crucial that this definitional gap around personal data is addressed at the development
stage.30
On the surface, GDPR’s prohibition of the processing of special categories
of personal data by solely automated means can be seen to provide strong pro-
tection against discrimination by AI models.31 In reality, however, the special
categories of personal data do not include colour, language, property, member-
ship of a minority and birth, which are recognised as grounds of discrimina-
tion in the EU Charter.32 This constitutes a potential gap in the prevention

24 White Paper (n 1), 11.


25 As long as a certain activity of processing personal data falls within the material scope of the Regula-
tion (EU) 2016/679 on the protection of natural persons with regard to the processing of personal
data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Pro-
tection Regulation) [2016] OJ L119, art. 2, the full body of its provisions apply.
26 Ibid., art. 5.
27 Tsakiridi (n 8), 13–15.
28 Regulation (EU) 2016/679 (n 25), art. 5, para. 1, letter (a); art 12, para 1; art 15, para 1.
29 F. Ufert, ‘AI Regulation Through the Lens of Fundamental Rights: How Well Does the GDPR
Address the Challenges Posed by AI?' (2020) 5(2) European Papers 1087–1097, 1095.
30 ibid.
31 Regulation (EU) 2016/679 (n 25), art. 9, para. 1 – Processing of personal data revealing racial or
ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and
the processing of genetic data, biometric data for the purpose of uniquely identifying a natural
person, data concerning health or data concerning a natural person’s sex life or sexual orientation
shall be prohibited.
32 Charter of Fundamental Rights of the European Union [2012] OJ C326/391, arts. 21 and 22.
of discrimination by the processing of personal data.33 GDPR also prohibits
profiling, which is a form of processing where AI systems categorise individuals
based on their probability of belonging to a certain group, and not as individuals
in their own right.34 However, the data subject’s specific consent constitutes an
exception to the prohibition, again leaving a margin of inefficiency to prevent
discrimination.35
In its White Paper, the Commission admits that it is crucial to review the
current legislative framework to reflect recent technological developments
and to fully consider the human and ethical dimensions of AI.36 Opacity as
an inherent feature of AI makes the implementation of any legislation around
it more challenging. From a consumer’s point of view, the level of safety and
protection of fundamental rights should be no different to a product that does
not rely on AI for its development or deployment. AI is also a rapidly evolving
landscape, and any current legislation must quickly adapt and evolve for the
regulatory framework to be effective.37
Current proposals that emerge seem to fall short concerning explainability
(which is to be expected given that full transparency has also not been achieved).
Ultimately there is the suggestion that companies could be required to provide
‘clear information … as to the AI system’s capabilities and limitations’ and that
‘citizens should be clearly informed when they are interacting with an AI system
and not a human being’.38 However, these seem to be band-aids to mitigate
anticipated shortcomings of explainability rather than measures providing for it.

Accountability
Transparency, explainability and accountability may often be used as syno-
nyms, but it is important to highlight the fundamental differences between
them. Whilst transparency and explainability refer to the explanation of deci-
sions before the AI model has taken its decision, accountability refers to the
ability to explain the role of an AI model after the action is done, or not done.39
In simple terms, the goal is to determine where the liability lies.

33 Ufert (n 29), 1096.


34 Regulation (EU) 2016/679 (n 25), art. 22, para 1—The data subject shall have the right not to be
subject to a decision based solely on automated processing, including profiling, which produces legal
effects concerning him or her or similarly significantly affects him or her.
35 Ufert (n 29), 1096.
36 White Paper (n 1): An extensive body of existing EU product safety and liability legislation, including sector-
specific rules, further complemented by national legislation, is relevant and potentially applicable to a number of
emerging AI applications.As regards the protection of fundamental rights and consumer rights, the EU legislative
framework includes legislation such as the Race Equality Directive, the Directive on equal treatment in employ-
ment and occupation, the Directives on equal treatment between men and women in relation to employment and
access to goods and services and a number of consumer protection rules.
37 ibid., 13–17.
38 ibid., 20.
39 Dignum (n 11), 54.
Unsurprisingly there are few hard and fast rules on liability. However, two
main issues need to be considered. First, there is a question of how responsibil-
ity will be distributed between the different AI players involved. Many actors
are involved in the lifecycle of an AI system. These include the developer, the
deployer (the person who uses an AI-equipped product or service) and poten-
tially others (producer, distributor or importer, service provider, professional
or private user).
In the Commission’s view, in a future regulatory framework, any liability
should be matched to the party that is best placed to address any potential
risks.40 For example, whilst the developers of AI may be best placed to address
risks arising from the development phase, their ability to control risks during
the use phase may be more limited. In that case, the deployer should be subject
to the relevant obligation. This is without prejudice to the question of which
party should be liable for any damage caused, for the purpose of liability to
end-users or other parties suffering harm and ensuring effective access to justice.
Under EU product liability law, liability for defective products is attributed to
the producer, without prejudice to national laws, which may also allow recov-
ery from other parties.41 The concept of a defective product comes into play
when it does not provide the safety that a person is entitled to expect, taking all
circumstances into account, including presentation, the use to which it could
reasonably be expected to be put, and the time when the product came into
circulation.42 This is of limited use in the AI context because it is not unified,
and AI is capable of doing harm without being ‘defective’.
The Commission’s approach could be seen as what has been referred to as
the ‘control’ condition. An AI agent should only be responsible for an action
or decision if they had control over their actions.43 But, what if the agent is
not human and the responsibility lies with a machine? There are extensive
debates on this issue of whether technology can be a responsible agent.44 Since
the nature of AI models involves processing data and ‘learning’ over a period
of time, attributing complete responsibility to a human may seem extreme.
At the same time, it cannot be ignored that humans are behind the data that
is fed into AI models, in the first instance at least. Even if (some) AIs can act
or decide, they lack the capacity for moral agency, and so the responsibility
for their actions remains with the human agents who develop and use the
technology.45 To overcome these fine lines between human and AI interven-
tions, a hybrid approach of ‘distributed agency’ has been suggested, entailing
distributed responsibility whereby all parties are held accountable for their role

40 White Paper (n 1), 22.


41 Council Directive 85/374/EEC on the Approximation of the Laws, Regulations and Administrative
Provisions of the Member States Concerning Liability for Defective Products [1985] OJ L210/29.
42 ibid., art. 6.
43 Coeckelbergh (n 22), 2054.
44 These debates are beyond the scope of this chapter, see more in ibid., 2051–2068.
45 ibid., 2055.
in the outcomes and actions of an AI application on the basis of a common
moral framework.46 But this concept carries a risk in implementation, in that a
party could be found ethically but not legally accountable. Even if this approach
is adopted, the reality remains that machines cannot be attributed responsibility
(and therefore also not irresponsibility). Humans, however, can be responsible
and therefore should be held responsible for what they do and decide when
developing or using AI.47
Although GDPR applies to AI systems that process personal data, the regulatory
vacuum left by its inability to address the unique challenges posed by AI is
increasingly visible. Until legislation is introduced to address these challenges,
clarity can be provided through regulatory guidance to effectively implement
certain rules. An efficient regulatory mechanism should focus on how to mini-
mise any potential harm arising out of AI decisions, in particular the harms that
are likely to cause the greatest impact.48 Maximum stakeholder participation in
governance structures is also key to ensure that from consumers to businesses,
researchers and civil society, all the relevant parties should be consulted on
how the regulatory framework could evolve.49
The difficulty in determining accountability has led to the existence of an
‘accountability gap’.50 The Commission acknowledges that ‘[m]any actors
are involved’ in an AI life cycle, including developer, producer, importer/
exporter, deployer and user.51 But it then offers the following example:

developers of AI may be best placed to address risks arising from the devel-
opment phase, [but] their ability to control risks during the use phase
may be more limited. In that case, the deployer should be subject to the
relevant obligation.52

This is far from developed and appears to anticipate that some potential influ-
ence is not sufficient to carry forward any obligation for the developer into
the deployment stage. It may be preferable to treat the endeavour as a shared
enterprise, rather than a baton (or hot potato) to be passed. This is the approach
for issues of liability in the proposals (where a developer could theoretically be
accountable for the harm caused in the given example). This, it is submitted,
confuses matters by envisioning single-party obligations alongside multiple-party
liabilities. This is in part attributable to the fact that issues of liability, as dis-
cussed above, also rest within Member State rules.

46 Tsakiridi (n 17), 8.
47 Coeckelbergh (n 22), 2055.
48 White Paper (n 1), 10.
49 Ibid., 25.
50 Tsakiridi (n 17), 9.
51 White Paper (n 1), 22.
52 ibid.
The level of human oversight which will be required is not yet fully evident.
The White Paper explores—non-exhaustively—review prior to deployment,
review after AI action, monitoring throughout or integrating operational con-
straints (e.g. automatic stop).53 The EU Parliament has been clearer in stating
that for high-risk AI applications human oversight should be possible at all
stages.54

Investment and co-ordination


As indicated above, shortcomings in attracting investment and developing
companies within the EU have led to the need for action at the EU level. The
figures that the Commission relies upon showed €3.2 billion was invested in
AI in Europe in 2016, compared to around €12.1 billion in North America
and €6.5 billion in Asia. These numbers will be further impacted by Brexit:
in 2018 it was recorded that the UK accounted for one-third of all AI
firms in the EU.55 This, of course, is not merely a significant loss but simultane-
ously the instant emergence of a strong competitor.
Accordingly, it is not surprising that proposals to attract €20 billion in fund-
ing for AI in the EU have been welcomed by industry, with particular support
for investment in skills,56 innovation and research,57 and working with Member
States.58
This leads to an interesting wrinkle, which comes in the shape of reluctance
reflected in the consultation concerning the Commission’s flagship ‘lighthouse’
project. The Commission asserted there was a ‘need’ for a ‘centre of research,
innovation and expertise that would coordinate these [national] efforts and
be a world reference of excellence in AI and that can attract investments and
the best talents in the field’.59 This was suggested as a means to overcome
the ‘current fragmented landscape of centres of competence with none reach-
ing the scale necessary to compete with the leading institutes globally’.60 In
2019 Forbes recorded that 32 of the 50 most promising AI companies were
based in California (reflecting a strong grouping around Silicon Valley).61

53 ibid., 21.
54 European Parliament (n 3).
55 JRC Technical Reports, ‘AI Watch TES analysis of AI Worldwide Ecosystem in 2009-2018’ EUR
30109 EN (2020) EU Science Hub, 5.
56 90% considered this very important, Public Consultation on AI (n 2), 5.
57 88% considered this very important, ibid.
58 87% considered this very important, ibid.
59 White Paper (n 1), 6.
60 ibid., 5.
61 J. D’Onfro, ‘AI 50: America’s Most Promising Artificial Intelligence Companies’ (Forbes, 17 Sep-
tember 2019) <https://www.forbes.com/sites/jilliandonfro/2019/09/17/ai-50-americas-most
-promising-artificial-intelligence-companies/?sh=735e2a86565c> accessed 27 May 2021—The
remaining breakdown is Massachusetts (5) (Boston (4) and Cambridge (1)); New York City, NY (5);
Seattle, WA (5); Ann Arbor, MI (1); Austin, TX (1); Chicago, IL (1).
Whilst China’s experience is currently more diffuse, it too plans to develop a
central hub model.62 The reluctance to pool talent and focus innovation in the
EU ‘lighthouse’ (reflected by 86% suggesting it was very important to support
current centres and networks with ‘only’ 64% suggesting a lighthouse scheme
was very important63) was perhaps to be expected given national self-interest,
but it does mean that a more organic and less turbo-charged ascent may result.
There is also a broader concern. One needs to be tentative in drawing
conclusions too soon from the coronavirus pandemic, but it is fair to say that
even between nations and institutions within the EU the experience has not
always been edifying concerning united action. The coordinated day planned
to simultaneously begin mass COVID vaccinations across all EU Member
States, in what the EU termed ‘a touching moment of unity’, was undermined
by some nations acting unilaterally in advance.64 The Commission’s threat to
invoke Article 16 of the Northern Ireland protocol to prevent vaccines from
arriving in the UK via the ‘back door’ was met with incredulity in both the
Northern Ireland and the Republic of Ireland (an EU Member State that was
frustrated by the lack of any consultation).65 The slowness of the Commission-led
vaccination rollout has also been acknowledged by Commission President
von der Leyen and has led some to call for her resignation.66 Any
perceived shortcomings in EU leadership concerning the crisis could have
spill-over effects, impacting on coordinated EU projects.

Scope of future regulation?


A significant caveat is perhaps required concerning the implications of the
White Paper for transparency, explainability and accountability of companies.
The Commission has suggested a differentiation between AI applications based on
risk. Those applications defined as 'high risk' would be required to comply
with rules governing AI. However, those not falling in this category would
currently fall outside of the rules. Such companies would have the option of

62 This would comprise nine cities from the Guangdong province and two Special Administrative
Regions (Hong Kong and Macau), which collectively form the 'Greater Bay Area'—H. Bork, 'Made
in China: The Pearl River Delta Area is experiencing a growth spurt in ambition and tech' (Roland
Berger, 5.12.2019) <https://www.rolandberger.com/en/Insights/Publications/China's-government
-plan-for-its-own-Silicon-Valley.html> accessed 27 May 2021.
63 Public Consultation on AI (n 2), 5.
64 M. Eddy, 'Germany and Hungary Begin Vaccinations a Day Early' (The New York Times, 26 December
2020) <https://www.nytimes.com/2020/12/26/world/Germany-Hungary-vaccinations-begin
.html> accessed 27 May 2021.
65 S. Harrison, ‘EU Vaccine Export Row: Irish Government ”In Talks” with European Commis-
sion: Analysis’ (BBC, 9 February 2021) <https://www.bbc.co.uk/news/world-europe-55986492>
accessed 27 May 2021.
66 A. Zorzut, ‘EU president faces fresh calls to resign over ‘disastrous’ Covid vaccine programme’ (The
New European, 16 April 2021) <https://www.theneweuropean.co.uk/brexit-news/europe-news/vdl
-faces-calls-to-resign-over-vaccine-programme-7903690> accessed 27 May 2021.
voluntarily opting into the scheme and would then receive a ‘quality label’ for
their AI applications as a result.67
‘High risk’ would be defined based on cumulative criteria. The first concerns
the nature of the sector and whether it is one where ‘given the characteristics of
the activities typically undertaken, significant risks can be expected to occur’.68
For the sake of clarity, it was proposed that an exhaustive list be provided69 and
that this be periodically reviewed. The second cumulative criterion requires that
the activity also be one in which the AI’s specific use is such that ‘significant
risks are likely to arise’.70 This, it is suggested, could be based on the impact
likely to result, particularly death, injury and immaterial damage, legal effects
or similar, and outcomes that cannot reasonably be avoided by subjects.71
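
Read literally, the proposed two-limb test is a simple conjunction, as the following sketch illustrates (Python; the sector list reflects only the examples the White Paper gives, and the impact categories paraphrase those suggested above, so both are placeholders rather than a definitive enumeration).

```python
# Illustrative only: the White Paper's proposed exhaustive sector list and the
# precise notion of 'significant risk' are still to be defined in legislation.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "parts of the public sector"}

SIGNIFICANT_IMPACTS = {"death", "injury", "immaterial damage", "legal effects",
                       "unavoidable outcome for the subject"}

def is_high_risk(sector: str, anticipated_impacts: set[str]) -> bool:
    """Both cumulative criteria must be met: a listed sector AND a specific use
    whose likely impact falls within the significant-risk categories."""
    return sector in HIGH_RISK_SECTORS and bool(anticipated_impacts & SIGNIFICANT_IMPACTS)

# An AI scheduling tool used in healthcare with no significant anticipated impact
# would fall outside the mandatory rules (though it could opt in voluntarily).
print(is_high_risk("healthcare", {"minor inconvenience"}))   # False
print(is_high_risk("healthcare", {"injury"}))                # True
```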
It might be thought that such a distinction between high-risk (regulated)
and other (unregulated) applications might form a secure basis around which to
structure future planning (or perhaps a chapter in an edited collection discuss-
ing future planning). However, only 43% of respondents favoured limiting the
regulations to high-risk applications in the proposal.72 There was also a lack of
engagement from the public consultation concerning whether the definition
of ‘risk’ was a satisfactory one.73 Accordingly, this aspect of the proposal seems
not to rest on especially solid ground.

The legislative path


So far, this chapter has discussed the specific challenges presented within the
proposals. At this point it is helpful to reflect on the process which will have to
be undertaken in order for any legislation to come into being.
It is the Commission that holds the sole right to initiate legislation in the
EU.74 However, in spite of the White Paper, the European Parliament has seen
fit to utilise its power to invite the Commission to propose legislation, voting
by a majority to put forward its own legislative initiatives. The Commission
need not follow these, but it must respond and provide reasons should it decide
not to.75
The three initiatives concerned establishing an ethical framework, provid-
ing for liability (and requiring insurance) and intellectual property rights.76 The
proposals concerning an ethical framework largely overlap with the White

67 White Paper (n 1), 24.


68 ibid, 17.
69 Examples given included healthcare, transport, energy and parts of the public sector.
70 White Paper (n 1), 17.
71 ibid.
72 Public Consultation on AI (n 2), 8.
73 ibid, only 37% of respondents answered this question, with only 59% supporting the definition.
74 Consolidated version of the Treaty on the European Union [2012] OJ C326/15, art 17(2).
75 ibid., art. 225.
76 European Parliament (n 3).
Paper, whilst liability and intellectual property proposals go further than that
present in the White Paper. Liability, as has been discussed above, features
significant divergence across Member States and is challenging to harmonise.
Intellectual property did not feature as a focus of the White Paper and here
Parliament suggests that intellectual property rights should only be granted to
humans and that care needs to be taken to protect human innovation and the
EU’s ethical principles.
The use of Parliament’s legislative initiative is not especially common, hav-
ing been used only 29 times between 2009 and 2019.77 The utility of deploy-
ing it when a White Paper is in existence and where a legislative process on the matter—where Parliament itself can make amendments78—is supposedly imminent is somewhat questionable, especially given the relatively low success rate in gaining Commission approval of such Parliamentary initiatives (only 7 of the 29 mentioned above resulted in Commission action).79
The Parliamentary press release titled ‘Parliament leads way on first set of EU
rules for Artificial Intelligence’ is potentially misleading and presents an inter-
esting (and arguably problematic) alternative power base in the process.80
Such institutional positioning, which took place in October 2020, pales in
significance when compared to the ongoing pandemic. As indicated above,
shortcomings in managing vaccination stocks and controversy over a proposed
embargo of vaccines entering Northern Ireland—proposals which were subse-
quently U-turned—have (very much at the time of writing) become promi-
nent issues raising some questions of competence for the Commission.81 At
best, this is unhelpful for the progress of any EU initiatives, and the reality for most topics is that, at present, minds are focused elsewhere; AI is no different.
This context, coupled with the complexities identified concerning the reg-
ulation of AI in the EU above, means that imminent regulation is unrealistic.
A cautionary tale can be found in the somewhat comparable General Data
Protection Regulation, which took approximately eight years from consulta-
tion to application.

The Court
The prospect of an extended period before specific AI legislation is passed raises the question of what the interim shape of 'regulation' will be. Whilst

77 S. Kotanidis, ‘Parliament’s right of legislative initiative’ (2020) European Parliamentary Research


Service, 5 <https://www.europarl.europa.eu/RegData/etudes/BRIE/2020/646174/EPRS_
BRI(2020)646174_EN.pdf> accessed 27 May 2021.
78 Consolidated Version of the Treaty on the Functioning of the European Union [2012] OJ C326/01,
art 294.
79 Kotanidis (n 77).
80 European Parliament (n 3).
81 K. Adler, J. Campbell, ‘EU Vaccine Export Row: Block Backtracks On Controls for NI: Analysis’
(BBC, 30 January 2021) <https://www.bbc.co.uk/news/uk-55865539> accessed 27 May 2021.
the relevant EU rules that currently apply to AI have been referred to above in context, the remaining rules will originate from Member States themselves.
However, the prospect of (now) 27 divergent rule sets is minimised due to the
significant role of the Court of Justice concerning internal market rules. The
reality is that Member State rules are likely to create a barrier to market access
in goods and services (particularly as this criterion is interpreted broadly), and
such rules must be objectively justified by Member States or be removed.
The precise line that the Court takes concerning any AI issues before it in
the interim period is, it is submitted, also likely to be influenced by the current
position adopted by the EU institutions (in particular the Commission). The
symbiotic relationship between the EU legislature and Court is an entrenched
one and reflects a reality in which negative harmonisation (through the Court’s
case law) is as important as positive harmonisation (through secondary legisla-
tion). The result is that both the legislature and the Court appear to pursue
similar ends but through differing means.82
By way of example, this was famously seen concerning the free movement
of goods and efforts to remove non-discriminatory barriers to trade. Whilst
harmonising legislation was awaited, the Court stepped in and through its
Cassis de Dijon judgment established the principle of mutual recognition—
essentially turning the matter on its head—and allowing market access in the
absence of sound reasons otherwise.83
As stated above, concerning AI, it is likely that EU free movement provi-
sions would be engaged as national measures are likely to create barriers to
trade. At this point, the Court can assume a high degree of control through the
use of the proportionality test, which is recognised to be highly flexible.84 The
analysis of Member State justifications for any barrier to trade before the Court
is potentially rigorous. Member States must demonstrate the appropriateness,
necessity and proportionality of the measure.85 Historically it is recognised that
this takes place against a backdrop in which the Court adopts a purposive
approach to interpretation—one which favours integration and, consequently,
the removal of barriers to trade.86

82 Compare, for example, Case 120/78 Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein


(‘Cassis de Dijon’) [1979] ECR 649 and Commission Directive on the Provisions of Article 33 (7),
on the abolition of measures which have an effect equivalent to quantitative restrictions on imports
and are not covered by other provisions adopted in pursuance of the EEC Treaty [1969] OJ L13/29.
83 Cassis de Dijon (n 82), para. 14.
84 See e.g. W. Sauter, ‘Proportionality in EU Law: A Balancing Act?’ (2013) 15 Cambridge Yearbook of
European Studies, 439–466.
85 See in combination Case C-55/94 Reinhard Gebhard v Consiglio dell’Ordine degli Avvocati e Procuratori
di Milano [1995] ECR I-4165, para 37; Case C-110/05 Commission v Italy (‘Trailers’) [2009] ECR
I-519, para 59 and Case C-112/00 Eugen Schmidberger v Austria [2003] ECR I-5659, para 79.
86 For analysis of these aspects see e.g. M. Cappelletti, M. Seccombe, JHH Weiler (eds) Integration
Through Law: Europe and the American Federal Experience (Walter de Gruyter and Co 1986) and M.
Dawson, B. de Witte, E. Muir (eds), Judicial Activism at the European Court of Justice (Edward Elgar
Publishing 2013).
It is also noticeable that the intensity of proportionality analysis varies according to the subject matter.87 For instance, the increased importance of environmental protection and consumer protection can be tracked through a relaxation in the intensity of review in the case law.88 This essentially sees
Member States being more readily able to justify barriers to trade on this basis.
But, interestingly for our purposes, it is notable that the converse can also be
true. It can be the case that a particular aspect of market access or a promising
area for trade can give rise to a more intrusive proportionality analysis, thereby
making Member State justification more challenging.
This appears to have arisen concerning access to pharmaceutical products
via the internet. In DocMorris the Court was unwilling to accept a blanket ban
on the basis of public health protection.89 Instead, it observed that:

internet buying may have certain advantages, such as the ability to place
the order from home or the office, without the need to go out, and to
have time to think about the questions to ask the pharmacists, and these
advantages must be taken into account.90

Accordingly, the weight of the advantage offered by technological advancement was going to bear upon any other justification. Ultimately, there was no justification for banning non-prescription items, and prescription items could be banned only after intensive analysis of the specific risk to individuals; the argument put forward by the German government, concerning social security and the integrity of the German health system, was rejected.91

87 Beck notes that the Court, too, is drawn to areas of political fashion. See: G. Beck, The Legal Reasoning
of the Court of Justice of the EU (Hart Publishing 2012), 390.
88 Schemmel and de Regt note that ‘few areas of [EU] law and policy that have been shaped and influenced
more positively by the jurisprudence of the ECJ than the area of environmental protection’, M.L Schem-
mel, B. de Regt, ‘The European Court of Justice and the Environmental Protection Policy of the
European Community’ (1994) 17(1) Boston College International and Comparative Law Review, 53,
54. See similarly S. Bell, D. McGillivray, O. Pedersen, Environmental Law (8th ed., OUP 2013), 225;
E. Paunio, Legal Certainty in Multilingual EU Law: Language, Discourse and Reasoning at the European
Court of Justice (Ashgate Publishing 2013), 87–94. More proactive efforts to protect the consumer
in more recent case law can be seen in Joined Cases C-402/07 and C-432/07 Sturgeon and Oth-
ers [2009] ECR I-10923 and Case C-497/13 Froukje Faber v Autobedrijf Hazet Ochten BV [2015]
ECLI:EU:C:2015:357. Historically consumers had typically been expected to be reasonably circum-
spect and been afforded less protection, e.g. Case C-470/93 Verein gegen Unwesen in Handel und Gew-
erbe Köln.e.V. v Mars GmbH [1995] ECR I-1923, para. 24 and Case C-210/96 Gut Springenheide
and Tusky [1998] ECR I-4657, para. 31. For criticism that the Court’s approach to consumer protec-
tion has now gone too far see JHH Weiler,‘Epilogue: Judging the Judges – Apology and Critique’, in
Maurice Adams and others (eds), Judging Europe’s Judges (Hart Publishing 2013), 245.
89 Case C-322/01 Deutscher Apothekerverband eV v 0800 DocMorris NV and Jacques Waterval [2003]
ECLI:EU:C:2003:664.
90 ibid., para. 113.
91 ibid., paras. 120–123.
It is necessary to point out that this was a decision taken in 2003, before
the increasing ubiquity of online shopping. As such, it demonstrated (i) some
forward thinking from the Court concerning technological potential92 and (ii)
a disinclination to leave regulation to Member States. This is not to say that the
Court was a lone vanguard; indeed, there was already an EU Directive in place
concerning distance selling. But notably, whilst the Directive permitted Member States to adopt stricter standards so as to protect the consumer, the Court recognised here that such a right was still subject to the residual free movement provisions.
These cases are indications of the Court’s capacity to carry forward the EU’s
interests through integrating opportune sections of the market both prior to
and, indeed, after legislation has been passed. Accordingly, the Court’s view is
going to be of importance for a long time to come.
Hence, whilst specific references to artificial intelligence in case law have thus far been sporadic and few, it is notable that they have appeared welcoming. Unsurprisingly, references to AI appear more frequently in the Opinions of Advocates General, advising the Court, where a more discursive approach to reasoning is adopted than that taken by the Court itself. These so far indi-
cate acute awareness of the potential and strategic importance of AI for the
EU moving forward. Indeed, an Advocate General recently pointed out, ‘the
opportunities for virtual storage or artificial intelligence programmes inevitably
transform the way in which the profession and its practice are conceived’.93 The
case concerned the (somewhat traditionalistic) legal sector and is a reminder
of AI’s prospective reach, and judicial sensitivity to this. Another Advocate
General drew attention to the fact that the definition of the term ‘product’ may
need to be revisited in light of modern technologies and cited with approval
a Commission report on the matter.94 For its part, the General Court was
unwilling to limit the scope of the definition ‘smart’ to humans alone.95

Way forward
Practically, it appears likely that the development of AI in the EU will remain
more atomised than the Commission hopes, with Member States and par-
ticipants valuing established centres and networks above ‘lighthousing’. It
also appears that there is significant pressure to review the proposed approach

92 Writing shortly after the judgment, Schmidt and Pioch noted the 'limited' market at the time, with DocMorris receiving only 130 orders per day. R.A. Schmidt, E.A. Pioch, 'Pills by Post? German retail pharmacies and the internet' (2003) 105(9) British Food Journal 618–633, 618.
93 Case C-99/16 Jean-Philippe Lahorgue v Ordre des avocats du barreau de Lyon [2017] ECLI:EU:C:2017:107,
Opinion of AG Wathelet, para. 2.
94 Case C-410/19 The Software Incubator Ltd v Computer Associates UK Ltd [2020] ECLI:EU:C:2020:1061,
Opinion of AG Tanchev, para. 29.
95 Case T-48/19 smart things solutions GmbH v European Union Intellectual Property Office (EUIPO)
[2020] ECLI:EU:T:2020:483.
to risk (which will leave many applications outside of its remit unless those
responsible voluntarily participate). These revisions would also help to address
what remains a significant underlying apprehension regarding AI in the EU
generally.
More fundamentally, as more AI systems are integrated into everyday life, it is predicted that AI will have a globally transformative effect on economic and social structures similar to that of other general-purpose technologies, such as steam engines, railroads, electricity, electronics, and the internet.96
Whilst it is easy to get caught up in the innovative aspects of AI, AI still has
a long way to go before it can be considered safe for all parties involved. The
wide use of data is a double-edged sword, allowing machines to make quicker decisions but also exposing several vulnerabilities that may lead to ethical compromises. Analysing the different ethical challenges raised by AI, it
is clear that transparency, explainability and accountability are interconnected
and bound by a common theme of trust.97 The Commission in its White Paper
refers to this as the ‘ecosystem of trust’, which should give citizens the confi-
dence to use AI systems and companies and public bodies the legal certainty to
innovate using AI.
It is imperative that both regulators and businesses share this vision of build-
ing trust by implementing AI with a sound ethical framework. Consumers are crucial too, and for AI to succeed the individuals impacted by it must be sure that their fundamental rights are not being compromised. With trust as the irreplaceable ingredient, businesses need to identify the specific benefits that AI technology can bring and weigh them against the costs of deploying it responsibly and ethically.
With a large number of businesses deploying or exploring AI options, it
may be mistakenly believed that the success of AI is a matter of financial profit.
What is far more significant, however, is how it connects directly to humans
impacted by it. Putting human well-being at the core of development sets both a realistic goal and concrete means to measure the impact of AI.98 The
Commission is of the view that human agency and oversight are the key ele-
ments of a trustworthy AI system.99 AI should empower human beings, allow-
ing them to make informed decisions whilst protecting fundamental rights.
The ‘human’ element needs to be present in all aspects of AI, such as human-
in-the-loop, human-on-the-loop, and human-in-command.
The Commission is of the view that human involvement is key to ensuring
that human autonomy is not undermined, and the objective of trustworthy and
ethical AI remains intact. Human oversight is seen as essential in several aspects

96 J. Howard, ‘Artificial Intelligence: Implications for the Future of Work’ (2019) 62 Am J Ind Med
917–926.
97 Tsakiridi (n 17), 9.
98 Dignum (n 11), 49.
99 Ethics Guidelines for Trustworthy AI (n 7).
of its application, from monitoring and designing to the output of the AI sys-
tem. One might question, though: if human review is necessary to achieve effective AI outcomes, are we really achieving the impact that AI promised to bring about?
Ironing out these issues will take time. In the interim, companies can be
confident that the Court will not allow divergent national regulations to
emerge. However, beyond a reluctance to allow Member States to regulate
this sector the Court will have — and should have — less to say and will likely
be guided by the legislative process. This is welcome as, at current, the last
thing AI in the EU needs is another dissonant voice.
2 The impact of facial recognition
technology empowered by artificial
intelligence on the right to privacy
Natalia Menéndez González

Introduction: Concept and functions of FRT


Most definitions of ‘facial recognition technology’ (FRT) are influenced by
the different functions performed by the technology.1 There are three distinct
functions of FRT. The first is ‘identification’, sometimes referred to as ‘one-
to-many comparison’. ‘Identification’ means that the template of a person’s
facial image is compared to many other templates stored in a database.2 The
FRT then returns a score for each comparison, indicating the likelihood that two images refer to the same person.3 Consequently, the name 'automated
facial recognition’ (AFR) is sometimes used to refer to this process.4 The sec-

1 For the purposes of this chapter, FRT is defined as a technology capable of producing a template
of the face image of a subject and comparing it with photos of pre-existing facial portraits. Then,
software is used to construct a replica by processing photographs of the subject to identify or validate
its identity or to attribute a certain characteristic to the subject. See K. Hamann and R. Smith,‘Facial
Recognition Technology:Where Will It Take Us?’ (2019) 34 (1) CJM <https://www.americanbar.org
/groups/criminal_justice/publications/criminal-justice-magazine/2019/spring/facial-recognition
-technology/> accessed 27 May 2021. Further definitions of FRT:‘From a technical viewpoint, facial
recognition is a subcategory of the sphere of artificial intelligence known as “computer vision”.’ in
Consultative Committee of the Convention for the Protection of Individuals with regard to Auto-
matic Processing of Personal Data, Facial Recognition: current situation and challenges (T-PD(2019)05rev);
‘[FRT] allows the automatic identification of an individual by matching two or more faces from
digital images. It does this by detecting and measuring various facial features, extracting these from the
image and, in a second step, comparing them with features taken from other faces.’ See: Fundamental
Rights Agency, Facial recognition technology: fundamental rights considerations in the context of law enforcement
(FRA focus, 2019) 2. Moreover, ‘automatic processing of digital images which contain the faces of
individuals for identification, authentication/verification or categorisation of those individuals.’ See:
Article 29 Working Party, Opinion 02/2012 on facial recognition in online and mobile services (00727/12/
EN WP 192, 2012).
2 ‘The biometric template […] is a structured reduction of a biometric image’; see I. Iglezakis, ‘EU
Data Protection Legislation and Case-Law with Regard to Biometric Applications’ (2013), available
at <http://dx.doi.org/10.2139/ssrn.2281108> accessed 27 May 2021.
3 Fundamental Rights Agency (n 1).
4 R (on the application of Edward Bridges) v The Chief Constable of South Wales [2019] EWHC 2341.This
is the first case within the European continent regarding the use of FRT. In this case, FRT is named
as AFR.

DOI: 10.4324/9781003246503-3
ond function is known as ‘verification’ or ‘authentication’, also referred to as
‘one-to-one comparison’. This application compares two biometric templates
to determine the likelihood that two images show the same person.5 If the like-
lihood is above a certain threshold, identity is verified. It is essentially an identification operation; however, instead of being performed against all the facial templates stored within a database, the comparison is made against a single template only (for instance, when a person unlocks their mobile phone using facial recognition).6
Finally, FRT can also be used for so-called ‘categorisation’, i.e. face analysis. In
this application, the technology is not used to identify or match individuals, but
to obtain certain characteristics of individuals, which do not necessarily allow
for identification.7 Such characteristics might be sexual orientation, gender,
age, mental health, the potentiality to commit a crime, and many more.
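The distinction between these functions is easier to grasp with a concrete illustration. The following minimal sketch (written in Python purely for this chapter; the 128-dimensional templates, the cosine-similarity score, and the 0.8 threshold are illustrative assumptions rather than features of any particular FRT product) shows that verification and identification differ only in how many stored templates the probe image is compared against, whereas categorisation would instead map the template to an attribute label.

import numpy as np

def cosine_similarity(a, b):
    # Likelihood-style score: close to 1.0 for templates of the same face.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.8):
    # One-to-one comparison: is the probe the person behind this single enrolled template?
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    # One-to-many comparison: score the probe against every template in the database
    # and return the best match, provided its score clears the threshold.
    scores = {name: cosine_similarity(probe, template) for name, template in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Toy templates standing in for the output of a face-encoding model.
rng = np.random.default_rng(0)
gallery = {'alice': rng.normal(size=128), 'bob': rng.normal(size=128)}
probe = gallery['alice'] + rng.normal(scale=0.05, size=128)  # a fresh image of 'alice'

print(verify(probe, gallery['alice']))  # one-to-one: True
print(identify(probe, gallery))         # one-to-many: ('alice', <score>)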

How does AI affect FRT?


In the past, FRT was limited to identification and verification functions
(the first and the second functions mentioned above).8 Furthermore, the
facial images fed into the system had to meet very specific requirements. For instance, the photographs had to be taken in an 'ID style', with the subject facing forwards and looking straight at the camera. However, the technology
developed, particularly in terms of resolution. Eventually, AI was added, which
improved the accuracy of identification and verification. For instance, it could
now identify faces in movement, when not looking directly at the camera, and
even individuals wearing accessories such as glasses or hats. Probably the most
controversial aspect brought by AI relates to categorisation. Below is a non-
exhaustive list of a few examples of the impact of AI on FRT:

• Wang and Kosinski claim to have built a facial recognition system that uses neural networks to infer sexual orientation from a facial image.9
• There are also several research projects working on the detection of
depression from facial images, such as the study conducted by Zhou, Jin,
Shang, and Guo.10 They used a deep convolutional neural network to produce a score based on what they call the 'generated depression activation map (DAM)'.

5 Fundamental Rights Agency (n 1).


6 Apple.com, ‘Use Face ID on your iPhone or iPad Pro’ (22 January 2021) <https://support.apple
.com/en-us/HT208109#unlock> accessed 27 May 2021.
7 Fundamental Rights Agency (n 1).
8 K.A. Gates, Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance (NYU
Press 2011) 274
9 M.W. Kosinski,Y. Wang, ‘Deep Neural Networks Are More Accurate Than Humans at Detecting
Sexual Orientation From Facial Images’ [2018] JPSP 246
10 X. Zhou, K. Jin,Y. Shang, G. Guo, 'Visually Interpretable Representation Learning for Depression
Recognition from Facial Images' [2018] 11 (3) IEEE Trans.Affect. Comput.
• FRT was also used in the iBorderCtrl project in personal interviews with
travellers. The objective of the project was to speed up border-crossing
procedures. To achieve this goal, FRT was used to detect whether people
were lying to prevent illegal crossings. The choice of FRT was based on
criteria of scalability, workload reduction and the intention of avoiding
mistakes made by human agents. It was also backed by the fact that FRT
is a non-intrusive technology since it is unnoticeable by the individual
subject to it.11

These examples give a better idea of the possibilities that the application of AI has opened up for the categorisation function of FRT.

Are facial images personal data?


Facial images are the input data that the FRT system receives and to which it correspondingly attaches a name and surname (in the case of identification and verification) or another trait (in the case of categorisation). Since FRT is fed with
facial images, it becomes necessary to discern whether these images constitute
personal data and, consequently, whether certain legal regimes, like the EU’s
General Data Protection Regulation (GDPR), apply. The latter is considered one of the paramount data protection laws in the world and, although its scope is regional (EU),12 data protection regulations all over the world look to the GDPR as a role model.
The term ‘personal data’ is defined in Article 4.1 of the GDPR as

any information relating to an identified or identifiable natural person


(‘data subject’); an identifiable natural person is one who can be identified,
directly or indirectly, in particular by reference to an identifier such as a
name, an identification number, location data, an online identifier or to
one or more factors specific to the physical, physiological, genetic, mental,
economic, cultural or social identity of that natural person.13

Initially, facial images might be considered personal data, as long as they allow
for the identification of a person. With only a facial image (a relatively easy thing
to obtain, for instance, from a social network) a vast amount of (extremely)
personal data can be acquired with the use of FRT. However, it is also pos-
sible that facial images can be processed and/or noise added to them so that it is

11 European Commission Horizon 2020, ‘Periodic Reporting for period 2 - iBorderCtrl (Intelligent
Portable Border Control System)’ (European Commission CORDIS EU Research Results, 31 March
2021) <https://cordis.europa.eu/project/id/700626/reporting> accessed 27 May 2021.
12 Article 3.1 General Data Protection Regulation [2016] OJ 2 119/33.
13 Article 4.1 GDPR
impossible to deduce the identity of a specific person from them.14 If this is the case, facial images would not be considered personal data but non-personal data.15
Another important, and frequently unnoticed, aspect that should be con-
sidered when analysing whether facial images are personal data or not is their
potential reversibility to allow identification. If we consider a facial image used
to identify a subject as personal data, we fall under the realm of data protection
and, therefore, the GDPR applies. Within the GDPR, not all personal data
are considered to be on the same ‘level’ and there is a special category called
‘sensitive data’. This data can potentially reveal ‘racial or ethnic origin, political
opinions, religious or philosophical beliefs, or trade union membership, […]
genetic data, biometric data, […] data concerning health or […] a natural per-
son's sex life or sexual orientation'.16 Due to their 'sensitive' nature, the processing of these data is not allowed under the GDPR, except in specific cases contemplated by the law.17 Biometric data (facial images used to identify a person) are automatically considered sensitive data.18
Consequently, facial images can be considered ‘personal data’ when they
are obtained to perform identification and verification using FRT. Moreover,
many current FRT applications that perform a categorisation function would,
as a result of their performance, reveal extremely personal data, such as sex-
ual orientation, mental health conditions, and many more. These can also be
considered personal data. The use of FRT to perform categorisation has received considerable criticism in the scholarship. Academics argue that this field of study, known as 'sentiment analysis' (also referred to as 'physiognomic AI'), has no scientific basis and, therefore, lacks validity.19
Although FRT (and AI) is not mentioned explicitly within the GDPR,
many of the provisions apply to the technology. Sartor has argued that the
introduction of Internet-related terms within the GDPR is a reaction to the

14 ‘Noise is an unwanted component of the image [that] […] can degrade the accuracy of […] face
recognition’ See: I. Budiman, D. Suhartono, F. Purnomo, M. Shodiq, ‘The effective noise removal
techniques and illumination effect in face recognition using Gabor and Non-Negative Matrix Fac-
torization’ (International Conference on Informatics and Computing, Mataram, October 2016),
32–36
15 The Regulation (EU) 2018/1807 of the European Parliament and of the Council of 14 November
2018 on a framework for the free flow of non-personal data in the European Union applies to ‘data
other than personal data’ which, as already stated, is defined within Article 4.1 GDPR.Therefore, we
have a negative definition of non-personal data.
16 Article 9.1 GDPR
17 Article 9 GDPR. See also E. Kindt, Privacy and Data Protection Issues of Biometric Applications (Springer
2013), 124–144
18 See Article 4.14 GDPR.
19 A. Daub,‘The Return of the Face’ (Longreads, October 2018) <https://longreads.com/2018/10/03
/the-return-of-the-face/> accessed 27 May 2021; J. Empspak, ‘Facing Facts: Artificial Intelligence
and the Resurgence of Physiognomy’ (Undark, 11 August 2017) < https://undark.org/2017/11/08
/facing-facts-artificial-intelligence/> accessed 27 May 2021; K.Crawford,‘Time to regulate AI that
interprets human emotions’ [2021] Nature 592.
lack of these terms in the previous Data Protection Directive due to its his-
torical and social context.20 Therefore, it appears that the GDPR has failed
to include AI-related terms in the same fashion. It may be because of the
relatively new and innovative nature of the latest developments of AI, which
were not predicted at the time the GDPR was drafted. It might also have to
do with the fact that the GDPR is not oriented to any technology exclusively, intending rather to address all threats to the right to data protection regardless of where they come from. In general, academics have argued that the GDPR fails to provide guidance on how to deploy AI-enhanced technologies while respecting its content. The main critiques focus on its broad and often vague provisions.21

Legal challenges concerning FRT


FRT poses several legal challenges: from algorithmic fairness, transparency and
accountability to the right to privacy and data protection. Since the technol-
ogy has experienced an exponential increase in its deployment over the last
5 years, these challenges have been brought to the fore. The COVID-19 health
emergency, for instance, entailed a vast FRT deployment. Being a contactless
and unintrusive technology, FRT is perfect for monitoring compliance with
quarantine or lockdown measures, or the obligation to wear a facial mask.
Therefore, it becomes increasingly necessary to address the legal challenges
associated with the technology to avoid human rights violations and to enforce
a lawful FRT deployment.

Fairness
From the algorithmic fairness point of view, FRT might be biased against cer-
tain characteristics, including race, gender, and others.22 It is well-known that
facial recognition algorithms are still struggling to identify people of colour.23
Further, the literature has brought to light the harms that the use of FRT for
identification/verification entail for transgender and non-binary people.24 This

20 G. Sartor, F. Lagioia, The impact of the General Data Protection Regulation (GDPR) on artificial intelligence
(2020 Panel for the Future of Science and Technology - STOA) 49.
21 S. Wachter, 'Data protection in the age of big data' [2019] 2(1) Nature Electronics 6; Sartor, Lagioia (n 20) 7; S. Wachter, B. Mittelstadt, L. Floridi, 'Transparent, Explainable, and Accountable AI for Robotics' [2017] Sci. Robot. 2–3
22 J. Buolamwini,T. Gebru,‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender
Classification’ (1st Conference on Fairness,Accountability and Transparency, New York City, Febru-
ary 2018).
23 T. Simonite, ‘The best algorithms struggle to recognize black faces equally’ (WIRED, 22 July
2019) <https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/>
accessed 27 May 2021.
24 N. Stevens, O. Keyes, ‘Seeing infrastructure: race, facial recognition and the politics of data’ (2021)
Cult. Stud. <https://ironholds.org/resources/papers/seeing_infrastructure.pdf> accessed 27 May
is caused by training datasets composed of facial images that belong mainly to
white people, especially men. Moreover, FRT still cannot reliably recognize faces that have been altered due to an accident or paralysis, or individuals who have undergone facial surgical procedures.25 If training
datasets are the reflection of the society we are living in and that society is built
upon inequality, FRT performance is going to perpetuate that inequality.26
It should also be considered that FRT, even in its 'physical display', has developed by taking an 'average' subject model as its reference. Representation and research are therefore lacking on how people with disabilities or with craniofacial differences relate to FRT. This field of study is known as the 'politics of artefacts'; it has scarcely examined FRT, but it could help to make the technology fairer.27 Some argue that FRT is often designed to accommodate a variety of interests and dismiss others. As an example, Introna and Wood mentioned that ATM bank machines are designed without contemplating somebody in a wheelchair or somebody unable to enter a PIN code due to a disability.28 Furthermore, the trade-off between fairness and accuracy has been one of the main obstacles to improvement in this respect. Whereas computer scientists claim that their aim is accuracy within FRT systems, without questioning the social inequality mentioned in the previous paragraph, lawyers are more focused on FRT reflecting diversity and the desire for a fairer society.

Accountability
The question of whether, and how, accountability can be demanded from
algorithms is widely discussed in the literature.29 Who should be held account-
able? According to what criteria, and to whom? The European Data Protection
Supervisor has expressed his concerns about these questions and the literature

2021; O. Keyes, ‘The Misgendering Machines:Trans/HCI Implications of Automatic Gender Rec-


ognition’ (2018) 2 HCI <https://dl.acm.org/doi/10.1145/3274357> accessed 27 May 2021.
25 M. Mills, M.Whittaker, Disability, Bias, and AI (2019 AI Now Institute Report) 17–24.
26 M. Le Bui, S. Umoja Noble,‘We’re Missing a Moral Framework of Justice in Artificial Intelligence:
On the Limits, Failings, and Ethics of Fairness’ in Markus D. Dubber, Frank Pasquale, and Sunit Das
(eds), The Oxford Handbook of Ethics of AI (OUP 2020) 163
27 L.Winner,‘Do Artifacts Have Politics?’ [1980] Daedalus 109:1
28 L. Introna, D.Wood, 'Picturing Algorithmic Surveillance:The Politics of Facial Recognition Systems'
[2002] Surveill. Soc. 2, 3-4.
29 C. Castets-Renard, 'Accountability of Algorithms in the GDPR and Beyond: A European Legal Framework on Automated Decision-Making' (2019) 30 Fordham Intell. Prop. Media & Ent. L.J. 91, available at: https://ir.lawnet.fordham.edu/iplj/vol30/iss1/3; B. Wagner, 'Algorithmic Accountability: Towards Accountable Systems' in Giancarlo Frosio (ed), Oxford Handbook of Online Intermediary Liability (OUP 2020) 680ff; J.A. Kroll, J. Huey, S. Barocas, E.W. Felten, J.R. Reidenberg, D.G. Robinson, H. Yu, 'Accountable Algorithms' [2017] U. Pa. L. Rev. 633; A. Koene, Ch. Clifton, Y. Hatada, H. Webb, M. Patel, C. Machado, J. LaViolette, R. Richardson, D. Reisman, 'A governance framework for algorithmic accountability and transparency' [2019] European Parliamentary Research Service <https://www.europarl.europa.eu/RegData/etudes/STUD/2019/624262/EPRS_STU(2019)624262_EN.pdf> accessed 27 May 2021.
is exploring the possibility of establishing an authority, a watchdog, for FRT.
Along this line, there have recently been some interesting proposals regarding the creation of an 'FDA for FRT'.30 The central claim is that '[a]ddressing the
trade-offs among the risks and benefits of complex FRT requires the crea-
tion of a new federal office.’31 The authors come to this conclusion by draw-
ing analogies with regulatory structures for other complex industries that have
been successfully tackled by federal agencies.32
An interesting example of regulation concerning accountability is presented by Washington State's Bill devoted to FRT.33 This regulation is almost
entirely focused on what is called an ‘accountability report’. This accountabil-
ity report must be filed by any state or local agency developing, procuring, or
using FRT and define the reason for which the technology is to be used. It
would contain a summary of the service’s intended usage, information of the
service’s rate of false matches, data security provisions, monitoring protocols,
and feedback channels. Further, any ‘accountability report’ must go through a
democratic approval process to be revised every two years. Regarding transpar-
ency, the report must be made available on the respective agency’s website and
forwarded to the competent legislative authority for publication on its website.
With regard to fairness, a service provider is required to provide an application
programming interface to allow independent monitoring for consistency and
unequal output disparities across distinct subpopulations. If the independent
research findings reveal unequal performance disparities across subpopulations,
the supplier shall formulate and execute a strategy to resolve the reported per-
formance differences within 90 days of receiving the results. Also, the methods
and data used within the independent research must be revealed to the supplier
in a way that enables complete replication to mitigate the reported discrepan-
cies. Additionally, an agency that uses such a service to make decisions that
have legal or equally important consequences for people must assess the service
in operating environments before implementing it, in order to achieve the
highest quality outcomes by following the service developer’s instructions. The
Bill also regulates the obligation of any agency using FRT to educate all people
who run such technology or manage personal data collected by it on a regular
basis. Finally, the Bill contemplates certain restrictions, such as a prohibition on using FRT to perform real-time or near-real-time detection, or to initiate continuous monitoring, unless a warrant is issued, exigent conditions arise, or a court order is obtained for the sole purpose of finding or recognizing a missing individual or identifying a deceased person.
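The kind of independent monitoring that such an application programming interface is intended to enable can be pictured quite simply. The sketch below (in Python; the trial data format, the 'match' function, and the subgroup labels are hypothetical stand-ins for whatever interface a real supplier would expose, and the two-fold tolerance is an arbitrary choice made for illustration) computes false-match rates per subpopulation from impostor comparisons and flags groups whose rate departs markedly from the best-performing one.

from collections import defaultdict

def false_match_rates(trials, match):
    # trials: iterable of (template_a, template_b, same_person, subgroup) tuples.
    # match: the supplier's comparison function, returning True when it declares a match.
    attempts = defaultdict(int)
    false_matches = defaultdict(int)
    for a, b, same_person, subgroup in trials:
        if same_person:
            continue  # only impostor pairs contribute to the false-match rate
        attempts[subgroup] += 1
        if match(a, b):
            false_matches[subgroup] += 1
    return {group: false_matches[group] / attempts[group] for group in attempts}

def flag_disparities(rates, tolerance=2.0):
    # Flag any subgroup whose false-match rate exceeds the best-performing
    # group's rate by more than the chosen multiplicative tolerance.
    baseline = min(rates.values())
    flagged = {}
    for group, rate in rates.items():
        if baseline == 0:
            if rate > 0:
                flagged[group] = rate
        elif rate / baseline > tolerance:
            flagged[group] = rate
    return flagged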

30 A.Tutt, 'An FDA for Algorithms' [2017] ALR 69, 83-123


31 ibid.
32 ibid.
33 Substitute Senate Bill 6280-2019-20 concerning the use of facial recognition services (effective 1
July 2021)
Privacy and data protection
As previously explained, facial images, the 'food' for FRT, might be considered personal data. Therefore, there is a wide range of issues that FRT poses
for the right to privacy and data protection. Starting with consent, there are
plenty of deployment scenarios where the individuals might be subject to FRT
without giving their consent. For instance, FRT has been deployed in several
supermarkets across Europe.34 The supermarkets have claimed that their intention when deploying such technologies was to prevent the entrance of people with a preventive or active restraining order against the premises or any of their workers. However, this use has been deemed disproportionate by several Data Protection Authorities (DPAs) and consequently banned.35
Article 9.1 of the GDPR provides that:

[p]rocessing of personal data revealing racial or ethnic origin, politi-


cal opinions, religious or philosophical beliefs, or trade union member-
ship, and the processing of genetic data, biometric data for the purpose of
uniquely identifying a natural person, data concerning health or data con-
cerning a natural person’s sex life or sexual orientation shall be prohibited.

However, paragraph 2(a) of the same provision indicates that ‘[p]aragraph


1 shall not apply if one of the following applies: […] the data subject has given
explicit consent to the processing of those personal data for one or more speci-
fied purposes.’
Further, Article 9.2(e) of the GDPR establishes the possibility of performing
‘processing related to personal data, which are manifestly made public by the
data subject’. This could be the case with pictures willingly shared on public
profiles on social network services (SNS). Such use of facial images from SNS has reportedly been performed by Clearview AI. This is an American com-
pany selling facial recognition services whose main clients are law enforcement
agencies within the US.36 The company claims their programme helps to

34 A.R. Aguiar, ‘La AEPD mantiene 6 meses después de su investigación contra Mercadona por sus
cámaras de reconocimiento facial, que terminará antes de julio de 2021’ (Business Insider, 29 Decem-
ber, 2019) <https://www.businessinsider.es/sigue-investigacion-mercadona-reconocimiento-facial
-781491> accessed 27 May 2021; J.Wakefield,‘Co-op facial recognition trial raises privacy concerns’
(BBC News, 10 December 2020) <https://www.bbc.com/news/technology-55259179> accessed
27 May 2021
35 EDPB ‘Dutch DPA issues Formal Warning to a Supermarket for its use of Facial Recognition Tech-
nology’ (EDPB, 26 January 2021) <https://edpb.europa.eu/news/national-news/2021/dutch-dpa
-issues-formal-warning-supermarket-its-use-facial-recognition_es> accessed 27 May 2021.
36 K. Hill, ‘The secretive company that might end privacy as we know it’ (New York Times, 18 January
2020) <https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition
.html> accessed 27 May 2021
identify perpetrators and victims of crimes and track down hundreds of
at-large criminals, including paedophiles, terrorists, and sex traffickers. It is
also used to help exonerate the innocent and identify the victims of crimes
including child sex abuse and financial fraud.37

The company claims to have built an impressive database of facial images using only public information from the open web.38 From the moment it began to gain popularity, it has faced a great deal of opposition over the lack of transparency as to the origin of the facial images used within its (training) database, as well as over its potential to become a tool for police abuse.
Clearview AI has also been subject to recent attention from law enforce-
ment agencies across the EU. Clearview’s activities have been the object of
several parliamentary questions on whether: (a) their services were in use by
anyone in the EU; (b) they hold any data of EU citizens and (in that case) how
they had been processed; and (c) they were consistent with the EU data pro-
tection regulation and the EU–US bilateral agreements on privacy.39 However,
several members of the European Parliament have shown their disagreement
with the European Commission’s ambiguous response.40 Clearview AI’s prac-
tices prompted the European Data Protection Board (EDPB) to issue a letter
on this matter.41 They concluded that ‘the use of a service such as Clearview
AI by law enforcement authorities in the European Union would, as it stands,
likely not be consistent with the EU data protection regime’.42 As a conse-
quence of this, in January 2021 Hamburg’s DPA deemed Clearview AI’s bio-
metric profiles of Europeans illegal. It also ordered the company to delete the
mathematical hash values representing the biometric profile of Matthias Marx,
a Hamburg resident and member of the Chaos Computer Club whose bio-
metric profile had been added to Clearview’s searchable database without his
knowledge.43 Finally, in February 2021, the Swedish DPA fined the Swedish

37 See official website of Clearview: <https://clearview.ai/> accessed 27 May 2021.


38 ibid.
39 Question for written answer E-000491/2020 to the Commission. Rule 138. Stelios Kouloglou
(GUE/NGL) (28 January 2020). Question for written answer E-000507/2020 to the Commission.
Rule 138. Sophia in ‘t Veld (Renew), Moritz Körner (Renew), Michal Šimečka (Renew), Fabienne
Keller (Renew), Jan-Christoph Oetjen (Renew), Anna Júlia Donáth (Renew), Maite Pagazaur-
tundúa (Renew), Olivier Chastel (Renew) (28 January 2020).
40 S. Stolton, ‘MEPs furious over Commission’s ambiguity on Clearview AI scandal’ (Euractive, 3
September 2020) <https://www.euractiv.com/section/data-protection/news/meps-furious-over
-commissions-ambiguity-on-clearview-ai-scandal/> accessed 27 May 2021.
41 Letter from EDPB to MEPs Sophie in ‘t Veld, Moritz Körner, Michal Šimečka, Fabiene Keller, Jan-
Christoph Oetjen,Anna Donáth, Maite Pagazaurtundúa and Olivier Chastel (10 June 2020).
42 ibid.
43 Der Hamburgische Beauftragte für Datenschutz und Informationsfreiheit, ‘Consultation prior to an
order pursuant to Article 58 (2) (g) GDPR.’ (Ref 545/2020; 32.02-102, 2021)
Police Authority after finding that it had used Clearview AI to identify individuals.44 The DPA concluded that

The Police has failed to implement sufficient organisational measures to


ensure and be able to demonstrate that the processing of personal data, in
this case, has been carried out in compliance with the Criminal Data Act.
When using Clearview AI the Police has unlawfully processed biometric
data for facial recognition as well as having failed to conduct a data protec-
tion impact assessment which this case of processing would require.45

In the case of face categorisation, consent might not be free and informed because neither the uses nor the functioning of the technology will be clearly explained, owing to its innovative nature and the intrinsic difficulty of understanding AI-empowered technologies. Therefore, the processing might be considered unlawful.46
Furthermore, Article 7 of the GDPR establishes the necessary conditions
for consent. It must be given freely, be specific, informed, and unambiguous.
Similarly, Section 15 of Illinois' Biometric Information Privacy Act (BIPA) requires informed consent for biometric data processing. The different functions that FRT might perform have to be taken into account in this respect. For instance, consenting to be subject to FRT to check in at an airport might entail a different burden for the data subject than consenting to one of the applications mentioned within the second section of this chapter. Some authors strongly relate the need to clarify consent terms to the principle of purpose limitation. Taking into account that state-of-the-art FRT involves AI, the principle of purpose limitation gains importance.47 Consequently, data processors/
controllers should specify the data collected (facial image, template, name, etc.),
the functions performed by the FRT (identification/verification or categorisa-
tion), whether ulterior functions for the data collected are contemplated (such
as building a facial images database), and the data retention policy. Within US
law, the BIPA states in s. 15(b) that ‘no private entity may collect, store, or use

44 ‘Swedish DPA: Police unlawfully used facial recognition app’ (EDPB, 12 February 2021) <https://
edpb.europa.eu/news/national-news/2021/swedish-dpa-police-unlawfully-used-facial-recognition
-app_es> accessed 27 May 2021.
45 ibid.
46 E. Selinger,W. Hartzog,‘The Inconsentability of Facial Surveillance’ [2019] Loyola Law Rev. 66; Sar-
tor, Lagioia (n 20); S. Schiffner et al., ‘Towards a roadmap for privacy technologies and the General
Data Protection Regulation:A transatlantic initiative’ (Annual Privacy Forum, Barcelona, June 2018)
24-42.
47 Article 5.1.b GDPR: ‘Personal data shall be […] collected for specified, explicit and legitimate
purposes and not further processed in a manner that is incompatible with those purposes; further
processing for archiving purposes in the public interest, scientific or historical research purposes or
statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with
the initial purposes (‘purpose limitation’)’
biometric information without first giving notice to, and obtaining a written
release or consent from, the subject’.
Furthermore, due to the innovative nature of the technology, and the low
degree of trust it enjoys, people do not possess enough knowledge and power
to understand the true impact of what they are consenting to.48 As possible
‘countermeasures’, Articles 13 and 14 of the GDPR provide that the data sub-
ject has the right to be informed about the collection and use of their personal
data. This enables data subjects to exercise their rights when consent has not been given. Moreover, Article 15 GDPR provides that:

The data subject shall have the right to obtain from the controller confir-
mation as to whether or not personal data concerning him or her are being
processed, and, where that is the case, access to the personal data and the
following information: (a) the purposes of the processing; (b) the catego-
ries of personal data concerned; (c) the recipients or categories of recipient
to whom the personal data have been or will be disclosed, in particular
recipients in third countries or international organisations; (d) where pos-
sible, the envisaged period for which the personal data will be stored, or,
if not possible, the criteria used to determine that period; (e) the existence
of the right to request from the controller rectification or erasure of per-
sonal data or restriction of processing of personal data concerning the data
subject or to object to such processing; (f) the right to lodge a complaint
with a supervisory authority; (g) where the personal data are not collected
from the data subject, any available information as to their source; (h) the
existence of automated decision-making, including profiling, referred to
in Article 22(1) and (4) and, at least in those cases, meaningful informa-
tion about the logic involved, as well as the significance and the envisaged
consequences of such processing for the data subject.

Further, an apparent incompatibility might be spotted between Big Data (FRT databases are composed of many facial images from diverse backgrounds) and Article 9 of the GDPR. Using Big Data to train AI systems and allow them to make inferences might conflict with the lawfulness of processing, in the sense that some of the outcomes of the training cannot be anticipated (such as the presence of algorithmic bias). Therefore, the GDPR principles from Article 5, especially purpose limitation and data minimisation,49 should be applied in a way that does not collide with AI performance. Data
minimisation and storage limitation50 are also applicable to FRT because they

48 Sartor, Lagioia (n 20); Selinger (n 46); Schiffner et al (n 46).


49 Article 5.1.c GDPR:‘Personal data shall be: […] adequate, relevant and limited to what is necessary
in relation to the purposes for which they are processed (‘data minimisation’)’.
50 Article 5.1.e GDPR:‘Personal data shall be: […] kept in a form which permits identification of data
subjects for no longer than is necessary for the purposes for which the personal data are processed;
raise the following questions: Where are the facial images and templates stored?
Who has access to them? Would it be possible to ‘exchange’ or even sell data-
bases of facial images or templates?
Moreover, the industry nowadays has to deal with the widespread use of face masks due to COVID-19, which is likely to continue for some time after the pandemic. This raises the question of whether complete facial images will still be needed. Some companies claim to have already achieved recognition without them.51 It will be possible to perform FRT operations, such as identification or verification, on images of people wearing face masks, because whole faces will no longer be needed for facial recognition. In line with the data minimisation principle, the fewer facial features needed, the better for data protection. However, this might entail drawbacks regarding the accuracy of FRT systems, increasing the number of false-positive identifications.
Another important set of issues on the intersection between FRT and pri-
vacy/data protection is profiling. Article 4.4 of the GDPR defines profiling as:

any form of automated processing of personal data consisting of the use


of personal data to evaluate certain personal aspects relating to a natural
person, in particular to analyse or predict aspects concerning that natural
person’s performance at work, economic situation, health, personal prefer-
ences, interests, reliability, behaviour, location or movements.

Many FRT systems fall within this definition, such as those monitoring mar-
keting preferences.52 Further, the COVID-19 health emergency has entailed
an increasing deployment of FRT for profiling uses such as monitoring tem-
perature (FRT thermal scanners),53 but also employees’ and students’ perfor-
mance and attention (including at home).54

personal data may be stored for longer periods insofar as the personal data will be processed solely for
archiving purposes in the public interest, scientific or historical research purposes or statistical pur-
poses in accordance with Article 89(1) subject to implementation of the appropriate technical and
organisational measures required by this Regulation in order to safeguard the rights and freedoms of
the data subject (‘storage limitation’).’
51 BBC News, ‘Facial recognition identifies people wearing masks’ (BBC News, 7 January 2021)
<https://www.bbc.com/news/technology-55573802> accessed 27 May 2021.
52 A. Lau, ‘Facial Recognition in Global Marketing’ (Towards data science, 25 April 2020) <https://
towardsdatascience.com/facial-recognition-in-global-marketing-8d0ca0b313c7> accessed 27 May
2021.
53 M.Van Natta, P. Chen, S. Herbek, R. Jain, N. Kastelic, E. Katz, M. Struble,V.Vanam, N.Vattikonda,
‘The rise and regulation of thermal facial recognition technology during the Covid-19 pandemic’
[2020] JLB 7
54 M. Andrejevic, N. Selwyn, ‘Facial Recognition Technology in Schools: Critical Questions And
Concerns’ [2020] Learn Media Technol., 45; A.Webber,‘PwC Facial Recognition Tool Criticised For
Home Working Privacy Invasion’ (Personnel Today, 16 June 2020) < https://www.personneltoday
.com/hr/pwc-facial-recognition-tool-criticised-for-home-working-privacy-invasion/> accessed
27 May 2021.
Way forward
Law and technology are not independent of each other. In the current world, a world that moves more and more towards 'digital governance', the line between these two disciplines is increasingly blurred. FRT perfectly illustrates this conundrum. It is a tool that enables increasingly beneficial, interesting, and unpredictable outcomes in a wide range of fields, and it is in jeopardy because of the potential threat it poses to the right to privacy and, specifically, data protection, as well as from the algorithmic fairness, transparency, and accountability points of view. This technology challenges a privacy/data protection model that may no longer be adequate for the present scenario, in which AI-empowered technologies are increasingly gaining strength. The only way to keep up with this challenge is to engage the same professionals responsible for designing and deploying the technology in the regulatory work, by putting the same tools that make the technology ground-breaking at the service of protecting the subject's rights.
One of the possible solutions to the consent conundrum might be establish-
ing a compliance standard for consent. This standard could include legal assess-
ment and possible certification (along the same line as conformity assessments
for product risks or ISO standards). It would act as an incentive for technology
suppliers to try to include privacy by design and default criteria55 in their designs
and deployments. Some authors favour these instruments as a means of, among other things, demonstrating compliance with the legal requirements.56 Others warn of the risk that such soft law provisions could increase what is expected of the industry, to the point that they eventually become formal requirements.57 In

55 Article 25 GDPR: ‘1. Taking into account the state of the art, the cost of implementation and the
nature, scope, context and purposes of processing as well as the risks of varying likelihood and sever-
ity for rights and freedoms of natural persons posed by the processing, the controller shall, both at
the time of the determination of the means for processing and at the time of the processing itself,
implement appropriate technical and organisational measures, such as pseudonymisation, which are
designed to implement data-protection principles, such as data minimisation, in an effective manner
and to integrate the necessary safeguards into the processing in order to meet the requirements of
this Regulation and protect the rights of data subjects. 2. The controller shall implement appropri-
ate technical and organisational measures for ensuring that, by default, only personal data which are
necessary for each specific purpose of the processing are processed. That obligation applies to the
amount of personal data collected, the extent of their processing, the period of their storage and
their accessibility. In particular, such measures shall ensure that by default personal data are not made
accessible without the individual's intervention to an indefinite number of natural persons. 3. An
approved certification mechanism pursuant to Article 42 may be used as an element to demonstrate
compliance with the requirements set out in paragraphs 1 and 2 of this Article’.
56 S. Chun, 'Facial Recognition Technology: A Call for the Creation of a Framework Combining
Government Regulation and a Commitment to Corporate Responsibility’ [2020] 21 N.C. J.L. &
Tech., 99–135
57 G. Marchant,‘Soft Law’ Governance Of Artificial Intelligence’ (AI Pulse, 25 January 2019) <https://
aipulse.org/soft-law-governance-of-artificial-intelligence/> accessed 27 May 2021; L. Edwards, M.
Veale,‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You
Are Looking For’ [2017] 16(1) DLTR.
this case, the role of the Data Protection Authorities and the EDPB might be
crucial in monitoring compliance and assessing these certifications.
There is no international legally binding instrument regulating FRT. The
lack of specific regulation leads to the adaptation of existing tools, such as
the GDPR. This may lead to situations in which the GDPR is wrongly inter-
preted and/or applied. Moreover, the aim of the GDPR is not to answer the
challenges posed by AI but to build the basis of the data protection regime
within Europe; thus, the regulation has a scope beyond AI, and its meaning
may be lost or distorted when applied to this field. There is a current proposal
for an AI Act by the European Commission but, taking into account the
duration of the EU legislative process, we will still have to wait some time to
see an EU regulatory instrument in the field.
In order to tackle the privacy problems of FRT, it could be beneficial to
consider differential privacy solutions. The term denotes a mathematical for-
malization of the idea that we should compare what someone might learn
from an analysis if a particular person's data was included in the dataset with
what they might learn if it was not.58 In the FRT field, this would entail
applying a small 'perturbation' to the face templates and only storing those
perturbed templates, minimizing the risks in case of a data breach.59
Other technical tools, such as k-anonymization,60 homomorphic
encryption,61 or phenotypically or demographically diverse data augmentation
using GANs (Generative Adversarial Networks),62 might also play an important
role in this respect. GANs might be used to create 'artificial' facial images to
train FRT systems. This possibility would entail two advantages: on the one
hand, it would allow the introduction of phenotypically or demographically
diverse faces, mitigating algorithmic bias; on the other hand, it would reduce
the privacy and data protection implications of a breach of the training data-
base, since the synthetic images would not depict real individuals.
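Purely to illustrate the adversarial mechanism described in the footnote above, the following toy sketch pits a tiny generator against a tiny discriminator using PyTorch. The layer sizes, learning rates, and the use of random vectors as stand-ins for face images are all illustrative assumptions; production face-synthesis models are vastly larger and more elaborate.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator over flattened 8x8 'faces' (64 values);
# real systems operate on full-resolution images.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 64), nn.Tanh())
D = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_batch = torch.rand(32, 64) * 2 - 1   # stand-in for real training faces
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

for step in range(100):
    # 1) Discriminator step: label real samples 1, generated samples 0.
    fake_batch = G(torch.randn(32, 16)).detach()
    d_loss = loss_fn(D(real_batch), ones) + loss_fn(D(fake_batch), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator step: try to make the discriminator label fakes as real.
    fake_batch = G(torch.randn(32, 16))
    g_loss = loss_fn(D(fake_batch), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# After training, G(noise) yields synthetic samples that could, in a real
# pipeline, be screened and used to rebalance an FRT training set.
synthetic = G(torch.randn(10, 16))
print(synthetic.shape)  # torch.Size([10, 64])
```

In a realistic deployment, the generated images would additionally be filtered for quality and demographic balance before being added to the training set; that selection step, not sketched here, is where the bias-mitigation benefit would actually be realised.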
One more option would be to create a brand-new regulation specifically
for this technology. However, this option is not necessarily unfeasible, as shown

58 M. Kearns,A. Roth, The Ethical Algorithm (OUP 2019) 256


59 M.Arachchige, P. Chamikara, P. Bertok, I. Khalil, D. Liu, S. Camtepe, ‘Privacy Preserving Face Rec-
ognition Utilizing Differential Privacy’ [2020] 97 Comput. Secur., 101951.
60 Kearns, Roth (n 58).
61 A-R. Sadeghi, T. Schneider, I. Wehrenberg, ‘Efficient Privacy-Preserving Face Recognition’ (Inter-
national Conference on Information Security and Cryptology, Seoul, December 2009), 229–244.
62 ‘In the proposed adversarial nets framework, the generative model is pitted against an adversary: a
discriminative model that learns to determine whether a sample is from the model distribution or
the data distribution.The generative model can be thought of as analogous to a team of counterfeit-
ers, trying to produce fake currency and use it without detection, while the discriminative model
is analogous to the police, trying to detect the counterfeit currency.’ See: I. Goodfellow, J. Pouget-
Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, Y. Bengio, 'Generative Adversarial Net-
works’ (Proceedings of the International Conference on Neural Information Processing Systems
(NIPS 2014) Montreal, December 2014), 2672–2680.
by the example of Washington State's Bill entirely devoted to FRT.63 This Bill
marks the regulator's commitment to addressing the fairness, transparency, and
privacy issues raised by FRT, requiring at least accountability where a direct
solution is not possible. The 'accountability report' it requires might resemble
the 'risk assessment' contemplated in the proposal for an AI Act by the EU
Commission. This appears to be the most popular formula for dealing with
the potential impact of FRT on fundamental rights. Only time will tell
whether it is a successful formula or not.

63 Substitute Senate Bill 6280-2019-20 concerning the use of facial recognition services (effective July
1, 2021).
3 The malicious use of artificial
intelligence against government
and political institutions in
the psychological area
Evgeny Pashentsev and Darya Bazarkina

Introduction
The rapid development of AI and its widespread implementation pose new
challenges to government and public structures and to international organisa-
tions. One of these challenges is the prevention of crimes related to the mali-
cious use of AI (MUAI). For example, Russian legislation does not mention
any crime related to socially dangerous acts through the use of neural networks,
AI, or acts committed by AI itself.1 Another challenge is posed by the lack of
regulation concerning the use of AI to influence the psychological security
(PS) of a society.
Although the example of Russia was given above, this is not a problem for
one country alone. The lack of regulation of the use of AI, including in the field
of PS, is recognised at the level of the United Nations as well.2 Meanwhile,
the issues of MUAI are becoming more and more urgent when discussing
the problems of the psychological destabilisation of political systems, especially
in the context of the crisis caused by the COVID-19 pandemic. For exam-
ple, during the period of quarantine and self-isolation, the number of cases
of phishing increased,3 and the harm caused by phishing can multiply when
fraudsters use AI. Phishing sites are often disguised as government sites, which
can become a serious threat to the reputation of state and supranational bodies.
Samples of terrorist propaganda that actively exploit the image of AI are already being
distributed, and terrorist practice already includes crimes involving the use
of unmanned aerial vehicles (UAVs) or search tools based on big data (to track

1 I. Mosechkin,‘Iskusstvennyj intellekt i ugolovnaja otvetstvennost': problemy stanovlenija novogo vida


subyekta prestupleniya (Artificial intelligence and criminal responsibility: problems of formation of a
new type of criminal subject)’ [2019] Vestnik Sankt-Peterburgskogo universiteta. Pravo (Bulletin of
Saint Petersburg University. Law), 461.
2 The Right to Privacy in the Digital Age. Report of the United Nations High Commissioner for
Human Rights [2018] A/HRC/39/29, art. 16.
3 Y. Stepanova, ‘Parents have fallen like children. Online fraudsters took advantage of the demand for
benefits’ (Kommersant, 14 May 2020), <www.kommersant.ru/doc/4343398> accessed 27 May 2021.
See also: E. Pashentsev and D. Bazarkina, Malicious Use of Artificial Intelligence and International Psychologi-
cal Security in Latin America (ICSPSC, 2020) 48.

DOI: 10.4324/9781003246503-4
down victims).4 Such crimes can become a powerful factor in the destabilisa-
tion of political systems in cases where opposing political forces (rightly or
wrongly) accuse their opponents of using them.
A separate threat is presented by cases of MUAI, in which the very use of AI
technologies (such as deepfakes) is aimed at destabilising PS. In the near future,
the practice of MUAI may begin to outstrip not only the development of legal
mechanisms for preventing and responding to it, but also the very understand-
ing of new realities by civil and political institutions, as well as by the academic
community. Therefore, it is necessary today to search for ways to respond to
the crimes of the future.
The tasks of this chapter are to identify the extent of the threat to govern-
ment and political institutions from MUAI in the area of PS, to determine the
state of legal mechanisms for preventing MUAI and responding to it, and to
offer recommendations for improving this response.

MUAI and new threats to government and political
institutions in the psychological area
The threats to PS from MUAI can be divided into three levels.5 At the first
level, these threats are associated with the deliberately distorted interpretation
of the circumstances and consequences of AI development for the benefit of
antisocial groups. In this case, AI itself is not involved in the destabilisation of
international psychological security (IPS); the destructive impact, whether open
or hidden, lies in the creation of a false image of AI in people's minds.
The field of MUAI is wide: the unjustified use of drones; threats of cyber-
attacks on vulnerable infrastructure; the manipulation of cryptocurrencies; and
much more. Where MUAI is aimed not at managing target audiences in the
psychological sphere but primarily at committing other malicious actions (for
example, the destruction of critical infrastructure), we can talk about the sec-
ond level of MUAI.
The professional use of the means and methods of psychological warfare
can push the perceived level of threat above or below its real level.
Moreover, the use of AI in psychological warfare already makes hidden (latent)

4 D. Bazarkina,‘Exploitation of the Advanced Technologies’ Image in Terrorist Propaganda and Ways to


Counter It’ in D. Bazarkina, E. Pashentsev and G. Simons (eds), Terrorism and Advanced Technologies in
Psychological Warfare: New Risks, New Opportunities to Counter the Terrorist Threat (Nova Science Publish-
ers, 2020), 57.
5 D. Bazarkina and E. Pashentsev,‘Artificial Intelligence and New Threats to International Psychologi-
cal Security’ [2019] 17(1) Russia in Global Affairs, 147. See more on the levels of threats to IPS in the
context of MUAI: Evgeny Pashentsev, ‘AI and Terrorist Threats: The New Dimension for Strategic
Psychological Warfare’ in Bazarkina, Pashentsev Simons (n 4), 83; E. Pashentsev,‘The Levels of MUAI
Threats to International Psychological Security’ in Experts on the Malicious Use of Artificial Intelligence:
Challenges for Political Stability and International Psychological Security. Report by the International Center for
Social and Political Studies and Consulting (OneBook.ru 2020), 7.
campaigns of perception management more dangerous; this will only increase
in the future. Therefore, MUAI, which is aimed primarily at causing damage
in the psychological sphere, deserves independent and very close attention,
representing a special third level of threat to IPS.
MUAI can involve a variety of AI technologies. More than ever, the rapid
development of AI around the world raises the real problem of the vulnerability
of physical objects controlled by complex systems that are partially or completely
based on AI. The active integration of the industrial internet, where AI products
operate, and the use of predictive analytics in industry, transport, and agri-
culture can, provided the proper level of security is ensured, significantly increase
the efficiency of an enterprise, an industry, and ultimately the economy. However,
deliberately feeding distorted or completely fake data into a self-learning AI
can, for example, lead to the spraying of toxic chemicals in fields or create a
major traffic jam or accident (by exploiting a traffic intensity prediction mechanism).
This type of MUAI can discredit the AI mechanism itself and its manufacturer,
as well as those who made the decision to implement it, and even ordinary
users who chose a particular AI product.
New risks are undoubtedly associated with the implementation of synthetic
AI products, which include, for example, the management of physical objects
along with user identification systems based on voice or video images. This
problem concerns the 'malicious use of deepfakes'. In a narrow sense, creating
a deepfake means superimposing one digital image or video onto another in
such a way that the added image appears to be part of the
original. However, without denying the differences between specific technologies, we
consider it possible to use the term ‘deepfake’ in a broader sense, combining
the original process with a set of existing and future technologies for construct-
ing pseudo-reality. These technologies are based on the capabilities of AI to
create or modify images, video, sound, and text.
The malicious use of chatbots with the addition of a synthesis system that
is able to use a real person’s voice is much more dangerous than the use of
regular chatbots, which are not very convincing in a conversation. If today
a chatbot with the voice of Robbie Williams is created for good purposes,6
nothing will prevent future criminals from creating such a bot with the voice
of Osama bin Laden or another such figure. In the information sphere, there
are also ‘ranking and deranking’ technologies. Ranking technology uses AI
to increase, while deranking technology is used to lower, the ranking of sites
in search engines, adjusting the popularity of an event or phenomenon. The
technology of deranking allows, for example, internet companies, if they par-
ticipate in psychological warfare on the side of a particular political force, to

6 Synthetic, ‘Robbie Williams Chatbot’ (Synthetic, 2020) <www.syntheticagency.co/portfolio/robbie


-williams-chatbot/> accessed 27 May 2021.
use their search engines to discredit political opponents, and there is already
real evidence of this.7
The political discrediting of government structures ‘contributes’ to the
spread of phishing, in which criminals gain access to confidential user data
(usernames and passwords) by sending mass emails on behalf of popular brands,
or personal messages from within various services, for example, on behalf of
banks or social networks. AI phishing can be more difficult to identify than
regular phishing attacks, because of the personalisation of messages.
In summary, it is possible to determine the purpose for which MUAI is car-
ried out in the course of psychological warfare. Psychological warfare is always
aimed at inflicting a direct (although often latent) blow to public consciousness
and, through victory in this sphere, at achieving a bloodless general victory over
the enemy. MUAI in the context of IPS is aimed at gaining an advantage in
psychological warfare through quantitatively and qualitatively new forms of
influence on individual and public consciousness.

Classifications of MUAI
It is possible to offer the following classifications of MUAI according to the
degree of its readiness: (1) existing MUAI practices; (2) existing MUAI capa-
bilities that have not yet been used in practice – these capabilities are associated
with a wide range of rapidly emerging new AI features, not all of which are
immediately included in the range of implemented MUAI features; (3) future
opportunities for MUAI based on ongoing developments and future research
(assessment should be given for the near, medium, and long terms); and (4)
unidentified risks, also known as ‘unknown unknowns’. Not all developments
in the sphere of AI can be accurately assessed. A willingness to meet unex-
pected latent risks of MUAI is crucial.
Another classification of MUAI is also possible: (1) by territorial coverage
(local, regional, global); (2) by the degree of damage (minor, significant, large,
catastrophic); (3) by propagation velocity (slow, fast, rapid); and (4) by propa-
gation form (open, latent).
The risk of MUAI is significantly increased by the integrated use of AI tech-
nologies by intruders. However, we should not forget that the threat of AI to
PS is primarily anthropogenic – caused by human activity, not AI specifics (at
least at the current stage of AI development).

7 For example, in 2017 Google stated that it would lower the ranking of reports by the Russian state-
run news agencies Russia Today (RT) and Sputnik. Eric Schmidt, CEO of Alphabet (the company that
owns Google) said that the search engine needed to fight the distribution of misinformation, but
some media publications said this step was a form of censorship. See: BBC,‘Google to ‘derank’ Russia
Today and Sputnik’ (BBC News, 21 November 2017) <www.bbc.com/news/technology-42065644>
accessed 27 May 2021.
The legal response to MUAI in the EU and the USA
The development of legal means for the prosecution of MUAI is becoming
one of the main tasks for lawmakers who are developing mechanisms for regu-
lating AI. According to the legal and policy documents of the EU and the US
(currently leading in the field of AI regulation), it is possible to trace a change
in the attitude of governments to the problem of MUAI in the field of PS.

The European Union


The EU is recognised as one of the key jurisdictions where the regulation of
robotics and AI is under development.8 The European Parliament’s Resolution
of 16 February 2017 with Recommendations to the Commission on Civil Law
Rules on Robotics, in which the term ‘electronic person’ was first proposed,9
is innovative. The ethical aspect of the resolution primarily points to the
growing split in society and the shrinking of the middle class, against the
background of which the development of robotics can lead to a high concentration
of wealth and influence in the hands of a minority. Not least for this reason,
the European Parliament calls for those involved 'in the development and
commercialisation of AI applications' to 'build in security and ethics at the
outset'.10 From the perspective of ensuring PS, it is especially important to
emphasise the danger of AI being concentrated in the hands of anti-national
state elites, as well as other aggressive non-state actors, including criminal and
even terrorist organisations.
The human rights approach to the problem of MUAI is declared in the
relevant publications of the European Union Agency for Law Enforcement
Cooperation (Europol).11 Europol is considering the full range of threats asso-
ciated with MUAI, recognising that AI and machine learning technologies are
steadily transforming the security landscape and becoming more accessible.
Based on an analysis of the problem of MUAI conducted at the University
of Oxford,12 Europol recommends that law enforcement agencies study the
possible scenarios. For example, attackers can use AI to create and distribute
malicious content (social engineering attacks or phishing emails), and to detect
new attack vectors or the vulnerabilities of potential victims. AI technology
can be deployed to select attack targets, set priorities, or respond to changes in
the behaviour of targets. The impact on citizens of large-scale disinformation

8 A. Neznamov (ed), Novye zakony robototehniki. Reguljatornyj landshaft. Mirovoj opyt regulirovanija robo-
totehniki i tehnologij iskusstvennogo intellekta (New Laws of Robotics. The Regulatory Landscape. Global
Experience in Regulating Robotics and Artificial Intelligence Technologies) (Infotropic Media 2018) 5.
9 European Parliament Resolution of 16 February 2017 with Recommendations to the Commission
on Civil Law Rules on Robotics (2015/2103(INL)) [2018] OJ C 252/239.
10 ibid.
11 Europol, Do Criminals Dream of Electric Sheep? How Technology Shapes the Future of Crime and Law
Enforcement (European Union Agency for Law Enforcement Cooperation (Europol) 2019) 10.
12 M. Brundage and others, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
(Future of Humanity Institute, University of Oxford 2018) 27.
is noted as a serious problem for the EU. The use of AI can further increase
the impact of hybrid threats, since the mass production of false information
can be automated by attackers with little or no technical knowledge. Deepfake
technologies give attackers the ability to spread misinformation by imperson-
ating others. Criminals have already used audio deepfakes, posing as com-
pany executives in an attempt to deceive employees of an organisation.13 All
these threats may come to be addressed by expanded EU legislation,
especially since this process of improvement is supported by new strategic
documents in the field of AI.
The European Commission released its plans for the future of AI in the EU
on 19 February 2020. The strategy is set out in two main documents. These are
the ‘Report on the Safety and Liability Implications of Artificial Intelligence,
the Internet of Things and Robotics’14 and the ‘White Paper on Artificial
Intelligence – A European Approach to Excellence and Trust’.15 The latter
contains a section headed ‘An Ecosystem of Trust: Regulatory Framework for
AI’, which evaluates the tools available to the Union for regulating technology.
The EU Commission notes, ‘Citizens fear being left powerless in defending
their rights and safety when facing the information asymmetries of algorithmic
decision-making, and companies are concerned by legal uncertainty’.16 It is
extremely important that the Commission notes both the ability of AI to pro-
tect the security of citizens and allow them to enjoy their basic rights, and the
risks, including those caused by MUAI. ‘Lack of trust’ is noted as the main fac-
tor holding back the wider spread of AI. Thus, the EU strategy leads us to the
assumption that, against the background of the existing information warfare
around the world, the first-level threats to PS (the desire of aggressive actors
to play on the fears and distrust of citizens to discredit politicians who actively
advocate the introduction of AI) may become extremely relevant.
According to the European Commission, the use of AI can lead to viola-
tions of the right to free assembly, to discrimination based on gender, racial or
ethnic origin, religion or belief, disability, age or sexual orientation, and to vio-
lations of the right to effective protection and a fair trial in court, and consumer
rights. At the same time, the Commission points primarily to the anthropo-
genic nature of all the violations listed above,17 which can also be a starting
point for developing standards for the prevention and prosecution of MUAI.

13 Europol (n 11), 10.


14 Commission, ‘Report to the European Parliament, the Council and the European Economic and
Social Committee, on the Safety and Liability Implications of Artificial Intelligence, the Internet of
Things and Robotics’ COM (2020) 64 final.
15 Commission, ‘White Paper on Artificial Intelligence – A European Approach to Excellence and
Trust’ COM (2020) 65 final, ch 5.
16 ibid.
17 ibid.
In April 2021, the European Commission released its 'Proposal for a
Regulation on a European approach for Artificial Intelligence'. The Proposal
notes that AI

technology can also be misused and provide novel and powerful tools
for manipulative, exploitative and social control practices. Such practices
are particularly harmful and should be prohibited because they contradict
Union values of respect for human dignity, freedom, equality, democracy
and the rule of law and Union fundamental rights, including the right
to non-discrimination, data protection and privacy and the rights of the
child.18

The legal regulation of AI in general, and MUAI in particular, is at an early
stage of its development, even in the EU. This is shown by the examples of
deepfake regulation. In different countries, only the first legislative steps are
being taken in this area. In 2018 the EU released the document ‘Tackling
Online Disinformation: A European Approach’,19 which examines disinforma-
tion in general and provides a set of guidelines, including guidelines for protec-
tion against deepfakes.

Disinformation is a powerful and inexpensive – and often economically
profitable – tool of influence … new, affordable, and easy-to-use technol-
ogy is now available to create false pictures and audio-visual content (so-
called ‘deep fakes’), offering more potent means for manipulating public
opinion.20

The EU calls for greater public participation and greater transparency in iden-
tifying the origin of internet content, and an independent European net-
work has been established to verify sources and the process of content creation.

The United States of America


In the US, AI security issues were discussed in the document entitled ‘The
National Artificial Intelligence Research and Development Strategic Plan
2016’.21 This demonstrates that the problem of MUAI can be linked to two

18 Commission, ‘Proposal for a Regulation of the European Parliament and Of the Council. Laying
Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Cer-
tain Union Legislative Acts’ COM (2021) 206 final 2021/0106 (COD), preamble, art. 15.
19 Commission, ‘Communication to the European Parliament, the Council, the European Economic
and Social Committee and the Committee of the Regions.Tackling online disinformation: a Euro-
pean Approach’ COM (2018) 236 final.
20 ibid., ch 2.
21 US National Science and Technology Council, The National Artificial Intelligence Research and Develop-
ment Strategic Plan (National Science and Technology Council, Networking and Information Tech-
strategies. The first of these is to ‘ensure the safety and security of AI systems’,
which implies ensuring not only the safety of the production process of AI
products, but also the clarity of their operation at each stage. This strategy also aims to
provide for the reliable protection of AI products from cyber-attacks.
The strategy for developing ‘law-abiding’ AI (‘understand and address the
ethical, legal, and societal implications of AI’) is also closely related to the
task of maintaining human control over AI. The authors of the Strategic Plan
emphasise the importance of the ability of AI to make independent decisions in
accordance with human ethical standards and current legislation. In this regard,
the Plan's indication of the need for interdisciplinary research teams to work,
at a minimum, on the question of which data will be used to
train the AI is extremely important.22 In the case of MUAI, including MUAI aimed at destabilising
PS, this approach is a convenient starting point for discussing the subjectivity
of future offences.
Legislative initiatives in the field of AI are growing in the United States. In
2018, five draft laws were proposed to regulate AI, in 2019 there were eight,
and during the first half of 2020 (until 19 June) there were ten.23 Deepfake
technology is one of the MUAI technologies aimed at destabilising PS that is
most actively discussed by American lawmakers. The first project was a federal
bill to regulate deepfakes, the ‘Malicious Deep Fake Prohibition Act of 2018’,24
which was introduced in the United States in December 2018, and the ‘Deep
Fakes Accountability Act’25 was introduced in June 2019. Legislation against
deepfakes has also been introduced in several states, including California, New
York, and Texas.
As a result of an intense discussion of MUAI issues in the United States, on
20 December 2019 President Donald Trump signed the nation's first federal
law related to 'deepfakes'. The deepfake legislation was part of the National
Defense Authorization Act for Fiscal Year 2020 (NDAA).26 In addition to
deepfakes, a separate section of the law (section 5708) is dedicated to facial
recognition technology and its use by the intelligence community. This is also
an important step in preventing MUAI attacks against the national security of

nology Research and Development Subcommittee 2016) <https://www.nitrd.gov/PUBS/national


_ai_rd_strategic_plan.pdf> accessed 27 May 2021.
22 ibid.
23 Center for Data Innovation, ‘AI Legislation Tracker – United States’ (Center for Data Innovation,
19 June 2020) <www.datainnovation.org/ai-policy-leadership/ai-legislation-tracker/> accessed 27
May 2021.
24 S.3805 – Malicious Deep Fake Prohibition Act of 2018 (USA).
25 H.R.3230 – Defending Each and Every Person from False Appearances by Keeping Exploitation
Subject to Accountability Act of 2019 (USA).
26 M. Ferraro, J. Chipman and S. Preston, ‘First Federal Legislation on Deepfakes Signed Into Law’
(WilmerHale, 23 December 2019) <http://www.wilmerhale.com/en/insights/client-alerts
/20191223-first-federal-legislation-on-deepfakes-signed-into-law> accessed 27 May 2021.
the state, since the law prohibits the transfer of technology to aggressive politi-
cal actors.27
Under the law, the Director of National Intelligence (DNI) must, no later
than one year after its adoption, provide Congress with a report on how facial
recognition technology is used for intelligence purposes, under what circum-
stances the technology should be shared with entities outside the jurisdiction
of the United States, and whether the use of this technology threatens the
constitutional rights of citizens.
However, the dispute over the technology has not subsided with the adop-
tion of the law. In 2020, lawmakers in the House of Representatives who
were ‘wary of facial recognition’ were working on legislation that would halt
its progress until the potential risks could be understood and mitigated.28 In
June 2020, a group of lawmakers in the USA again proposed legislation that
would impose a federal moratorium on the use of facial recognition technol-
ogy by law enforcement agencies, the first effort to put a temporary ban on the
technology nationwide. The proposed federal moratorium comes after cities
such as San Francisco and Boston, citing privacy concerns and racial bias in the
technology, passed their own bans on facial recognition.29
Unlike the report on facial recognition technology, a report on the impact
of deepfake technology on US national security must be submitted to Congress
no later than 180 days after the law was passed.30 The NDAA defines deepfakes
as ‘machine-manipulated media’, which is a broader definition than the one in
the 2018 Bill. This broader interpretation of deepfakes covers not only videos,
but also any other materials created with the help of AI and designed to form
pseudo-reality.31 The usefulness of this broader interpretation has been
demonstrated in practice, for example by the experiment conducted by Max
Weiss of Harvard. He used a text generation programme to create 1,000
comments in response to a government call for public comments on a Medicaid issue. 'These com-
ments were all unique, and sounded like real people advocating for a specific

27 National Defense Authorization Act for Fiscal Year 2020. Conference Report to Accompany S. 1790.
December 9, 2019. 116th Congress 1st Session, House of the Representatives. Report 116–333
(USA), section 5708.
28 A. Boyd, ‘Lawmakers Working on Legislation to ‘Pause’ Use of Facial Recognition Technology’
(Nextgov, 15 January 2020) <www.nextgov.com/emerging-tech/2020/01/lawmakers-working-leg-
islation-pause-use-facial-recognition-technology/162470/ > accessed 27 May 2021.
29 A. Ng,‘Lawmakers propose indefinite nationwide ban on police use of facial recognition’ (CNet, 25
June 2020) <www.cnet.com/news/lawmakers-propose-indefinite-nationwide-ban-on-police-use
-of-facial-recognition/> accessed 27 May 2021.
30 National Defense Authorization Act for Fiscal Year 2020. Conference Report to Accompany S. 1790.
December 9, 2019. 116th Congress 1st Session, House of the Representatives. Report 116–333
(USA), section 5709.
31 See, for example, E. Pashentsev, ‘Malicious Use of Deepfakes and Political Stability’ In Florinda
Matos (ed) Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics ECI-
AIR 2020 Supported By Instituto Universitário de Lisboa (ISCTE-IUL), Portugal, 22 – 23rd October 2020
(Academic Conferences International Limited 2020) 100.
policy position. They fooled the Medicaid.gov administrators, who accepted
them as genuine concerns from actual human beings’. Later, Weiss identified
the comments and asked for them to be removed, ‘so that no actual policy
debate would be unfairly biased'.32 The experiment was very instructive and
points to possible MUAI practices based on the creation of fake texts of which
we are not yet aware.
Legislative activity and the adoption of strategic planning documents in the
EU and the United States allow us to trace the development of AI regulation
alongside a gradual identification of AI issues, including in the field of PS.
Precedents in this area often stimulate the improvement of legal mechanisms
for the prosecution of MUAI, but today the important role of an interdiscipli-
nary approach to predicting future crimes is already evident (more than ever,
due to the rapid development of technology).

The legal response in China and Russia


China
According to available open sources, Chinese regulation of AI seems to be
at an early stage of development. Official documents such as the ‘Notice
of the State Council Issuing the New Generation of Artificial Intelligence
Development Plan’33 focus primarily on the positive aspects of using AI – it
helps in making accurate assessments and predictions of the main trends in
the development of infrastructure and social security. In the field of PS, the
Chinese government also sees broad positive opportunities to ‘timely grasp the
change of group awareness and psychology, respond actively decision-making,
significantly improve the ability and level of social governance, and it is indis-
pensable for the effective maintenance of social stability’.34 At the same time,
new challenges are recognised, including the danger of AI’s destructive impact
on ‘government management, economic security and social stability, and even
global governance’, which may lead to ‘problems of changes in employment
structure, impact law and social ethics, violating personal privacy and challenge
international relations’.
In 2019, an expert committee established by China’s Ministry of Science
and Technology (MOST) released a document outlining eight principles
for AI governance and ‘responsible AI’. The first principle, ‘Harmony and

32 B. Schneier,‘Bots Are Destroying Political Discourse As We Know It’ (The Atlantic, 7 February 2020)
<https://www.theatlantic.com/technology/archive/2020/01/future-politics-bots-drowning-out
-humans/604489/> accessed 27 May 2021.
33 China State Council, ‘Notice of the State Council Issuing the New Generation of Artificial Intel-
ligence Development Plan [2017], 35; F. Sapio,W. Chen and A. Lo (trs) (The Foundation for Law and
International Affairs, 2017) <https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of
-Artificial-Intelligence-Development-Plan-1.pdf> accessed 27 May 2021.
34 ibid.
Friendliness’, implies that ‘AI development … should conform to human val-
ues, ethics, and morality … it should be based on the premise of safeguarding
societal security and respecting human rights, avoid misuse, and prohibit abuse
and malicious application’.35 Thus, in China, the issue of countering MUAI,
including in the field of PS, is officially recognised as a priority.
MUAI issues are widely discussed in academic and public circles, which
provides many opportunities for developing regulatory mechanisms. A group
of researchers from Tsinghua University has highlighted a wide range of threats
associated with MUAI. In particular, they pointed out that virtual online space/
storage facilitates the collection and exchange of personal data and the analysis
and exchange of information (including identification data, medical information,
credit records, personal locations, and movement information), while at
the same time making it difficult to determine the causes and extent of
such data leaks.36 From the point of view of PS, it is worth paying attention
to conditions that will affect the human psyche, such as 'unpredictability and
irreversibility of the blurring of time and space', as well as of virtual and objective
reality, in which Chinese experts see potential risks.
Among the main MUAI threats that are noted by the Tsinghua University
report are threats that we would classify as belonging to the second level: fraud
(including fraud in social networks, based on personal information obtained
illegally), hacking of AI-based authentication systems, and the malicious use of
UAV, autonomous vehicles and robots equipped with AI.37 The problems of
psychological security are discussed in closed, primarily military, structures, but
it is important that these issues are also studied by civilian analytical centres
dealing with the problems of AI implementation, especially in the context of
the development of China's cybersecurity initiatives.
The most well-known example of the regulation of MUAI in China is the
ban on the use of deepfakes to mislead audiences.38 In 2019, China announced
new rules governing video and audio content on the internet, including a
ban on the publication and distribution of ‘fake news’ created using AI and
virtual reality. Deepfake technologies can ‘endanger national security, disrupt
social stability, disrupt social order and infringe upon the legitimate rights and

35 Ministry of Science and Technology (MOST) of the People's Republic of China,‘Translation. Gov-
ernance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artifi-
cial Intelligence. June 17, 2019’ (New America, 17 June 2019) <https://perma.cc/V9FL-H6J7>
accessed 27 May 2021.
36 China Institute for Science and Technology Policy at Tsinghua University, China AI Development
Report 2018 (China Institute for Science and Technology Policy at Tsinghua University 2018) 94.
37 ibid.
38 国家互联网信息办公室 文化和旅游部 国家广播电视总局 (State Internet Informa-
tion Office Ministry of Culture and Tourism State Administration of Radio and Television),
‘关于印发《网络音视频信息服务管理规定》的通知 (Notice on Issuing the ‘Regulations on the
Administration of Network Audio and Video Information Services’)’ 2019年11月18日 (18 Novem-
ber 2019) <http://www.law-lib.com/law/law_view.asp?id=671676> accessed 27 May 2021.
interests of others’, according to the transcript of a press briefing published
on the website of the Cyberspace Administration of China (CAC). Any use
of these technologies should be clearly marked and clearly visible to internet
users. Failure to comply with these rules can be considered a criminal offence,
according to the CAC website.39 It is significant that it is not the technology
that is prohibited, but the deliberate misleading of the audience with the help
of the technology.

Russia
MUAI is in the spotlight in Russia, as evidenced by the statement of President
Vladimir Putin in September 2017. ‘Artificial intelligence is not only the future
of Russia, it is the future of all mankind. There are huge opportunities and
threats that are difficult to predict today’.40 In 2018, the Russian Ministry of
Defence hosted the first conference entitled ‘Artificial Intelligence: Problems
and Solutions’, at which the Defence Minister Sergei Shoigu set a task for
military and civilian specialists to join forces in developing AI technologies to
counter possible threats in the field of technological and economic security.41
Thus, the main focus of attention is threats of MUAI that are not directly related
to psychological impact but can ‘regenerate’ into second-level PS threats.
On 10 October 2019, Russia adopted the ‘National Strategy for the
Development of Artificial Intelligence for the Period up to 2030’.42 The points
on human rights, which can serve as a starting point for developing mecha-
nisms for preventing and prosecuting MUAI, are contained in two sections of
the document, one of which is ‘Basic principles for the development and use of
artificial intelligence technologies’ (section III). Among these principles, which
must be observed when implementing the Strategy, are:

a) protection of human rights and freedoms: ensuring the protection of
human rights and freedoms guaranteed by Russian and international legis-
lation, including the right to work, and providing citizens with the oppor-
tunity to gain knowledge and acquire skills for successful adaptation to

39 Y.Yang, B. Goh and E. Gibbs, ‘China seeks to root out fake news and deepfakes with new online
content rules’ (Reuters, 29 November 2019) <https://www.reuters.com/article/us-china-technol-
ogy/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSK-
BN1Y30VU> accessed 27 May 2021.
40 RIA ‘Novosti’, ‘Putin: the leader in the field of artificial intelligence will become the ruler of the
world’ (RIA ‘Novosti’, 1 September 2017) <https://ria.ru/20170901/1501566046.html> accessed
27 May 2021.
41 K. Fedorov, ‘Shoigu called on scientists to unite to work on artificial intelligence’ (Zvezda TV, 14
March 2018) <https://tvzvezda.ru/news/forces/content/201803141458-vp29.htm> accessed 27
May 2021.
42 President of the Russian Federation,‘Decree of 10.10.2019 No. 490 on the development of artificial
intelligence in the Russian Federation’ (Russia) (Official Legal Information Portal, 11 October 2019)
<http://publication.pravo.gov.ru/Document/View/0001201910110003> accessed 27 May 2021.
the digital economy; b) security: the inadmissibility of using AI for the
purpose of intentionally causing harm to citizens and legal entities, as well
as preventing and minimizing the risks of negative consequences of using
AI technologies.43

One of the main aims of creating a comprehensive system for regulating public
relations arising from the development and implementation of AI technologies
in the Russian strategy is the development of ethical rules for human interac-
tion with AI.44
The federal law entitled ‘On Conducting an Experiment to establish special
regulation in order to create the Necessary Conditions for the Development and
Implementation of AI Technologies in the subject of the Russian Federation –
the City of Federal Significance Moscow’45 caused both positive and negative
reactions in the media. The law contains an indication that experiments must
not violate the rights of citizens:

The result of the establishment of a pilot legal regime may not be the
restriction of constitutional rights and freedoms of citizens, imposition of
additional duties, violation of the unity of economic space in the territory
of the Russian Federation or other depreciation of the safeguards guaran-
teeing protection of the rights of citizens and legal entities.46

The task of establishing an experimental legal regime was previously specified
in the National Strategy for the development of AI. At present, given the
ambiguous attitude to this experiment in Russian society, government agen-
cies will probably have to not only control the experiment, but also explain
the nature of the technologies being introduced, and the benefits and risks of
their use for citizens.
In Russia, as in many countries of the world, there is no special regulation
for the use of large data sets by robots equipped with AI,47 which can affect the

43 ibid.
44 ibid. See also English translation: M. Konaev,A.Vreeman and B. Murphy (trs) ‘Original CSET Trans-
lation of Decree of the President of the Russian Federation on the Development of Artificial Intelli-
gence in the Russian Federation’ (CSET, 28 October 2019) <https://cset.georgetown.edu/research
/decree-of-the-president-of-the-russian-federation-on-the-development-of-artificial-intelligence
-in-the-russian-federation/> accessed 27 May 2021.
45 President of the Russian Federation, ‘Federal Law No. 123-FZ of 24.04.2020 on conducting an
experiment to establish special regulation in order to create the necessary conditions for the devel-
opment and implementation of artificial intelligence technologies in the subject of the Russian
Federation – the Federal City of Moscow and amending articles 6 and 10 of the Federal law on
personal data’ (Russia) (Official Legal Information Portal, 24 April 2020) <http://publication.pravo.gov
.ru/Document/View/0001202004240030?index=0> accessed 27 May 2021.
46 ibid., ch 1, art. 5.
47 Sberbank, Analiticheskij obzor mirovogo rynka robototehniki (Analytical review of the global robotics market)
(Sberbank 2019), 246.
implementation of an experimental legal regime. Given that Russian legislation
in the field of AI is currently at the very beginning of its development, a lot
of work remains to be done. How will it respect the principle of transparency,
make the work of AI understandable, and specify in policies and federal law
non-discriminatory access for citizens to the results of AI? These are fundamen-
tal points from the point of view of the rights of citizens and of the attitude
of citizens themselves to the implementation of innovation. Today they are
extremely important for the development of society and technology.

A systemic approach to creating conditions
for successful legal counter-measures
Undoubtedly, a systemic interdisciplinary approach and collaboration between
researchers and practitioners are extremely important in the development of
legal norms and strategic policy plans in the field of AI. The European expe-
rience of promoting research and sharing expertise, described in some detail
here, may be useful in this regard. To monitor the implementation of AI, the
European Commission has established AI Watch, an information service for
monitoring the development, implementation, and impact of AI in Europe.
AI Watch is part of the European Commission’s Joint Research Centre (JRC),
in collaboration with the Directorate-General for Communications Networks,
Content and Technology (DG CONNECT). The European Commission,
in particular, cooperates with the OECD in the analysis of the national AI
strategies of EU Member States. The OECD and the European Commission
share content for publication on their platforms, such as the OECD AI Policy
Observatory and AI Watch.48 An important task of AI Watch was the forma-
tion of a research community in the field of AI in the EU.
The experience of the international cooperation established by the BRICS
countries (Brazil, Russia, India, China, and South Africa) is a useful exam-
ple. BRICS created a platform for the exchange of ideas and research results,
CyberBRICS. The developers of the CyberBRICS project set out three goals:
to compare existing regulations; to identify best practice; and to develop policy
proposals in the fields of cybersecurity regulation (including the regulation of
personal data), internet access policies, and strategies for the digitalisation of
public authorities in the BRICS countries.49 Achieving these goals will help
develop legal and policy mechanisms for regulating ICT, particularly AI.
The BRICS ministers responsible for science, technology, and innova-
tion have been cooperating since 2014, adopting a number of documents to
develop a legal framework, including the ‘Memorandum of Understanding on

48 V.Van Roy, AI Watch – National Strategies on Artificial Intelligence:A European Perspective in 2019, EUR
30102 EN (Publications Office of the European Union 2020), 3.
49 CyberBRICS, ‘About Us’ (CyberBRICS, 2021) <https://cyberbrics.info/about-us/> accessed 27
May 2021.
Cooperation in Science, Technology and Innovation’. Since then, cooperation
in the field of ICT has been developing rapidly, as evidenced by the promotion
of the BRICS Digital Partnership in 2016, the adoption of the Declaration of
the BRICS Presidential Summit for a Collaboration for Inclusive Growth and
Shared Prosperity in the 4th Industrial Revolution, and the development of
the Enabling Framework for the Innovation BRICS Network.50 We should
consider new initiatives such as the BRICS Partnership on New Industrial
Revolution (PartNIR), the Innovation BRICS Network (iBRICS Network),
and the BRICS Institute of Future Networks. The establishment of the first
BRICS Technology Transfer Centre (in Kunming) and the first BRICS
Institute of Future Networks (in Shenzhen) in China demonstrates not only
the country’s leadership in AI, but also its interest in developing technologi-
cal cooperation in BRICS.51 Liu Duo, the President of China’s Academy of
Information and Communications Technologies, has said that the Institute in
Shenzhen will focus on policy research on 5G, the industrial internet, AI, vehi-
cle internet, and other technologies. Additional efforts will be made to encour-
age the exchange of ideas between the five countries and to organise more
training events and exchange programmes.52 The iBRICS platform was estab-
lished by the 2019 summit in Brasilia53 and provides contacts between science
parks and technology clusters of the BRICS countries, specialised associations,
and supporting structures to encourage start-ups, including in the field of AI.
Unfortunately, the limited scope of this chapter does not allow us to
describe more practical experiences in the nascent field of MUAI regulation.
However, it is already obvious that the open access to strategic planning docu-
ments allows citizens (including researchers) to create a complete picture of
the development of AI, as well as the risks and threats associated with it.
Currently, there is an active discussion about the fundamental principles that
should govern the use of AI in the collection, processing, and use of data.
The strategic documents of the European Union declare the need to ensure
personal control by citizens over their personal data. In the United States,
for example, there was previously a proposal to revise this principle in favour
of the effective management of personal data for the benefit of citizens.54

50 L. Belli, ‘CyberBRICS: A Multidimensional Approach to Cybersecurity for the BRICS’ In L. Belli


(ed) (forthcoming), CyberBRICS: Mapping Cybersecurity Frameworks in the BRICS (FGV Direito
Rio 2020) 17, 20. See: BRICS – Brasil 2019, ‘Enabling Framework for the Innovation BRICS
Network (‘iBRICS Network’)’ (BRICS – Brasil 2019, 2019) <http://brics2019.itamaraty.gov.br/
images/documentos/Enabling_Framework_iBRICS_Network__Final.pdf> accessed 27 May 2021.
51 Belli (n 50) 21–2.
52 Ma Si, ‘BRICS Cooperation Continues With New Institutional Branch’ (China Daily, 6 August
2019) <http://global.chinadaily.com.cn/a/201908/06/WS5d49395ea310cf3e3556430a.html>
accessed 27 May 2021.
53 BRICS – Brasil 2019 (n 50).
54 M. MacCarthy, ‘It’s Time for a Uniform National Privacy Law’ (CIO, 23 August 2018) <https://
www.cio.com/article/3300106/it-s-time-for-a-uniform-national-privacy-law.html> accessed 27
May 2021.
However, in either case, the use of AI cannot be protected from threats of an
anthropogenic nature, when political or business actors acting in their own
narrow self-interest are able to use even voluntarily transmitted information to
the detriment of society. Thus, more than ever, the coordinated work of all
progressive social forces is needed to raise citizens’ awareness not only about
the degree of AI development, but also about the economic, social, and politi-
cal trends of the contemporary world. The higher the level of this awareness,
the better the consciousness of citizens will be protected from the threats of
MUAI in the field of PS.

Conclusions
The materials studied, including those whose analysis is not included in this
chapter, show that, at both the national and the international levels, there are
no comprehensive mechanisms for the legal regulation of the use of AI, includ-
ing regulations to prevent acts of MUAI or to prosecute individuals and groups
who commit such acts. The national legislation of some countries introduces
regulations for the use of certain technologies, such as the creation of deep-
fakes, but, in general, the legal aspects of regulating the use of AI in the PS
system are still under discussion.
The ability of AI to make autonomous decisions raises the question of AI
subjectivity for lawmakers (to determine who is responsible for the offence:
the AI itself, its developer, or users who have trained the AI using distorted
data), and this question can only be resolved with the close cooperation of
AI specialists, experts in PS, psychologists, and lawyers at the national level.
The first attempts are in fact being made to regulate the use of AI. In these
conditions, it is extremely difficult for international institutions to develop
such measures until the countries with the highest indicators of AI develop-
ment create their own norms and put forward proposals on international legal
issues based on them. It is significant, for example, that at the UN level at the
time this chapter is being written, there are no resolutions with instructions
for regulating deepfakes. It is all the more important to start a comprehensive
study of MUAI today before MUAI comprehensively damages and disorgan-
ises society. The COVID-19 pandemic has become a powerful catalyst for the
digitalisation of a wide variety of services, which inevitably leads to the growth
of MUAI (this is already proven, for example, by the increase in the number
of phishing attacks). In these circumstances, the legislative actors are forced
to work in the context of a double crisis – a pandemic and rising geopolitical
tension.
In the new environment, comprehensive research on MUAI in the context
of human rights is extremely important for the PS system, with a combina-
tion of the results of work in two areas: (1) the ‘technical’ direction – scenario
analysis of situations in which MUAI occurs, based on the technical capabilities
of AI itself; (2) the ‘social’ direction – analysis of social developments that takes
into account possible threats to civil rights and freedoms and, in this context,
the specific risks of using AI.
The synthesis of the results of work in these areas will allow us to formulate
both favourable and unfavourable scenarios for the implementation of AI in
the context of PS.
When developing a human-oriented approach to the problem of AI, it
is important to formulate the goals of technological development correctly.
It is most effective to set socially-oriented goals and objectives using specific
planned indicators (such as the UN Sustainable Development Goals55). It is
advisable to refine general policy formulations in accordance with social goals.
A socially-oriented approach avoids the objectification of citizens and preserves
their subjectivity when regulating the use of AI in the context of PS.

55 United Nations, ‘Sustainable Development Goals’ (UN, 2020). <www.un.org/sustainabledevelop-


ment/> accessed 27 May 2021.
4 Leveraging artificial intelligence
in citizenship by investment
programmes
Jeiel D. Joseph

Introduction
Citizenship by investment (CBI) is a process whereby a country grants an
applicant citizenship in exchange for an investment in that country’s economy.1
CBI is different from other residency schemes because it grants an applicant
not only residency and the right to work, but also a passport with unlimited
stay, together with the political rights and traditional obligations of the respective
national domain.2
Required investment types and amounts vary between CBI host countries,
although most nations offering CBI, such as Antigua and Barbuda or Malta,
give applicants a choice between making a contribution to a government-run
national development fund, a significant real estate purchase, or a corporate
investment.3
CBI programmes are unique in that they do not fit into traditional models of
naturalisation and citizenship. CBI is an expedited form of naturalisation
through some means of investment, and many industry protagonists regard it
as a distinctive route to second citizenship, as opposed to acquisition by
birthright, whether by descent (jus sanguinis) or by birth in the territory
(jus soli).4
Prospective applicants may seek second citizenship for many reasons, such as
visa-free access to other countries, education and healthcare for the applicant’s
children, or economic benefits, including unrestricted freedom of movement
of capital.5 There may also be push factors driving applicants towards CBI.
For instance, when countries begin

1 J. Veteto, ‘The Alienability of Allegiance: An International Survey of Economic Citizenship Laws’ [2014] 48 The International Lawyer 1, 79-103; CBI Index, ‘What is Citizenship by Investment?’ available at: <https://cbiindex.com/> accessed 27 May 2021.
2 Freisleben (n 1) 12-14.
3 ibid.
4 G. Clayton, G. Firth, Immigration & Asylum Law (8th ed., OUP 2018), 72–75.
5 M. Micheletti, ‘Sustainable Citizenship and The New Politics of Consumption’ [2012] 644 Annals of the
American Academy of Political & Social Science, 88–120.

DOI: 10.4324/9781003246503-5
to face emergencies, economic crisis, and/or political turmoil, often wealthy
people are the first to migrate abroad in search of a safer environment.6
The term ‘high-net-worth individual’ is used by some segments of the finan-
cial services industry to describe a person whose investible wealth (assets such
as stocks and bonds) significantly exceeds a given amount.7 For them, second
citizenship is a compelling investment option. Such programmes have existed
for decades; for instance, the United States, United Kingdom, and Canada
have had ‘Immigrant Investor Programmes’ dating back to the mid-1980s or
mid-1990s.8 Small states have offered a more direct route to citizenship with-
out, or on the basis of very limited, residency requirements. These currently
include nations such as Cyprus, Dominica, and St Kitts and Nevis; in the past
they included Ireland and several Pacific islands. In this regard, CBI
programmes can be very different in scope and character, as every nation has
shaped its programme in consonance with its specific needs and priorities.9 All
of these programmes purport to stimulate growth and employment through
attracting more foreign capital and investment by way of offering expedited
citizenship to high net-worth individuals.10
Nevertheless, the facilitation of CBI programmes potentially creates serious
corporate and immigration compliance risks for governments, including the
potential for: (a) money laundering, and (b) border security risks.
In consideration of the above, corporate and immigration due diligence
are areas currently being transformed by AI, but AI has yet to be introduced
into CBI programmes uniformly. Uniform corporate and immigration
due diligence, or the lack thereof, is the key risk area in CBI programmes,
particularly regarding the facilitation of money laundering and border security risks.
Before considering uniform due diligence bolstered by AI, it is
important to illustrate the justifications for and importance of due diligence in
CBI programmes.

Due diligence
In the context of CBI programmes, due diligence can be described as an
investigation, audit, or review of a CBI applicant to confirm the legitimacy of

6 R. Sharma,‘The Millionaires Are Fleeing. Maybe You Should,Too’ (NY Times 2018) <https://www
.nytimes.com/2018/06/02/opinion/sunday/millionaires-fleeing-migration.html> accessed 27 May
2021.
7 A. Hayes, ‘High Net Worth Individual’ (Investopedia, 21 March 2021) <https://www.investopedia
.com/terms/h/hnwi.asp> accessed 27 May 2021.
8 The number of US EB-5 investor visas increased five-fold from 2010, but this still represents only
2% of annual immigration to the U.S.
9 X. Xu,A. El-Ashram, J. Gold,‘Too Much of a Good Thing? Prudent Management of Inflows Under
Economic Citizenship Programs’ [2015] International Monetary Fund,Working Paper WP/15/93,
13–20.
10 Ibid.
the application and assess the potential risks under consideration. At its most
basic level, due diligence is a series of investigative and auditing processes that
identify, corroborate, and verify information about an individual or a business
entity so that CBI host states can be protected against money laundering and
border security risks.11 CBI due diligence should specialise in thorough back-
ground checks on criminal records, corruption, sanctions, litigation, frozen
assets, concealment, tax fraud, financial crime and embezzlement, via police
databases, etc. Notwithstanding these risks, the justification for bolstering
corporate and immigration due diligence with AI tools rests mainly on the
number of applications, which indicates the rapid growth of the industry, and
on the significant revenue that CBI generates for governments. The risk
potential is amplified by the growing number of applicants and by the
possibility that one or more malfeasant applicants could circumvent current
due diligence methods and ruin a programme’s reputation.
Statistically, in the Caribbean territories, applications had been on the rise
in most cases until 2020. This is exemplified in the 2021 Budget Statement for
Antigua and Barbuda, which was presented by Prime Minister Honourable
Gaston Browne. During his speech he outlined, among other things, the
importance of revenues from their citizenship programme. In 2020, a total of
366 applications were received, representing a 22% decrease from the previous
year.12 Nonetheless, the CBI generated $115.7 million in 2020 alone. From its
inception in 2012, Antigua and Barbuda have naturalised over 4000 citizens
through their programme.13 Just 188km south, in Dominica, its citizenship
by investment unit (CIU) reported 2,100 approved applications to its CBI between
July 2018 and June 2019,14 the highest approval figure officially recorded in a
single year among Caribbean CBI programmes. It is worth noting that St Kitts
and Nevis, whose government declines to disclose precise annual data because it
is considered a trade secret, may have rivalled Dominica in 2019.15 Regardless,
Dominica issued 5,815 passports to investors and their family members between
2017 and 2020, bringing the country $1.2 billion. Finally, in St Lucia, according to news

11 Oxford Analytica, ‘Due diligence in Investment Migration: Best Approach and Minimum Standard
Recommendations’, (Report 1, Investment Migration Council, January 2020) <https://investment-
migration.org/wp-content/uploads/DD-in-IM-Best-Approach-and-Minimum-Standard-Recom-
mendations-January-2020.pdf> accessed 27 May 2021.
12 Hon. Gaston Brown., ‘Antigua and Barbuda Budget Statement 2021’ (January 2021) available at
<https://ab.gov.ag/pdf/budget/Budget_Speech_2021.pdf > accessed 27 May 2021.
13 Ch. Nasheim, ‘Antigua CIP Data: Applications Dip, 97% Pick Contribution Option’ (Investment
Migration Insider, 3 January 2020) <https://www.imidaily.com/caribbean/antigua-cip-h1-2019-data
-applications-dip-97-pick-contribution-option/> accessed 27 May 2021.
14 Ch. Nasheim, ‘Dominica CIP approved 2,100 Applications in Last 12 Months, Likely a World
Record’ (Investment Migration Insider, 21 September 2019) <https://www.imidaily.com/caribbean/
dominica-cip-approved-2100-applications-in-last-12-months-likely-a-world-record/> accessed 27
May 2021.
15 ibid.
reports, over 1,000 individuals have been granted St Lucian citizenship as of
2020, its fifth year of operation, and this has raised $1.1 billion.16
In the EU, Bulgaria, Cyprus, and Malta are the only Member States operat-
ing CBI schemes.17 Malta understandably places a yearly cap on the number of
citizenship by investment applicants who can be naturalised. Its CBI naturalised
a total of 3,673 applicants between its inception in 2013 and 2018, and Malta
generated over €900 million from the programme’s commencement, representing
7.2% of its 2017 Gross Domestic Product.18 For comparison, the Cyprus Ministry
of Finance has published audited results of its CBI since inception: between June
2013 and December 2019, 2,855 investors received Cyprus passports for
investments, and over seven years the programme generated €9.7 billion.19
Globally, CBI has grown into an estimated $3 billion industry, with,
reportedly, 9,000 CBI applicants approved worldwide in 2018/2019.20 It is
estimated that in 2020 there were about 13,000–14,000 applicants in Turkey
alone, despite the COVID-19 pandemic.21 This figure is expected to grow
because programmes have become much more affordable and because CBI
host states compete to attract wealthy individuals.22 The drop in price also
presents a problem, as it could give easier access to individuals with malfeasant
intentions. In view of this growth and the amplified risks it brings, it is
suggested that current due diligence practices within the CBI industry should
be supported with machine learning tools to increase their effectiveness and
efficiency.
In support of improving due diligence in CBI, the Investment Migration
Council,23 in coordination with BDO USA (assurance, tax, and advisory ser-
vices), Exiger (global regulatory and financial crime, risk, and compliance

16 J. St. Amiee.,‘188 Granted citizenship through St. Lucia CIP 2019–2020: Chinese lead the way’ (St
Lucia:The Star, 24 December 2020) <https://stluciastar.com/188-granted-citizenship-through-saint
-lucias-cip-in-2019-2020-chinese-lead-the-way-again/> accessed 27 May 2021.
17 European Commission ‘Investor Citizenship and Residence Schemes in the European Union‘ Report from
the Commission to the European Parliament, The Council, the European Economic and Social
Committee and the Committee of the Regions, Brussels, 23 January 2019, COM(2019) 12 final, 3.
18 Office of the Regulator (Individual Investor Programme), ‘Fifth Annual Report on the Individual
Investor Programme of the Government of Malta’ (ORiip Annual Report, November 2018) availa-
ble at <https://orgces.gov.mt/en/Documents/Reports/Annual%20Report%202018.pdf> accessed
27 May 2021.
19 European Commission Report (n 18), 19–20.
20 Oxford Analytica (n 12).
21 La Vida – Golden Visas, ‘Turkey’s Record Breaking Citizenship by Investment Programme’, 4 May
2021 <https://www.goldenvisas.com/turkeys-record-breaking-citizenship-program> accessed 27
May 2021.
22 Dominica and St Kitts and Nevis now offer CBI for as low as $100,000.
23 The Investment Migration Council, as illustrated on their website, is the worldwide association for
investor immigration and CBI, bringing together the leading stakeholders in the field.
company), and Refinitiv (financial markets data and infrastructure institution),
formed a due diligence working group to examine the state of due diligence in
the entire investment migration sector and explore the potential for minimal
standards, greater transparency, and information sharing across the industry.
This working group produced two reports: the first provides a critical
overview of due diligence processes and assesses whether current due diligence
practices need improvement,24 while the second sets out recommendations for
basic minimum standards for providers of due diligence within the industry.25
These recommendations make it apparent that the introduction of specific AI
tools is an increasingly realistic route to establishing minimum standards across
the industry.
Currently, there is little to no due diligence coordination, and no uniform
standards, between CBI-active nations, and AI could potentially assist with this.
At present, licensed agents are required to perform the initial administration of
due diligence checks and present their applications to government
CIUs. As such, agents have the first opportunity to identify and reject appli-
cants who do not meet the due diligence requirements, and whether or not an
agent makes an adequate initial decision depends partly on the quality of due
diligence they conduct.26
CIUs should, amongst other things, conduct verification of the validity of
the individual’s police and court records and check national databases and other
government records. Machine learning techniques and algorithms used in soft-
ware such as ‘Veriff’27 can help in this regard. Moreover, CIUs should liaise
with international law enforcement agencies to obtain details on outstanding
warrants or suspicion of international criminal activity. CIUs should focus on
the integrity and security not just of individual programmes but of the
industry as a whole. It is suggested that CIUs should incorporate the
most innovative and technically advanced tools for conducting multi-tiered
and multilaterally coordinated due diligence.
Due diligence tasks can create significant administrative strains on licensed
agents and CIUs, which are required to be not only thorough but also proac-
tive, given that there is currently no collaboration in CBI concerning corporate
and immigration due diligence.
Implementation of AI is likely to offer great opportunities in this area.
Machine learning tools would assist with the manual workload of CIUs and can

24 Oxford Analytica,‘Due Diligence in Migration, Current applications and Trends’, Report 1, January
2020, <https://www.refinitiv.com/content/dam/marketing/en_us/documents/partners/due-dili-
gence-in-investment-migration-current-applications-and-trends.pdf> accessed on 27 May 2021.
25 Oxford Analytica, ‘Due diligence in Investment Migration: Best Approach and Minimum Standard
Recommendations’, Investment Migration Council Report 2, January 2020 <https://investment-
migration.org/wp-content/uploads/DD-in-IM-Best-Approach-and-Minimum-Standard-Recom-
mendations-January-2020.pdf> accessed 27 May 2021.
26 Oxford Analytica (n 12).
27 Veriff is an automated global identification service which simplifies compliance.
help agents with conducting preliminary due diligence. Moreover, automatic
facial recognition techniques in the context of immigration biometrics and
border security could make currently manual due diligence processes far more
efficient. The screening of applicants’ biometrics by CIUs is essential for
maintaining security and for safeguarding the credibility of individual CBI
programmes.28

Corporate compliance in CBI


The foreign direct investment generated by these CBI programmes may be
substantial; however, it may also have significant macroeconomic implications
across various sectors of the CBI nation’s economy.29 For example, the
inflows in St Kitts and Nevis, and to a lesser extent in Dominica, have grown
to a significant share of Gross Domestic Product, affecting aggregate demand
and raising questions about risks to macroeconomic stability and fiscal sustaina-
bility.30 Depending on the programme, there are three main types of inflows:

i) application fees paid to the government;
ii) non-refundable contributions to governments or quasi-government funds
(e.g. National Development Funds); and
iii) investments in the private or public sector, which can often be sold or
redeemed after a specific period of time.31

Beyond the potential macroeconomic risks, CBI programmes, if exploited, can
be used to hide or facilitate financial, economic, and/or organised crimes,
including bribery, corruption, and/or money laundering. The last problem is
significant. Money laundering can be defined as ‘a process criminals use to hide,
control, invest, and benefit from the proceeds of their criminal activities. It
occurs where criminals attempt to create a legitimate background for their
money’.32 These two key aspects (hiding the proceeds of crime and creating a
legitimate background for them) have been recognised by the Financial Action
Task Force and the majority of OECD Member States as key characteristics of
money laundering, and are thereby at the heart of national anti-money
laundering standards.33 Machine learning algorithms have been shown to be
particularly useful in monitoring transactions and detecting suspicious activity.

28 Oxford Analytica – Applications and Trends (n 25).


29 D. Ding, I. Otker,‘Strengthen the Caribbean Region Integration’ (IMF, 4 February 2020) <https://
www.imf.org/en/News/Articles/2020/02/04/NA020420-Strengthening-Caribbean-Regional
-Integration> accessed 27 May 2021.
30 Xu, El-Ashram, Gold (n 10).
31 ibid.
32 The World Bank, Combating Money Laundering and the Financing of Terrorism: A comprehensive Training
Guide, ’ (The World Bank,Washington D.C. 2009) 5.
33 Financial Action Task Force, ‘FATF 40 Recommendations’ (October 2003) <https://www.fatf-gafi
.org/media/fatf/documents/FATF%20Standards%20-%2040%20Recommendations%20rc.pdf>
accessed 27 May 2021.
Until very recently, corporate institutions have relied on traditional, rules-
based anti-money laundering transaction monitoring and name screening sys-
tems, which generate high numbers of false positives due to rules thresholds.34
Singapore, as a top global financial centre, has a ‘front row seat’ to these money
laundering threats. In fact, the country has taken the lead in addressing the
evolving ‘threatscape’ through innovative initiatives, solutions, and forums, as
seen in the continued run of the Singapore Fintech Festival by the Monetary
Authority of Singapore.35 Innovation in CBI compliance is needed both to
reduce false positives in applications and to bring about greater effectiveness
in the manner in which money laundering risks are monitored and addressed
by CIUs.
In CBI, machine learning techniques such as anomaly detection can be
used to identify previously undetected transactional patterns, data anomalies,
and relationships among suspicious individuals and entities.36 AI-based pro-
grammes such as Luminance37 can be taught to recognise suspicious behaviour
and investment risk, and rate them accordingly. A common challenge in CBI
programmes is transaction monitoring and the handling of suspicious activity
alerts, and it is here that machine learning can be used to classify due diligence
alerts as being of high, medium, or low risk.38
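To make this concrete, the following minimal sketch (in Python, using the open-source scikit-learn library) shows how such triage might look in principle. The transaction features, labels, and thresholds are hypothetical and chosen purely for illustration; they do not describe any CIU’s or vendor’s actual system.

# Illustrative sketch: triaging CBI due diligence alerts with machine learning.
# Feature values and labels are hypothetical, chosen only for demonstration.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

# Each row: [transaction amount (USD), transfers in last 30 days, countries involved]
historic_transactions = np.array([
    [120_000, 3, 1], [150_000, 2, 1], [135_000, 4, 2],
    [140_000, 3, 1], [900_000, 25, 6], [130_000, 2, 1],
])

# 1) Unsupervised anomaly detection: flag transaction patterns that deviate
#    from what the programme normally sees (no labels required).
detector = IsolationForest(contamination=0.2, random_state=0).fit(historic_transactions)
anomaly_scores = detector.decision_function(historic_transactions)  # lower = more unusual

# 2) Supervised triage: learn from alerts that compliance officers have already
#    reviewed and labelled, then classify new alerts as high / medium / low risk.
officer_labels = np.array(["low", "low", "medium", "low", "high", "low"])
triage_model = RandomForestClassifier(n_estimators=100, random_state=0)
triage_model.fit(historic_transactions, officer_labels)

new_alert = np.array([[450_000, 12, 4]])
print("anomaly score:", detector.decision_function(new_alert)[0])
print("suggested risk tier:", triage_model.predict(new_alert)[0])

In practice, the suggested tier would serve only to prioritise human review; the final determination would remain with compliance officers.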
Corporate investments and the facilitation of international transactions are
major business activities that have significant impacts on national economies.39
AI applications used for due diligence can also help CIUs make informed
decisions about CBI applicants and can be trained to identify the behavioural
characteristics or indicators that highlight when activity is truly suspicious.40
Machine learning can be applied to name screening, where agents and CIUs
are required to screen applicants against global lists of known criminals and
blacklisted or sanctioned organisations. At present, the rules-based approach is

34 Deloitte,‘The case for artificial intelligence in combating money laundering and terrorist financing:
A deep dive into the application of machine learning technology’ (Deloitte 2018) <https://www2
.deloitte.com/content/dam/Deloitte/sg/Documents/finance/sea-fas-deloitte-uob-whitepaper
-digital.pdf > accessed 27 May 2021.
35 ibid.
36 ibid.
37 Luminance is the leading artificial intelligence platform for the legal profession, with over 250 cus-
tomers in more than 50 countries. Luminance’s machine learning technology reads and forms an
understanding of documents, helping lawyers to perform the most thorough and rapid document
reviews across practice areas including due diligence, contract negotiation, regulatory compliance
reviews, property portfolio analysis and eDiscovery.
38 B. El Nakib., ‘Understanding Artificial Intelligence to fight money laundering’ (Compliance Alert, 19
December 2017) <https://calert.info/details.php?id=1614> accessed 27 May 2021.
39 Institute of Chartered Accountants England and Wales, ‘AI in Corporate Advisory – Investment,
M&A and transaction services’ (ICAEW) <https://www.icaew.com/technical/corporate-finance/
ai-in-corporate-advisory> accessed 27 May 2021.
40 Deloitte (n 35).
both onerous and manual, resulting in increased workload for compliance, as
well as potential gaps in surveillance and monitoring.41
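By way of illustration, the short sketch below uses approximate string matching, here via Python’s standard difflib module, to compare an applicant’s name against a watch list; the list entries and the similarity threshold are invented for demonstration. Unlike a rigid exact-match rule, such matching tolerates spelling and transliteration variations.

# Illustrative sketch: fuzzy name screening against a watch list.
# The list entries and the similarity threshold are hypothetical examples.
from difflib import SequenceMatcher

WATCH_LIST = ["Ivan Petrovich Sidorov", "Jane A. Doe", "Acme Offshore Holdings Ltd"]

def normalise(name: str) -> str:
    """Lower-case and collapse whitespace so trivial differences do not matter."""
    return " ".join(name.lower().split())

def screen(applicant_name: str, threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return watch-list entries whose similarity to the applicant exceeds the threshold."""
    hits = []
    for entry in WATCH_LIST:
        score = SequenceMatcher(None, normalise(applicant_name), normalise(entry)).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# 'Jane Doe' is not an exact match for 'Jane A. Doe', but it is close enough to flag.
print(screen("jane doe"))    # flags 'Jane A. Doe' for human review
print(screen("John Smith"))  # no hits, so the application proceeds to ordinary processing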
Some other areas that are gaining traction include fraud detection, auto-
mated reporting, and enhanced surveillance, including voice, video, text, and
pattern-based transaction monitoring. Machine learning technology in fraud
detection currently allows banks to accurately predict if an account is at risk of
being compromised, and it is proposed that similar techniques be adapted in
CBI programmes to accurately predict if CBI applicants have financial risks.42
Companies such as ‘DataVisor’ provide specialist software that can track fraud-
ulent transactions. DataVisor combines applied machine learning capabilities
with powerful investigative workflows and an intelligence network of more
than 4 billion user accounts to provide real-time fraud signals, insights, and
protection to preserve vital trust and security. In 2020 the company claimed
that the software detected up to 30% more fraud with as much as 90% more
accuracy.43
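A minimal sketch of this kind of risk scoring is given below; it uses logistic regression on invented features (adverse media hits, unverified source-of-funds documents, shell-company links) and does not purport to reflect DataVisor’s or any other vendor’s actual models.

# Illustrative sketch: estimating the probability that an applicant's financial
# profile is high-risk, using logistic regression on hypothetical features and data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per past applicant: [adverse media hits, unverified source-of-funds
# documents, shell-company links]; label: 1 = later found to be fraudulent.
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1], [4, 3, 2], [5, 2, 3], [0, 0, 1]])
y = np.array([0, 0, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

new_applicant = np.array([[3, 2, 1]])
risk = model.predict_proba(new_applicant)[0, 1]  # probability of the 'fraud' class
print(f"estimated fraud risk: {risk:.0%}")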
CBI governments typically spend considerable resources on handling
information exchanges with potential applicants, particularly regarding
programme investment and due diligence administration requirements. The
introduction of an AI chatbot to CBI could potentially minimise expenses and
increase efficiency in this respect, thereby helping to boost programme
earnings. A chatbot is a real-time
virtual online assistant that can simulate human conversational behaviour.44 In
online ‘conversations’ with potential applicants, chatbots can ask and answer
questions, and generally interact ‘intelligently’. Data from these online chats
can be collected and analysed in order to generate better results in the future.45
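At its simplest, a chatbot can be sketched as a keyword lookup over frequently asked questions, as below; production systems typically rely on natural language processing models, and the questions, answers, and programme details shown here are hypothetical placeholders.

# Illustrative sketch: a minimal keyword-based chatbot for routine CBI enquiries.
# The questions, answers, and programme details are hypothetical placeholders.
FAQ = {
    "minimum investment": "The minimum contribution depends on the option chosen; see the official CIU fee schedule.",
    "processing time": "Applications are typically processed within several months after due diligence is complete.",
    "documents": "Applicants generally need a passport copy, police certificates, and proof of source of funds.",
}

def reply(message: str) -> str:
    """Return the first FAQ answer whose keyword appears in the visitor's message."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "Thank you for your question; a licensed agent will follow up with details."

print(reply("What is the minimum investment for the fund option?"))
print(reply("How long is the processing time?"))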
Ultimately, all monitoring systems and analytics, not just machine learning
applications, depend on high-quality data. Given the unprecedented availability
of corporate data globally, there is arguably a renewed and widespread interest
in applying data-driven machine learning methods to problems that
conventional corporate solutions have struggled to address.46

Immigration compliance in CBI


‘Border management’ concerns the administration of immigration policy.
While its precise meaning may vary according to a national CBI programme,
it usually relates to the rules, techniques, and procedures regulating activities

41 ibid.
42 R. Shroff., ‘Artificial Intelligence for Risk Reduction in Banking: Current Uses’ (Towards Data Sci-
ence, 16 January 2020) <https://towardsdatascience.com/artificial-intelligence-for-risk-reduction
-in-banking-current-uses-799445a4a152> accessed 27 May 2021.
43 ibid.
44 ibid.
45 ibid..
46 O. Simeone., ‘A Very Brief Introduction to Machine Learning with Applications to Communication
Systems’ [2018] 4 IEEE Transactions on cognitive Communications and Networking, 650–651.
and traffic across defined border areas or zones. AI is gradually being used to
perform certain tasks in this respect, including identity checks, border security
and control, and analysis of data about visa and asylum applicants.47 There are
known examples of such use of AI in Canada and Germany. In the context
of CBI programmes, certain unique security risks are identified, including
applicants fleeing justice in their home jurisdiction and applicants abusing the
visa-free travel privileges granted through the purchased passport. The loss of
visa-free travel privileges is exemplified by the 2014 revocation of visa-free
access to Canada for all St Kitts and Nevis citizens because of the due diligence
practices, or the lack thereof, in that country’s CBI programme.48 In this regard,
it is argued that one malfeasant applicant could potentially impose significant
risks and liabilities not only on domestic programmes but on CBI in general.
An interesting example of fleeing justice is the 2019 case of the Indian
businessman Mehul Choksi, who was accused in India of fraudulent activity
against the Punjab National Bank.49 Although he had held Antigua and
Barbuda citizenship through its CBI since 2017, and there was an extradition
treaty between the two states, Mr Choksi started legal proceedings in Antigua
and Barbuda against his extradition to India and surrendered his Indian
citizenship. As reported in 2021, the Prime Minister of Antigua and Barbuda,
Gaston Browne, remains adamant that Mr Choksi must leave the country, as
he is ruining the image of the country’s CBI programme. At the time of
writing, the case is still being legally challenged by Mr Choksi in the Antigua
and Barbuda courts.50
In best practice jurisdictions, the due diligence of travellers is informed
by document readers, which provide an efficient and accurate mechanism
to extract data from travel documents, automatically triggering watch list
searches, enabling biographic and biometric identity verification, and record-
ing the entry of the traveller.51 In the ultimate expression of this automation,
travellers interact with self-service ‘eGates’ and kiosks on entry (or departure)
without processing input from border agency staff, thus releasing resources
for redeployment to other security or facilitation objectives.52 Currently, such

47 A. Beduschi, ‘International Migration Management in the Age of Artificial Intelligence’ (Oxford Academic Migration Studies, 10 February 2020) <https://doi.org/10.1093/migration/mnaa003> accessed 27 May 2021.
48 Government of Canada., ‘St. Kitts and Nevis citizens now need a visa to travel to Canada’ (22
November 2014) <https://www.canada.ca/en/immigration-refugees-citizenship/news/notices/
notice-kitts-nevis-citizens-need-visa-travel-canada.html> accessed 27 May 2021.
49 N. Chauhan.,‘ED files new charge sheet to aid Mehul Choksi’s extradition’ (Hindustan Times, 20 July
2020) <https://www.hindustantimes.com/india-news/ed-files-new-charge-sheet-to-aid-mehul
-choksi-s-extradition/story-gFGXZyKikbi0QxKc2PcQqO.html> accessed 27 May 2021.
50 M. Osagboro, ‘Choksi ‘ruining CIP image’ and must leave’ (The Daily Observer, 25 January 2020)
<https://antiguaobserver.com/choksi-ruining-cip-image-and-must-leave-pm/> accessed 27 May
2021.
51 ibid.
52 ibid.
‘e-Government AI projects’ exist in Canada and Germany. They provide a range
of AI functions to support automated assessment of various types of applica-
tions submitted for the purposes of immigration compliance.
In Canada, the government has been in the process of developing a sys-
tem of ‘predictive analytics’ to automate due diligence activities currently con-
ducted by immigration officials, and to support the evaluation of immigrant
and visitor applications. The government has also sought input from the pri-
vate sector related to the 2018 project for an ‘AI Solution’ in immigration
decision-making and assessments.53
Germany has piloted projects using technologies with face and dialect rec-
ognition for decision-making in asylum determination processes.54 The system
applies a number of tools, such as automated name transcriptions, automated
dialect identification to validate the registered country of origin, automated
picture checks to prevent duplicate files for an asylum-seeker, and sourcing
information from smartphones to gather further identity information where
identity documents have been lost.55
Such AI modules can be applied to CBI programmes to facilitate multi-tier
due diligence, with minimum standards recognised across the industry.
It is argued that, by bolstering due diligence with AI, agents and CIUs will not
only become more efficient, but best practice standards for mitigating the
aforementioned risks can also be realised more effectively. AI modules can be
integrated into specific CIU application forms and processing systems, which
include workflow processing (i.e. administrative tasks) and document
management. The proven services illustrated above include rule-based
assessment, scheme-based suggestions, data mining, case-based reasoning, and
machine learning.56 If applied in CBI, they would streamline processes and
workflows while at the same time ensuring that all applications are processed
fairly and accurately, in accordance with all relevant laws and regulations.57

Challenges with the introduction of AI


Following the implementation of AI in CBI programmes, governments
would need to respond to the potential social and legal challenges brought by

53 P. Molnar and L. Gill, Bots at the gate:A Human Rights Analysis of Automated Decision making in Canada
immigration and refugee systems (University of Toronto 2018), 14–23.
54 Beduschi (n 48).
55 Migration Data Portal, ‘AI-enabled identification management of the German Federal Office for
Migration and Refugees’ <https://migrationdataportal.org/data-innovation-59> accessed 27 May
2021.
56 A Chun.,‘An AI Framework for the Automatic Assessment of e-Government Forms’ [2007] AAAI,
1684–1690.
57 ibid.
AI. These may include algorithmic bias, data exploitation, and inequality with
regard to whom citizenship is granted.58
It was reported in 2018 that internet giant Amazon had used AI to build a
resume-screening tool in the hope that it could make the process of evaluating
applications more efficient. The company built an algorithm using resumes it
had collected over a decade. Since those resumes came primarily from men,
Amazon’s system taught itself that male candidates were preferable, and the
company scrapped the tool.59 This is an example of so-called ‘algorithmic
bias’. It poses a serious
challenge to the use and governance of AI.60 This is because machine learning
in CBI programmes will involve ‘training’ the AI, using examples and data of
how humans have previously made administrative decisions. These data exam-
ples or sets may contain social biases and, consequently, this may be reflected
in the AI-based system’s decisions.61
Hypothetically, if the data used to train AI in CBI programmes contained only
applicants from one ethnic background, together with the specific risks
associated with applicants from that background, the AI’s decisions across the
board would reflect the biases, attitudes, and opinions embedded in those data.
The same holds for other decisions, such as corporate investment administration
and immigration assessments. A simple illustration of how such disparities can
be surfaced before training is sketched below.
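The sketch assumes a small set of hypothetical historical decisions labelled by group; a marked gap in approval rates is a warning sign that a model trained on these decisions would reproduce the disparity.

# Illustrative sketch: a simple audit of historical CBI decisions for group-level
# disparities before they are used as machine-learning training data.
# The records, group labels, and the 0.2 threshold are hypothetical.
from collections import defaultdict

historical_decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": True},  {"group": "A", "approved": False},
    {"group": "B", "approved": True},  {"group": "B", "approved": False},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in historical_decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {group: approvals[group] / totals[group] for group in totals}
print("approval rates by group:", rates)  # e.g. {'A': 0.75, 'B': 0.25}

# A large gap between groups signals that a model trained on these decisions
# would likely reproduce the disparity and should be reviewed before deployment.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("warning: training data shows a marked disparity between groups")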
Wong illustrates that it is extremely difficult to create AI training that is
neutral and independent of all human biases.62 One approach for bias neutrality
is to use decision data examples from humans with a wide spread of cultural
and social backgrounds.63 There is an increasing emphasis on human rights in
the conversation about the ethical and social issues raised by AI technologies.64
Gender discrimination is a key concern in relation to CBI.
Gender, in the context of CBI nations, refers to the roles, behaviours,
activities, attributes, and norms that a given society at a given time considers
appropriate for men and women. These attributes, opportunities, and relation-
ships are socially constructed and learned through socialisation processes.65 The
objectives here are to identify gender factors that influence investment and
immigration behaviour and to specify the investment decision-making pro-
cess, particularly with respect to female applicants. Empirical investigations of

58 ibid.
59 J. Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’ (Reuters, 11
October 2018) <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight
-idUSKCN1MK08G> accessed 27 May 2021.
60 P-H Wong, ‘Cultural Differences as Excuses? Human Rights and Cultural Values in Global Ethics
and Governance of AI’ [2020] 33 Philosophy & Technology, 705–715.
61 ibid.
62 ibid.
63 ibid.
64 ibid.
65 A. Mackay, Border Management and Gender (DCAF 2018), 7.
gender differences in risk taking do point in the direction of less risk taking by
women than by men.66
Relating to this, it is suggested that CBI programmes must consider the
investment and immigration compliance risks associated specifically with high-
net-worth females and sovereign border controls. The integration of gender
policy into AI border controls in CBI is essential to comply with international
and regional legal frameworks,67 instruments and norms concerning security,
gender equality, and human rights.68 Consequently, AI systems, not limited to
CBI, should be designed and operated so as to be compatible with the princi-
ples of human dignity, fundamental rights and freedoms, and cultural diversity,
thereby avoiding algorithmic bias. The introduction of AI presents novel social
challenges and risks that will require coordinated responses.69
Given its multidimensional character, AI inherently touches upon a full
spectrum of legal fields: legal philosophy, human rights, contract law, tort
law, labour law, criminal law, tax law, investment law, and procedural law, to
name a few. Moreover, data collection also raises jurisdictional problems. It
would be impossible to address all these aspects in detail in one chapter. Some
of these issues are discussed in other chapters in this volume.

Conclusions
The AI systems currently used in corporate and immigration compliance could
be applied to CBI programmes. Under corporate compliance, AI could
help identify malfeasant applicants and those involved with the proceeds
of crime, including money laundering. Corporate due diligence in CIUs
would thereby become more effective and efficient. For immigration, in turn,
simple biometric administrative tasks and overall border security due diligence
would vastly improve, much in line with the AI modules already being used
in the immigration sectors in Canada and Germany. In this regard, training
data must be free of algorithmic bias and must represent the true dynamics of
each CBI programme’s specific considerations, including those relating
to gender.
It is important to formulate a solution for risk assessment and mitigation
while maintaining a balance between non-discrimination, border security, and
investment opportunities. CIUs should, amongst other things, focus on money

66 C. Eckel, P. Grossman, ‘Forecasting Risk Attitudes: An Experimental Study of Actual and Forecast
Risk Attitudes of Women and Men’ [2002] 23 Virginia Tech 205-207.
67 For instance with the United Nations Convention on the Elimination of Discrimination Against
Women, 18 December 1979, 1249 UNTS 13.
68 Mackay (n 66), 18–20.
69 M. Brundage et al.‘The malicious use of artificial intelligence: Forecasting, prevention, and mitiga-
tion’ (Future of Humanity Institute and the Centre for the Study of Existential Risk 2018) <https://
arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf> accessed 27 May 2021.
laundering and border security to make sure that their programmes provide
sustainable long-term profits by using the most innovative and technically
advanced tools for due diligence. National governments operating CBI must
respect the global legal standards set by institutions such as the European
Commission, as well as the relevant international legal standards, as aforemen-
tioned, designed to prevent money laundering and other risks.
5 Artificial intelligence
application in advance
healthcare decision-making
Potentials, challenges and
regulatory safeguards
Hui Yun Chan

Overview
Healthcare decision-making is one of the areas impacted by AI, where it is
continuously integrated into healthcare services to facilitate better and more
efficient delivery of patient care. AI in end-of-life care garnered attention
when a recent study investigated the application of AI-assisted decision-
making for predicting patients’ chances of survival and recovery.1 The study
highlighted both the potential of AI to enhance doctors’ confidence in clinical
assessments and the likelihood of harm to patients arising from the risks of
using AI.2
guiding treatment decisions and particularly useful for planning future care and
treatment, ranging from long-term illnesses, such as progressive illnesses, to
terminal illnesses, such as cancer. It is often used in end-of-life care planning,
and thus it is valuable to examine AI’s deployment in this area.

AI in healthcare
AI is increasingly applied in healthcare provision across a range of disciplines,
such as cancer, neurology, cardiology, urology, patient systems, diagnostics,
and medical imaging.3 It is often utilised to manage and analyse data, make
decisions, transcribe information, and assist doctor–patient communication.
Examples of specific applications across the healthcare sector include image

1 T. Lysaght, H.Y. Lim,V.Xafis, et al.,‘AI-Assisted Decision-making in Healthcare’ (2019) 11 ABR,299.


2 ibid., 309, 310.
3 E.B. Sloane and R.J. Silva, ‘Artificial intelligence in medical devices and clinical decision support
systems’ (2020) Clinical Engineering Handbook; G. Rong, A. Mendez, E.B. Assi, B. Zhao, M. Sawan,
‘Artificial Intelligence in Healthcare: Review and Prediction Case Studies’ (2020) 6 Engineering, 291
(AI applications in living assistance, biomedical information processing and research, bladder volume
prediction and epileptic seizure prediction); Nuffield Council on Bioethics,‘Bioethics Briefing Note:
Artificial Intelligence in Healthcare and Research’ May 2018.

DOI: 10.4324/9781003246503-6
interpretation in medical imaging,4 early stroke predictions, diagnoses and
prognoses,5 septic shock diagnoses and treatment of chronic obstructive pul-
monary disease,6 end-of-life decision-making,7 personalised healthcare and
augmented diagnoses8 (such as subgroups of diabetes or autism), and in the
realm of primary healthcare with predictive modelling on health data and busi-
ness analytics for primary care providers.9 Private providers have trialled AI in
NHS hospitals for analysing test results, determining the right treatment, and
ensuring the escalation of patient care to the right specialist immediately.10
In addition to being valuable for identifying treatment patterns and diag-
noses through mining troves of digital health records, AI is used to support
healthcare professionals in making better decisions through clinical decision
support systems. These systems analyse relevant patient data to suggest areas of
concern or potential complications and propose care pathways, ranging across
post-surgical patient care practices, medications (range, doses, allergies, and
interactions with other prescriptions), monitoring, and intermittent follow-
ups within a cost-effective and safe environment.11 This application offers the
benefit of ‘reducing medical errors [and] increas[ing] healthcare consistency and
efficiency’,12 thus lessening the burden on clinicians in diagnosing complex
cases. Its application in responses to public health threats is also known, such as
in the Ebola epidemic, for identifying patient reports of illness, matched with
travel history, through swift processing of vast information relating to diseases
and treatments.13
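As a purely illustrative sketch of the rule-based element of such a system, the fragment below flags a recorded allergy or a known medication interaction for clinician review; the patient data and interaction list are hypothetical and are not clinical guidance, and real systems draw on far richer data and machine-learned models.

# Illustrative sketch: a rule-based clinical decision support check of the kind
# described above, flagging potential medication conflicts for clinician review.
# The drug names, allergy data, and interaction pairs are hypothetical examples.
KNOWN_INTERACTIONS = {frozenset({"warfarin", "aspirin"}): "increased bleeding risk"}
PATIENT = {"allergies": {"penicillin"}, "current_medication": {"warfarin"}}

def review_prescription(new_drug: str) -> list[str]:
    """Return advisory flags for the clinician; an empty list means no rule fired."""
    flags = []
    if new_drug in PATIENT["allergies"]:
        flags.append(f"allergy alert: patient is recorded as allergic to {new_drug}")
    for current in PATIENT["current_medication"]:
        issue = KNOWN_INTERACTIONS.get(frozenset({new_drug, current}))
        if issue:
            flags.append(f"interaction alert: {new_drug} with {current} ({issue})")
    return flags

print(review_prescription("aspirin"))     # flags the warfarin interaction
print(review_prescription("penicillin"))  # flags the recorded allergy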
AI is equally utilised in personalised medicine. An example of this is the
partnership between Auckland University of Technology and the Knowledge

4 L.D. Jones, D. Golan, S.A. Hanna, and M. Ramachandran,‘Artificial Intelligence, Machine Learning
and the Evolution Of Healthcare: A Bright Future or Cause for Concern?’ (2018) 7 Bone Joint Res.,
223.
5 F. Jiang,Y. Jiang, H. Zhi, et al.,‘Artificial intelligence in healthcare: past, present and future’ (2017) 2
Stroke Vasc Neurol e000101. doi:10.1136/svn-2017-000101.
6 S. Reddy, J. Fox and M.P. Purohit,‘Artificial Intelligence-Enabled Healthcare Delivery’ (2019) 112(1)
J R Soc Med., 22, 23.
7 Lysaght (n 1).
8 Callaghan Innovation, ‘White Paper: Thinking Ahead: Innovation Through Artificial
Intelligence’<https://www.callaghaninnovation.govt.nz/sites/all/files/ai-whitepaper.pdf> accessed
27 May 2021.
9 H. Liyanage, S.-T. Liaw, J.Jonnagaddala et al.,‘Artificial Intelligence in Primary Health Care: Percep-
tions, Issues, and Challenges Primary Health Care Informatics’ (2019) Working Group Contribution
to the Yearbook of Medical Informatics.
10 Royal Free London NHS Foundation Trust, ‘Our Work with Google Health UK: DeepMind’
<https://www.royalfree.nhs.uk/patients-visitors/how-we-use-patient-information/our-work-with
-deepmind/> accessed 27 May 2021; NHS England, ‘Artificial Intelligence in Health and Care
Award’ <https://www.england.nhs.uk/aac/what-we-do/how-can-the-aac-help-me/ai-award/>
accessed 27 May 2021.
11 Sloane (n 3), 561, 562.
12 Reddy (n 6), 23.
13 Sloane (n 3),562.
Engineering Discovery Research Institute that has produced high success rates
for predicting strokes in patients: ‘95 per cent for one day ahead, and 70 per
cent for 7 and 11 days ahead of a stroke occurring’.14 This is beneficial for
preventive measures to manage long-term disability or pre-empt death, and
for saving costs in the long term. Such remarkable accuracy has also appeared
in other specific areas of healthcare, for example, up to 80% precision in heart
disease detection and 94% accuracy in detecting cancerous growths during
endoscopy in Japan.15
AI has much to offer in healthcare in ordinary times, but what about its utility
in extraordinary times, such as the COVID-19 pandemic that swept the
world in 2020? The pandemic created a profound sense of urgency in the pop-
ulation for taking care of healthcare matters and for decision-making for medi-
cal treatment in the event individuals became infected with COVID-19.16 The
nature of COVID-19, which primarily affects respiratory functions, has ampli-
fied difficult clinical decisions when deciding whether to resuscitate patients,
and when withdrawing or withholding ventilation support for patients who
are admitted into hospitals. AI could play an important role in assisting clini-
cians and healthcare providers make these decisions, for example, in rapidly
identifying the trajectory of health and illness based on patient profiles using
the latest available scientific evidence of how COVID-19 develops and affects
individuals. The vast network and deep learning features of AI are useful for
simultaneously processing multiple scientific databases relating to COVID-19
and treatment options conducted around the world. In addition to navigat-
ing these evidence-based databases, AI could be deployed to access broader
information, such as the patient’s travel and medical history and likelihood of
survival, to present a clearer profile of clinical outcomes and potential health
advice. The promise of AI appears to be boundless. The next section explores
its potential for supporting clinical decision-making, focusing on advanced
healthcare decision-making.

Potentials of AI application in advance healthcare decision-making
Advance decision-making is part of healthcare provision and mostly utilised
in planning for end-of-life care in the event of incapacity. An advance deci-
sion provides the opportunity for people to consent to or refuse treatment at a
future time when they become unable to do so. The decision made in advance
is either expressed verbally or in writing. There are two primary aspects to this
form of decision-making. The first aspect relates to validity at the time the

14 Callaghan Innovation (n 8).


15 S. Dalton-Brown, ‘The Ethics of Medical AI and the Physician-Patient Relationship’ (2020) 29
CQHE, 115, 116.
16 H.Y. Chan, ‘The Underappreciated Role of Advance Directives: How the Pandemic Revitalises
Advance Care Planning Actions’ (2020) 27(5) Eur J Health Law, 451.
decision is made and the second part concerns its application when it is sought
to be implemented. In order for doctors to implement the patients’ expressed
wishes, the decision must be both valid and applicable. An advance decision is
valid if it is made when the person has the mental capacity to do so, if the
person made the decision voluntarily, and if the person fully understood the
nature of the decision. When the decision comes to be implemented, it must
be determined whether the circumstances envisaged still exist, whether the
decision falls within the scope of the proposed treatment, and whether the
person has evidenced any change of mind. The difficulties healthcare providers
encounter in both aspects include the inability to determine whether the
decision was made validly or whether it still applies to the current
circumstances. These difficulties have not only generated conflict between
families and doctors, but have also led to judicial disputes.17
AI has the potential to alleviate some of the difficulties with this form of
healthcare decision-making. When it comes to planning and taking decisions,
AI can perform tasks that assist in the process of doctor–patient consultation.
The first aspect of decision-making requires doctors to be assured of the patient’s
mental capacity, voluntariness, and understanding of the decision. Mental
capacity assessments are usually conducted by a panel of consultants, compris-
ing psychologists or psychiatrists, in order to form a professional view regarding
the mental state of the individual in question. In assessing mental capacity, AI
can perform the role of predicting mental state based on the data of patients’
behaviours and responses to questions, and then compare it to the personal
observations of the consulting doctor to reach a decision. This is particularly
beneficial where the patient has an underlying mental health condition, such
as depression or psychosis, where mental capacity is liable to fluctuate. These
professional assessments, combined with output from AI-integrated processing
can produce a more nuanced profile of the individual’s overall mental state
at any particular time. It is similarly beneficial in situations where there exist
differences in medical opinions about the patient’s mental state, as often dem-
onstrated in court proceedings. Judges often turn to the evidence of experts
to form a conclusion about whether the requirement has been satisfied, draw-
ing from doctors’ notes and reports. The AI algorithm provides an additional
source of information to support or refute the findings. Although this aspect
of AI has not been specifically applied in this area, it is reasonable to postulate
that AI will be utilised in the not-too-distant future for assisting judges and
the courts in making determinations on the mental state of individuals when
they are being questioned. We can already see some AI applications in the
fields of psychiatry and psychology where AI technologies are used in analys-
ing and predicting particular traits, behaviours, responses and mental states of

17 Re AK (Medical Treatment: Consent) [2001] 1 FLR 129; other cases where there were doubts render-
ing the decisions inapplicable include Re T (Adult: Refusal of Medical Treatment) [1992] 4 All ER 649;
NHS Trust v T [2004] EWHC 1279 Fam; W Healthcare NHS Trust v KH [2004] EWCA Civ 1324.
individuals afflicted by a range of mental illnesses, such as depression, suicidal
ideation or schizophrenia, and progressive disorders such as dementia.18
Patients need to be informed about their health prognoses to formulate their
future medical care. This would necessitate not only an understanding of the
person’s mental state but also health conditions, diagnoses, and prognoses if they
are suffering from an illness. It is very likely that AI-assisted application would
be beneficial in making future treatment plans for individuals suffering from
terminal or progressive illnesses, such as dementia, Parkinson’s or Alzheimer’s.
Machine learning and deep learning functionalities enable the wide, in-depth
cross-referencing of health records and databases to form patterns and patient
profiles to provide a most likely outcome to the doctors. Additionally, the
datasets from these recognised illnesses can be explicated through disease data-
bases and matched with the particular patient profile to produce a personalised,
initial assessment regarding the way forward. It helps doctors and patients for-
mulate decisions that are closest to the patient’s preferences based on previous
outcomes or behaviours. A comprehensive outlook for such illnesses would
enable patients to have a better comprehension of their long-term health tra-
jectories and to undertake anticipatory remedial measures or propose treatment
strategies to address their health conditions, including quality of life and care
considerations. This diversity of information enables patients and doctors to
make better-informed decisions when planning their future care and treat-
ment. Doctors can identify the nuances of the medical prognoses produced
by AI surrounding the illness, prognosis, and projected life span, which can
then be mapped against individual health conditions, background, values, and
quality of life considerations. In offering specific and contextualised advice to
patients, the doctors would need to make this information comprehensible to
the patients in order for them to formulate their decisions. This is because the
information may risk becoming overwhelming if not carefully presented.
Once patient understanding and mental capacity are established, the next step
is to record preferences regarding future medical treatment and care. Natural
language processing is helpful for searching and locating medical records and
for voice recognition to record decisions or for note-taking of when consulta-
tions occurred. These records, however, need to be cross-checked with doc-
tors to verify their accuracy when making a final decision.
The second aspect of decision-making is the implementation stage, where
three important considerations arise: whether the person has changed their
mind about the decision, whether their personal circumstances have changed,
and whether the treatment falls within the scope of the decision. While the
scope of the treatment and the personal circumstances may be readily
determined, a change of mind may be more complicated.
This is because when the decision is sought to be applied, the patient is usually

18 M. Brunn, A. Diefenbacher, P. Courtet et al., ‘The Future is Knocking: How Artificial Intelligence
Will Fundamentally Change Psychiatry’ (2020) 44 Acad Psychiatry, 461; C. Su, Z. Xu, J. Pathak, et al.,
‘Deep learning in mental health outcome research: a scoping review’ (2020) 10 Transl Psychiatry,116.
already unconscious, thus unable to re-confirm the decision. This is particu-
larly challenging when families contest the decision or where it is in conflict
with the doctors’ clinical assessment regarding treatment. AI application may
not be sufficiently nuanced to assess whether a particular patient would have
changed their mind as it is very much subject to the individual’s thought pro-
cess at that time. This aspect would be better confirmed by the families of the
patient rather than AI.
The enthusiasm for AI applications and their potential in healthcare deliv-
ery may have eclipsed concerns generated by their uses and limitations. Cautions
have been raised regarding inherent risks arising from bias, inaccuracies,
agency, responsibility, accountability, and intelligibility.19 Some have expressed
restrained inclinations for its adoption, being cautious of the implications aris-
ing from data access and governance,20 while others have questioned the likeli-
hood of AI replacing human functions, which appears to lend weight to the
claim that AI capabilities are unduly inflated.21 The deep learning mechanisms
present socio-technical concerns affecting the decision-making process, lead-
ing to common issues such as biased or erroneous outcomes, questionable cor-
relations and transparency doubts.22 The topical concerns that are relevant to
advance healthcare decision-making where AI integration potentially generates
risks and liability are explored in the next section. The challenges are classified
into two broad aspects, the first in relation to issues of risks with reliability and
trust, and second in relation to autonomy in decision-making and its implica-
tions for doctor–patient relationships.

Challenges: Risks, reliability, and trust


AI integration in healthcare presents risks for clinical decision-making. Examples
include medical errors, prescription biases and non-contextual interpretations
of clinical results, and potential variables resulting from undue dependence
on AI analysis.23 It is rightly suggested that the overall context is important
for assessing outcomes when AI is used.24 Non-contextual interpretations in
clinical decision-making are significant because they risk neglecting impor-
tant and relevant attributes of the individual patient, the relationship with the
wider families, communities, cultural norms, and geographical discrepancies.

19 S. Larsson, ‘The Socio-Legal Relevance of Artificial Intelligence’ (2019) 103 Droit et Societe, 573;
Liyanage (n 9); Dalton-Brown (n 15); C. Macrae, ‘Governing the Safety of Artificial Intelligence in
Healthcare’(2019) 28BMJ Qual. Saf., 495; D Schönberger, ‘Artificial Intelligence in Healthcare: A
Critical Analysis of the Legal and Ethical Implications’ (2019) 27 Int. J. Law Info.Tech., 171.
20 Jones (n 4); Reddy (n 6).
21 Reddy (n 6).
22 Schönberger (n 19), 175, 177.
23 Liyanage (n 9), 45; see also I. Giuffrida and T.Treece, ‘Keeping AI Under Observation: Anticipated
Impacts on Physicians’ Standard of Care’ (2020) 22 Tul. J.Tech.& Intell. Prop., 111.
24 Larsson (n 19), 588, 592.
These confluences of factors often impinge on the decision-making process
and outcomes, where preferences may change over time, as in the case of
advance decision-making, presenting uncertainties with the reliability of the
decision. The issue of reliability is similarly influenced by the characteristics of
AI in terms of bias, when datasets are replicated across similar patient groups,
resulting in an unintended blanket approach to advising patients. Human inter-
vention is thus required for interpreting outcomes generated by AI to ensure
that such biases are removed, and that any obvious divergence from norms
and contexts are remedied to reflect decisions that are better aligned with the
values, norms, and relationships of the patient.
While AI offers predictions of what people who suffer from similar health
conditions would likely choose, the decisions may not necessarily reflect what
a particular patient would want, as AI-assisted decision-making lacks nuance.
AI has the advantage of injecting reason and analytical skill into the decision-
making process but may be less able to predict the subjective needs
and desires of the individual. Due to the nature of the outcomes of advance
decision-making affecting overall health and decisions that might entail risks of
death, its application should include considerations of what is at stake, as the
consequences would differ in different areas of clinical decision-making, with
different ramifications. A distinguishing factor between AI and human intel-
ligence is the amount of discretion involved in healthcare decision-making.
While AI is renowned for its precision, breadth, and depth, it lacks contextual
nuance, value judgement, intuition, and clinical discretion, which are essential
to healthcare decision-making. Further, AI is unable to replicate the experi-
ence accumulated over the years that is comparable to a doctor’s training.
It is unsurprising that its suitability for healthcare decision-making has been
questioned.25 AI’s utility is more recognisable in an environment that operates
within clear and definite answers; consequently, its capabilities may be more
limited to resolving non-dichotomous problems, which are often found in
the real world.26 Healthcare decision-making often entails complex balances
and sensitive, competing social considerations. Macrae accurately highlighted
that AI integration into healthcare created new risks and augmented existing
concerns, leading to magnified biases arising from various assumptions that are
‘insensitive to local care processes’.27
The issue of reliability is closely connected to trust. Trust in an AI-integrated
healthcare system is a major concern confronting healthcare providers.28 It is

25 H. Surden, 'Artificial Intelligence and Law: An Overview' (2019) 35 Ga. St. U. L. Rev., 1305, 1323.
26 ibid., 1331: an example is human involvement in performing legal tasks, with the more repetitive and
mechanical aspects delegated to AI.
27 Macrae (n 19), 495.
28 M. Ryan,‘In AI We Trust: Ethics,Artificial Intelligence, and Reliability’ (2020) Science and Engineering
Ethics<https://doi.org/10.1007/s11948-020-00228-y> accessed 27 May 2021; L Shinners, C Aggar,
SGS Smith, ‘Exploring healthcare professionals’ understanding and experiences of artificial intel-
ligence technology use in the delivery of healthcare:An integrative review’ (2019) J.Health Inform.,1.
particularly significant in healthcare settings underpinning the doctor–patient
relationship in facilitating medical treatment, recovery, healing, and health
promotion, which are integral to health policies and ethics.29 AI is perceived
as untrustworthy due to its lack of emotive capability and inability to be held
responsible for actions that usually require normative accounts of trust.30 Such
concern was reflected in a study that found that

healthcare professionals were less likely to use AI in the delivery of healthcare if they did not trust the technology or understand how it was used to improve patient outcomes or the delivery of care which is specific to the healthcare setting.31

A similar study reported the importance of trust as the basis for adopting AI
in healthcare.32 Avoiding patient harm is one of the key aims of healthcare
treatment; as such, where trust is lacking in AI-integrated healthcare decision-
making, measures are essential to safeguard patients against risks of harm.33 An
example of safeguarding patients against the risk of harm arising from an AI-based system is sepsis treatment, where the authors of the study advocated for a dynamic rather than static standard of assurance that reflects the nature of AI,
thus underpinning each clinical decision with the obligation to avoid harm to
patients, ensuring good patient care, and securing accountability to patients.34
It is clear that reliability and trust are tightly woven issues in AI-integrated
healthcare systems. Both notions contribute to safety concerns when assessing
the outcomes predicted by machine learning and whether AI has served its
purpose in facilitating clinical decision-making.35 Concerns about the quality
and safety of such decision-making, arising from the likelihood of erroneous
or biased outcomes, should be mediated by human confirmation or verifica-
tion to eliminate any insensitivity to the actual preferences and circumstances
of that particular case. This layer of safeguard permits a more intuitive human
approach to the decision-making process. As highlighted above, socio-tech-
nical concerns necessitate consideration of human, social, and organisational
networks where AI is deployed.36 Although one may suggest that healthcare

29 R.C. Feldman, E.Aldana and K. Stein,‘Artificial Intelligence in the Health Care Space: How We Can
Trust What We Cannot Know’ (2019) 30 Stan. L.& Pol’y Rev., 399, 404.
30 Reddy (n 6).
31 Shinners (n 28).
32 W. Fan, J. Liu, S. Zhu et al., ‘Investigating the Impacting Factors for the Healthcare Professionals to
Adopt Artificial Intelligence-Based Medical Diagnosis Support System (AIMDSS)’ (2018) Ann. Oper.
Res. doi:10.1007/s10479-018-2818-y.
33 I. Habli, T. Lawton and Z. Porter, 'Artificial intelligence in health care: accountability and safety' (2020) 98 Bull. World Health Organ., 251.
34 ibid., 253.
35 Macrae (n 19); R. Challen, J. Denny, M. Pitt, et al., 'Artificial intelligence, bias and clinical safety' (2019) 28 BMJ Qual. Saf., 231.
36 Macrae (n 19), 497.
decision-making is comparable to AI in terms of its intrinsically unstable nature
due to changing preferences, medical advancements, and inherent risks, the
risks in an AI-integrated healthcare system are augmented where some aspects
are beyond the control of clinicians. Consequently, liabilities arising from its
integration into healthcare applications warrant the development of safeguards
and preventive measures to avoid patient harm.37 Robust, continuous testing
and the open publication of safety reports may help to engender trust in its use,
as well as public consultations to identify the acceptability of such risks and
ways to mitigate them in order to shape regulatory approaches.
Intelligibility is another important feature influencing the reliability and
safety of and trust in AI. Intelligibility is often associated with the ‘black box’
nature of AI in relation to how it reached its decision and outcome. This
apprehension has appeared in AI-automated judicial decision-making, with
fears of potentially discriminatory sentencing decisions that depart from notions
of justice and fairness.38 In dispelling these concerns, human intervention is
essential to rectify any injustice and remove mechanistic, standardised decision-
making that amplifies systemic risks in the administration of justice.39 A similar
approach is essential in addressing such concerns in healthcare. In a typical
healthcare decision-making encounter, patients and doctors engage in conver-
sations to understand the nature of the illness and clarify concerns regarding
treatment options. This opportunity may be less likely to occur if the decision
is AI-generated as there would be difficulty in understanding the algorithm
used for the decision-making. This is especially crucial as healthcare decisions
impact people’s lives, with potential risks manifesting in the future. Pathways
to engender trust in healthcare in alleviating the ‘black box’ problem in AI
include ensuring information integrity, healthcare provider competencies, and
patient interest to foster assurance in AI-assisted decision-making.40 Another
option for unravelling the intelligibility of AI is via the right to an explanation
of the decision-making process in the General Data Protection Regulation
(GDPR).41 Although this right is predominantly focused on automated deci-
sions, it can be applied to explain the algorithms of AI-assisted healthcare deci-
sion-making, such as risks, healthcare data, and information affecting patients.42
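By way of illustration only, an 'explanation' of this kind might amount to showing how each input contributed to an AI-generated recommendation. The sketch below uses a toy linear score written in Python; the features, weights, and wording are assumptions made purely for illustration and do not represent any clinically validated tool or the specific form of explanation the GDPR requires.

FEATURE_WEIGHTS = {
    "age_over_75": 0.8,
    "prior_hospital_admissions": 0.5,
    "abnormal_blood_pressure": 0.6,
    "lives_alone": 0.2,
}

def score_and_explain(patient: dict) -> tuple:
    """Return a toy risk score plus a plain-language breakdown of each factor's contribution."""
    contributions = {f: w * patient.get(f, 0.0) for f, w in FEATURE_WEIGHTS.items()}
    total = sum(contributions.values())
    reasons = [f"{f.replace('_', ' ')} contributed {v:.2f}"
               for f, v in sorted(contributions.items(), key=lambda kv: -kv[1]) if v > 0]
    return total, reasons

risk, reasons = score_and_explain({"age_over_75": 1, "abnormal_blood_pressure": 1})
print(f"overall score: {risk:.2f}")
for line in reasons:
    print(" -", line)

Even such a rudimentary breakdown gives the doctor and patient something concrete to interrogate in conversation, which is the purpose the right to an explanation is meant to serve.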

37 ibid.
38 M.Guihot, A.F. Matthew and N.P. Suzor,‘Nudging Robots: Innovative Solutions to Regulate Arti-
ficial Intelligence’ (2017) 20 Vand. J. Ent.& Tech. L.,385, 410.
39 ibid., 416.
40 Feldman (n 29), 413.
41 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free move-
ment of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) OJ L
119, 04.05.2016; cor. OJ L 127, 23.5.2018, hereinafter General Data Protection Regulation, Recital 71, Arts 9(1), 4(15), and 22.
42 See for example T.Hoeren and M.Niehoff, ‘Artificial Intelligence in Medical Diagnoses and the
Right to Explanation’ (2018) 4 Eur. Data Prot. L. Rev., 308, 317, 318.
The reliability, trust and intelligibility of AI have raised several important
concerns in the decision-making process. This aspect is only part of the picture.
The second major concern stems from the ethical dilemma of autonomy in
decision-making and its implications for doctor–patient relationships.

Challenges: Autonomy in decision-making and


impact on the doctor–patient relationship
Ethical concerns such as individual autonomy and best interest are often
embedded in healthcare decision-making. Autonomy in healthcare deci-
sion-making connotes that people are entitled to make decisions that are in
accord with their worldview, even if it is not necessarily in their best inter-
ests.43 AI-integrated healthcare decision-making invites a consideration of how
far the decision represents an accurate reflection of the person’s autonomous
exercise of choice. This would depend upon the extent to which AI is used
in the decision-making process. If the AI application is primarily assistive in
nature, then the autonomy of the individual and the healthcare providers is less likely to be encroached upon. Where AI is applied
solely or extensively for predicting the likely choices that an individual would
make, then the consideration becomes more complex. While AI is valuable for
generating a profile representation of possible patient decisions based on speci-
fied criteria, deviations would normally occur at the individual level. Hence,
a patient’s preference would differ from the generalised sample, resulting in
a different outcome in a particular situation. Patients who are able to express
their choices would be able to articulate their preferences, but those who are
suffering from dementia or in a locked-in situation may not be able to exercise
these choices. Adopting a purely rational analytics approach in making deci-
sions for these patients risks neglecting other important considerations, such as
any prior expressed wishes, the person’s outlook, and individuality in respond-
ing to this type of scenario. In some instances, AI-applied interventions could
negatively affect individual autonomy, where patient choices are curtailed
due to risk projections or the outcome is estimated not to be in the patient’s
best interests.44
The increasing integration of AI in healthcare raises the possibility of its functionality replacing human control and clinical expertise, affecting established doctor–patient relationships.45 Such alarm stems from the seemingly infallible character of AI compared to humans, who are susceptible to fatigue,

43 T.Beauchamp and J. Childress, Principles of Biomedical Ethics (5th edn, OUP, 2001), 57.
44 Nuffield Council of Bioethics (n 3).
45 J. Powell,‘Trust Me, I’m a Chatbot: How Artificial Intelligence in Health Care Fails the Turing Test’
(2019) 21(10) J. Med Internet Res., e16222 doi: 10.2196/16222;T.Zapusek, ‘Artificial Intelligence in
Medicine and Confidentiality of Data’ (2017) 11 Asia Pacific J. Health L.& Ethics, 105.
particularly after long hours of patient consultations or surgical procedures.46
Dalton-Brown, in exploring this issue, dismissed the notion of AI entirely
replacing doctors, on the basis that such a move would dehumanise healthcare,
as healing and caring remain the central focus of healthcare practice.47 Doctors
are more perceptive in their analysis of therapeutic options in the doctor–
patient relationship compared to AI, which contributes towards cultivating
patient trust and enhancing the quality of the decision-making process through
expressions of care and empathy.48 Common humanity is implicit in the doc-
tor–patient relationship, which supports patient autonomy. The relationship
embodies assurance and trust, which is beyond diagnoses and prognoses, ren-
dering doubts about whether AI could emulate such empathy. It would not be
hard to draw comparisons between the formulaic and technical limitations of
AI and the human touch elicited by human language, expressions and gestures.
In formulating healthcare decisions to be implemented in the future, being conscious of patient values and able to weigh conflicting priorities when reaching treatment decisions necessitates medical acuity rather than intelligence alone. Ethical and moral judgements are prevalent in the decision-making
process, nurtured by mutual confidence, implied social cues, and compassion.
These elements are hardly said to be possessed by AI.49 Additionally, while AI
applications could be trained to enhance accuracy or sensitivity, it is uncertain
if they could be trained to mirror human articulation. Thus far, AI application
in healthcare is facilitative in nature, helping to alleviate the pressures of administrative tasks and supporting more advanced diagnostic processes. The human interface remains the central focus of the doctor–patient relationship.
The above analyses have revealed the specific areas of concern implicating
advance healthcare decision-making, highlighting the importance of context
and human intervention in the decision-making process. Multiple consid-
erations affect healthcare outcomes, moving beyond the diagnostics, such as
personal values and familial relationships. The clinical discretion, acumen, and experience of healthcare professionals in weighing competing considerations when advising patients are important elements that are not captured sufficiently
by AI-integrated applications. Consequently, AI-integrated interventions are
unlikely to usurp the doctor–patient relationship, but they are valuable for
supporting the decision-making process, and in approximating human deci-
sion-making. The next section explores regulatory options for safeguarding
healthcare decision-making where AI is integrated into the system.

46 Dalton-Brown (n 15), 118.


47 ibid.
48 ibid.
49 Nuffield Council of Bioethics (n 3).
Regulating and safeguarding the decision-making
process in AI-integrated healthcare systems
The challenges identified above raise questions about the appropriate ways
to safeguard the process of making these decisions and ameliorate safety con-
cerns in potential regulatory frameworks. This specifically requires lawmakers to engage with AI developments in healthcare that affect the private
domains of individual life. It is clear that there is a delicate balance to achieve
when pursuing an innovative drive to capitalise on the advantageous aspects of
AI for doctors and patients, and managing the risks arising from AI interven-
tions. Lawmakers are aware that different regulatory approaches will result in
different outcomes; hence, it becomes pertinent to examine existing accom-
modations in law. One instance would be to draw from the prevailing legal
framework in the tort system for specific issues arising from privacy and con-
fidentiality concerns, while another method would be to create new legisla-
tion to cater for the particular risks in the absence of adequate governance
measures. The former may mean a swifter response from existing laws, drawn
from EU legislation concerning rights, while the latter enables a more specific
response to the identified risks, but may suffer from various uncertainties due
to the developing nature of AI. It may also be the case that the law might be
perceived as playing ‘catch-up’ with AI progress. Each approach has its own
merits and drawbacks.
Guihot and colleagues offered a helpful analysis of the available regulatory
options for responding to the problems that arise from AI applications across
various fields.50 They suggested that governments are capable of influencing
industry stakeholders by setting expectations of the directions they wished to
pursue, consequently shaping regulatory goals and forms while encouraging
continuous engagement with stakeholders for identifying potentials and risks.51
The authors similarly favoured a multi-level risk-based approach towards regu-
lation.52 This approach has the advantage of enabling AI growth, while allowing
governments to maintain oversight on its progress.53 As a result, any regulations
developed would be more flexible for mediating the risks presented by AI.54
Most countries have adopted a cautious ‘wait-and-see’ approach towards
emerging AI issues, while the European Union has demonstrated a more proactive and developed initiative in AI regulation. Relevant examples drawn from the European Union level, the UK, and Sweden are useful for highlighting the range of approaches for responding to the application of AI in the
healthcare sector. The European Commission has developed various initiatives
in consultation with industry experts and among Member States for identifying

50 Guihot (n 38), 438, 440.


51 ibid., 385.
52 ibid., 392.
53 ibid., 397.
54 ibid. 421, 427.
potential regulatory approaches towards AI and robotics, particularly in rela-
tion to automated decision-making and accommodating GDPR compliance.55
The Commission adopted a permissive, risk-based approach when seeking to
harness the potential of AI while managing the risks arising from its application,
underpinned by EU rights and values, and guided by responsible, ethical AI.56
The collective EU approach culminated in a Declaration of Cooperation on
Artificial Intelligence, with the majority of EU Member States becoming signa-
tories to the Declaration in 2018. The EU White Paper identified a risk-based
approach towards regulation, distinguishing between high and low risks and areas of potential risk when mapping specific strategies to deal with these risks without creating undue burden on stakeholders.57 This approach responds to the various challenges of risk and the need to inculcate trust in AI interventions, supporting a human-centred attitude when administering AI-integrated systems. Such an approach is particularly essential in the healthcare sector,
where there are apparent risks involving the care and management of popula-
tion health.
The UK has pursued a similar approach to governing AI for the purpose
of economic and technological transformation.58 The Centre for Data Ethics
and Innovation created by the government serves in an advisory capacity for
government, regulators, and industry towards developing AI frameworks.
Partnerships established between Google DeepMind and NHS hospitals signal
collaborations in the healthcare sector towards delivering AI-integrated patient
care. The AI Committee of the House of Lords rightly indicated the need to
resolve the intelligibility aspect (‘black box’) of AI for engendering trust in
AI interventions, due to their considerable effect on the lives of individuals.59
Sweden, as an EU Member State, is a signatory to the Declaration, and
is guided by a National Strategy for AI in striving to embrace the opportu-
nities presented by AI towards maximising its competitiveness, encouraging

55 Library of Congress, ‘Regulation of Artificial Intelligence in Selected Jurisdictions’ January 2019


accessed 27 May 2021.
56 ibid. See also European Commission, Artificial Intelligence for Europe (Comm) April 2018, and a follow
up in December 2018.
57 European Commission, White Paper on Artificial Intelligence –A European approach to excellence and trust
COM (2020) 65 final.
58 Department for Business, Energy and Industrial Strategy, Industrial Strategy: Building a Britain Fit
for the Future (White Paper, Cm. 9528, 2017). This document detailed the government’s policy on
developing AI, including investments in science, research, and innovation, and robotics as one of its
four grand challenges.
59 M.C. Buiten, ‘Towards Intelligent Regulation of Artificial Intelligence’ (2019) 10 Eur. J. Risk Reg.,
41, 42; see also UK House of Lords Artificial Intelligence Committee, AI in the UK: Ready, Willing and Able? (HL Paper 100 2018) para 105; European Parliament Committee on Legal Affairs, Civil Law Rules on Robotics (2015/2103(INL)) 10; Regulation (EU) 2016/679 on the protection of natural
persons with regard to the processing of personal data and on the free movement of such data, and
repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1, Arts 12,
14 and 22.
public–private collaborations, and pursuing active cooperation with other
Nordic countries.60 Sweden’s approach is aimed at securing innovation and
development while mitigating AI-generated risks, supported by an ethical and
inclusivity framework. AI applications have been deployed across a broad range of sectors in Sweden, such as healthcare, education, and transportation.
Its healthcare sector has witnessed active cross-disciplinary and cross-sectoral
partnerships, including, among others, the Analytic Imaging Diagnostic Arena
programme that promotes research into imaging diagnostics.61 This attitude
demonstrates a risk-based strategy without curtailing AI growth, with potential
benefits for ethical and sustainable development.
These country-specific examples illustrate the balance sought when
addressing the particular risks and biases found in AI-integrated applications.
Consequently, classifying the levels of risk allows actual risks to be identified,
facilitating a better understanding of how decisions are reached, and mov-
ing a step forward in satisfying questions of transparency and accountability.62
Similar notions of governance are echoed in AI analytics in automated image
analysis in healthcare.63 Ho and colleagues highlighted that ‘the nature of gov-
ernance of biomedicine is increasingly risk-based, context-specific, case-sen-
sitive, decentralised, collaborative and in terms of its knowledge constituents,
pluralistic’.64 Identifying the risks within a contextual framework in healthcare
decision-making is the first step towards understanding pivotal issues for deter-
mining the purpose and use of AI in the clinical process. In considering the
type of regulatory approach to be adopted, it is crucial to appreciate the goals
that are sought to be achieved. As demonstrated above, industry engagement is
a favoured approach, as most AI research and development are conducted by
private entities. It thus becomes significant to engage actively with the sector
to maximise the potentials, while safeguarding public wellbeing, safety, and
security. A better comprehension of the risks enables the courts and lawmakers
to identify the best ways to respond to these tangible risks and the extent of
liability to be imposed as part of the regulatory safeguards.
AI is becoming gradually integrated into delivering patient care, where
healthcare decision-making remains one of the key areas in the healthcare
sector. In the context of advance decision-making for future healthcare treat-
ments, adopting a specific, risk-based approach is consistent with the broad

60 Country report – Sweden, National strategies on Artificial Intelligence: A European perspective (2019) <https://ec.europa.eu/knowledge4policy/ai-watch/sweden-ai-strategy-report_en> accessed 27 May 2021; Government Offices of Sweden, Ministry of Enterprise and Innovation, National Approach to Artificial Intelligence (Article no: N2018.36, 2018).
61 Ibid.
62 Buiten (n 59). Similar doubts on the myth of AI were also raised by Surden (n 25) where the author
questioned the actual capabilities of AI.
63 C.W.L. Ho, D. Soon, K. Caals, and J. Kapur, 'Governance of automated image analysis and artificial intelligence analytics in healthcare' (2019) 74 Clin. Radiol., 329.
64 ibid.,330.
regulatory framework discussed above. Such an approach facilitates a more
flexible and accurate control of the risks arising from AI-integrated healthcare
systems. The purpose of AI-supported healthcare decision-making is to assist
doctors and patients with making better decisions through increased accuracy
and the best available evidence. Although it would not be entirely impossible
to have AI robots conducting clinical consultations and recording healthcare
decisions at some future point, human interventions are essential to control and
modulate any risks arising from these modified approaches to patient care. For
example, risks arising from transcribing errors in doctor–patient consultations
via natural language processing can be mitigated with further human verifica-
tion procedures. Projections of illness trajectories and life spans from AI analy-
ses for particular patients would need to be mediated with individual quality
of life considerations in determining future healthcare treatments. This aspect
would require carefully populating information into the AI system to enable a
more focused, quality outcome. Appreciating the limitations of AI allows for
better oversight of its use in healthcare decision-making. Long-term govern-
ance includes regular testing of AI-integrated applications to preserve their reli-
ability, and consequently minimise potential liabilities arising from their use.
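As a minimal sketch of the human-verification step mentioned above, a transcription system could flag low-confidence segments for a clinician to check before the record informs any advance decision. The segment structure, confidence values, and threshold below are assumptions for illustration only, not the interface of any real speech-to-text engine.

from dataclasses import dataclass

@dataclass
class TranscriptSegment:
    text: str
    confidence: float  # 0.0-1.0 confidence reported by the transcription engine

REVIEW_THRESHOLD = 0.85  # illustrative cut-off, to be set and audited locally

def segments_needing_review(segments):
    """Return the segments a clinician should verify before they enter the record."""
    return [s for s in segments if s.confidence < REVIEW_THRESHOLD]

consultation = [
    TranscriptSegment("Patient reports chest pain for two days", 0.96),
    TranscriptSegment("Allergic to penicillin", 0.58),
    TranscriptSegment("Agrees to the proposed care plan", 0.91),
]
for segment in segments_needing_review(consultation):
    print(f"REVIEW ({segment.confidence:.2f}): {segment.text}")

The design point is simply that the system routes its own uncertainty to a human, which is the kind of oversight the preceding discussion calls for.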
To make AI intelligible to patients and doctors, it is imperative that they are able to retrace the decision-making steps, experiment with the variations presented by AI-assisted analyses, and incorporate important contexts into the decision-making process. An option to achieve responsible and
accountable decisions is informed consent. The concept of informed consent
is a hallmark of healthcare decision-making.65 Consent is equally underpinned
by the broader framework of confidentiality and privacy within the GDPR
and human rights protections. Patients should be consulted and their consent
obtained before AI-integrated applications are used in the decision-making
process. They should be informed about the purpose of using AI interventions
in the process, the risks, effects, scope, extent, and limitations so that they are
informed when making the final decision regarding their future healthcare.
Being conscious of this aspect can allay fears regarding patient autonomy and
ensure that the values designed for and in the AI system are truly reflective of
what people want. These choices would be subject to agreed reviews between
the doctor and patient so that they are assured of the authenticity of the AI-supported decisions made. This process permits doctors and patients to confirm the reliability of the decision reached and, if any element of the decision is deemed harmful, to have it removed.
The uncertainties of AI and advance decision-making influence one another.
The uncertainties that are present in advance decision-making are mediated
by the advice and information provided through doctor consultations. The
additional intelligibility concerns raised by AI could also be addressed with an understanding

65 Montgomery v Lanarkshire Health Board (Scotland) [2015] UKSC 11 where it was ruled that patients
are no longer passive recipients of healthcare.
of the AI algorithm. In this sense, doctors will be confronted with the chal-
lenge of learning about how AI affects their provision of advice to patients
by explaining the AI-assisted decision-making process. Risks arising from this
stage are important to address, as they potentially affect the liability of the
expected standard of care. This feature entails doctors initially appreciating the
mechanisms and then translating them to the benefit of the patient so that they
can grasp the information and finally make a choice. In an advance decision-
making context, this means safeguarding the choices in spite of the risks that AI
presents, while taking steps to reduce these risks. Lawmakers would need to take this aspect into account when accommodating the additional risks involved in
creating or modifying regulations.
The evolving nature of AI has presented regulatory difficulties and oppor-
tunities. Any regulatory approach selected by particular countries, whether establishing specific laws for AI or including AI-related issues under existing branches of law, remains a real governance challenge.66 While this challenge
persists, it is equally valuable to consider establishing ethical codes and profes-
sional standards to assist healthcare providers in dealing with immediate issues
that arise in the healthcare decision-making process67 rather than automatically
resorting to drafting legislation.68 An example of a proposal for establishing
ethical codes and an ethics committee is in the area of regulating care robots
(AI robots that provide care for the elderly and assisted living).69 A code of eth-
ics and expectations of professional standards illuminate a governance frame-
work encompassing healthcare providers, manufacturers, and government
agencies towards maintaining trust in the healthcare system and to mitigate
risks to patient autonomy, privacy, and confidentiality, ultimately safeguarding
patients from harm.70 One beneficial outcome from instituting these codes and
standards is the ability to offer patients assurance and promote the trust that
their healthcare decisions are being made in a responsible manner, and that
doctors are guided by established norms when carrying out their professional
obligations. Both values are critical in preserving therapeutic doctor–patient
relationships.
Similarly, continuous training for healthcare providers and educating
patients about the applications that affect their healthcare decisions are vital if
AI-integrated interventions become more prevalent in the healthcare system.
Implementing appropriate policies and ensuring adequate training for the safe
and effective use of AI are ways to minimise risks arising from AI applications in
healthcare.71 One way of reducing medical negligence is to identify appro-

66 Nuffield Council of Bioethics (n 3).


67 See for example suggestions for ethical codes for professionals: Liyanage (n 9), 45.
68 Buiten (n 59), 48.
69 V.K. Blake,‘Regulating Care Robots’ (2020) 92 Temp. L. Rev.,551.
70 ibid., 585.
71 Giuffrida (n 23).
priate standards of care and to take steps to prevent errors from happening.72
Strategies include examining the quality of installed AI applications, regular
monitoring and maintenance of the application and security features, iden-
tifying the specific risks to particular individuals, and always obtaining the
informed consent of patients whenever AI is sought to be applied.73

Conclusions
This chapter has outlined the various potentials of AI-integrated systems in
healthcare decision-making, focusing on advance decision-making. The
healthcare sector has experienced advancements notably in diagnostics and
illness predictions, through continuously sophisticated AI neural network
technology and machine learning. The natural language processing and deep
learning features of AI are valuable for informing healthcare professionals and
patients when making informed decisions. Advance decision-making in par-
ticular would benefit from AI technology in facilitating different types of treat-
ment and future healthcare plans. We have also briefly considered how AI
would be beneficial in times of pandemic for assisting healthcare providers
who are under enormous pressure when delivering healthcare services to the
population, particularly in making difficult clinical decisions in continuing or
withdrawing treatments. The evolving, fluid, and opaque nature of AI renders it far from simple, despite the seemingly boundless possibilities available
to be harnessed for the benefit of society. Concerns are raised regarding the
risks such applications entail, questions of patient autonomy, conflict regard-
ing decision-making, the level of human intervention, and the impact upon
existing doctor–patient relationships. It is, however, recognised that AI cannot
completely replace some of the tasks within the purview of clinicians.
The continuous use and integration of AI in healthcare provision will con-
tinue to generate questions that require unique responses. Lawmakers have
been attempting to respond to the issues that arise, with a menu of possible
regulatory options to govern these challenges and mediate the risks arising from
its application in healthcare. The more favourable view adopted is to regulate
its use based on the level of risk presented by AI, and according to its context.
These considerations reveal important concerns that legislation can help to
address to minimise the impact of AI on everyday life, particularly where it
is increasingly used in the healthcare sector. An appropriate understanding of
AI-related challenges and legal and ethical issues for advance decision-making
can guide approaches to better appreciate its use, assist doctors in implement-
ing people’s healthcare decisions, and for lawmakers to regulate this developing
technology in healthcare provision.

72 ibid., 121.
73 ibid.
6 Artificial intelligence in
the legal profession
Damian M. Bielicki

The current use of AI in legal practice


Law is an integral part of how societies operate. The legal profession is often
perceived as traditional and hierarchical in the sense that certain structures,
etiquettes, and skills have always played an important role. And yet advanced
AI technologies have entered this profession, allowing automated document
analysis and management, predictive analysis of court processes, court applica-
tion processing, and even decision-making. This shows that AI has been imple-
mented in different aspects of legal work, and it is changing how lawyers work.
One of the biggest areas where AI has been implemented is contract analysis
and automation. Beyond this, however, AI is also used in predictive analytics,
administration of justice, legal research, and other areas. It is not possible to dis-
cuss in detail all the different technologies within one chapter and, therefore, the
scope here is limited to some key examples, including legal research, contract
analytics, predictive analytics, and administration of civil and criminal justice.

Legal research
Fundamentally, legal research is about finding what the law is and the relevant
sources of that law.1 Without proper research, lawyers simply would not be
able to adequately advise their clients. Particularly in the common law system,
finding the right source of law is sometimes challenging due to the doctrine
of judicial precedent, according to which many of the primary legal principles
have been made and developed by judges from case-to-case. Consequently,
lawyers must be able to find the right precedent and, unfortunately, the system
of law reporting in many places around the world has not always been effec-
tive.2 Among the best-known examples of AI software are those provided

1 N.Waisberg,A. Hudek, AI for Lawyer: How Artificial Intelligence is Adding Value,Amplifying Expertise, and
Transforming Careers (Wiley 2021), 107.
2 J. Embley, P. Goodchild, C. Shephard, S. Slorach, Legal Systems & Skills (4th ed., OUP 2020) 145ff; S.
Wilson, H. Rutherford,T. Storey, N.Wortley, B. Kotecha, English Legal System (4th ed., OUP 2020),
165ff.

DOI: 10.4324/9781003246503-7
by Westlaw and LexisNexis, but there are many other tools on the market.3
Most of these tools provide intelligent document analysis, where users can
upload a document and the AI will automatically suggest authorities that are
relevant to the document but not cited in it. They will also reveal how courts
historically treated the relevant precedents. Moreover, they deliver data-driven
insight, or litigation analytics, providing information on judges, courts, dam-
ages, attorneys, law firms, and case types in order to reveal certain patterns and
help build a case strategy.4 Therefore, these AI-driven tools contain far more
than just search engines to locate specific laws or cases.
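To make the document-analysis step more concrete, the following is a minimal Python sketch of how a citation-gap check might work; the citation pattern, the candidate-authority list, and the function name are invented for illustration and do not reflect how Westlaw, LexisNexis, or any other vendor's product is actually built.

import re

# Hypothetical shortlist of authorities a retrieval model has judged relevant
# to the uploaded document (a real system would learn this from case law data).
CANDIDATE_AUTHORITIES = [
    "Donoghue v Stevenson [1932] AC 562",
    "Caparo Industries plc v Dickman [1990] 2 AC 605",
    "Montgomery v Lanarkshire Health Board [2015] UKSC 11",
]

# Rough pattern for report-style citations; real citation parsers are far more robust.
CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+[A-Z][A-Za-z]*\s+\d+")

def suggest_missing_authorities(document_text: str) -> list:
    """Return candidate authorities whose citations do not appear in the draft."""
    cited = set(CITATION_PATTERN.findall(document_text))
    return [a for a in CANDIDATE_AUTHORITIES
            if not any(c in a for c in cited)]

draft = "The claimant relies on Donoghue v Stevenson [1932] AC 562 ..."
for suggestion in suggest_missing_authorities(draft):
    print("Consider citing:", suggestion)

In real products the candidate list would come from a trained retrieval model over a full corpus of law reports rather than a hard-coded list, but the basic comparison between what is cited and what the model considers relevant is the same.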

Contract analytics
Contract law is one of the most fundamental spheres of law that applies to
every industry and business relationship. A legally enforceable agreement gives
rise to obligations and certain protections for the parties involved.5 Drafting
and reviewing contracts can be very time consuming, particularly if they are
lengthy, containing dozens or even hundreds of pages. AI-enhanced contract
analysis and review can speed up the process significantly, and it can help
reduce the impact of human error.6
There are dozens of companies offering contract analytics.7 Although they
use a variety of different codes, from the users’ point of view the whole process
is almost the same. First, a contract must be uploaded to the software. Second,
either the user has to indicate what needs to be found in the contract or, in the
case of more advanced platforms, AI automatically generates a list of potential
aspects to be reviewed.8 Some platforms go further by offering contract man-
agement databases, and the possibility to work across different contracts at the
same time. To a certain extent, contract analytics programmes still need human
intervention, and their main role is to work in conjunction with their users, to
make the overall process faster, more accurate, and more cost-effective.
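A highly simplified sketch of that review step is set out below; commercial platforms rely on machine-learning models trained on large contract corpora, whereas this illustration uses hand-written keyword patterns purely to show the shape of the workflow. The checklist entries and sample text are invented.

import re

# Illustrative checklist: clause label -> pattern suggesting the clause is present.
REVIEW_CHECKLIST = {
    "governing law": re.compile(r"governed by the laws of", re.IGNORECASE),
    "limitation of liability": re.compile(r"limitation of liability", re.IGNORECASE),
    "termination": re.compile(r"terminat(e|ion)", re.IGNORECASE),
    "data protection": re.compile(r"personal data|data protection", re.IGNORECASE),
}

def review_contract(contract_text: str) -> dict:
    """Flag which checklist items appear to be addressed in the uploaded contract."""
    return {clause: bool(pattern.search(contract_text))
            for clause, pattern in REVIEW_CHECKLIST.items()}

sample = "This Agreement shall be governed by the laws of England and Wales."
for clause, present in review_contract(sample).items():
    print(clause, "-", "found" if present else "missing: flag for human review")

The output is a prompt for the human reviewer rather than a conclusion, which mirrors the point made above that these tools work in conjunction with their users.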

Predictive analytics
It is not uncommon for litigants to ask their lawyers about the realistic out-
come of their case. The answer often depends on a variety of different factors,

3 See for instance: Casetext, Fastcase, Judicata, Ross Intelligence, vLex Justice.


4 See for instance:Westlaw Edge Features available at <https://legal.thomsonreuters.com/en/products
/westlaw/features> accessed 27 May 2021; LexisNexis Innovative Tools and Features available at
<https://www.lexisnexis.com/en-us/products/Lexis/innovations.page> accessed 27 May 2021.
5 E. Macdonald, R.Atkins, Koffman & Macdonald’s Law of Contract (9th ed., OUP 2018) 1-2;T.T.Arvind,
Contract Law (2nd ed., OUP 2019), 4ff.
6 Waisberg, Hudek (n 1), 131–135.
7 For example: DocuSign, eBravia, Eigen Technologies, Heretik, Kira Systems, LawGeex, LegalSifter,
LexCheck, LexPredict, Luminance, to name a few.
8 Waisberg, Hudek (n1), 136.
including core evidence, the legal method of dispute settlement, the relevant
jurisdiction, the judges, and even the experience of the lawyers involved.
However, nowadays AI-driven platforms can provide intelligence and predic-
tive analysis of court processes and outcomes so that lawyers can take that into
account as well, in addition to their highly educated guesses.
Once a user of the platform inputs all the relevant facts of the case, AI
compares them to every other case that has gone to court. AI then generates
a report with predictive analytics covering a wide range of different aspects,
including: (a) comparison of the case against overall trends and patterns; (b) the
potential success rate if the case goes to court; (c) the likelihood of a particular
judge granting a specific type of motion; (d) typical damage awards for specific types of cases; and (e) lawyers' past performance and litigation experience. This is very valuable information that can help clients and their lawyers make a more informed decision on how to handle the case.9
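The kind of report described above can be illustrated with a toy Python example; the dataset, field names, and summary statistics below are invented for demonstration and bear no relation to any commercial platform's data or methodology.

from dataclasses import dataclass

@dataclass
class PastCase:
    claim_type: str
    judge: str
    claimant_won: bool
    damages_awarded: float

# Hypothetical historical dataset; real platforms mine thousands of reported decisions.
HISTORY = [
    PastCase("breach of contract", "Judge A", True, 30_000),
    PastCase("breach of contract", "Judge A", False, 0),
    PastCase("breach of contract", "Judge B", True, 15_000),
    PastCase("negligence", "Judge A", True, 60_000),
]

def litigation_report(claim_type: str, judge: str) -> dict:
    """Summarise outcomes of comparable past cases for a new matter."""
    comparable = [c for c in HISTORY if c.claim_type == claim_type]
    wins = [c for c in comparable if c.claimant_won]
    before_judge = [c for c in comparable if c.judge == judge]
    return {
        "comparable cases": len(comparable),
        "overall success rate": len(wins) / len(comparable) if comparable else None,
        "success rate before this judge": (sum(c.claimant_won for c in before_judge) / len(before_judge)
                                           if before_judge else None),
        "typical damages when successful": (sum(c.damages_awarded for c in wins) / len(wins)
                                            if wins else None),
    }

print(litigation_report("breach of contract", "Judge A"))

The report remains a statistical summary of past matters; it informs, but does not replace, the lawyer's judgement about the particular case.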

Administration of civil justice


An interesting example of modern administration of justice is presented by the
Civil Resolution Tribunal (CRT) based in British Columbia, Canada. The
CRT is an online tribunal designed to resolve four types of disputes: (1) small
claims disputes of up to CAD $5,000; (2) strata property disputes of any amount;
(3) vehicle accident and injury claims of up to CAD $50,000; and (4) societies and co-operative association disputes of any amount.10 Although it is fully
online, it is not fully AI-driven or a fully automated tribunal. Applications to
the CRT are done through the so-called “Solution Explorer”, which is an
inference technique (a questions-and-answers form) that uses a basic form of
AI called an expert system.11 The CRT presents an important case study for
future, more AI-driven courts and tribunals.
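A rule-based expert system of the kind the Solution Explorer represents can be sketched very simply in Python; the questions, routing, and threshold below are invented for illustration and do not reproduce the CRT's actual decision tree.

def classify_dispute(answers: dict) -> str:
    """Route a dispute to a stream based on guided yes/no answers (a toy expert system)."""
    if answers.get("strata_property"):
        return "Strata property dispute stream"
    if answers.get("vehicle_accident"):
        return "Vehicle accident and injury stream"
    if answers.get("claim_value_cad", 0) <= 5_000:
        return "Small claims stream"
    return "Outside the tribunal's mandate: seek other advice"

# Simulated answers to the guided questions a user would work through online.
user_answers = {"strata_property": False, "vehicle_accident": False, "claim_value_cad": 3_500}
print(classify_dispute(user_answers))

The value of such a system lies less in its sophistication than in its availability: the same fixed questions can be asked of every user, at any hour, before a human adjudicator becomes involved.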
A more advanced example is Courtal – a digital court information system
designed for the Estonian Ministry of Justice.12 It allows for AI-driven elec-
tronic administration of the work of the Estonian courts at all levels.13 It enables automatic registration of cases, hearings, and judgments, automatic allocation
of cases to judges, creation of summons, publication of judgments, and even

9 Examples of platforms offering such services include, but are not limited to: Bloomberg Law, Lex
Machina, Premonition or Solomonic, to name a few.
10 See official website of the CRT: < https://civilresolutionbc.ca/faq/> accessed 27 May 2021.
11 S. Salter, “What is the Solution Explorer?” (BarTalk, April 2018) <https://www.cbabc.org/Bar-
Talk/Articles/2018/April/Features/What-is-the-Solution-Explorer> accessed 27 May 2021;
J. Zeleznikow, “Using Artificial Intelligence to provide Intelligent Dispute Resolution Support”
[2021] Springer: Group Decision and Negotiation (April 2021), 14–15; A.D. Reiling, “Courts and
Artificial Intelligence” [2020] 11(2) International Journal for Court Administration 8, 4
12 See official Courtal website: <https://courtal.com/portfolio/estonia/> accessed 27 May 2021.
13 ibid.
support of bailiff activities.14 Moreover, as of 2021, the Estonian Ministry of
Justice is piloting an “AI judge” project to adjudicate small claims disputes of
less than €7,000. In concept, AI would issue a decision that could be appealed
to a human judge. It is hoped that this project can clear a backlog of cases and
make the whole judicial process more timely and cost-effective.15
One more recent project worth mentioning is the so-called SUPACE –
Supreme Court Portal for Assistance in Courts Efficiency, which is an AI-driven
assistive tool implemented by the Supreme Court of India. It is not designed
to make decisions but to analyse the data received in the filing of cases to sup-
port the work of the judges. If successful at the Supreme Court level, it could
be implemented in the lower courts to help resolve the problem of delays
and case management. The project was officially launched in April 2021, and,
therefore, at the time of writing, there is still little information available about
this system.16 However, it has been revealed that the system includes a chatbot
that can give an overview of the case, information about the relevant law, etc.
The system is subject to training by human users.17

Administration of criminal justice


AI can also be used in the criminal justice system. The best-known exam-
ples of the use of AI algorithms in criminal matters include: (1) Public Safety
Assessment (PSA), (2) Pre-Trial Risk Assessment Instrument (PTRA), (3)
Correctional Offender Management Profiling for Alternative Sanctions
(COMPAS), and (4) Harm Assessment Risk Tool (HART).
The PSA is used following a person’s arrest when a judge must decide
whether the person should be released or should be detained to await trial.
The system predicts failure to appear in court and the probability of commit-
ting another crime.18 In doing so, the PSA generates a score that is calculated
based on several factors, including age, current offence, pending charges, any
prior misdemeanour, felony or violent conviction, prior failure to appear pre-
trial, and prior sentence to incarceration. The PSA does not analyse sensitive

14 ibid. See also official E-Estonia portal: <https://e-estonia.com/solutions/security-and-safety/e-justice/> accessed 27 May 2021.
15 E. Nitler,“Can AI Be a Fair Judge in Court? Estonia Thinks So” (Wired, 25 March 2019) <https://
www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/> accessed 27 May 2021; T.
Vasdani, “From Estonian AI Judges to Robot Mediators in Canada, U.K” (LexisNexis) <https://
www.lexisnexis.ca/en-ca/ihc/2019-06/from-estonian-ai-judges-to-robot-mediators-in-canada-uk
.page> accessed 27 May 2021.
16 S. Mehra, “AI is Set to Reform Justice Delivery in India” (India AI, 7 April 2021) <https://indi-
aai.gov.in/article/ai-is-set-to-reform-justice-delivery-in-india> accessed 27 May 2021; P. Chawla,
“SUPACE: AI tool to address delays in the Indian Judiciary” (Asiacom, 19 April 2021) <https://
asiacom.in/supace-ai-tool-indian-judiciary/> accessed 27 May 2021.
17 ibid.
18 See official website of the Advancing Pretrial & Policy Research (APPR): <https://advancingpre-
trial.org/psa/about/> accessed 27 May 2021.
personal information, such as race, gender, nationality, religion, education,
employment, home address, marital status, etc.19 The system was developed
by the Laura and John Arnold Foundation (LJAF) and, as of 2021, it is used in
19 states in the United States of America.20 Similarly to the PSA, the PTRA
is a risk assessment instrument designed to predict a risk of failure to appear
in court and new criminal arrests. It was developed by the Administrative
Office of the US Courts (Office of Pretrial and Probation Services) and is used
by United States probation and pre-trial services officers.21 The key differ-
ence between the PTRA and the PSA is the scoring algorithm that includes
different measuring factors. In particular, the PTRA takes into account the
defendant’s age, criminal history, instant conviction, educational attainment,
employment status, residential ownership, substance abuse problems, and even
citizenship status.22 It therefore includes some of the sensitive information that
the PSA does not. Both the PSA and the PTRA are seen as merely supportive
tools, in addition to other factors and the evidence available to judges, so that
they can weigh the different factors and make a more informed decision.
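In outline, tools of this kind combine a small number of factors into a score and map it onto a band presented to the judge. The Python sketch below uses the factor names listed for the PSA above, but the weights, scale, and bands are invented for illustration and are not the actual PSA or PTRA scoring rules.

FACTOR_WEIGHTS = {
    "pending_charge_at_arrest": 1,
    "prior_misdemeanour_conviction": 1,
    "prior_felony_conviction": 2,
    "prior_violent_conviction": 2,
    "prior_failure_to_appear": 2,
    "prior_sentence_to_incarceration": 1,
}

def risk_score(factors: dict, age_under_23: bool) -> int:
    """Sum the weights of the factors that apply to the defendant (illustrative only)."""
    score = sum(w for name, w in FACTOR_WEIGHTS.items() if factors.get(name))
    if age_under_23:  # age is the only demographic variable mentioned for the PSA
        score += 1
    return score

def risk_band(score: int) -> str:
    """Map the raw score onto the coarse band shown to the decision-maker."""
    return "low" if score <= 2 else "moderate" if score <= 5 else "high"

defendant = {"prior_misdemeanour_conviction": True, "prior_failure_to_appear": True}
s = risk_score(defendant, age_under_23=False)
print(s, risk_band(s))  # 3 moderate

The criticism discussed below, that such scores generalise from group-level patterns rather than the individual, is visible even in this toy version: two defendants with identical checklist answers receive identical scores regardless of their individual circumstances.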
As with all technologies of this kind, both the PSA and the PTRA have
their supporters and opponents. The former claim that these algorithms can
lead to fewer people being held in prison prior to trial. Consequently, they
are likely to avoid the economic consequences of the arrest, including loss
of employment, custody of their children, etc. Moreover, it often leads to
less use of financial conditions of release (release on bail).23 Notwithstanding
these arguments, the systems seem to have enormous consequences. In par-
ticular, the opponents raise concerns over the way predictions are made. That
is because the forecast is based on data from many individuals classified within
certain risk groups. Consequently, the score is based on certain traits shared by
the individual with a group who either succeeded or failed at a certain rate in
the past; it is therefore not individualised.24
COMPAS was developed by Northpointe Inc. with the aim of evaluating
the risk of recidivism when judges determine sentences in criminal cases. The

19 ibid.
20 These states are Arizona, California, Florida, Illinois, Kentucky, Louisiana, Missouri, Montana, New
Jersey, New Mexico, North Carolina, Ohio, Pennsylvania, South Dakota, Tennessee, Texas, Utah,
Washington, and Wisconsin. See official website of the LJAF: <https://www.arnoldventures.org/
work/release-decision-making/> accessed 27 May 2021.
21 See official website of the United States Courts: <https://www.uscourts.gov/services-forms/proba-
tion-and-pretrial-services/supervision/pretrial-risk-assessment> accessed 27 May 2021.
22 T.H. Cohen, C.T. Lowenkamp,W.E. Hicks,“Revalidating the Federal Pretrial Risk Assessment Instru-
ment (PTRA): A Research Summary” [2018] 82 Federal Probation, 2, 24; S.L. Desmarais, E.M. Low-
der, Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors and Defense Attorneys (Safety+Justice
Challenge, February 2019), 4.
23 B. Buskey,A.Woods,“Making Sense of Pretrial Risk Assessment” (National Association of Criminal
Defense Lawyers, June 2018) <https://www.nacdl.org/Article/June2018-MakingSenseofPretrialRi
skAsses> accessed 27 May 2021.
24 ibid.
algorithm rates the person from one (low risk) to ten (high risk), and the score
is based on the answers to 137 questions, either answered directly by the defendant or pulled from criminal records, if any.25 Although the system is in use in only a few American jurisdictions,26 it was criticised for the lack of transparency in the algorithm.27 This was acknowledged by the Wisconsin Supreme Court in State v Loomis,28 where the Court noted that risk scores failed to
explain how exactly data was used to generate the results.29 Moreover, it was
stated that “this court’s lack of understanding of COMPAS was a significant
problem in the instant case”.30
HART is a British experiment conducted by Durham Constabulary and
Cambridge University. It is aimed at the reduction of reoffending by identify-
ing and helping offenders who are likely to commit an offence in the next two
years.31 The AI learnt from decisions made between 2008 and 2012 and the
algorithm is based on around 30 factors, of which some are not related to the
crime committed.32 A major criticism of this system concerned the fact that it categorised individuals into various groups according to their ethnic
origin, income, education levels, and other factors.33

Regulations
As in other industries, there is an ongoing debate on the impact of AI on certain key concerns, such as data security, fundamental rights, and many more.

25 R. Koulu, L. Kontiainen, How will AI shape the future of Law? (Legal Tech Lab 2019), 79–80.
26 For instance, New York,Wisconsin and California.
27 M. Legg, F. Bell, Artificial Intelligence and the Legal Profession (Hart 2020), 226–228
28 881 N.W.2d 749 (Wis. 2016).
29 ibid. See also: H.-W. Liu, Ch-F. Lin,Y-J. Chen, “Beyond State v Loomis: artificial intelligence, gov-
ernment algorithmization and accountability” [2019] 27 International Journal of Law and Information
Technology, 127.
30 State v Loomis, at 774. See also: A. Deeks, “The Judicial Demand for Explainable Artificial Intel-
ligence” [2019] 119 Columbia Law Review, 7, 1844.
31 University of Cambridge, “Helping Police Make Custody Decisions Using Artificial Intelligence”
(UoC Research, 26 February 2018) <https://www.cam.ac.uk/research/features/helping-police
-make-custody-decisions-using-artificial-intelligence> accessed 27 May 2021; M. Burgess, “UK
police are Using AI to Inform Custodial Decisions – But it Could Be Discriminating Against the
Poor” (Wired, 1 March 2018) <https://www.wired.co.uk/article/police-ai-uk-durham-hart-check-
point-algorithm-edit> accessed 27 May 2021.
32 For instance, postal code, gender etc. See more:Appendix 1 to the European Ethical Charter on the
Use of Artificial Intelligence in Judicial Systems and their Environment, titled “In-depth study on
the use of AI in judicial systems, notably AI applications processing judicial decisions and data”, avail-
able at: <https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c> at
51
33 B. Min, G. Ferris, Regulating Artificial Intelligence for Use in Criminal Justice Systems in the EU:
Policy Paper (Fair Trials Publication), available at: <https://www.fairtrials.org/sites/default/files
/Regulating%20Artificial%20Intelligence%20for%20Use%20in%20Criminal%20Justice%20Sys-
tems%20-%20Fair%20Trials.pdf> at 17-18, accessed 27 May 2021.
There are several instruments providing, either directly or indirectly, some
regulations concerning AI. Most of them are discussed in the other chap-
ters of this volume, including the EU’s General Data Protection Regulation
(GDPR),34 the White Paper on AI,35 the US AI-related bills,36 or Japan’s Act
on Protection of Personal Information.37 Their main focus is on achieving vari-
ous social policies, including data and privacy protection, prevention of bias or
discrimination, accountability, and many more.38 However, all of them address
the AI issues in general, without a particular focus on the legal profession.
One instrument that is dedicated to regulating AI for lawyers is the European
Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and
their Environment,39 adopted in December 2018 by the European Commission
for the Efficiency of Justice (CEPEJ).40 The Charter includes five core princi-
ples, as follows: (1) Principle for respect of fundamental rights; (2) Principle of
non-discrimination; (3) Principle of quality and security; (4) Principle of trans-
parency, impartiality and fairness; and (5) Principle “under user control”. The
first principle requires that any use of AI must be in compliance with the rights
guaranteed by the European Convention on Human Rights (ECHR)41 and the
Convention on the Protection of Personal Data.42 At the heart of this principle
is the right to a fair trial. The second principle forbids the use of data that may
lead to discrimination of any kind against individuals or groups. Moreover,
such data should not lead to deterministic analyses or uses.43 The third principle
requires the designers of machine learning models to include multidisciplinary

34 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of
personal data and on the free movement of such data, and repealing Directive 95/46/EC (General
Data Protection Regulation) [2016] OJ L119. See also: J. Buyers, Artificial Intelligence: The Practical
Legal Issues (Law Brief Publishing 2018), 41–51.
35 White Paper on Artificial Intelligence: a European approach to excellence and trust, COM (2020)
65 Final (Brussels, 19 February 2020).
36 E.g. the Algorithmic Accountability Bill or the Facial Recognition and Biometric Technology Mor-
atorium Bill.
37 Act No. 57 of 2003, available in English at: <https://www.cas.go.jp/jp/seisaku/hourei/data/APPI
.pdf> accessed 27 May 2021.
38 Legg, Bell (n 27), 312.
39 CEPEJ, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and
their Environment, adopted at the 31st plenary meeting of the CEPEJ, Strasbourg, 30 Decem-
ber 2018, available at: <https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018
/16808f699c> accessed 27 May 2021.
40 CEPEJ is the body established by the Council of Europe to improve the quality and efficiency of
European judicial systems. See official website of the organisation: <https://www.coe.int/en/web/
cepej> accessed 27 May 2021.
41 Convention for the Protection of Human Rights and Fundamental Freedoms (European Conven-
tion on Human Rights, as amended), opened for signature in Rome on 4 November 1950, came
into force on 3 September 1953.
42 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data,
ETS No. 108 as amended by the CETS amending protocol No. 223.
43 CEPEJ (n 39), 9.
project teams, which would draw from the expertise of legal practitioners as
well as academics from a wide range of social sciences. Any input to the sys-
tem should come from certified sources as well.44 The fourth principle calls
for a balance between the intellectual property of companies providing the
AI-driven systems and the need for transparency, impartiality and fairness.45
Finally, the fifth principle calls for the autonomy of the users, giving them the
possibility to review the data and to control their choices.46
These five principles correspond with similar principles that flow from
standards of professional conduct and professional codes of ethics. These regu-
lations impact the way that lawyers can, or should, use AI.47 There are certain
benefits as well as areas of concern arising from the increased use of AI systems
in the legal sector, particularly by solicitors, barristers, and judges. These areas
include, but are not limited to, the rule of law, fundamental rights, transpar-
ency, and explainability.

The rule of law


There is a great variation in theories about the rule of law, but its central idea
is about good governance, fair law-making, and applying the law in a just
way.48 Different professional legal standards and codes of ethics mention it as
one of the key principles, requiring solicitors, barristers, judges, and other legal
professionals to uphold the rule of law.49 Some of the technologies, particularly
those used in the administration of justice, may affect elements essential to the functioning of the rule of law. With that in mind, the Organisation for Economic
Co-operation and Development (OECD) stipulated in its principles on AI that:

AI actors should respect the rule of law, human rights and democratic val-
ues, throughout the AI system lifecycle. These include freedom, dignity and
autonomy, privacy and data protection, non-discrimination and equality,
diversity, fairness, social justice, and internationally recognised labour rights.50

44 ibid., 10
45 ibid., 11.
46 ibid., 12.
47 Legg, Bell (n 27), 64–69.
48 A. Dennett, Public Law Directions (OUP 2019)143–145; N. Parpworth, Constitutional and Administra-
tive Law (11th ed. OUP 2020), 36–51.
49 For instance: Solicitors Regulation Authority (SRA) for England and Wales mentions it as the core
principle for solicitors. See: SRA Principles at <https://www.sra.org.uk/solicitors/standards-regula-
tions/principles/> accessed 27 May 2021. Similarly, Rule A1 of the UK Bar Standards Handbook
(BSB) highlights that the society is based on a rule of law; see version 4.6 adopted on 31 December
2020, available at: <https://www.barstandardsboard.org.uk/the-bsb-handbook.html> accessed 27
May 2021. Similar provisions can be found in regulations of barristers and judges in many other
jurisdictions.
50 S. 1.2.(a) of the OECD Council Recommendation on Artificial Intelligence, adopted on 22 May
2019, C/MIN(2019)3/FINAL.
The OECD further recommends that, to this end, AI actors should implement appropriate mechanisms and safeguards, including the capacity for human determination.51 In June 2019, the G20 adopted the so-called human-centred AI Principles, containing the exact same provision drawn from the OECD Principles.52
One of the key aspects of the rule of law is access to justice. AI-driven projects such as the Canadian CRT, the Estonian Courtal, and a variety of cloud-based technologies help with the realisation of this right. The CRT proved
particularly useful during the COVID-19 outbreak when it remained open and
fully operational 24 hours a day, 7 days a week. At the same time, millions of
people worldwide did not have proper access to justice, particularly in places
where regional and national lockdowns were imposed. Access to justice was an
issue in many places even before the pandemic, particularly due to the lack of
legal aid.53 Even though during the pandemic many countries offered online
hearings using audio-visual means of communication, the backlog of cases in
some reached record levels. A good example was the United Kingdom, where,
according to the House of Lords, by December 2020 the backlog had reached
crisis level, with over 8,000 people being held in custody awaiting trial.54 It
therefore seems appropriate to suggest ongoing investment in, and development of, AI-driven technologies to support those working in the legal sector.

Fundamental rights
Predictive analytics tools are becoming increasingly popular as they deliver very useful intelligence, including information about the relevant law and historical records of damages awarded, as well as information about the individuals involved. This leads to the possibility of profiling judges, lawyers, jurisdictions, and others.55 On the one hand, this may undermine the proper functioning of justice; on the other, it makes public activities more transparent by allowing citizens to know and evaluate their judges, lawyers, and so on. This
practice is already known in the United States,56 but has been banned in France

51 S. 1.2.(b), ibid.
52 G20 Ministerial Statement on Trade and Digital Economy,Annex I G20 AI Principles, S. 1.2.
53 Koulu, Kontiainen (n 25), 66–67.
54 House of Lords, Covid-19 and the Courts, 22nd Report of Session 2019-2020, HL Paper 257 of 30
March 2021, 4–5.
55 “Profiling” is defined in art. 4(4) of the GDPR as: “any form of automated processing of personal
data consisting of the use of personal data to evaluate certain personal aspects relating to a natural
person, in particular to analyse or predict aspects concerning that natural person’s performance at
work, economic situation, health, personal preferences, interests, reliability, behaviour, location or
movements”. See also: Buyers (n 34), 45.
56 D.L. Chen, “Judicial analytics and the great transformation of American Law” [2019] 27Artificial
Intelligence and Law, 15–42. See also Appendix 1 to the European Ethical Charter (n 32), 27.
with respect to judges and court clerks.57 These two contradictory examples
show that attitudes to this technology vary considerably.
Another common trait often associated with AI is the potential for bias
against a person or a group of people. Based on the examples of technologies
mentioned earlier, it could, for instance, be a bias against a lawyer or a judge,
but also against a particular group, e.g. based on age, gender, or other char-
acteristics. The bias comes from the data input into the system, from which
the AI is learning.58 Removing bias from machine learning is an area of ongo-
ing research but, in principle, it is possible to remove and modify input traits
that can cause bias.59 The problem of bias is inextricably linked to respect for
fundamental rights. A common concern across different industries is the right to privacy and the prohibition of discrimination. In the context of the legal profession, however, the right to a fair trial requires special attention, particularly if we take into account the "AI judge" project piloted in Estonia or the system implemented in the Supreme Court of India. For instance, a chatbot that gives a summary of a case must not generate discriminatory outcomes that place either party at a disadvantage, whether directly or indirectly. Moreover,
the tools should be used with due respect to judges’ independence in their
decision-making. In the case of AI giving the actual decision in a case, it should
be transparent and explainable, and should be subject to scrutiny by all parties
involved, as well as the general public.60
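The point made above about removing or modifying bias-causing input traits can be illustrated with a minimal sketch (in Python, using purely hypothetical field names and data): a protected attribute is stripped from the model's inputs before training, and outcome rates are then audited across groups. This is a generic illustration of the technique, not a description of any particular vendor's system.

```python
from collections import defaultdict

# Hypothetical case records fed to a predictive tool; "outcome" is the label.
records = [
    {"claim_value": 12000, "prior_rulings": 3, "gender": "F", "outcome": 1},
    {"claim_value": 8000,  "prior_rulings": 1, "gender": "M", "outcome": 0},
    {"claim_value": 15000, "prior_rulings": 4, "gender": "F", "outcome": 1},
    {"claim_value": 7000,  "prior_rulings": 0, "gender": "M", "outcome": 0},
]

PROTECTED = {"gender"}  # traits removed from the model's inputs

def strip_protected(record):
    """Return a copy of a record without the protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

training_inputs = [strip_protected(r) for r in records]

def outcome_rates_by_group(records, group_key="gender"):
    """Audit step: compare favourable-outcome rates across groups."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favourable[r[group_key]] += r["outcome"]
    return {group: favourable[group] / totals[group] for group in totals}

print(training_inputs[0])               # the protected field never reaches the model
print(outcome_rates_by_group(records))  # a large gap between groups flags possible bias
```

In practice, stripping a protected attribute is rarely sufficient on its own, since correlated traits may act as proxies for it; this is one reason why the auditing step, and the ongoing research referred to above, remain important.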

Transparency and explainability


Transparency concerns the accuracy and relevance of information, allowing one
to make an informed decision. Transparency in the context of AI relates mostly
to information about the companies and their AI platforms being offered, but
it may also relate to clients wanting to know how their case will be handled
through the use of these platforms. When an AI performs a task, humans must
be able to trust the outcome and the reason for the outcome.61 Transparency
also implies the possibility of explaining why a particular decision was reached
by AI, as well as the possibility to challenge it through an official inquiry or
a complaint that would allow revision and/or improvement of the system/
services. The latter is often referred to by another term, namely “explainabil-

57 Article 33 of the French Justice Reform Act states that “no personally identifiable data concerning
judges or court clerks may be subject to any reuse with the purpose or result of evaluating, analysing
or predicting their actual or supposed professional practices”. It may lead to five years imprisonment.
See more: J.Tashea,“France bans publishing of judicial analytics and prompts criminal penalty” (ABA
Journal, June 2019) <https://www.abajournal.com/news/article/france-bans-and-creates-criminal
-penalty-for-judicial-analytics> accessed 27 May 2021.
58 LexisNexis PSL TMT Team (ed.), An Introduction to Technology Law (LexisNexis 2018), 555.
59 Waisberg, Hudek (n 1), 88–89.
60 Min, Ferris (n 33), 2.
61 LexisNexis (n 58), 553–555.
ity” or “explainable AI”, which goes a step further than transparency.62 The
matter of general regulations concerning transparency and explainability has
been covered in previous chapters of this volume, particularly in the context
of the GDPR and the EU White Paper. Beyond this, however, there are
some developments being made by the IEEE63 Standards Association that has
initiated a series of standards for AI and autonomous system designers. These
are being developed under the name “IEEE P7000™ series”, and include
13 separate standards for the “future of ethically aligned autonomous and intel-
ligent systems”.64 Particularly relevant to the legal profession are the P7001
Standards on Transparency for Autonomous Systems.65 Their aim is to prepare
standards for developing autonomous technologies that “can assess their own
actions and help users understand why a technology makes certain decisions in
different situations”.66 The Working Group has defined five groups who will
benefit from the standards, among which are lawyers and expert witnesses.67
The focus is on measurable, testable levels of transparency, so that autonomous
systems can be objectively assessed. It is hoped that the project will offer ways
to provide transparency and accountability for AI systems in the legal and other
professions.68 As of May 2021, the standards are still in development.
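Pending those standards, the basic idea can be conveyed with a minimal sketch of an "explainable" decision (in Python, with invented feature names, weights, and threshold): for a simple linear scoring model, each input's contribution to the outcome can be reported alongside the decision itself, giving lawyers, parties, and regulators something concrete to scrutinise or challenge. Real systems used in the legal sector are, of course, far more complex than this.

```python
# Hypothetical linear scoring model: the weights, features and threshold are
# invented for illustration only.
WEIGHTS = {"claim_value": 0.00005, "prior_rulings": 0.3, "documents_missing": -0.8}
THRESHOLD = 1.0

def score_with_explanation(case):
    """Return the decision together with each feature's contribution to it."""
    contributions = {feature: WEIGHTS[feature] * case[feature] for feature in WEIGHTS}
    return sum(contributions.values()) >= THRESHOLD, contributions

decision, explanation = score_with_explanation(
    {"claim_value": 12000, "prior_rulings": 2, "documents_missing": 0}
)
print(decision)     # the outcome for this hypothetical case
print(explanation)  # per-feature contributions a lawyer or party could examine
```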

Confidentiality
Lawyers have a duty to keep the affairs of their clients confidential. Moreover,
special protection is given to all communications between lawyers and clients,
as well as documents prepared for litigation, and they must not be disclosed to
third parties. The latter is commonly known as “legal professional privilege”.69
Lawyers constantly deal with sensitive information, including personally
identifiable information, data relating to litigation, e-mail communication, and
much more. Therefore, they have to be particularly vigilant when embrac-
ing new technologies. Many of the AI-enhanced platforms are cloud-based,

62 A. Deeks (n 30), 1833; M. Brkan, G. Bonnet,“Legal and Technical Feasibility of the GDPR’s Quest
for Explanation of Algorithmic Decisions: of Black Boxes,White Boxes and Fata Morganas” (2020)
11(1) EJRR, 18–50.
63 Institute of Electrical and Electronics Engineers.
64 Drafts of some of the standards are available on the IEEE Ethics in Action website: <https://ethicsinaction.ieee.org/p7000/> accessed 28 May 2021.
65 A draft of these standards can be downloaded from the official website: <https://2020.standict.eu/standards-watch/ieee-p7001-transparency-autonomous-systems> accessed 20 May 2021.
66 IEEE Ethics in Action (n 75).
67 The other groups include users, safety certifiers or agency, accident investigators, and the wider
public.
68 European Parliamentary Research Service, The ethics of artificial intelligence: Issues and initiatives (Euro-
pean Parliament, March 2020), 67–70.
69 J. Herring, Legal Ethics (2nd ed., OUP 2017), 150–153.
meaning they are stored on a network of remote servers hosted on the inter-
net.70 This might have been useful in situations like the COVID-19 outbreak
where accessing on-premises IT infrastructure could be challenging. However,
embracing AI and high connectivity may bring the risk of cyber threats and
data breaches.71 Although some AI platforms pride themselves on offering
bank-grade security and powerful encryption for both their in-cloud and on-
premises services, even that does not guarantee 100% security.72 Therefore, it is
important to investigate whether using cloud technology impacts confidential-
ity and legal professional privilege.
In general, lawyers in most jurisdictions may use AI cloud-based technologies, although the relevant provisions vary in their details. For instance, in the
United States, the matter is dealt with by provisions established by state bar
associations, and most of them require that the cloud system is secure and its
provider is a reputable organisation. However, many of them highlight that
a lawyer’s obligation to protect does not end once a reputable provider is
selected; the security measures include continuous reasonable care, and peri-
odical reviews.73 In Europe, the Council of Bars and Law Societies of Europe

70 More on that see: R. Leenes, S. De Conca,“Artificial Intelligence and Privacy – AI enters the house
through the Cloud” in W. Barfield, U. Pagallo (eds), Research Handbook on the Law of Artificial Intel-
ligence (Edward Elgar 2018), 280–305.
71 M. Lanterman, “AI and its impact on law firm cybersecurity” (Minnesota State Bar Association)
<https://www.mnbar.org/resources/publications/bench-bar/columns/2019/10/02/ai-and-its
-impact-on-law-firm-cybersecurity> accessed 27 May 2021.
72 ibid.
73 See for instance:Alabama State Bar, OGC Formal Opinion 2010-2:“Retention, Storage, Ownership,
Production and Destruction of Client Files”, available at: <https://www.alabar.org/office-of-gen-
eral-counsel/formal-opinions/2010-02/>; Alaska Bar Association, Ethics Opinion 2014-3: “Cloud
Computing & the Practice of Law”, available at: <https://alaskabar.org/wp-content/uploads/2014
-3.pdf>; Connecticut Bar Association – Professional Ethics Committee, Informal Opinion 2013-7
on Cloud Computing, available at: <https://www.ctbar.org/docs/default-source/publications/
ethics-opinions-informal-opinions/2013-opinions/informal-opinion-2013-07>; Illinois State Bar
Association Professional Conduct Advisory Opinion no. 16-06: “Client Files; Confidentiality; Law
Firms”, available at: <https://www.isba.org/sites/default/files/ethicsopinions/16-06.pdf>; Ken-
tucky Bar Association, Formal Ethics Opinion KBA E-437 of 21 March 2014, available at: <https://
cdn.ymaws.com/www.kybar.org/resource/resmgr/Ethics_Opinions_(Part_2)_/kba_e-437.pdf>;
Louisiana State Bar Association, Public Opinion 19-RPCC-021: “Lawyer’s Use of Technology”,
available at: <https://www.lsba.org/documents/Ethics/EthicsOpinionLawyersUseTech02062019
.pdf>; Professional Ethics Commission of the Maine Board of Overseers of the Bar, Opinion #207:
“The Ethics of Cloud Computing and Storage”, available at: <https://www.mebaroverseers.org/
attorney_services/opinion.html?id=478397>; Massachusetts Bar Association, Ethics Opinion 12-03,
available at: <https://www.massbar.org/publications/ethics-opinions/ethics-opinion-article/ethics
-opinions-2012-opinion-12-03>; New York State Bar Association Committee on Professional Eth-
ics, Ethics Opinion 842, available at: <https://nysba.org/ethics-opinion-842/>; Oregon State Bar
Opinion no. 2011-188 (revised 2015): “Information Relating to the Representation of a Client:
Third-Party Electronic Storage of Client Materials”, available at: <http://www.osbar.org/_docs/
ethics/2011-188.pdf>; Pennsylvania Bar Association Committee on Legal Ethics and Professional
Responsibility, Formal Opinion 2011-200, available at: <http://www.slaw.ca/wp-content/uploads
/2011/11/2011-200-Cloud-Computing.pdf>; Virginia State Bar Standing Committee on Legal
(CCBE) created a set of guidelines that require lawyers to: (a) take into account
data protection laws and professional secrecy principles; (b) undertake a pre-
liminary investigation of the services, particularly in terms of experience, repu-
tation, technical evaluation, access offline, security etc.; (c) evaluate in-house
IT infrastructure as opposed to online; (d) consider contractual precautions
and transparency; and (e) carry out an impact assessment.74 These Guidelines
pre-date implementation of the GDPR, and so they now must be read in
conjunction with the GDPR provisions. In particular, special attention should
be paid when using AI platforms that involve data storage and processing on
servers in another country. Personal data can only be transferred to third coun-
tries in compliance with the conditions for cross-border data transfers set out
in Chapter V of the GDPR.75 One more example of similar regulations comes
from England and Wales, where the matters of confidentiality and privilege
are regulated by the Solicitors Regulation Authority (SRA) Standards and
Regulations76 and the Bar Standards Board (BSB) Handbook.77 They require
identification, monitoring, and management of all risks.78 Moreover, any rel-
evant information outsourced to, and held by, third parties must be available
for SRA/BSB or their agent for inspection.79

Conclusions
The impact of technology on the legal profession was highlighted by the
American Bar Association in its comments to the Model Rules of Professional
Conduct as follows: “a lawyer should keep abreast of changes in the law and
its practice, including the benefits and risks associated with relevant technology
(…)”.80 The different examples of AI application in legal services show that AI
can help lawyers amplify their expertise and their work. AI can be particularly
helpful in times like the COVID-19 outbreak, and generally when certain
work can or has to be done remotely. Moreover, the pandemic forced courts

Ethics, Legal Ethics Opinion 8972: “Virtual Law Office and Use of Executive Suites”, available at:
<https://www.vsb.org/docs/1872-final.pdf>.
74 CCBE Guidelines on the Use of Cloud Computing Services by Lawyers, adopted on 07 September
2012, available at: <https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/IT
_LAW/ITL_Position_papers/EN_ITL_20120907_CCBE_guidelines_on_the_use_of_cloud_com-
puting_services_by_lawyers.pdf> accessed 27 May 2021.
75 See art. 44–50 GDPR.
76 SRA Standards and Regulations available at: <https://www.sra.org.uk/solicitors/standards-regula-
tions/principles/> accessed 27 May 2021.
77 BSB (n 49).
78 SRA s. 2.5. of the Code of Conduct for Firms and Rules C8–C9 of the BSB Handbook respectively.
79 See S. 5.2. of the SRA Code of Conduct for Firms and Rule rC86 of the BSB Handbook respec-
tively.
80 ABA, Model Rules of Professional Conduct, Rule 1.1. Competence – Comment, available at:
<https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of
_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/> accessed 27 May 2021.
to embrace information and communications technology more closely. In this
context, the Canadian CRT presents an important case study for future, more
AI-driven courts and tribunals. However, the role of certain tools, particu-
larly the ones that may influence pre-trial decision-making, is heavily debated.
Although an ongoing process of investment and development of AI-driven
technologies should be encouraged, a greater collaboration is needed between
engineers, researchers, scholars, and the legal sector. Hopefully, with greater
collaboration comes greater transparency and explainability. People within the
legal profession need to have a better understanding of the tools available, their
strengths and weaknesses, and in particular their “predictive” features. The
examples presented in this chapter show that these are of particular importance
with respect to the tools used for the administration of justice. Interestingly,
transparency and explainability also raise the question of how “autonomous”
AI systems truly are, and to what extent they should be. It appears that there
needs to be some balancing interplay where AI is required to produce reports
to human operators with an explanation of the AI’s operation and decisions.
The degree of development of AI tools is very varied, and their strengths
and weaknesses have been expressed. Embracing AI and high connectivity may
seem inevitable, but it may as well bring yet another risk, namely data breaches.
Lawyers have always been expected to meet the highest standards, and this
principle extends to the protection of client data. Therefore, with the invest-
ment in AI technologies, appropriate investment in cyber security safeguards
must follow. This includes not only the technical means for protecting the
data storage and connectivity, but should involve implementation of relevant
policies, and taking out cyber insurance. Even though all these measures may
not prevent a cyber breach/attack (arguably nothing can), they are becoming
the industry standards.81 Since lawyers often deal with sensitive personal and
confidential data, they must remain particularly vigilant and play an active role
in the continuous monitoring of the AI platforms at their disposal. This is critical
for the protection of fundamental rights, and to strengthen the core principle
of the rule of law.

81 For example, the SRA requires that solicitors in England and Wales, and registered European lawyers, take out and maintain indemnity insurance, and that the insurance provides adequate and appropriate cover in respect of the services provided. There is also useful cyber insurance guidance created by the UK National Cyber Security Centre (NCSC); see: NCSC, "Cyber Insurance Guidance" (NCSC, 6 August 2020) <https://www.ncsc.gov.uk/guidance/cyber-insurance-guidance> accessed 27 May 2021.
PART II

Vertical AI applications
7 Artificial intelligence
An earthquake in the copyright protection of digital music
Luo Li

A smart path of music: From evolution to revolution


Music plays a very important role in the history of humanity. ‘There are four
evident purposes of music: dance, ritual, entertainment personal, and com-
munal, and above all social cohesion, again on both personal and communal
levels.’1 The invention of the printing press fundamentally changed the distri-
bution of music. Since the first machine-printed music appeared in the 19th
century, music composers started to rely on the reproduction of sheet music to
spread their music in a much faster and more tangible way than before.2 The
birth of the phonograph enabled new forms of recording music and distribu-
tion. With the invention of vinyl records, the performance of music no longer
only relied on musicians; later, cassette tapes became the dominant format;3 it
was at this point in history that copyright piracy entered the stage. After the
success of CDs in the 1990s, music distribution set forth on a digital journey
followed by an improvement in internet connection and digital technology.
While musicians celebrated easy cross-border transmission, they also struggled
with how to control (as well as receive reasonable remuneration for) their
music when online downloading and streaming became more popular. In fact,
there are rising concerns about the legal impact of digitalisation on the music industry and relevant parties,4 since such impact affects human creators as a whole.
If the above technology changes are treated as revolutions in the music
industry, AI will have the same if not more impact. Fears of machines replac-
ing human musicians in music creation have not stopped since Professor David

1 J. Montagu,‘How Music and Instruments Began:A Brief Overview of the Origin and Entire Devel-
opment of Music, from Its Earliest Stages’ (Frontiers in Sociology, 20 June 2017) <https://doi.org/10
.3389/fsoc.2017.00008> accessed 27 May 2021.
2 MN2S,‘The History of Music Distribution’ (MN2S, 4 September 2020) <https://mn2s.com/news/
label-services/the-history-of-music-distribution/> accessed 27 May 2021.
3 ibid.
4 The WIPO Conference on the Global Digital Content Market, organised by the World Intellectual Property Organisation since April 2016, has discussed copyright issues arising between human music creators and their publishers, producers and distribution platforms in the digital age.

DOI: 10.4324/9781003246503-9
Cope at the University of California in Santa Cruz started experimenting with
algorithmic composition in 1981. Professor Eduardo Miranda, one of the lead-
ing researchers in the AI/music field, shows a more positive attitude toward
AI-produced music. He calls AI the ‘means to harness humanity rather than
annihilate it … consider computer-generated music as seeds, or raw materials,
for … compositions … not so interested in understanding creativity with AI
rather … interested in AI to harness … creativity’.5 However, this does not
diminish the debate on humans vs machines in music creation but makes it
more pertinent, and AI-related experiments in making music have been boom-
ing since 2015. For example, Melodrive allows game development companies
to create custom soundtracks via its AI system, and this significantly reduces
music costs by up to 90%; Jukedeck was a UK-based startup using AI to automatically produce music.6 Users only needed to choose a mood, style, tempo,
and length then the AI system would produce a soundtrack matching the ele-
ments set by its users (users could get five songs a month for free).7 Many IT
giants have also experimented with AI music projects: Google-owned startup
DeepMind has a project called WaveNet whose goal is to create ‘a deep gen-
erative model of raw audio waveforms … able to generate speech which mim-
ics any human voice’;8 Twitter’s LnH project encourages the user to tweet the
LnH bot a song title and the bot then tweets back a short instrumental track
composed by the software; Google’s project Magenta in 2016 aimed to cre-
ate compelling music by using a machine learning system; in China, IT giant
Baidu developed an AI composer using image-recognition software to turn an
image into a song;9 哼趣10 is a piece of Chinese software that can produce a
piece of music based on the user humming a short melody, and users can also
edit the produced music by selecting different musical instruments, styles, and
lengths.
All of the above send a clear message: AI systems are becoming more popu-
lar for music creation and unavoidably competing with human musicians to
some extent. It is predicted that in the future AI will play significant roles along
three lines in the music industry according to the development of AI music

5 L.Trandafir, ‘On Creativity, Music and Artificial Intelligence: Meet Eduardo R Miranda’ (Landr, 18
August 2016) <https://blog.landr.com/meet-eduardo-miranda/> accessed 27 May 2021.
6 S. Dredge, Music’s Smart Future – How will AI Impact the Music Industry? (BPI 2006), 6.This report was
made by Music Ally’s Editor-in-Chief Stuart Dredge for the British Phonographic Industry.
7 ibid. Some news reports show that Jukedeck has been acquired by TikTok to fix the issue of the
higher royalties requested by major labels when TikTok’s music licensing expires. D. Sanchez, ‘As
TikTok’s Music Licensing Reportedly Expires, Owner ByteDance Purchases AI Music Creation
Startup JukeDeck’ (Digital Music News, 23 July 2019) <https://www.digitalmusicnews.com/2019
/07/23/tiktok-bytedance-acquires-jukedeck/> accessed 27 May 2021.
8 A. van den Oord and Sander Dieleman,‘WaveNet:A Generative Model for Raw Audio’ (DeepMind,
8 September 2016) <https://deepmind.com/blog/article/wavenet-generative-model-raw-audio>
accessed 27 May 2021.
9 Dredge (n 6), 7–8.
10 哼趣 is pronounced as heng qu in Chinese pinyin system. It means enjoy humming in English.
applications: self-entertainment, environmental simulation, and assistant and
independent music creation.
First of all, AI applications, including Amper Score,11 Jukedeck, and 哼趣, make music creation an easy one-minute task for public users, who use and share AI-produced music on social media for fun. Public users' intervention in music creation is minimised, as the produced music relies heavily on the AI applications' self-operation and algorithm design. However, it is hard to say whether AI should be deemed the creator of the resulting music, because the creative capability of an AI system seems doubtful. Second, without paying a higher copyright licence fee to human musicians, low-cost AI-produced
background music or AI-simulated music in the surrounding environment
may occupy a large part of the leisure industry, including restaurants, pubs,
and other public places. Game development companies or other longer-term
projects with tight budgets may also use AI music to reduce costs. Melodrive
can ‘compose an infinite stream of original, emotionally variable music in real-
time – the idea being that it adapts to what’s happening within the game at
a particular point in time’.12 This case provides a good example of whether
this background music, produced by an AI system, should be deemed as a
protected copyrightable work, and therefore whether the AI system should be
treated as its author. Finally, AI could also assist human musicians in the music
creation, or AI could even be engaged as a key collaborator in music creation. Australia-based startup Popgun announced that their AI product Splash
Pro was a tool for ‘empowering millions of musicians to discover, be inspired
and express their creativity’.13 Collaboration rather than the replacement of
human input is the purpose here. This, however, would raise the question of
who is the creator of such work. It could be considered to be a hybrid work
made by both humans and AI systems. Furthermore, it is also possible that
AI systems could one day create music independently with minimal human
intervention. Therefore, how to deal with music independently created by AI
is still up for debate.
However, the artificial nature of AI systems and their so-called ‘creation
processes’ are beyond traditional views and interpretations of what defines a
copyrightable work and what relates to creativity in a copyright context; this
affects the fundamentals of the existing copyright law system and could result
in a total re-evaluation of what should be protected by copyright law.

11 Amper Score <https://www.ampermusic.com/> accessed 27 May 2021.


12 S. Dredge, ‘Melodrive Debuts AI-music Generation System for Games’ (MusicAlly, 23 November
2018) <https://musically.com/2018/11/23/melodrive-ai-music-generation-games/> accessed 27
May 2021.
13 Popgun <https://popgun.ai/> accessed 12 November 2020.
AI’s capability for creativity
Without a universally agreed definition of AI, it is difficult to understand the
nature of an AI system and its capability for creativity and determine whether
its produced outputs qualify for copyright protection. A presentation made by
John Launchbury, the Director of the Information Innovation Office (I2O)
at the Defense Advanced Research Projects Agency (DARPA) in the United
States Department of Defense, divided AI research into three waves
based on its characteristics, which are ‘handcrafted knowledge’, ‘statistical
learning’, and ‘contextual adaptation’.14 Therefore, it is perhaps possible to
explore the capabilities of AI systems through an understanding of AI research
at different stages and therefore capture the resulting outputs produced by AI
systems and test them for signs of creativity.

AI-supported
The AI systems based on ‘handcrafted knowledge’ have no learning capabili-
ties, poor handlining of uncertainties, and can only deal with narrowly defined
problems.15 Such AI systems should be treated as machines following rules
defined by humans and supporting humans to complete a task. Therefore, such
AI-produced music should be called AI-supported music, embracing the full
intervention of humans and under the substantial control of humans, because
the AI systems only execute human-set rules to produce music.16 This is similar
to humans setting rules in a software programme where figures are input into
the software, producing a result based on the pre-set rules. The difference is
that an AI system can deal with a larger amount and more complicated rules.
Obviously, there is no creativity on the side of the AI system and, therefore,
from a copyright perspective, there is no doubt of the human authorship of the
resulting music produced by the AI systems.
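A deliberately simple sketch of such a first-wave, rule-following system (in Python, with a wholly hypothetical rule) makes the point: every rule is fixed in advance by a human, the program merely executes it, and the same input always yields the same melody.

```python
# 'Handcrafted knowledge': the human author fixes the rule, the program executes it.
C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def rule_based_melody(seed_note, length=8):
    """Apply a fixed, human-authored rule: up two scale degrees, then down one."""
    idx = C_MAJOR.index(seed_note)
    melody = []
    for step in range(length):
        idx = (idx + (2 if step % 2 == 0 else -1)) % len(C_MAJOR)
        melody.append(C_MAJOR[idx])
    return melody

print(rule_based_melody("C"))  # identical output on every run: no learning, no choice
```

Whatever aesthetic merit the output may have, every choice embodied in it was made by the human who wrote the rule.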

AI-assisted
In psychology, novelty and appropriateness are two important components
for identifying creativity – that is to say, the product or process resulting from
a creative activity is new and contains value. Nevertheless, this definition is
ambiguous when assessing the degrees of 'new' and 'value'. One view doubts that a machine's activities can embody any element of creativity, because it holds that creativity 'has often been looked at in a spiritual way, being seen as the

14 J. Launchbury, ‘DARPA Perspective on AI’ Defense Advanced Research Projects Agency <https://
www.darpa.mil/about-us/darpa-perspective-on-ai> accessed 27 May 2021.
15 ibid.
16 L. Li, Intervention Report for the WIPO Conversation on Intellectual Property and Artificial Intelligence (Third
Session) (2020) WIPO <https://www.wipo.int/export/sites/www/about-ip/en/artificial_intelli-
gence/conversation_ip_ai/pdf/ind_li.pdf> accessed 27 May 2021.
only uniquely ‘human’ characteristic, one that defines an area of experience
… creative thinking is a bastion of human dignity in an age where machines,
especially computers, seem to be taking over routine skilled activities and eve-
ryday thinking’.17
In fact, AI systems have not only been taking over routine skilled activities.
The second wave of AI research is evidence of this. Such AI systems are based
on ‘statistical learning’ in Launchbury’s presentation. Because of the advanced
development of big data and computational power, as well as better algorithms (the three pillars of the success of AI systems), present AI systems can
define rules by themselves through clustering and classifying massive datasets,
and then use these defined rules to predict and make decisions by themselves
to some extent. This is why humans think they can have ‘conversations’ with
Apple’s Siri and how AI applications can ‘create’ music. In a human science
context, the term ‘creativity’ seems mysterious as it is often explained with
vague notions such as ‘inspiration’ and ‘intuition’,18 but these notions indicate
that creativity requires a source of imagination. Humans can imagine things
they have never seen; however, the ‘imagination’ of most AI composers is
either constrained or minimised, when compared with the non-restricted free-
dom and high flexibility of the human imagination.
The reality is that AI systems are still based on statistical learning, which
means AI systems cannot learn and imaginatively predict things with features
beyond the data supplied by humans – AI only has a limited decision-making
capability and constrained prediction capability rather than full freedom of such
capabilities (like humans). Besides, such AI systems do not understand and
explain the data, but instead analyse existing features of the data. As the MuseNet website states: 'MuseNet was not explicitly programmed
with an understanding of music, but instead through discovered patterns of
harmony, rhythm, and style by learning to predict the next token in hundreds
of thousands of MIDI files’.19 Furthermore, the limited decision-making capa-
bility of AI systems depends on human engineers’ setting the algorithm. While
human engineers set abstract parameters in the algorithm system, AI compos-
ers can produce different musical compositions when users input the same
conditions; they would produce the same result if they were setting concrete
parameters in the algorithm. Material human intervention appears at every
stage of the production process (from the training data selected by humans to
the algorithm settings determining the degree of AI decision-making). In this
case, human contribution is central and AI systems merely have an assistant sta-

17 A. Cropley, ‘Definitions of Creativity’ in Mark Runco and Steven Pritzker (eds), Encyclopaedia of
Creativity (3rd ed., Elsevier 2020), 319–320.
18 R. López de Mántaras,‘Artificial Intelligence and the Arts:Toward Computational Creativity’ Open-
Mind BBVA <https://www.bbvaopenmind.com/en/articles/artificial-intelligence-and-the-arts
-toward-computational-creativity/> accessed 27 May 2021.
19 OpenAI, MuseNet <https://openai.com/blog/musenet/> access 27 May 2021.
tus. Therefore, it is appropriate that the outputs produced by such AI systems
are called AI-assisted outputs. However, considering that AI music applica-
tions (second wave AI systems) can produce music that is not predictable by
humans, even with their limited decision-making abilities, those unpredictable
elements can be seen as evidence for the material contributions of AI applica-
tions. Therefore, it is worth considering whether this could constitute a certain degree of creativity, but this may largely depend on how existing copyright law
interprets the term ‘creativity’ and whether such interpretation could extend to
this limited decision-making situation.
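The distinction drawn above between abstract and concrete parameters can be illustrated with a toy sketch (in Python, with hypothetical training melodies rather than the large MIDI datasets real systems use): a 'statistical learning' composer only redeploys note-to-note patterns found in its training data, and where sampling is allowed it produces different pieces from the same user input, whereas a deterministic setting reproduces the same piece every time. This is illustrative only and does not describe any real product's code.

```python
import random
from collections import Counter, defaultdict

# Hypothetical training melodies; a real system would learn from vast MIDI corpora.
training_melodies = [list("CEGECEGC"), list("CEGAGECC"), list("CDEFGFEC")]

# 'Learning': count which note tends to follow which in the training data.
transitions = defaultdict(Counter)
for melody in training_melodies:
    for current, following in zip(melody, melody[1:]):
        transitions[current][following] += 1

def compose(start, length, sample, rng):
    """sample=True (an abstract parameter): draw notes at random from the learned
    distribution, so the same input yields different pieces on different runs.
    sample=False (a concrete parameter): always take the likeliest next note,
    so the same input always yields the same piece."""
    notes = [start]
    for _ in range(length - 1):
        options = transitions[notes[-1]]
        if not options:
            break
        if sample:
            notes.append(rng.choices(list(options), weights=list(options.values()))[0])
        else:
            notes.append(options.most_common(1)[0][0])
    return "".join(notes)

rng = random.Random()
print(compose("C", 8, sample=True, rng=rng))   # varies from run to run
print(compose("C", 8, sample=False, rng=rng))  # identical on every run
```

The unpredictability of the sampled output is therefore bounded by the training data: the system does not imagine features it has never been shown.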

AI-generated
In its report on the Conversation on Intellectual Property and Artificial
Intelligence, the World Intellectual Property Organisation (WIPO) defines
‘AI-generated’ as ‘the generation of an output by AI without human
intervention’.20 This is similar to Launchbury’s third wave of AI systems based
on ‘contextual adaptation’.21 Such AI systems learn more from the understand-
ing of real-world phenomena than from data, with the abilities of explanation,
reasoning, and abstracting perceived information, to develop new meanings by
themselves.22 ‘Developing new meanings’ insinuates creation because it sug-
gests that AI systems are ‘thinking’: they try to integrate perceived information,
then AI system would explain and reasoning these information. More impor-
tantly, they abstract and explore novel things beyond such information. The
capabilities of abstracting and exploring novel things from existing information,
from author’s view, eventually towards creative activity. There is no doubt
that creativity exists in their resulting outputs. In the author’s view, the outputs
produced by such AI systems should be called AI-generated outputs instead of
AI-created outputs to distinguish them from human creations. This is because
the nature of such AI systems and their produced outputs is beyond the scope
of the existing copyright system designed specifically for human creations.

AI-produced music: Alien to copyright?


Authorship
The official website of the WIPO gives a brief introduction to the Berne
Convention for the Protection of Literary and Artistic Works. It states that the
Berne Convention ‘deals with the protection of works and the rights of their

20 Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence (2020) WIPO/IP/AI/2/
GE/20/1 REV. <https://www.wipo.int/edocs/mdocs/mdocs/en/wipo_ip_ai_2_ge_20/wipo_ip
_ai_2_ge_20_1_rev.pdf> accessed 27 May 2021.
21 Launchbury (n 14).
22 ibid.
authors'.23 Therefore, whether or not the term 'author' covers non-human beings would bring about a totally different result: protection or non-protection. Unfortunately, the Berne Convention does not define authorship; considering that technology was at a rather different stage of development in 1886, when the Convention was adopted, this is not surprising. The definition
or interpretation of the term ‘author’ is perhaps considered unnecessary since
common logic links this term to persons who create the works. As early as 1992, Professor Ricketson defended the view of human authorship under the Berne
Convention.24 In fact, whether one looks at the rules by which the Convention grants moral rights to authors (it would be odd to say machines have any moral rights) or at the protection term measured by the author's lifetime plus a period of time after the author's death, both indicate that the Berne Convention retains the human-centred notion of authorship and author rights.25 From this point of view, the identity
of AI authorship is difficult to clarify at the international level. This issue, how-
ever, seems much clearer in national copyright laws.
The US Copyright Office provides a clear interpretation of authorship that
allows the creator to ‘register an original work of authorship, provided that
the work was created by a human being'.26 This followed an infringement dispute over a self-portrait taken by a macaque monkey.27 In the case of Naruto v Slater,28
a macaque monkey took a series of photographs of itself while David Slater,
a professional photographer, left his camera in a nature reserve. The photog-
rapher published a book including a self-portrait of the macaque monkey,
but the authorship was disputed by the People for the Ethical Treatment of
Animals, who claimed that the monkey should be the copyright owner of the
self-portraits.29 In Naruto, the central issue was whether an animal could claim
copyright and bring a copyright infringement claim under US copyright law.
Finally, the Ninth Circuit determined that the macaque monkey had neither
a constitutional nor statutory standing to claim copyright infringement.30 This
was because judges found the words used in copyright law ‘imply humanity

23 Berne Convention for the Protection of Literary and Artistic Works,WIPO <https://www.wipo.int
/treaties/en/ip/berne/> accessed 27 May 2021.
24 S. Ricketson,‘People or Machines:The Bern Convention and the Changing Concept of Authorship’
(1991-1992) 16 Columbia VLA Journal of Law & Arts. See also J. Ginsburg,‘People Not Machines:
Authorship and What It Means in the Berne Convention’ (2018), 49 International Review of Intellectual
Property and Competition Law, 131–135.
25 ibid.
26 The Compendium of US Copyright Office Practices: Chapter 300, para 306. <https://www.copy-
right.gov/comp3/chap300/ch300-copyrightable-authorship.pdf> accessed 27 May 2021.
27 S, Gibbs, ‘Monkey Business: Macaque Selfie Can’t be Copyrighted, Say US and UK’ (The Guard-
ian 22 August 2014) <https://www.theguardian.com/technology/2014/aug/22/monkey-business
-macaque-selfie-cant-be-copyrighted-say-us-and-uk> accessed 27 May 2021.
28 888 F3d 418 (9th Cir 2018).
29 ibid., 425.
30 ibid., 420.
and necessarily exclude animals that do not marry and do not have heirs enti-
tled to property by law’.31
China’s copyright law also clarifies that ‘works of Chinese citizens, legal
entities or other organisations, whether published or not, shall enjoy copyright
in accordance with this Law’.32 Current technologies cannot create AI systems
that are intelligent enough to enter into a contract or claim the legal capabil-
ity to sue for any infringement of its own rights. Therefore, an AI system is
not able to be recognised as a legal entity or other organisation. In practice,
the Beijing Internet Court gave a clear interpretation of human authorship in April 2019 in its judgement in Feilin v Baidu.33 One of the disputes in this case was
whether a report on the judicial analysis of the film and entertainment indus-
try automatically generated by software named Wolters Kluwer China Law
& Reference constituted a work under Chinese copyright law. Although the
Court admitted that its content satisfied the formality requirement of a literary
work and its selection, judgement and analysis of relevant data reflected origi-
nality to some extent, the Court further held, however, that originality is not
a sufficient condition to qualify a piece of work, whereas a person creating the
work is a necessary condition in Chinese copyright law. Furthermore, even if
in the case of Shenzhen Tencent v Yingxun Tech in December 2019 the Shenzhen
Nanshan District Court gave its judgement in favour of an AI-produced liter-
ary work being protected under copyright law, the Court refused to admit that
software-generated works were protected by copyright based on the rationale
that software is not an author.34 Courts in both cases tried to locate human
intervention in the creative activities.
The Copyright, Designs and Patents Act 1988 (CDPA) in the United
Kingdom states ‘author, in relation to a work, means the person who create[s]
it’.35 Obviously, this definition excludes any space for AI being an author.
Interestingly, the CDPA provides a special section for computer-generated
works, which is perhaps closest to the situation of AI-assisted works, saying ‘[i]n
the case of a literary, dramatic, musical or artistic work which is computer-gen-
erated, the author shall be taken to be the person by whom the arrangements
necessary for the creation of the work are undertaken’.36 In other words, even
if a piece of music is made by an AI machine, its author is not the machine

31 ibid., 426.
32 Copyright Law of the People’s Republic of China 2010 (amended), art 2.WIPO <https://wipolex
.wipo.int/en/text/466268> accessed 27 May 2021.
33 Feilin v Baidu (2018) Beijing Internet Court <https://www.bjinternetcourt.gov.cn/cac/zw
/1556272978673.html> accessed 27 May 2021.
34 Shenzhen Tencent v Yingxun Tech (2019) Shenzhen Nanshan District People’s Court,Yue Min Chu
No.14010. zhongguo caipan wenshuwang (China Judgements Online) <https://wenshu.court
.gov.cn/website/wenshu/181107ANFZ0BXSK4/index.html?docId=30ba2cab36054d80a864ab8
000a6618a> accessed 27 May 2021.
35 Copyright, Designs and Patents Act 1988, s. 9(1).
36 ibid., s. 9(3).
itself but the person or persons providing the database and creating an algo-
rithm to instruct the machine on how to make the music. These are computer
programmers or software engineers rather than AI systems. However, there
are no equivalent rules for sound recordings, and the right to be identified as
author or director and maintain moral rights does not apply to computer-gen-
erated work.37 The CDPA does not give equivalent treatment to the human
authors of computer-generated work since they are perhaps seen as providing
less of an intellectual contribution to such works, compared with normal works
(e.g. literary works) made fully by humans. In the European Union, both the
Software Directive and the Database Directive allow the EU Member States to
define authorship of a computer programme but make it clear that the ‘author
of a database/computer program shall be the natural person or group of natural
persons who created the base/program …’.38
Therefore, it can be argued that it is difficult to pursue the possibility of machine authorship under the existing copyright framework, whether at the national or the international level.

Originality
The criteria for copyright protection should be assessed to determine whether a legislative system implies that a human author must contribute to a copyrightable work.
Copyright law protects expressions of an idea rather than the idea itself, in
which originality is the key factor for obtaining copyright. The UK copyright
system has a low originality requirement, in that a work is original as long as it embodies the author's sufficient skill, labour and judgement. Although there may be imminent changes to the originality requirement in the UK after the case of Infopaq International A/S v Danske Dagblades Forening,39 there is no doubt that the words "skills" and "labour" can only be attributed to, or linked with, humans.40
Moreover, both the Software Directive and the Database Directive in the EU
declare that originality means an author's own intellectual creation.41 The
Infopaq case further highlights such a link between originality and the author’s
own intellectual creation, and its interpretation of originality is understood as
a reflection of an author’s own personality.42 ‘Personality refers to individual

37 ibid., s. 79(2)(c) and s. 81(2).


38 Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal
protection of databases [1996] OJ L77/20, art. 4(1); Directive 2009/24/EC of the European Parlia-
ment and of the Council of 23 April 2009 on the legal protection of computer programs [2009] OJ
L 111/16, art. 2(1).
39 C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECJ 465.
40 A. Rahmatian, ‘Originality in UK Copyright Law: The Old “Skill and Labour” Doctrine Under
Pressure’ (2013) International Review of Intellectual Property and Competition Law, 44, 4–34.
41 Directive 96/9/EC, art. 1(3) and Directive 2009/24/EC, art. 3(1).
42 [2009] ECJ 465.
differences in characteristic patterns of thinking, feeling and behaving’.43 There
is no doubt that personality clarifies the necessity of a human as being the
author of a work, and this also indicates an author’s freedom to make creative
choices. Humans have full freedom during creative activities, although their
practice may be restrained by technology, value, and the social system at that
time. This perhaps is the reason why the CJEU in the case of Football Dataco
Ltd and Others v Yahoo! UK Ltd and Others emphasised the impossibility of
constituting a free and creative choice if making a piece of work is the result of
‘technical considerations, rules or constraints’.44
On this basis, it is not possible for a musical output produced by a machine to satisfy the originality requirement in a copyright context. With the
acknowledgement of the use of AI and its related machine learning activities
and the understanding that the current copyright framework blocks the pos-
sibility of AI being given author status, the EU has attempted to deal with
robotics-related legal issues in the 2017 Recommendations to the Commission
on Civil Law Rules on Robotics. However, this is still far from clarifying the issue of whether an AI system owns the works it has 'created'.

An alternative solution?
While national and international copyright systems give little space to inter-
pret non-human authorship and therefore originality, it is still worth exploring
alternative solutions to deal with AI-produced works in a copyright context.
This is valuable not only for stimulating AI-related creations and for invest-
ment purposes but also for coping with uncertainties and foreseen copyright
issues.

Computer-generated works
It is necessary to discuss alternative solutions that could be applied under the
current copyright system. UK copyright law identifies that the author of a
computer-generated work is ‘the person by whom the arrangements neces-
sary for the creation of the work are undertaken’.45 This seems to be a pos-
sible avenue for AI-produced music to pursue copyright protection. However,
there is a view that doubts the precise meaning of the word ‘arrangements’ as it
suggests plans and preparations to make things happen (here it means planning
and preparing to create music), and the persons making such arrangements
are varied; they could be users, programme designers, software investors, or

43 ‘Personality’ American Psychological Association <https://www.apa.org/topics/personality/>


accessed 27 May 2021.
44 C-604/10 Football Dataco Ltd and Others v Yahoo! UK Ltd and Others [2012] ECJ 115.
45 Copyright, Designs and Patents Act 1988, s. 9(3).
instructors training and instructing the programmers.46 In Nova Productions Ltd v Mazooma Games Ltd & Ors, the Court held that the game's designer was the
person by whom arrangements were undertaken, rather than the users who
played the game and generated unique images during play.47 However, this is not straightforward for music creation. When professional musicians adopt AI music software during the creative process, they prefer to see the AI software as a useful tool rather than as something dominating their creation. In fact, however, musicians may be unconsciously inspired and influenced by the AI's autonomous generation. The whole creative process is one of trial, guided by the musician's adjustment, judgement, and selection of relevant sources provided by the AI software, and it embraces an interactive, collaborative element: musicians are inspired by AI-produced musical trials made on the basis of their own adjustments, judgements, and selections of sources; the AI software continually produces music corresponding to the added elements and selections, which gradually brings the music closer to the musician's creative ideas. Musicians may then modify, adapt, and integrate these trials together with their own ideas, and a final resulting work is created.
From the author’s view, a piece of music produced in this way should also be
treated as AI-assisted music as the AI plays an important assistant role in help-
ing the human musician complete a final work. From this perspective, it seems strange to say that the AI music software designer/engineer is a co-creator; more importantly, such AI-produced music creates an unclear boundary between human-authored music and computer-generated music. It is difficult to identify which part belongs to the human musician's intellectual creation and which part belongs to the AI system's efforts. From the author's perspective, it is more like a collaborative work with co-author status, especially as it can be forecast that future AI software will be capable of making decisions or achieving largely autonomous accompaniment while engaged in the human musician's creation process. From this point of view, the participation of AI in creation goes beyond the scope of a mere tool in traditional computer-generated work, and the category of computer-generated works is therefore not well enough defined to cover such types of AI-produced music (both AI-assisted and AI-generated music). Nevertheless, the copyright system is clear that only human authors qualify for copyright.

Works made for hire or legal entity work?


Some scholars discuss the possibility of treating AI-produced works as works
made for hire regulated under US copyright law, and a common issue is ‘how
[do] the machines have enough intelligence to take a decision to agree or

46 J. McCutcheon ‘Curing the Authorless Void: Protecting Computer-Generated works following


ICETV and Phone Directories’ (2013)1 Melbourne University Law Review, 53–56.
47 Nova Productions Ltd v Mazooma Games Ltd & Ors (CA). Reference: [2007] EWCA Civ 21.
disagree to the contractual agreement?'48 There is a further concern about how AI, its program designers, and its users could accommodate a commissioner–creator relationship or an employer–employee relationship in an AI-produced work.49 After all, it is unorthodox to treat AI as an employee without the work being made 'in the course of employment'. Besides, AI-produced outputs do not fall within the situation of commissioned work, because there is no evidence to show that AI is commissioned by humans or enterprises to complete a commissioned work.
The Shenzhen Tencent case in China gave some hope that AI-produced work could be treated as the work of a legal entity – a nice Christmas gift for AI-produced works (the judgement was made on 24 December 2019).50 Tencent (the claimant) had developed software named Dreamwriter, an intelligent writing assistance system that had helped it complete 300,000 works a year since 2015. A financial article by
Dreamwriter was published on the Tencent Stock website on 20 August 2018,
and then it was copied in full and made available to the public through the
defendant’s website. The Court held the article was a literary work from the per-
spective of formality – being capable of reproduction and the basis of its appear-
ance.51 The key to constituting a literary work in this case was the originality
requirement. The Court stated that the content of the article reflected the selections, analysis, and judgement of data and stock information on that day, with logical and clear expression and a reasonable structure – therefore, it was original.
When determining whether this article reflected the creator’s individual
choices, judgement, and skills, the Court noted the differences between the
creation of this article and the typical method of creating an article, and it
admitted there was a time gap between the creation group’s selections and
arrangements and the actual act of writing that the Dreamwriter software
executed.52 However, the Court believed the lack of synchronisation resulted
from the characteristics of the technical process and refused to treat the automatic execution process as creation, since doing so would to some extent amount to treating the software as the creator.53 Interestingly, the Court stated that the
creation of this article had directly resulted from Tencent’s employee activities
in individual arrangements and selections, and the software merely provided
a technical tool.54 The Court was reluctant to accept AI autonomous opera-
tion as an intellectual creation process, but instead treated it merely as a smart

48 P. Devarapalli,‘Machine Learning to Machine Owning: Redefining the Copyright Use Only Own-
ership from the Perspective of Australian, US, UK and EU law’ (2018) 40 European Intellectual Prop-
erty Review, 11, 722–788.
49 A. Bridy,‘Coding Creativity: Copyright and the Artificially Intelligent Author’ (2012) Stanford Tech-
nology Law Review, 5, 1–28.
50 Shenzhen Tencent v Yingxun Tech (2019) (n 34).
51 ibid.
52 ibid.
53 ibid.
54 ibid.
tool and preferred to trace the human intervention as much as possible. The
Court treated this article as an integrated intellectual creation made by both
employees’ judgements and selections and the operation of the software – and
that the article was a literary work protected under Chinese copyright law.
Furthermore, according to art. 11 of the copyright law in China, ‘[w]here
a work is created according to the intention and under the supervision and
responsibility of a legal entity or other organisation, such legal entity or organi-
sation shall be deemed to be the author of the work’.55 The disputed article
was an integrated intellectual creation made by multiple teams sharing differ-
ent tasks, reflecting Tencent’s need to publish a financial article.56 The employee group that created the article comprised multiple teams supervised by Tencent, and the article was published on Tencent’s website with a note at the end stating ‘this article was automatically written by Tencent’s robot Dreamwriter’, which points to Tencent (the claimant) as the author. The Court stated that this indicated that Tencent would bear all relevant liabilities arising from the disputed article.57 Therefore, the Court held that the disputed article was the work of a legal entity, and that Tencent, the claimant, was deemed the author and therefore entitled to copyright.
In this case, the Court emphasised the direct connection between the
form of expression in the disputed article and the employee group’s intel-
lectual activities in arranging and selecting the input, setting the conditions,
template and language style. To some extent, the Court was more in favour
of users (the employee group) and played down the software’s autonomous
role. Considering the programme designers and programme users were all
from Tencent, there was no dispute over Tencent’s authorship. However,
if the users and programme designers were independent and separate parties, this judgement could be seen as biased towards the programme users, which conflicts with the preference for game designers shown in the Nova case. Unfortunately, the Court in the Shenzhen Tencent case did not make any direct connection between the originality of the article and the software designers. The issue therefore remains unclear. Jukedeck’s model seems to answer this question to some extent: it allows its users to generate five free songs a month before they start paying per track, but users need to pay more to obtain the copyright regardless. In other words, the programme designers/owners own the copyright, whereas the users who generate the songs enjoy only limited personal use.
The copyright system is primarily designed to regulate human creative activities. In the author’s view, it would not be appropriate to search too intently for human intellectual labour in technology-produced goods, as ignoring technology’s contribution may hinder technological innovation and the related dissemination of cultural productions. After all, it can be predicted

55 Copyright Law of the People’s Republic of China 2010, art. 11 (n 32).


56 Shenzhen Tencent v Yingxun Tech (2019) (n 34).
57 ibid.
that AI research and development will move towards a highly autonomous, self-decision-making future, which is totally different from those technologies that still substantially rely on human decision-making. Therefore, acknowledging technology’s contribution to creation and its future direction – embracing AI-produced music with minimal or zero human intervention – is key to devising an appropriate legal system to respond to such forthcoming radical changes in the music sector. One thing we must recognise is that a single law can resolve some issues but not all.

Conclusion: Uncertainties in AI-produced music


While technology is developing faster than ever before and keeps bringing into being new things with totally different natures and features, human-driven legislation always lags behind and is challenged by these complex issues. The appearance of AI systems may overturn, or at least challenge, a series of traditional views concerning authors, creativity, and the process of creation, both in the common human sense and in the copyright context. As humans and AI systems enter into a competitive relationship in the music industry, understanding the nature of AI systems and AI-produced music, and questioning the legal status of AI in the existing copyright system, is crucial for identifying the resulting issues, such as infringement, remuneration, and the legal responsibilities of the relevant parties. It can be seen that the existing copyright system tries to maximise its protection in the face of technological change, which is why a variety of sui generis rights have arisen under the umbrella of the copyright system, though none of them touches the fundamentals of its framework. This time, however, the existing copyright system seems stuck, as the radical revolution of AI systems challenges its fundamental doctrines: the human nature of creation and human creativity/originality. Nevertheless, it is worth considering the possibility of re-interpreting key terms and doctrines in the copyright context to accommodate these technological changes. Perhaps light protection of AI-assisted music under the existing copyright system could be considered. However, how far should this light protection extend? For AI-generated music, the best way, in the author’s view, is to build a new system specific to robotics to resolve the relevant issues; but how to harmonise this new system with other laws regulating human activities is also a big question.
In fact, identifying AI systems and the copyrightability of their works is just the starting point of AI-related copyright issues. There are many surrounding issues that also challenge, or will challenge, the whole music industry. One issue of concern, raised by the WIPO Conversation on AI and IP, is the possible infringement involved in AI training data. The reason AI systems can ‘create’ music is that they rely on a powerful database to support their machine learning, which they use to produce the pieces of music. Therefore, whether and how to protect and/or govern the database is a big question. A further issue is that the information in a training database consists of music or products created by humans. For example, an AI composer needs to learn from a massive amount
of human-created music to extract its features and define rules before producing any musical pieces. If these human-created musical works are copyrightable and still within their term of copyright protection, whether the use of these works without the permission of the copyright owners would constitute copyright infringement is a question that needs further investigation. Furthermore, while the existing copyright system provides a defence – fair use – it is unclear whether using data for AI self-learning qualifies as fair use for private study and research. In addition, if an AI composer produces a piece of music that embraces the features of folk music belonging to certain ethnic minority groups or indigenous communities, or integrates different sources of folk music, it is unclear whether this would be treated as a misappropriation of traditional music culture. Without a clear identification of the legal status of AI systems in the copyright context, these questions remain difficult to answer.
8 Artificial intelligence and risk
preparedness in the aviation
industry
Kinga Kolasa-Sokołowska

Introduction
With the foundations laid by pioneers and experimenters,1 aviation has always
been at the forefront of progress. Being safety-oriented,2 the aviation com-
munity concentrates most of its efforts on enhancing the protection of life
and health of those involved in aerial operations, and to that end it is eager to
explore the potential of emerging technologies.3
The ‘lesson-learned’ approach is the basis for most of the safety improve-
ments in this sector.4 While technology and human knowledge have tremen-
dously advanced over the years, the industry has never been more vulnerable
than it is today.5 Its financial specifics (e.g. perishable inventory, high fixed
costs, excess capacity, reliance on external factors and cycles influencing

1 H. Matthews, Pioneer Aviators of the World: A Biographical Dictionary of the First Pilots of 100 Countries
(McFarland 2003), 179–188;V.P. Relly et al., ‘History of Aviation – A Short Review’ (2017) 1(1) J.
Aircr. Spacecr.Technol., 30.
2 Article 44(a), (d), (h) ICAO Convention on International Civil Aviation (Chicago Convention), 7
December 1944, (1994) 15 UNTS 295. See also F. Pellegrino, The Just Culture Principles in Aviation
Law:Towards a Safety-Oriented Approach (Springer 2019), 1–44; R.Abeyratne, Strategic Issues in Air Trans-
port: Legal, Economic and Technical Aspects (Springer 2012), 19–164; R.J. Andreotti,‘Promoting General
Aviation Safety:A Revision of Pilot Negligence Law’ (1992) 58 J.Air. L. & Com., 1089.
3 R.John Lofaro, K.M. Smith,‘The Aviation Operational Environment: Integrating a Decision-Making
Paradigm, Flight Simulator Training and an Automated Cockpit Display for Aviation Safety’ in E.
Abu-Taieh, A. El Sheikh, M. Jafari (eds), Technology Engineering and Management in Aviation: Advance-
ments and Discoveries (IGI Global 2012), 241–282, R. Arnaldo Valdés et al.,‘Aviation 4.0: More Safety
Through Automation and Digitization’ in M. Gemal Kushan (ed) Aircraft Technology (IntechOpen
2018), 25–42.
4 See Ch-T. Lu et al., ‘Another Approach to Enhance Airline Safety: Using Management Safety Tools’
(2006) 11(2) JAirTransp 113; P. Gomes,‘New Strategies to Improve Bulk Power System Security: Les-
sons Learned from Large Blackouts’ in IEEE Power Engineering Society General Meeting (IEEE 2004),
vol. 2, 1703–1708; M.M. Sokołowski, Regulation in the European Electricity Sector (Routledge 2016),
60–63, M.M. Sokołowski, European Law on Combined Heat and Power (Routledge 2020), 31–33.
5 M. Linz, ‘Scenarios for the Aviation Industry: A Delphi-based Analysis for 2025’ (2012) JAirTransp-
Management 22, 28-35.

DOI: 10.4324/9781003246503-10
demand)6 coupled with the increasing values and complexity of modern air-
craft, digital transformation and increasing interdependence of aviation actors,
and the global pandemic, all add to the severity of the challenges to come.
Therefore, aside from learning from dramatic or disruptive events that have
already taken place, aviation professionals are seeking to increase their capac-
ity to take corrective action in advance. Compliance with industry guidelines
and standards is of primary importance in this regard, as is appropriate risk
management in a broader sense. It should be noted, however, that the degree
of advancement of aviation equipment, combined with the volume of aerial
operations taking place in the contemporary world, has seriously impaired the industry’s ability to recognise and avert danger in time.
In the above context, the use of AI provides a number of opportunities and
strengthens human efforts in the air transport domain.7 This chapter addresses
the recent applications of AI in aviation. It examines the potential of AI for the
necessary improvements in the area of risk management and calls for a world-
wide regulatory initiative to ensure safe implementation and further develop-
ment of AI in the aviation sector. The chapter also considers the impact of COVID-19 on the industry.

The current state of AI in aviation


For decades, the aviation sector has debated the utility of AI,8 but it is only
in recent years that AI’s potential has sparked serious interest among industry
players.9 Some airlines, airports and manufacturers have even taken a closer
look at AI technology and decided to invest in the development of AI solu-
tions.10 This shift can be partly attributed to a significant improvement in tech-
nological capabilities, including developments in areas such as data availability,

6 P.S. Dempsey, ‘The Financial Performance of the Aviation Industry Post-Deregulation’ (2008–2009), 45 Hous. L. Rev., 421, 447–453.
7 See X. Zhang, S. Mahadevan,‘Ensemble Machine Learning Models for Aviation Incident Risk Pre-
diction’ (2019) 116 Decis Support Syst., 48, T. Shmelova,Y. Sikirda, A. Sterenharz (eds), Handbook of
Research on Artificial Intelligence Applications in the Aviation and Aerospace Industries (IGI Global 2020).
8 Ch.B. Ross, ‘Artificial Intelligence and Human Error Prevention: A computer aided decision mak-
ing approach.Technical Report Number 2’ (1978), 1–12; Donald Michie ‘New Face of AI’ (1977),
6; International Air Transport Association (IATA),‘AI in Aviation White Paper’ (June 2018) <www
.iata.org/contentassets/b90753e0f52e48a58b28c51df023c6fb/ai-white-paper.pdf> accessed 27 May
2021.
9 ibid.
10 For example, in 2019 Japan Airlines, in collaboration with Accenture, began testing AI and voice
recognition technology to speed up its passengers’ check-in phase at the international check-in
counters at Tokyo’s Narita and Haneda Airports. See: C. Bright, ‘Japan Airlines trialing AI-powered
voice recognition technology at Tokyo check-in desks’ (13 March 2019) <www.businesstraveller
.com/business-travel/2019/03/13/japan-airlines-trialling-ai-powered-voice-recognition-technol-
ogy-at-tokyo-check-in-desks> accessed 27 May 2021.
computing power, storage capacity,11 and, as a result, AI accessibility and afford-
ability for aviation-related businesses.12 Until recently, the use of AI in aviation
has been very limited, but the immaturity of its implementation should not be
a reason to ignore AI’s relevance and potential impact on the sector.
In terms of the current presence of AI in aviation, an examination of indus-
try reports, roadmaps, and policies reveals that the initial emphasis was primar-
ily on the effective management of large flows of passengers and aircraft. Many
aviation stakeholders decided to explore various AI-driven solutions aimed at
addressing capacity issues, such as problems with delays and passenger satisfac-
tion.13 ‘Virtual agents and chatbots’ and predictive analytics have been two of
the most common applications of AI.14 The former has automated and accel-
erated customer communication, whereas the latter has enabled, for example, the forecasting of demand for specific products15 and of the likelihood of equipment failures before they occur.16 Moreover, potential applications of AI in the fields of
automatic scheduling for flights and crews,17 and air traffic management18 have
been considered.
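
To make the reference to predictive analytics more concrete, the following is a minimal, purely illustrative sketch of how an equipment-failure prediction of the kind mentioned above could be framed as a supervised learning problem. The feature names, thresholds, and data are hypothetical and do not describe any airline’s or vendor’s actual system.

```python
# Purely illustrative sketch of predictive maintenance as supervised learning.
# All feature names, thresholds and data below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)
n = 5_000

# Synthetic historical records: vibration index, oil temperature (deg C) and
# engine cycles since the last overhaul.
vibration = rng.normal(1.0, 0.3, n)
oil_temp = rng.normal(85, 10, n)
cycles = rng.integers(0, 4_000, n)

# Hypothetical ground truth: failures within 30 days become more likely with
# high vibration, high oil temperature and many cycles since overhaul.
risk = 0.8 * vibration + 0.03 * (oil_temp - 85) + 0.0004 * cycles
failed_within_30d = (risk + rng.normal(0, 0.3, n) > 2.2).astype(int)

X = np.column_stack([vibration, oil_temp, cycles])
X_train, X_test, y_train, y_test = train_test_split(
    X, failed_within_30d, test_size=0.2, random_state=0
)

# Train a classifier on past outcomes; in operation it would score live
# sensor data so maintenance can be scheduled before a predicted failure.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```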
Although the use of AI in the commercial aviation business has not yet become a matter of public concern, the industry has already initiated several projects aimed at addressing this issue in a comprehensive way. For instance, in 2018, the
International Air Transport Association (IATA) published the ‘AI in Aviation
White Paper’,19 with the purpose of increasing knowledge about AI across the
aviation sector. Other aims included providing a wider explanation of AI’s
benefits to the industry, identifying risks and prospects associated with the use
of AI, and giving recommendations concerning the commencement of AI
application in aviation businesses.20 The document explains AI basics, presents
its capabilities and case studies for the benefit of the aviation sector. It pro-
vides information on the IATA’s initiatives in this area and argues that AI is

11 Federation of European Risk Management Associations (FERMA), ‘Artificial Intelligence Applied to Risk Management’ (December 2019), 7 <www.ferma.eu/app/uploads/2019/11/FERMA-AI-applied-to-RM-FINAL.pdf> accessed 27 May 2021.
12 IATA (n 8).
13 Société Internationale de Télécommunications Aéronautiques (SITA), ‘2019 Air Transport IT
Insights’ <www.sita.aero/resources/type/surveys-reports/air-transport-it-insights-2019> accessed
27 May 2021.
14 ibid.
15 For instance, easyJet uses AI to predict how many food and beverage products are expected on
various flights, see Tanya Powley ‘EasyJet Looks to AI to Cut Delays and Deliver its Bacon Butties’
(Financial Times, 17 November 2015) <www.ft.com/content/9017e37a-8c59-11e5-a549-b89a1d-
fede9b> accessed 27 May 2021.
16 SITA (n 13).
17 ibid.
18 S. Mondoloni, N. Rozen, ‘Aircraft trajectory prediction and synchronization for air traffic manage-
ment applications’ (2020) 119 Prog.Aerosp. Sci., 100640.
19 IATA (n 8).
20 ibid., 6.
inevitably coming to the sector.21 According to IATA, the industry can either
disregard this truth at its own peril or explore the possibilities of AI and actively
engage in the AI revolution. It further elaborates that despite many challenges
to overcome, embracing AI faster than others is the only way to remain in
business on a long-term basis.22
Recently, various AI-related activities have been undertaken in the
European aviation sector. For instance, in 2020 the European Union Aviation
Safety Agency (EASA) developed a roadmap for AI in aviation.23 The main
purpose of this document was to address the role of AI in aviation, to define
goals that should be accomplished and steps to follow in order to respond to the
emerging questions relating to public trust, ethics, certification and standardisa-
tion processes.24 Also, in 2020, the European Aviation/ATM AI High Level
Group was formed. Led by the European Organisation for the Safety of Air
Navigation (EUROCONTROL), the Group brought together a broad range
of representatives, including airlines, airports, air navigation service providers
(ANSP), manufacturers, the European Commission, as well as military and
staff organisations, to comprehensively address the issue of AI in their report.25
Through this document, the participants intended to improve the understanding of AI in the aviation sector and to demonstrate its major consequences for private and public entities, whether for civil or military requirements.
Simultaneously, they expressed their commitment to developing their own
relevant AI strategies. Moreover, they sought to identify barriers to the wider
use of AI in the full spectrum of aviation/ATM operations and determine the
framework modifications needed to foster the development of AI in European
aviation.26
COVID-19 undoubtedly had an impact on these considerations and
goals.27 The industry, focused on survival and the challenges of reducing its
cash burn, placed the greatest value on flexibility and resilience. The next step, and the surest way to recover from the crisis, is to accept the possibility that COVID-19 will become endemic and that its presence will have an effect

21 ibid., 20.
22 ibid., 20.
23 European Union Aviation Safety Agency, ‘Artificial Intelligence Roadmap: A Human-Centric
Approach to AI in Aviation’ (February 2020) <www.easa.europa.eu/newsroom-and-events/news/
easa-artificial-intelligence-roadmap-10-published> accessed 27 May 2021.
24 ibid, 2.
25 European Aviation Artificial Intelligence High Level Group, ‘The FLY AI Report. Demystifying
and Accelerating AI in Aviation/ATM’ (March 2020) 10 <www.eurocontrol.int/publication/fly-ai
-report> accessed 27 May 2021.
26 ibid.
27 Airports Council International (ACI) and IATA, ‘The NEXTT Vision in a Post-Covid-19 World’
(October 2020) 1 <https://nextt.iata.org/dist/i18n/zh_CN/pdf/nextt-vision-post-covid-19-world
.pdf> accessed 27 May 2021.
for an extended time.28 With the realisation of this comes the understanding
that neither governments nor industries can continue to pursue a strategy of
complete risk avoidance indefinitely.29 The risk of COVID-19 needs to be
managed effectively, aircraft need to fly, and borders need to be open. This,
in turn, demands prompt action and regulatory initiative to enable practical
access to and application of available risk mitigation solutions.30 As sound scien-
tific evidence becomes available, such COVID-19-induced risk management
mechanisms should be subjected to continuous reassessment and replacement
by methods that prove to be more effective.31
In this light, the need to recover, restore confidence and satisfy public concerns with regard to the protection of the health of passengers and aviation professionals will likely drive long-term progress in the field of AI as a risk management and safety improvement tool in aviation.32 For example, AI can support real-time infection control, assist in passenger screening, and improve the tracing of the virus’s spread as well as the identification of high-risk cases.33 Additionally, thanks to AI applications such as predictive maintenance, AI can support the aviation industry when it comes to operational safety. In particular, it is of the utmost importance that commercial aviation continues to prove that air travel can operate safely by complying with rules and processes for the highest levels of safety.34 At the same time, significant changes in the business, involving health safety, environmental impact and the application of new technologies, need to be accepted and understood to ensure that the aviation industry returns stronger and is able to cope with the new challenges that will inevitably arise.35
COVID-19 has brought almost everything to a complete halt, including the
simple business of flying,36 but early indications suggest that the aviation indus-
try that will emerge from the COVID-19 crisis will be even more eager to

28 N. Phillips, ‘The coronavirus is here to stay — here’s what that means’ (16 February 2021) <www
.nature.com/articles/d41586-021-00396-2> accessed 27 May 2021.
29 IATA ‘Travel and managing the risks of Covid-19’ (12 April 2021) <https://airlines.iata.org/analysis
/travel-and-managing-the-risks-of-covid-19> accessed 27 May 2021.
30 M.M. Sokołowski, ‘Regulation in the Covid-19 Pandemic and Post-Pandemic Times: Day-Watch-
man Tackling the Novel Coronavirus’ (2020), Transform. Gov. People Process. Policy, DOI 10.1108/
TG-07-2020-0142.
31 ACI & IATA (n 33).
32 J.Ye ‘The role of health technology and informatics in a global public health emergency: practices
and implications from the Covid-19 pandemic’ (2020) 8(7) JMIR Med. Inform., e19866.
33 R. Vaishya et al., ‘Artificial Intelligence (AI) Applications for Covid-19 Pandemic’ (2020), 14 (4)
337–339.
34 ‘[T]here is a common understanding that this is no time for cutting corners on safety – the reputa-
tion of the industry in this respect remains as vulnerable as ever.The public will not accept a lapse
in safety standards because of the pandemic’, P. Ky, ‘Life Beyond Covid-19 – How Will Aviation
Need to Change?’ (16 November 2020) <https://www.eurocontrol.int/article/life-beyond-covid
-19-how-will-aviation-need-change> accessed 27 May 2021.
35 ibid.
36 P. Suau-Sanchez et al., ‘An Early Assessment of the Impact of Covid-19 on Air Transport: Just
Another Crisis or the End of Aviation as We Know It?’ (2020) 86 JTranspGeogr 102749.
apply AI-driven solutions.37 This time, though, the focus has changed, and most efforts are expected to shift in emphasis from enhancing customer experience or managing capacity issues to improving health safety and limiting human-to-human interaction through various means of contactless travel.38

Potential of AI in aviation risk management context


Paradoxically, from a risk management perspective, entirely new risks are not
of major concern.39 The main difficulty, though, lies with risks that are already
present but have become more challenging to detect, or that have started to
present themselves in unknown, niche ways.40 The most current example is the set of risks associated with passengers’ health and fitness to fly.41 There is little prospect of providing aviation professionals with the expertise and training sufficient to detect and manage the wide array of potential threats brought on board and into the surroundings of the aircraft. The complexities and dynamics of these risks will always challenge human efforts in this regard.
For those engaged in aviation activity, the materialisation of risk can mean
the loss of or damage to property or assets, the death or injury of aviation staff, or
loss and damage resulting in liability to third parties.42 To better grasp the
ever-changing risk contexts and fluctuations, aviation needs improved tools
to recognise and understand the occurring anomalies. In this sense, AI offers a
variety of opportunities.
When it comes to risk preparedness, what looks particularly promising is the capacity of AI to learn, to self-develop, and to apply acquired information without delay to situations or issues not previously experienced.43 In this, AI is capable of capturing trends that neither humans nor traditional analytics systems

37 See ‘Air Canada CleanCare+:TouchFree Bag Tagging coming to more Canadian airports’ (Air Can-
ada, July 2020), <www.aircanada.com/content/aircanada/ca/en/aco/home/about/media/media
-features/touch-free-bag.html> accessed 27 May 2021, S. Singh, ‘How COVID is Revolutionising
Touchless Airport Technology’ (Simple Flying, 17 July 2020) <https://simpleflying.com/touchless
-airport-technology-covid/> accessed 27 May 2021,‘JAL tests contactless check-in kiosks at Tokyo
Haneda Airport’ (Airport Technology, 25 November 2020) <www.airport-technology.com/news/jal
-contactless-check-in-kiosks> accessed 27 May 2021.
38 ACI and IATA (n 33), 2.
39 A. Olofsson, S. Öhman,‘Views of Risk in Sweden: Global Fatalism and Local Control – An Empiri-
cal Investigation of Ulrich Beck's Theory of New Risks’ (2007) 10(2) J. Risk. Res., 177.
40 Deloitte, ‘AI and Risk Management: Innovating with Risk Confidence’ (2018), 7 <https://www2
.deloitte.com/content/dam/Deloitte/uk/Documents/financial-services/deloitte-uk-ai-and-risk
-management.pdf> accessed 27 May 2021.
41 H. Caplan, ‘Passenger Health – Who’s in Charge?’ (2001) 26 Air & Space L., 203.
42 G. Leloudas, L. Haeck, ‘Legal Aspects of Aviation Risks Management’ (2003), 28 Annals. Air Space
L., 149.
43 M.S. Hamid, K. Kaiser, ‘Learning Intelligent Behavior’ in Grigoris Antoniou, John Slaney (eds)
Advanced Topics in Artificial Intelligence: 11th Australian Joint Conference on Artificial Intelligence, AI-98
Brisbane,Australia, July 13-17, 1998, Selected Papers (Springer 1998), 143–154.
are capable of recognising.44 It absorbs massive quantities of data both histori-
cally and in real time, and uses it to capture such discrepancies. This feature is
of prime importance, as in most cases emerging threats arise along with pattern
irregularities.45 Moreover, this makes AI particularly suitable for the aviation
industry, which is highly reliant on digital data streams between air and ground
systems.46
The steady growth of the amount of information exchanged makes the
aviation sector promisingly rich in data.47 However, when it comes to data
processing, the tools currently in use are unable to manage the volume of
information gathered from aircraft systems, surveillance, airlines, airports or
other industry stakeholders.48 This is partly because about 90 per cent of the data produced is unstructured,49 which means that it cannot be arranged and analysed in a conventional way in columns and rows.50 In this context, the advantage
of AI lies in its ability to handle unstructured data.51 Today, cognitive capabili-
ties, including data mining, machine learning, and natural language processing,
can take over old methods of analysis and draw conclusions from unstructured
data, resulting in better and faster detection of known and unknown hazards.52
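
As a purely illustrative sketch of how such pattern irregularities might be flagged in practice, the snippet below applies an off-the-shelf unsupervised anomaly detector to hypothetical per-flight features; it is not drawn from any system cited in this chapter, and the feature names and data are invented.

```python
# Purely illustrative sketch: flagging pattern irregularities in operational
# data with an unsupervised anomaly detector. Features and data are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=1)

# Synthetic per-flight features: fuel-flow deviation, vibration index and
# deviation from the planned descent profile.
normal_flights = rng.normal([0.0, 1.0, 0.0], [0.1, 0.2, 0.5], size=(2_000, 3))
odd_flights = rng.normal([0.6, 2.5, 3.0], [0.2, 0.3, 0.8], size=(10, 3))
flights = np.vstack([normal_flights, odd_flights])

# The detector learns what "ordinary" traffic looks like and scores departures
# from that pattern, without needing labelled incident data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(flights)

flagged = np.where(detector.predict(flights) == -1)[0]
print(f"{len(flagged)} flights flagged for review out of {len(flights)}")
```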
In the risk management context, the major advantages of AI can be divided
into the following areas:

1) Data processing: the use of vast volumes of structured and unstructured data; updating patterns and variations of datasets;
2) Efficiency: increased automation of regular support in the risk manage-
ment processes;
3) Real time and predictive: identification of emerging risks, issuance of risk
mitigation recommendations, faster and more accurate reaction to poten-
tially hazardous situations;
4) Business decision-making: improved processes thanks to better under-
standing and recognition of risks.53

44 See G.Tayfur, Soft Computing in Water Resources Engineering:Artificial Neural Networks, Fuzzy Logic And
Genetic Algorithms (WIT Press 2012), 4.
45 IATA (n 8), 18.
46 European Aviation (n 31).
47 ICAO,‘Artificial Intelligence and Digitalization in Aviation’, ICAO Assembly – 40th Session, 1 August
2019, 2 <www.icao.int/Meetings/a40/Documents/WP/wp_268_en.pdf> accessed 27 May 2021.
48 Deloitte,‘Why Artificial Intelligence is a Game Changer for Risk Management’ (2016), 1 <https://
www2.deloitte.com/content/dam/Deloitte/us/Documents/audit/us-ai-risk-powers-performance
.pdf> accessed 27 May 2021.
49 ibid.
50 ibid.
51 F. Sassite et al., ‘A Smart Data Approach for Automatic Data Analysis’ in Vikrant Bhateja, Suresh
Chandra Satapathy, Hassan Satori (eds) Embedded Systems and Artificial Intelligence. Proceedings of ESAI
2019, Fez, Morocco (Springer 2020), 691.
52 Deloitte (n 46).
53 FERMA (n 11), 14.
Specifically, in the aviation business, AI can support risk management pro-
cesses by enabling real-time aircraft safety monitoring54 and passengers’ health
monitoring,55 improving asset protection,56 enhancing dynamic resource allo-
cation (e.g. minimising queues at the airports by means of predictive analytics
and passenger notification applications),57 etc.
The potential of AI stirs up the imagination of the air transportation industry, fuelling visions of a future in which aviation actors commonly adopt AI for a variety of purposes, including AI applications in the cockpit environment.58 Although it is fine to have ambitious objectives when aiming for an AI transition, it is also essential to start with a proof of concept.59 The usefulness and commercial viability of such an objective for a particular entity, such as an airline, airport, or aviation organisation, can be proven by means of testing or a pilot project.60 The launch of a single, small-scale initiative to test the use of AI within a specific organisational structure can expose, in a safe and controlled manner, challenges that may arise during the transformation.61 Applying this strategy can save a lot of struggle in the future.
When considering the use of AI for the purpose of risk management, it
should be remembered that currently AI algorithms are tasked with perform-
ing specified operations with restricted scope. Visual or speech recognition
and text analysis can serve as examples.62 While AI is indeed capable of outperforming humans at such specific tasks, all-purpose and infinitely capable AI remains mostly at the conceptual stage.63 For this reason, the creation of a
hybrid ecosystem based on the combination of computational methods and
human intelligence will likely result in the best outcome.64 Today, making a
risk-based decision requires two different but inextricably entwined elements:
objective data and a subjective feeling of what is to be achieved or lost by the
decision. Objective calculation and subjective perception are vitally impor-
tant; neither is quite enough by itself.65 In contrast to machines,

54 W. Bellamy III, ‘Delta Develops Artificial Intelligence Tool to Address Weather Disruption, Improve
Flight Operations’ <www.aviationtoday.com/2020/01/08/delta-develops-ai-tool-address-weather
-disruption-improve-flight-operations> accessed 27 May 2021.
55 For instance,ViatorAero system equipped with abilities to monitor passengers temperature or tired-
ness, see Aviation Business News, ‘How Artificial Intelligence is Now Supporting the Aviation
Industry’ (Aviation Business News, April 2018) <www.aviationbusinessnews.com/low-cost/artificial
-intelligence-aviation-industry> accessed 27 May 2021.
56 IATA (n 8), 16.
57 IATA, (n 8),13.
58 R.Abeyratne, Megatrends and Air Transport (Springer 2017), 173–200.
59 FERMA (n 11) 21.
60 ibid.
61 ibid.
62 ibid.
63 ibid, 14.
64 European Aviation (n 31), 25.
65 P.L. Bernstein, Against the Gods:The Remarkable Story of Risk (Wiley 1998), 119.
human choices are mostly the product of the automatic and repetitive use of
their expertise and acquired skills, rather than of contemplation, evaluation and
thoughtful assessment. In unprecedented, time-sensitive conditions, individu-
als often need to make decisions without much consideration.66 In this respect,
‘human–machine teaming’67 seems to be the best way to go. Through the use
of AI, present constraints in risk management processes can be overcome, and decisions based on gut feeling can become data-based and systematic.68 AI, being available 24/7 and fully up to date, can enable a clearer understanding of risks, provide timely information and support to designated aviation staff, and thereby tremendously increase the sector’s chances of taking corrective and preventive steps in time.
In the coming years, AI is expected to support and enhance the perfor-
mance of various aviation actors69 by assisting in areas where its abilities greatly
outweigh those of human operators, such as detection and identification of
new risks, fleet and staff management, air traffic management, etc.70 Although
its capabilities are still limited, and many challenges of a regulatory or ethical
nature remain, the aviation industry should be encouraged to lay down the
foundations for the application of the technology early, taking one small step
at a time.

Legal and regulatory framework: The need for a unified global response
AI is steadily being built into the complicated machinery of commercial avia-
tion, and this process is likely to speed up. Therefore, a globally coordinated response with the objective of establishing principles governing its utilisation has become a matter of urgency. It is somewhat troubling that, despite the widening use of AI in the aviation business and the attention it has garnered among decision-makers, the international response to this issue is still lacking.71
Airlines, when performing international flights, are often subject to
numerous, frequently contradictory, laws regulating various spheres of their

66 U. Pagallo,‘Algo-Rhytms and the Beat of the Legal Drum’ (2018) 31 Philos.Technol., 507, 521.
67 T. Kistan, ‘Innovation in ATFM:The Rise of Artificial Intelligence’ (presentation at ICAO Air Traf-
fic Flow Management Global Symposium, Singapore, 20–22 November 2017) <https://www.icao
.int/Meetings/ATFM2017/Documents/3-Trevor%20Kistan%20%20ICAO%20ATFM%20Global
%20Symposium%20-%20AI%20in%20ATFM,%20Thales,%20T.%20Kistan%20-%20PRESENTA-
TION.pdf> accessed 27 May 2021.
68 FERMA (n 11), 14.
69 European Aviation (n 31), 25.
70 ibid.
71 R. Abeyratne, ‘Key Legal Issues in ICAO: A Commentary and Review’ (2019) 44 Air & Space L.,
53, 66.
operation.72 These include matters such as passenger rights and air carrier lia-
bility, as well as health safety standards and protective measures to be fol-
lowed when flying to a designated country. For a business, which is global in
nature, such a patchwork of regulations is highly problematic. Similarly, the
impact of AI goes beyond any particular entity or even beyond the borders
of any sector.73 Hence, the use of AI solutions can have a disruptive effect on
those involved in aviation activities, such as airlines, manufacturers, airports,
air navigation service providers, or maintenance service providers. Most cer-
tainly, it will also significantly affect larger circles. Consequently, there is a
strong need to foster global collaboration and consolidate isolated, regional,
or sectoral industry efforts.74 The complexity of the subject matter requires
the involvement of a wide range of actors from diverse backgrounds and with
broad expertise to collaborate on relevant policies and to address public and
ethical matters relating to AI.75
Preferably, such actions should be taken under the auspices of the
International Civil Aviation Organization (ICAO), a specialised agency of
the United Nations, established by the Chicago Convention76 of 1944 and
entrusted with the tasks of developing ‘principles and techniques of international
air navigation’77 and promoting ‘planning and development of international air
transport’.78
Given the wide variety of AI solutions and potential applications, one could
make various comments on the most pressing legal issues that need to be
addressed in the wake of AI implementation in the aviation sector. However,
two particular problems, accelerated by the COVID-19 crisis, call for the
immediate attention of the international civil aviation community: namely, the safety of the AI applied, and data governance in an aviation sector equipped with AI solutions.
From the very beginning of commercial air transportation, safety was a major
concern for the aviation community. Not surprisingly, then, ICAO, from its
inception, has been focused on safety and has played a vital role in establish-
ing rules focused on the safe operation of aircraft. The industry has evolved
over the years, and the construction of aircraft has improved together with the
voluminous body of hard and soft air laws addressing aviation safety, and sub-
sequently aviation security. As a result, today, there is no equivalent mode of
transport that can be associated with such a high degree of safety and with such

72 Regarding conflicting public health measures. See A. Grout, N. Howard, R. Coker, Elizabeth M.
Speakman,‘Guidelines, Law, and Governance: Disconnects in the Global Control of Airline-Associ-
ated Infectious Diseases’ (2017) 17(4) Lancet Infect. Dis., e118, 119.
73 European Aviation (n 31), 12.
74 See for instance European initiatives like European Aviation (n 31) and EASA (n 29).
75 Deloitte (n 46), 23.
76 Chicago Convention,Article 43.
77 ibid,Article 44.
78 ibid.
rigorous boarding procedures as a commercial airliner. AI implementation will
undoubtedly alter the landscape of safety and security risks. In the light of the
above, the use of AI, being a risk management solution and an emerging safety
and security risk79 at the same time, is of great interest to ICAO.
It is worth noting that the issue of AI has been raised at the ICAO forum on
a few occasions. For example, during the Thirteenth Air Navigation Conference,
Singapore presented a paper titled ‘Potential of Artificial Intelligence in Air
Traffic Management’.80The paper highlighted AI’s capabilities in the decision-
making domain, and its potential for guaranteeing that aviation is not hin-
dered by human limitations.81 Moreover, during the 40th Assembly Session,
the Working Paper on AI was presented by the International Coordinating
Council of Aerospace Industries (ICCAIA) and Civil Air Navigation Services
Organisation (CANSO), calling for Standards and Recommended Practices
(SARPs) amendments and actions, particularly in areas such as certification,
operations, qualification, and data sharing and training.82 Nevertheless, despite
ICAO involvement in numerous initiatives in the field of AI,83 including par-
ticipation in the EUROCAE Working Group 114 (WG-114) on Artificial
Intelligence,84 there is no clear indication of any attempt to review or improve the relevant SARPs. Apart from actions aimed at exploring and
promoting the potential of AI, ICAO should strive to ensure that an adequate
framework for the safe use of AI in commercial aviation is in place.

79 M. Brundage et al.,‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Miti-
gation’ (2018), 3–7.
80 ICAO, ‘Potential of Artificial Intelligence (AI) in Air Traffic Management (ATM)’ Thirteenth Air
Navigation Conference ICAO, Montréal, 9–19 October 2018 <www.icao.int/Meetings/anconf13/
Documents/WP/wp_232_en.pdf> accessed 27 May 2021.
81 ibid., 3.
82 ICAO (n 53).
83 As listed:‘Partnering in the UN AI for Good Annual Global summit, presenting work session on AI
in aviation as well as participate in mobility sessions; Hosting internships, developing in-house deep
learning AI models showcasing natural language processing techniques for aeronautical information
management as well as document summarization; Supporting local AI company networks, start-ups
and incubators such as Thales AI@Centech and Concordia District 3 by providing ideas and coach-
ing, Collaborating with McGill on the introduction of AI in aviation inside the McGill data science
and machine learning program. Broadening the horizon of AI in aviation through workshops organ-
ized in collaboration with CRIAQ, Exploring the creation of an AI in Aviation Focus Group under
the hospices of ITU to address issues related to compliance and certification, Collaboration with
XPrize Foundation by providing AI challenges and participating in the Global Initiative on AI and
Data Commons, Collaboration with ITU on their United 4 Smart Sustainable Cities (U4SSC) ini-
tiative including the use of AI for urban mobility; Participation in the EUROCAE Working Group
114 (WG-114) on Artificial Intelligence’, ICAO,‘Artificial Intelligence (AI)’ <https://www.icao.int
/safety/Pages/Artificial-Intelligence-(AI).aspx> accessed 27 May 2021.
84 EUROCAE,‘New working group:WG-114 / Artificial Intelligence’ (19 June 2019) <www.eurocae
.net/news/posts/2019/june/new-working-group-wg-114-artificial-intelligence> accessed 27 May
2021.
Further to the above-indicated safety concerns, another issue that calls for
global action is data governance in the aviation sector. It goes without saying
that AI requires sufficient quality and a proper quantity of information to fulfil
its function85 so as to avoid the GIGO (‘garbage in, garbage out’) effect.86 But
data governance is much more than data quality and quantity management;
it is also about ensuring that acquired data are reliable and available when
needed,87 that proper infrastructure is provided and secure (also when it comes
to cyber risks),88 that data sharing frameworks are in place,89 and that issues
such as potential data breaches are addressed and managed.90 As the challenges
associated with processing data are not new to international aviation,91 it is to
be hoped that they will also be addressed in the AI context.
As the aviation sector can benefit only from a unified, coordinated approach,
cooperation between ICAO, states, aviation stakeholders, and industry
representatives to define standards for AI, especially in the above-discussed
areas (i.e. safety and data governance), is necessary.

Conclusions
In the face of the intense financial distress caused by COVID-19, the imple-
mentation of AI may not appear to be a basic need for the industry. Beyond doubt,
commercial aviation needs to learn from what has gone wrong, and beyond
doubt this is the right and proven path in the aviation sector. However, relying
solely on past successful strategies may prove to be not only ineffective in the
present but also hazardous to the future of commercial aviation. The aviation
industry needs to widen its perspective beyond the challenges of conducting
business in today’s pandemic environment, beyond solving the COVID-19-induced problems, not only to let the industry grow, but also to limit the risks
of major crises recurring in the future. As Professor Brian Havel, Director of
the Institute of Air and Space Law at McGill University, put it: ‘[i]nternational
aviation is not a business, where problems disappear if we simply leave them
alone, in which matters can be left to wait forever’.92 COVID-19 has put the
air transportation industry under stress and called for greater flexibility and

85 FERMA (n 11), 5.
86 Bernstein (n 71), 177.
87 FERMA (n 11), 11
88 ibid.
89 European Aviation (n 31), 16.
90 FERMA (n 11), 12.
91 P. Mendes de Leon, ‘The Fight Against Terrorism Through Aviation: Data Protection Versus Data
Production’ (2006) 31 Air & Space L., 320.
92 B. Havel, ‘Keynote Speech’Worldwide Airports Lawyers Association XI Conference, Bogota, 9–11
October, 2019 <https://www.abiaxair.com/wala2019/post_conference.php> accessed 27 May
2021.
agility to deal with the challenges of prolonged uncertainty.93 Therefore, the aviation community is seeking to implement various improvements using the technologies at hand. AI, as one of the most promising advancements
when it comes to risk identification, calls for special attention. AI is a disruptive
technology and a risk in itself. In the wake of its accelerating implementation
in the aviation sector, such issues as safety and data governance need to be
addressed without any delay. Today, addressing the issue of AI in the aviation
industry is a matter of urgency both from a regulatory and business perspective.
The appropriate framework is required to establish trust and overcome nega-
tive perceptions of AI, to maintain and further improve levels of safety in civil
aviation, to reduce liability- and insurance-related challenges, and to provide businesses with an acceptable level of certainty when it comes to the use of AI.
Therefore, a coordinated, global response involving a wide range of AI experts
and industry actors is necessary to ensure that these objectives are met. The
European efforts in the field of AI, including EASA and EUROCONTROL
initiatives, can serve as inspiring examples and a solid foundation for the inter-
national discussion to be held under the lead of ICAO.94

93 ACI & IATA (n 33), 6.


94 The views expressed in this chapter do not necessarily reflect the views of entities with which the
Author is affiliated.
9 Autonomous AI, smart seaports,
and supply chain management
Challenges and Risks
Mitja Kovač

AI, smart seaports, and intelligent ships


The task of handling an ever-increasing amount of cargo in a safe, efficient
and environmentally friendly way is one of the biggest challenges facing sea-
ports and the wider maritime industry. The literature defines “ports” as logistic networks where flows of cargo, money and information are intertwined with different political, economic, social, technological and environmental issues.1 Such logistic networks employ AI in order to automate operational areas and to increase communication and profitability.2 Seaport activities are indeed becoming more technological.3 Currently, the Internet of Things (IoT)4 is considered a particularly important technological addition.5
A “smart port”, on the other hand, can be defined as a fully automated port where all devices are connected via the IoT.6 Yang et al. offer examples of

1 C.A. Duran, F. Palominos, F.M. Cordova, “Applying Multi-criteria Analysis in a Port System” [2017]
122 Procedia Computer Science, 478–485; M.K. Othman, N.S. Rahman, A. Ismail, H.A. Saharuddin,
“Factors Contributing to the Imbalance of Cargo Flows in Malaysia Large-scale Minor Ports Using
a Fuzzy Analytical Hierarchy Process (FAHP) Approach” [2019] 35 The Asian Journal of Shipping and
Logistics, 1, 13-23.
2 See e.g.Y.Yang, M. Zhong, H.Yao, F.Yu, X. Fu, O. Postolache, “Internet of Things for Smart Ports:
Technologies and Challenges” [2018] 21 IEEE Instrumentation and Measurement Magazine, 1, 34–43;A.
Botti,A. Monda, M. Pellicano, C.Torre,“The Conceptualization of the Port Supply Chain as a Smart
Port Service System:The Case of the Port of Salerno” [2017] 5 Systems, 2, 35.
3 C.A. Duran, F.M Cordova, F.Yanine, E. Carrillo, “Fuzzy Knowledge to Detect Imprecisions in Stra-
tegic Decision Making in a Smart Port” [2020] 9 International Journal of Advanced Trends in Computer
Science and Engineering, 3, 377–380.
4 “Internet of Things (IoT), as defined by the IEEE, is a network of items including sensors and embed-
ded systems which are connected to the Internet and enable physical objects to gather and exchange
data”; G. Jayavardhana, B. Rajkumar, S. Marusica, M. Palaniswami,“Internet of Things (IoT):A Vision,
Architectural Elements, and Future Directions” [2013] 29 Elsevier Future Generation Computer Systems,
7, 1645–1660.
5 Yang et al. (n 4).
6 ibid. See also X. Li,W. Xu,“A Reliable Fusion Positioning Strategy for Land Vehicles in GPS-denied
Environments based on Low-cost Sensors” [2017] 64 IEEE Trans. Ind. Electron., 4, 3205–3215; M.R.
Kaloop, M.A. Sayed, D. Kim, E. Kim,“Movement Identification Model of Port Container Crane based
on Structural Health Monitoring System” [2014] 50 Structural Engineering and Mechanics, 1, 105–119.

DOI: 10.4324/9781003246503-11
128 Regulating artificial intelligence in industry
projects in the ports of Rotterdam, Hamburg, Le Havre, Shanghai, Vigo, and
Singapore relating to smart ports and the IoT.7 These include sensing tech-
nologies, automated quayside cranes, automated guided vehicles for container
handling and yard cranes.8 A network of smart sensors and actuators, wireless devices and data centres makes up the key infrastructure of smart ports, allowing port authorities to provide essential services in a faster and more efficient manner.9 Moreover, according to some estimates, an automated container terminal can save at least 25% more energy and reduce carbon emissions by 15% more than a traditional terminal.10 Thus port systems are undergoing a pro-
found transformation, implementing technologies associated with distributed
smart sensors and actuators, data communication and internet connectivity for
remote and automatic operation, control optimization based on AI, as well as
big data analysis.11 Such smart ports employ AI-related information systems
that “manage, monitor, and store massive amounts of data (e.g. maritime traffic
and logistic data) and provide large-scale computerised and paperless services
in smart ports”.12 The diversity of the gathered data and information then
enables “smart port applications to adapt to the dynamic requirements of a

7 See for instance:A. Belfkih, C. Duvallet, B. Sadeg,“The Internet of Things for Smart Ports:Applica-
tion to the Port of Le Havre” [2017] Proc. Int. Conf. Intell. Platform Smart Port (IPaSPort), 15–16;
K.H. Kim, B.H. Hong, “Maritime Logistics and Applications of Information Technologies” [2010]
Proc. 40th Int. Conf. Comput. Industrial Eng., 1–6; Port of Rotterdam Authority, “The Smart Port
Doesn't Stop at the City Limits” (8 April 2019) https://www.portofrotterdam.com/en/news-and
-press-releases/the-smart-port-doesnt-stop-at-the-city-limits accessed 27 May 2021; Hamburg Port
Authority, “Smart-port: The Intelligent Port”, available at <https://www.hamburg-port-authority
.de/en/hpa-360/smartport/> accessed 27 May 2021; C.L Botana,“Environmental Actions. Port of
Vigo: Green Port” [2015] Proc.Atlantic Stakeholders Platform Conf. (ASPC), 1–4.
8 ibid.
9 The major drivers in smart ports are productivity and efficiency gains. Yau et al. suggest that the
latest version of ports are customer – and community centric smart ports that are distinguished by
five main features: a) smart port services (such as vessel and container supply chain management);
b) technologies such as data centre, networking and communication, and automation; c) use of sus-
tainable technology to increase energy efficiency and achieve reduction in greenhouse emission; d)
cluster management such as a shipping cluster that consists of geographically proximate companies
and stakeholders with their main activity being shipping; and e) development of hub infrastructures
to foster collaboration among different ports and supply chain stakeholders. See: K-L.A. Yau, S.
Peng, J. Qadir,Y-Ch. Low, M.H. Ling,“Towards Smart Port Infrastructures: Enhancing Port Activities
using Information and Communication Technology” [2020] 23 IEEE Instrumentation & Measurement
Magazine, 8, 83387–83404.
10 In such smart ports the whole terminal's modules communicate with a central control unit of the
terminal control room. H.M. Le,A.Yassine, M. Riadh,“Scheduling of lifting Vehicle and Quay Crane
in Automated Port Container Terminals” [2012] 6 Intern. J. Intelligent Inform. Database Syst., 516–531.
11 Yang et al., supra note 6. See also N. Bahnes, B. Kechar, H. Hafid,“Cooperation between Intelligent
Autonomous Vehicles to enhance Container Terminal Operations” [2016] 3 J. Innovation in Digi-
tal Ecosystems, 1, 22–29; R.H. Murofushi, J.Tavares, “Towards Fourth Industrial Revolution Impact:
Smart Product based on RFID Technology” [2017] 20 IEEE Instrum. Meas. Mag., 2, 51–56.
12 L. Heilig, S. Voß, “Information Systems in Seaports: A Categorization and Overview” [2016] 18 Inf.
Technol. Manage., 3, 179–201.
complex system handling diverse aspects and comprising different technologies
(e.g. wireless communications and embedded systems for sensing operation)”.13
The unforeseeable emergence of COVID-19 has presented pressing challenges and opportunities for the maritime sector and for the employment of AI. COVID-19-related quarantines, closures of ports, security checks at ports resulting in extra waiting times for berthing operations, disruption to inland seaport transhipment operations and supply chain management, and other factors have had an impact on the overall volume of freight and ocean shipping. Greater employment of smart ports and smart ships could mitigate such adverse effects, and AI-related digitisation could be the legacy of the pandemic.
Analytically speaking, the entire smart port ecosystem consists of five main
areas of application: (a) smart vessel management; (b) smart container man-
agement; (c) smart port management; (d) smart energy management; and (e)
smart resource management.14 Smart AI-governed ports have the potential to
cut human error, enable active communication with the social environment,
make operations faster, safer, and more efficient, and cut carbon emissions for
environmental and energy sustainability.15 Moreover, smart ports offer adap-
tive solutions in response to a world characterized by uncertainty.16 Such AI
decision-making support systems based on a predictive model of behaviour
economize on transaction costs, apply efficient risk management, improve worker
safety and enable a super-efficient allocation of resources, competitiveness, and
sustainable development.17 Finally, AI can contribute to “smart ships”, i.e. sys-
tems with completely autonomous navigation, sailing, control, and guidance.
These are sometimes referred to as “intelligent ships”.18

13 Yau et al. (n 11). See also Ch. Shuo, J.Wang, J. Zhao,“The Analysis of the Necessity of Constructing
the Huizhou Smart Port and overall Framework” [2017] Proc. Int.Conf. Intell.Transp., Big data Smart
City, 159–162.
14 Yau et al. (n 11), 83391.
15 G.A. Rodrigo, N. Gonzalez-Cancelas, B. Molina Sarrano, A. Camarero Orive, “Preparation of a
Smart Port Indicator and Calculation of a Ranking for the Spanish Port system” [2020] 4 Logistics, 9;
G. Buiza, S. Cepolina, A. Dobrijevic, M. del Mar Cerban, O. Djordjevic, and C. Gonzalez,“Current
Situation of the Mediterranean Container Ports regarding the Operational, Energy and Environ-
ment areas” [2015] Proc. Int. Conf. Ind. Eng. Syst. Manag., 530–536; J.Twidell,T.Weir, Renewable Energy
Sources (3rd ed., Routledge, 2015).
16 A. Bujak, “The Development of Telematics in the Context of the Concepts of Industry 4.0 and
Logistics 4-0” [2018] Proceedings of the e-Business and Telecommunication Networks (Springer), 509–524.
17 Ports of Singapore, Rotterdam and Hamburg, are prime examples of smart ports using AI tools to
improve their business operations.
18 See for instance M. Schiaretti, L. Chen, R.R. Negenborn, “Survey on Autonomous Surface Vessels:
Part I – A New Detailed Definition of Autonomy Levels”, in Tolga Bektaş, Stefano Coniglio,Anto-
nio Martinez-Sykora and Stefan Voß (eds), Computational Logistics (Springer, 2017), 219–233; L.Van
Cappelle, L. Chen, R.R. Negenborn, “Survey on ASV Technology Developments and Readiness
levels for Autonomous Shipping” [2017] Proceedings of the 9th International Conference on Computa-
tional Logistics (ICCL 2018), 65–79.
Integration and collaboration
A seaport with an AI ecosystem is built around a “cyber-physical system”, i.e.
physical infrastructure with cyber facilities that impact the efficiency of the
maritime industry on various levels, including ships, ports, and associated sup-
ply chains.19 AI in the field of supply chain management is used to spot patterns in the logistics chain and to offer detailed predictions of when vessels, lorries, and containers will reach terminals, thereby allowing for better planning.20 Moreover, AI may be used to predict future equipment needs, long-term yard utilisation, container damage, the number of gate visits, and many other aspects.21
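
By way of a purely illustrative sketch, and not as a description of any of the port systems discussed in this chapter, the snippet below frames such an arrival-time prediction as a simple regression problem over hypothetical voyage features (distance remaining, current speed, and congestion at the destination terminal).

```python
# Purely illustrative sketch: predicting hours-to-berth from hypothetical
# voyage features. Feature names and data are invented for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=7)
n = 3_000

# Synthetic historical voyages: distance remaining (nautical miles), current
# speed (knots) and a congestion index for the destination terminal.
distance_nm = rng.uniform(50, 2_000, n)
speed_kn = rng.uniform(8, 22, n)
congestion = rng.uniform(0, 1, n)

# Hypothetical ground truth: sailing time plus congestion-related waiting.
hours_to_berth = distance_nm / speed_kn + 12 * congestion + rng.normal(0, 2, n)

X = np.column_stack([distance_nm, speed_kn, congestion])
X_train, X_test, y_train, y_test = train_test_split(
    X, hours_to_berth, test_size=0.2, random_state=0
)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

# A terminal planner could schedule berths, cranes and hinterland transport
# around the predicted arrival window of an inbound vessel.
sample_voyage = np.array([[600.0, 14.0, 0.4]])
print(f"Predicted hours to berth: {model.predict(sample_voyage)[0]:.1f}")
```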
Seaports in Rotterdam, Hamburg, Le Havre, Shanghai, Vigo, and Singapore
have, for example, established so-called “supply chain integration”. This term
refers to a “data sharing, integrity and transparent partnership with information
technology, system integration and collaboration as the guiding ideology, and
centred on the port enterprise at the core”.22 Another related process is the so-
called “supply chain structure of the enterprise”, which is formed by “related
upstream and downstream enterprises to provide a smooth flow of informa-
tion, logistics and capital throughout the supply chain from the middle source
to the remittance”.23 For example, smart vessel supply management supervises
vessels, including their choice of routes and ports, based on the location of and
traffic within the ports, so as to improve their arrival punctuality. On
the other hand, smart container management helps with the acquisition, track-
ing, transport, storage, and repositioning of containers,24 as well as with their
trans-shipment (transfer from one vessel to another).25 Moreover, smart energy
management reduces energy consumption.26 Such an AI-operated system of

19 S.Vijay,“The Indian Ocean and Smart Ports” [2019] 14 Indian Foreign Affairs Journal, 3, 207–221.
20 A. Chiappetta,“Toward Cyber Ports:A Geopolitical and Global Challenge” [2017] 12 FormaMente, 1.
21 J.,Yang,Y. Li, X. Ding, J. Lin,“Research on Automatic Wharf Unmanned Gate System Based on Arti-
ficial Intelligence” [2019] Proceedings of the 2019 7th International Conference on Information Technology:
IoT and Smart City, 521– 524.
22 Ch-S. Lu,“Port Supply Chain Coordination Evaluation Research” [2009] 41 China Market, 1, 52–54.
See also J.Tongzon,Y-T. Chang, S-Y. Lee,“How Supply Chain Oriented is the Port Sector” [2009]
122 Int J Prod Econ., 1, 21–34;Y. Bo,Y. Meifang, “Construction of the Knowledge Service Model
of a Port Supply Chain Enterprise in a Big Data Environment” [2020] 33 Neural Computing and
Applications, 5, 1699–1710.
23 ibid.
24 Chin Liu, Hossein Jula, K Vukadinovic, and Petros A. Ioannou,“Comparing Different Technologies
for Containers Movement in Marine Container Terminals” [2000] Proc. ITSC. IEEE Intell. Transp.
Syst., 488–493. See also Yau et al. (n 11), 83391.
25 Moreover, smart port management optimizes port services, such as commodity inspection, customs
clearance, transportation planning, procedures and applications (e.g., trans-shipment, trade license, as
well as import and export permits), customer service, market information exchange, and insurance
provisioning;Yau et al. (n 11), 83392.
26 For example, Yau et al. (n 11) suggest that the Valencia and Hamburg ports are equipped with
motion-sensitive lights that are illuminated when vehicles pass by; such a lighting system has been
shown to reduce energy consumption by up to 80%. See also M. Jovic, N. Kavran, S.Aksentijevic, E.
management can provide effective schedules, resources allocation, and optimi-
sation in terms of time and cost.27
The AI-enabled ecosystem increases collaboration between the different
parties, such as port authorities, cargo owners, and third-party logistics provid-
ers, as it aligns their individual digital roadmaps. This in turn allows for mutu-
ally beneficial opportunities to improve efficiency and cut waste (by sharing
data and providing a standard interface for AI-enabled insights, predictions, and
constraints). Moreover, AI can improve the accuracy and reliability of forecasts
of vessel arrival and departure by up to 80%, which greatly improves the
operation of supply chain management. AI also analyses and monitors cargo
and marine vehicles in real-time. It automates challenging manual analysis,
speeds up cargo logistics planning, and improves the detection of potentially
exceptional situations.28
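To give a sense of the kind of predictive tooling described above, the following minimal sketch trains a regression model to estimate vessel arrival delays. Everything in it is an assumption made for illustration: the features (remaining distance, speed, terminal congestion, wind) and the synthetic data stand in for the AIS feeds, terminal schedules, and weather data a real port system would draw on; it is not a description of any actual deployment.

```python
# Minimal illustrative sketch: predicting vessel arrival delay from voyage features.
# All data and feature names are hypothetical; real systems would use AIS feeds,
# terminal schedules, and weather services.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000

# Synthetic features: remaining distance (nm), average speed (knots),
# queued vessels at the destination terminal, wind speed (m/s).
distance = rng.uniform(50, 500, n)
speed = rng.uniform(8, 20, n)
congestion = rng.integers(0, 15, n)
wind = rng.uniform(0, 25, n)

# Synthetic target: arrival delay in hours (congestion and weather add delay).
delay = 0.5 * congestion + 0.1 * wind + rng.normal(0, 1, n)

X = np.column_stack([distance, speed, congestion, wind])
X_train, X_test, y_train, y_test = train_test_split(X, delay, random_state=0)

model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"Mean absolute error of predicted delay: {mean_absolute_error(y_test, pred):.2f} h")
```

Predictions of this kind can then feed berth allocation, yard planning, and hinterland transport scheduling, which is where the efficiency gains discussed above arise.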

Potential risks and regulatory matters


The large-scale cyber connectivity of AI may lead to many security challeng-
es.29 For example, a report on cyber threats in the maritime sector prepared
by the European Network and Information Security Agency (ENISA) shows
that cyber threats are a serious concern.30 One of the most pressing issues is
how best to implement cyber security and how to keep digital and auto-
mated ports safe. AI-operated smart ports and ships are potential targets of
attacks that can result in safety and security risks.31 For example, infiltrating
port computers, transmitting fake GPS signals to alter vessels’ routes, altering

Tijan,“The Transition of Croatian Seaports into Smart Ports” [2019] Proc. 42nd Int. Conv. Inf. Com-
mun.Technol., Electron. Microelectron., 1386–1390.
27 Yau et al. (n 11), 83392. See also G.C. Ceyhun, “Recent Developments of Artificial Intelligence in
Business Logistics: A Maritime Industry Case”, in Hacioglu Umit (ed.), Digital Business Strategies in
Blockchain Ecosystems Transformational Design and Future of Global Business (Springer 2020), 343–355;
Ch.-A. Gizelis,T. Mavroeidakos, A. Marinakis, A. Litke,V. Moulos,“Towards a Smart Port:The Role
of the Telecom Industry” in Ilias Maglogiannis, Lazaros Iliadis and Elias Pimenidis (eds.), Artificial
Intelligence Applications and Innovations (Springer 2020) 128-144.
28 See e.g.A. Loukili, S.L. Elhaq,“A Model Integrating a Smart Approach to Support the National Port
Strategy for a Horizon of 2030” [2018] Proc. Int. Colloq. Logistics Supply Chain Manage. (LOGISTI-
QUA), 81–86; K. Douaioui, M. Fri, Ch. Mabrouki, E.A. Semma, “Smart Port: Design and Perspec-
tives” [2018] Proc. 4th Int. Conf. Logistics Oper. Manage., 1–6.
29 Chiappetta (n 22), 95–104. Chiappetta also suggests that software systems that support critical infrastructure operations are becoming more and more attractive to outside cyber-attacks from cybercriminals interested in wreaking havoc in cyber environments. See: A. Chiappetta, “Hybrid Ports: The
Role of IoT and Cyber Security in the next Decade” [2017] 2 Journal of Sustainable Development of
Transport and Logistics, 2, 47–56.
30 ENISA,“Cyber security aspects in the maritime sector” (ENISA, 9 December 2011) <https://www
.enisa.europa.eu/publications/cyber-security-aspects-in-the-maritime-sector-1/> accessed 27 May
2021.
31 Yau et al. (n 11) 83401; M. Atif, S. Latif, R. Ahmad, A.K. Kiani, J. Qadir, A. Baig, H. Ishibuchi,
W. Abbas, “Soft Computing Techniques for Dependable Cyber-physical Systems” [2019] 7 IEEE
the vessels’ automatic identification system signal to misreport their location,
and accessing electronic chart display and information systems software to modify
maps all represent real concerns and may have disastrous consequences.32 These
risks can affect potential liability for any damage caused. The industry-specific
regulations related to this subject include provisions relating to safety and secu-
rity standards, as found in the International Ship and Port Facility Security
(ISPS) Code33 and in the EU Regulation (EC) No 725/2004 on enhancing
ship and port facility security.34 However, these regulations do not consider
cyber-attacks as possible threats or unlawful acts and are not adequate to address
the potential AI-related risk for public security.
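Purely by way of illustration, one line of defence against the falsified position data mentioned above is a simple plausibility check on reported vessel tracks. The sketch below, which assumes access to time-stamped latitude/longitude reports, flags consecutive reports that would imply a physically implausible speed; it is a toy example under stated assumptions, not a description of any deployed port or ship security system.

```python
# Illustrative plausibility check on AIS-style position reports.
# Flags consecutive reports implying an impossible speed (possible spoofing).
# The data below are invented; real checks would also rely on sensor fusion (radar, GPS).
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_SPEED_KNOTS = 40.0  # assumption: above this, the track is suspect

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * asin(sqrt(a)) * 3440.065  # Earth radius in nautical miles

def flag_suspicious(reports):
    """reports: list of (timestamp_hours, lat, lon); returns indices of suspect jumps."""
    suspects = []
    for i in range(1, len(reports)):
        t0, lat0, lon0 = reports[i - 1]
        t1, lat1, lon1 = reports[i]
        dt = t1 - t0
        if dt <= 0:
            suspects.append(i)
            continue
        speed = haversine_nm(lat0, lon0, lat1, lon1) / dt
        if speed > MAX_PLAUSIBLE_SPEED_KNOTS:
            suspects.append(i)
    return suspects

# Hypothetical track: the third report "jumps" roughly 60 nm in six minutes.
track = [(0.0, 51.95, 4.05), (0.5, 52.00, 4.10), (0.6, 52.90, 4.90)]
print(flag_suspicious(track))  # -> [2]
```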
In terms of potential liability for damages resulting from the use of emerg-
ing digital technologies, the laws of EU Member States do not contain liability
rules specifically applicable to such damages.35 The matter of liability is largely
uncoordinated at the EU level, with the exception of product liability law under
Directive 85/374/EEC,36 some aspects of liability for infringing data protection
law,37 and liability for infringing competition law.38 For most modern techno-

Access, 72030–72049; S. Haykin, “Artificial Intelligence Communicates with Cognitive Dynamic System for Cyber Security” [2019] 5 IEEE Trans. Cognit. Commun. Netw., 3, 463–475.
32 E. Bou-Harb, E. Kaisar, M. Austi, “On the Impact of Empirical Attack Models Targeting Marine
Transportation” [2017] 5th IEEE International Conference on Models and Technologies for Intelligent Trans-
portation Systems, July 2017, available at: <https://www.researchgate.net/publication/318135592
_On_the_Impact_of_Empirical_Attack_Models_Targeting_Marine_Transportation> accessed 27
May 2021.
33 The International Ship and Port Facility Security (ISPS) Code is an amendment to the International
Convention for the Safety of Life at Sea of 1974, entered into force on 25 May 1980, 1184 UNTS 2
34 Regulation (EC) No 725/2004 of the European Parliament and of the Council of 31 March 2004
on enhancing ship and port facility security.
35 See: Expert Group on Liability and New Technologies,“Liability for Artificial Intelligence and other
Emerging Digital Technologies”, available at: <https://ec.europa.eu/transparency/regexpert/index
.cfm?do=groupDetail.groupMeetingDoc&docid=36608>, p. 15.
36 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and
administrative
provisions of the Member States concerning liability for defective products (OJ L 210, 7 August
1985, 29), as amended by Directive 1999/34/EC of the European Parliament and of the Council of
10 May 1999, OJ L 141 20 4 June 1999.
37 For example Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April
2016 on the protection of natural persons with regard to the processing of personal data and on the
free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regula-
tion), OJ L 119, 4 May 2016, 1.
38 For example Directive 2014/104/EU of the European Parliament and of the Council of 26 Novem-
ber 2014 on certain rules governing actions for damages under national law for infringements of
the competition law provisions of the Member States and of the European Union, OJ L 349, 5
December 2014, 1. Moreover, there is also a well-established regime governing liability insurance
with regard to damage caused by the use of motor vehicles (Directive 2009/103/EC of the Euro-
pean Parliament and of the Council of 16 September 2009 relating to insurance against civil liability
in respect of the use of motor vehicles, and the enforcement of the obligation to insure against such
liability, OJ L 263, 7 October 2009, 11–31. Furthermore, EU law also provides for a conflict of tort
laws framework, in the form of the Rome II Regulation (Regulation (EC) No 864/2007 of the
logical ecosystems, such as smart ports, smart ships, and related AI supply chain
management, no specific liability regimes exist.39 Some suggest that general
domestic tort law rules apply. Depending on the jurisdiction, these include a
rule (or rules) introducing fault-based liability with a relatively broad scope of
application, accompanied by several more specific rules. These specific rules
either modify the premises of fault-based liability (especially the distribution
of the burden of proving fault) or establish liability that is independent of fault
(usually called strict liability or risk-based liability), which also take many forms
that vary with regard to the scope of the rule, the conditions of liability, and
the burden of proof.40 Moreover, most liability regimes contain the notion of
liability for others (i.e. vicarious liability, where someone is held responsible for
the actions of another person).41
Those jurisdictions that already allow the experimental or regular use of
highly or fully automated vehicles usually provide for coverage of damage
caused, be it by way of insurance or by reference to the general rules of law.42
For example, in the English legal system, the Automated and Electric Vehicles
Act (AEVA 2018) (c 18) provides in s. 2 that “the insurer is liable” for dam-
age incurred by the insured or any other person in an accident caused by an
automated vehicle.43 Moreover, this Act also requires the registration of auto-

European Parliament and of the Council of 11 July 2007 on the law applicable to non-contractual
obligations (Rome II), OJ L 199, 31 July 2007, 40).
39 A study group suggests that “while existing rules on liability offer solutions with regard to the risks
created by emerging digital technologies, the outcomes may not always seem appropriate, given the
failure to achieve: (a) a fair and efficient allocation of loss, in particular because it could not be attrib-
uted to those: 1) whose objectionable behaviour caused the damage; or 2) who benefitted from the
activity that caused the damage; or 3) who were in control of the risk that materialised; or 4) who
were cheapest cost avoiders or cheapest takers of insurance; (b) a coherent and appropriate response
of the legal system to threats to the interests of individuals, in particular because victims of harm
caused by the operation of emerging digital technologies receive less or no compensation compared
to victims in a functionally equivalent situation involving human conduct and conventional technol-
ogy; (c) effective access to justice, in particular because litigation for victims becomes unduly burden-
some or expensive”. See: Expert Group on Liability and New Technologies (n 38).
40 See for instance M. Martin-Casals, “Causation and Scope of Liability in the Internet of Things” in
Sebastian Lohsee, Reiner Schulze and Dirk Staudenmayer (eds.), Liability for Artificial Intelligence and
the Internet of Things (Hart 2019), 201–233; B.A. Koch, H. Koziol, Unification of Tort Law: Strict Liability
(Kluwer International Publishing 2002) 70–89; P. Giliker, Vicarious Liability in Tort: A Comparative
Perspective (CUP 2010), 228–251.
41 See for instance J. Bell, S. Boyron, S.Whittaker, Principles of French Law (2nd ed., OUP 2008) 360-417;
N. Foster, S. Sule, German Legal System and Laws (4th ed., OUP 2011), 461–475; B.S. Markesinis, H.
Unberath, The German Law of Torts:A Comparative Treatise (Hart 2002).
42 For example, § 7 of the German Road Traffic Act (Straßenverkehrsgesetz) provides for strict liability of
the keeper of the vehicle.This rule was deliberately left unchanged when the Road Traffic Act was
adapted to the emergence of automated vehicles. Similarly, French Decree n° 2018-211 of 28 March
2018 on experimentation with automated vehicles on public roads relies on the Loi Badinter of 5 July
1985 (n°85-677). See also Gerhard Wagner,“Robot Liability”, in Lohsee, Schulze, Staudenmayer (n
42), 27–62.
43 If the vehicle is uninsured, it is the owner of the vehicle who is liable instead. Ibid Lohsee.
mated vehicles designed or adapted to be capable of driving themselves with the
Department of Transport. However, the AEVA 2018 leaves some substantive
issues open, such as the way in which the insurer can reclaim from manufac-
turers and whether there are any defences available, as well as in relation to
definitions.44 Furthermore, some suggest that apart from this legislation, the
harmful effects of the operation of emerging digital technologies in England
can be compensated under existing (“traditional”) laws on damages in contract
and in tort.45

Towards an optimal regulatory intervention


In cases of damage caused by AI within the maritime industry, similarly to
other sectors, it may be impossible to identify the party responsible for provid-
ing compensation.46 This is due to the nature of cyberspace, as well as the com-
plexity of this ecosystem. The importance of cyber security in the maritime
sector is mentioned in ENISA’s report.47 ENISA recommends that govern-
ments take appropriate measures in order to add considerations and provi-
sions towards cyber security in the national maritime regulatory frameworks.
Insightfully, ENISA supports the adoption of enhanced security standards and
practices for cyber security in the maritime sector.48 Any such approach must
involve international cooperation and heavy engagement with the private sec-
tor but should not put the governments in a position to determine the future
design and development of the involved technologies.49
Under EU law, Directive 85/374/EEC merely covers damage caused by AI’s
manufacturing defects and on condition that the injured person is able to prove
the causal relationship between the actual damage and the defect in the product
(in this case, AI).50 Moreover, the Directive deals with the most basic aspects

44 This legislation was introduced in light of issues such as regulatory disconnection, both of which
are concerns held in relation to technology law generally; M. Channon, “Automated and Electric
Vehicles Act 2018: An Evaluation in light of Proactive Law and Regulatory Disconnect” [2019] 10
European Journal of Law and Technology, 2, 1–36.
45 This applies to all fields of application of AI and other emerging digital technologies; ibid. See also C.
Amato, “Product Liability and Product Security: Present and Future”, in Lohsee, Schulze, Stauden-
mayer (n 42), 77–99.
46 See O.J. Erdelyi, J. Goldsmith, “Regulating Artificial Intelligence: Proposal for a Global Solution”
[2018] AIES 2018, 95-101;V.Wadhwa, “Laws and Ethics can’t keep Pace with Technology” [2014]
Massachusetts Institute of Technology:Technology Review, 15.
47 ENISA supra note 35, 14.
48 ibid.
49 Governments should according to ENISA also establish, identify or assign the competent national
authority to deal with cyber security aspects as applicable to the maritime sector. In most of the
countries, these competencies are not clearly established. This identified national authority should
(as applicable) be the central contact point for national cyber security initiatives within maritime
sector; ibid.
50 Article 6 of the Directive also provides that a product is defective when it does not provide the
safety which a person is entitled to expect, taking all circumstances into account, including the use
of causation and leaves most issues to national laws.51 This may potentially lead
to different results as to the liability regulations concerning AI technologies.52
The Directive does include some defences, which can be found in art. 7. For
instance, the producer shall not be liable if it can be proved that “having regard
to the circumstances, it is probable that the defect which caused the damage did
not exist at the time when the product was put into circulation by him or that
this defect came into being afterwards”. Moreover, the producer will not be
liable if “the state of scientific and technical knowledge at the time when he put
the product into circulation was not such as to enable the existence of the defect
to be discovered”.53 These provisions are meaningful, but they do not entirely
reflect the technological problems mentioned in this chapter. In particular, the
current product liability regime operates on the assumption that the product
does not continue to change in an unpredictable, unforeseeable manner once it
has left the production line.54 As Infantino and Zervogianni suggest, one of the
most important factors for establishing liability in many European legal systems
is foreseeability, which is based on the idea that “the defendant should be held
liable only for the damage that a reasonable person of ordinary prudence put in
his or her position would have foreseen as a likely result of his conduct”.55 This
is not necessarily the case with AI technologies, since AI may generate solu-
tions that humans might not have considered.56 According to Martin-Casals,
“foreseeability presents a vexing challenge to any legal system wishing to solve
the problem of affording redress to victims of AI caused harm”.57 He then sug-
gests that perhaps a better approach would be to rely upon theories based on
the “scope of the risk created by the defendant’s activity”.58 This “harm within
the risk” theory is actually not alien to European practice, since it can be found

to which it could reasonably be expected that the product would be put;Article 6 Council Directive
85/374/EEC.
51 They vary significantly from country to country; Martin-Casals (n 42) 227.
52 “This disparity may be increased by the acceptance of differing procedural devices alleviating the
burden of proof causation;” Ibid. See also Herbert Zech, “Liability for Autonomous Systems:Tack-
ling Specific Risks of Modern IT”, in Lohsee, Schulze, Staudenmayer (n 42), 187–200.
53 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and
administrative provisions of the Member States concerning liability for defective products.
54 Martin-Casals (n 42), 221.
55 M. Infantino, E. Zervogianni, “The European Ways to Causation” in Marta Infantino and Eleni
Zervogianni (eds.), Causation in European Tort Law (CUP 2017), 84–128.
56 M.U Schere, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and
Strategies” [2016] 29 Harvard Journal of Law & Technology, 353, 363.
57 Martin-Casals (n 42), 223.
58 For example, the American Restatement (third) builds the scope of liability factors around a risk test
and considers that an actor’s liability is limited to those harms that result from the risks that made the
actor’s conduct tortious; ibid, pp. 223.Yet, one has to note that also the Restatement (third) provides
that “pre-existing physical or mental condition or other characteristics of the person shall be taken into
account when a harm is of greater magnitude or different type than might be reasonably expected;
Article 31 Restatement (third) of law of torts: preexisting conditions and unforeseeable harm.
as the basis of some of the tests already applied in Europe.59 Moreover, Kötz
and Wagner suggest that the protective purpose of such a rule (which refers to
those damages that are the continuing effect of the risk that made the tortfeasor
liable), together with the general risk of life (that is attributable to the victim),
enables a better solution without requiring the speculation involved in the fore-
seeability test.60 AI is a source of risk that exceeds other risks that have their
source in human behaviour.61 Thus, even in the case when harm is caused by
the unforeseeable behaviour of AI, its programmers should also be held liable.62
Law and economics literature offers some interesting responses to potential
regulatory intervention in the area of AI, which can also be applied to the mar-
itime industry. For example, Shavell suggests that if there is a party (a principal)
who has some control over the behaviour of another party (an agent/AI), then
the principal can be held vicariously liable for the losses caused by the agent.63
Hence, vicarious liability and a specific principal–agent relationship could be
introduced. The principal could be held vicariously liable for the damages
and/or losses caused by the agent (autonomous AI). This extension of liability
could lead indirectly to the reduction of risk and higher industry standards.
In particular, this could impose some of the following standards: a) compul-
sory insurance coverage (e.g. by the principal); b) regulations concerning safety
standards;64 c) subsidies for taking precautions and fines for causing harm; and
d) establishment of a worldwide publicly-privately-financed insurance fund.65
Additionally, some suggest introducing a strict liability for AI manufacturers,
which could be supplemented with a requirement that an unexcused violation
of the statutory safety standards constitutes negligence per se. Simultaneously,
compliance with this standard should not preclude tort liability.66

59 Infantino and Zervogianni emphasize that the idea often substantiates some other tests, such as the
adequacy test in Austria, Bulgaria, Poland and Portugal as the foreseeability test in England. See
Infantino and Zervogianni (n 57), 106.
60 H. Kötz, G. Wagner, Deliktsrecht (Franz Vahlen, 2016), 84–92.
61 Martin-Casals (n 42), 224.
62 “In this case the harm occurs as a realization of a risk which is inherent to AI in accordance with
the dictate of the initial programming which enables the AI system to alter its behavior according to
subsequent experiences;” ibid at 224. See also Schere (n 58), 366–369.
63 S. Shavell,“The Judgement Proof Problem” [1986] 6 International Review of Law and Economics, 45–58.
64 ibid. Shavell points out that such direct regulation of safety standards will help to form principal’s
and manufacturer’s incentives to ex ante reduce risk as a precondition for engaging in an activity.
65 See for instance G. Calabresi, A.D. Malamed, “Property Rules, Liability Rules, and Inalienability:
One view of the Cathedral” [1972] Harvard Law Review, 85, 1089–1128; M.A Polinsky,“Strict Liabil-
ity vs. Negligence in a Market Setting” [1980] 70 American Economic Review, 363–367; M.L.Weitz-
man, “Prices vs. Quantities” [1974] Review of Economic Studies, 41, 477–491; D. Wittman, “Prior
Regulation versus Post Liability: The Choice between Input and Output Monitoring” [1977] 6
Journal of Legal Studies, 193–212.
66 P. Schmitz, “On the Joint Use of Liability and Safety Regulation” [2000] 20 International Review of
Law and Economics, 3, 371–382.
Conclusions
The task of handling an ever-increasing amount of cargo in a safe, efficient and
environmentally friendly way is one of the biggest challenges facing seaports
and the wider maritime industry. Smart AI-governed ports have the potential
to cut human error, enable active communication with the social environ-
ment, make operations faster, safer, and more efficient, and cut carbon emis-
sions for environmental and energy sustainability. Such smart seaports strive to
provide seamless supply chain management, integrating both the supply and
demand sides to optimize the allocation of relevant resources, services, and
supervision, as well as autonomous loading and unloading.
However, the widespread employment of AI may lead to a variety of
new potential risks concerning maritime and supply chain security. As ships
and seaports become ever “smarter” and need fewer and fewer
people to intervene in their activities, the question for lawmakers is how to
prevent, deter, and mitigate potential AI-generated public risks. For example,
one of the most pressing issues is how best to implement cyber security and
how to keep digital and automated ports safe. AI-operated smart
ports and ships are potential targets of attack that can result in safety and
security risks. Consequently, erroneous data might result in uncontemplated
losses, public safety hazards, and liability issues that call for substantive regula-
tory treatment.
Generally speaking, current legal mechanisms adequately address respon-
sibility for non-autonomous, human-supervised AI. However, if AI evolves
and becomes unsupervised, then regulatory intervention is needed. Both the
ISPS, as well as the International Maritime Organization’s Guide to Maritime
Security, should be amended to include AI-related specific risks. Similarly,
current Regulation (EC) No 725/2004 on enhancing ship and port facility
security could be amended to include AI-related risks. Among the proposed
amendments mentioned in this chapter are the inclusion of compulsory insurance
coverage, the adoption of safety standards, and the establishment
of a worldwide publicly-privately-financed insurance fund.
10 Artificial intelligence and climate-energy policies of the EU and Japan1
Maciej M. Sokołowski

Introduction: Past visions and current challenges


Scientific and technological programmes have increasingly long-term
effects. Revolutionary and apparently visionary technological processes
such as those involved in AI … are becoming realities. We must prepare
now for the realities of tomorrow. The increasing duration of individual
technological projects, particularly major ones such as research into new
forms of energy, make it more than ever necessary to consider the long
term perspectives of today’s research decisions and their impact on future
generations.2

Although originating in the 1970s, the above quotation has not lost its sig-
nificance in the 21st century. However, due to the tremendous progress that
has been made over the years towards the advancement of AI, the Internet of
Things (IoT), robotics, and deep learning, once ground-breaking inno-
vations have become everyday items. The AI revolution affects almost every
part of our lives, as AI finds its uses in various branches and industries.
This process has not bypassed the energy sector. On the contrary, it has had a
lasting effect on the energy market, gradually impacting the climate and energy
policy spheres. For instance, AI could support the modernisation of the elec-
tric grid, boost its stability, and prevent blackouts. This could be achieved by
controlling supply and demand at both local and national levels, helping energy
consumers to save energy and money by enabling the monitoring of their own
usage in real-time and adjusting the tariff accordingly, as well as facilitating the
interaction with the grid.3 Thus, AI may enhance energy efficiency and tackle

1 This chapter was prepared during a research stay in Japan under the Mobility Plus funding received
from the Ministry of Science and Higher Education (currently Ministry of Science and Education)
of Poland.
2 Commission, ‘The common policy in the field of science and technology’ (Communication) COM
(1977) 283 final.
3 STOA,‘The ethics of artificial intelligence: Issues and initiatives’ (Study) PE (2020) 634.452, 11.

DOI: 10.4324/9781003246503-12
energy poverty,4 strengthen the production of energy from renewable sources5
(also by individuals),6 and facilitate the reduction of emissions.7
In this light, this chapter looks at AI through the prism of energy policies
and climate actions of the EU and Japan. These two examples were chosen for
analysis because both the EU and Japan have extensive experience in research
and development, which, when combined with their energy and climate plans and
ambitions, including achieving net-zero greenhouse gas emissions by 2050,
could be used to highlight the needs of the energy transition of important world
economies. Against this background, the study covers the public approach to
these matters along with related regulatory aspects. It juxtaposes the primary
documents, both past and current, relevant to the analysis of AI in the climate
and energy fields of these leading economies.8 In this context, the EU plans
to open its new multiannual financial framework (2021–2027) for investments
into unsupervised machine learning, energy, and data efficiency, offering sup-
port to more energy-efficient technologies and infrastructure ‘making the AI
value chain greener’.9 In Japan, high-quality data has been used for many years
to increase productivity at monozukuri10 production sites.11 In the 21st century,
AI, IoT, big data, and robotics are called the ‘most important key[s] to lead-
ing future revolution in productivity’ in Japan, as well as the ‘fourth industrial

4 J. Sokołowski, P. Lewandowski, A. Kiełczewska, S. Bouzarovski, ‘A Multidimensional Index to Measure Energy Poverty: The Polish Case’ (2020) 15(2) Energy Source Part B, 92.
5 M. Hatti (ed.), Artificial Intelligence in Renewable Energetic Systems: Smart Sustainable Energy Systems
(Springer 2018); X. Xu, Z.Wei, Q. Ji, Ch.Wang, G. Gao, ‘Global Renewable Energy Development:
Influencing Factors, Trend Predictions and Countermeasures’ (2019) 63 Resource Policy, 101470.
This also applies to resolving conflicts in renewable energy by improving planning tools, see M.M.
Sokołowski, ‘Discovering the New Renewable Legal Order in Poland: With or Without Wind?’
(2017) 106 Energy Policy, 68.
6 M.M. Sokołowski, ‘Renewable and Citizen Energy Communities in the European Union: How
(Not) to Regulate Community Energy in National Laws and Policies’ (2020) 38(3) J. Energy Nat.
Resour. L., 289.
7 See STOA (n 2).
8 M.M. Sokołowski,‘China, Energy, Policy, Evolution, Revolution: Questions and Answers’ in Joanna
Marszałek-Kawa (ed) Economic and Energy Stability in Asia. Perspectives and Scenarios (2016), M.M.
Sokołowski, ‘When Black Meets Green: A Review of the Four Pillars of India’s Energy Policy’
(2019) 130 Energy Policy, 60.
9 Commission,‘Artificial Intelligence for Europe’ (Communication) COM (2018) 237 final, 9.
10 Monozukuri is a combination of the Japanese words mono (‘stuff ’) and tsukuri (‘make’), it could be
translated as ‘thing-making’ (with the inclusion of ‘artisanship’ and ‘manufacturing’), so its meaning
is wider as the word derives from the Japanese spirit and way of making things with dedication and
commitment, aiming for improvement, see M. Kovacic,‘The Making of National Robot History in
Japan: Monozukuri, Enculturation and Cultural Lineage of Robots’ (2018) 50(4) Crit. Asian Studies,
572, 573–574.
11 Strategic Council for AI Technology, ‘Artificial Intelligence Technology Strategy’ (31 March 2017)
<www.nedo.go.jp/content/100865202.pdf> accessed 27 May 2021, 1.
revolution’,12 which Japan13 puts at the heart of its revitalisation strategy.14 The
‘new industrial revolution’ of the data-driven economy (reshaping business
sectors with energy as its element) is also at the centre of European attention.15
In addition, the chapter introduces a deliberation on the application of AI in
the electrical energy sector. It concerns the possibility of strengthening energy
regulations under the day-watchman model with the help of AI (showing how
much AI can assist energy regulators in their day-to-day tasks). As a result, a
bigger picture is drawn, i.e. demonstrating how AI could be regulated in the
sphere of climate and energy policy.

European climate-energy action and AI: Own incentives and international deals
The issues of protecting the environment and fighting climate change are
extremely relevant to the EU. Due to the impact of the energy sector on
the environment and climate (energy as a ‘key factor in the achievement of
sustainable development’)16 one may find close links between these fields.17
Such an approach may be called pro-environmental, i.e. one guaranteeing that
the regulation exercised by the EU’s institutions in the energy sector aims
to ensure the preservation of the environment and its natural resources in a
sustainable way.18 This tendency has resulted in establishing a joint approach
under a framework of climate-energy policy of the EU.
First, the need for legislation on climate issues was officially recognised
by the European community in the late 1980s.19 In response to international

12 Government of Japan, ‘Japan Revitalization Strategy 2016’ (2 June 2016) <www.kantei.go.jp/jp/singi/keizaisaisei/pdf/hombun1_160602_en.pdf> accessed 27 May 2021, 2.
13 Gyupan Kim, Hyongkun Lee, Boram Lee, Jungeun Lee,Wonju Son, ‘Fourth Industrial Revolution
in Japan:Technology to Address Social Challenges’ (2021) 11(2) World Econ. Brief, 1.
14 R. Jolley,‘Artificial Intelligence – Can Japan Lead the Way?’ (2015) 52(11) ACCJ Journal, 24.
15 Commission,‘Free flow of data and emerging issues of the European data economy’ (Working docu-
ment) SWD (2017) 2 final, 5.
16 Commission, ‘An overall view of energy policy and actions’ (Communication) COM (1997) 167
final, 3.
17 This integration has been confirmed by numerous EU activities and addressed in numerous Euro-
pean policy documents, e.g. Commission, ‘Strengthening environmental integration within Com-
munity energy policy’ (Communication) COM (1998) 571 final; Commission,‘A sustainable Europe
for a better world: a European Union strategy for sustainable development’ (Communication) COM
(2001) 264 final; Commission,‘A policy framework for climate and energy in the period from 2020
to 2030’ (Communication) COM (2014) 15 final.
18 M.M. Sokołowski, Regulation in the European Electricity Sector (Routledge 2016) 203; J. van Zeben,
A. Rowell, A Guide to EU Environmental Law (University of California Press 2020); M.M. Kenig-
Witkowska, ‘The Concept of Sustainable Development in the European Union Policy and Law’
(2017) 1(1) JCULP, 64.
19 Earlier, starting from the 1970s, the Community tackled emissions from industrial installation, see
Maciej M. Sokołowski, ‘Burning out coal power plants with the Industrial Emissions Directive’
(2018) 11(3) JWELB, 260, 261.
activity and developments concerning climate change20 the Commission pro-
posed certain means to launch the European approach to tackle the green-
house effect.21 The proposed policies included: research, preventive action,
planned adaptation, and cooperation with developing countries.22 Further, var-
ied actions in this field were conducted in the 1990s, constituting the first steps
towards the European climate policy.23 In 1993, early European legislation on
climate change was passed, and the monitoring of CO2 and other greenhouse
gas emissions was initiated.24 This was further stimulated by the international
regime in which the EU has been actively participating – the United Nations
Framework Convention on Climate Change25 followed by the Kyoto Protocol
to this Convention.26
The latter, despite its rejection by the United States in 2001,27 motivated the
EU to take the leading role in the field of climate policy at the international
level.28 Moreover, the entry into force of the Kyoto Protocol in 2005 boosted
the European climate agenda, leading to the formation of the basic assumptions
of the EU climate action just two years later. This introduced three pillars:
reduction of emissions of greenhouse gases, promotion of renewable energy
sources, and enhancement of energy efficiency, all with pan-European percentage
goals: generally, ‘3 × 20%’ for the year 2020.29 Between 2008 and 2009, this
proposal went through the legislative process and was passed, becoming widely
known as the Climate and Energy Package. The goals set for 2020 were revised
in 2014 by increasing the reduction of emissions to at least 40% and the growth
in renewable energy and improvement of energy efficiency to at least 27%.
The last two goals were again increased in 2018. This was achieved by adopt-

20 K. Kulovesi, ‘Climate Change in EU External Relations: Please Follow My Example (or I Might
Force You To)’ in E. Morgera (ed.) The External Environmental Policy of the European Union: EU and
International Law Perspectives (CUP 2012), 118–119.
21 Commission, ‘The Greenhouse Effect and the Community’ (Communication) COM (1988) 656
final.
22 ibid., 42–50.
23 L. Massai, European Climate and Clean Energy law and Policy (Earthscan 2012) 50.
24 Council Decision (EEC) 93/389 of 24 June 1993 for a monitoring mechanism of Community CO2
and other greenhouse gas emissions [1993] OJ L167/31.
25 Adopted May 9, 1992, entered into force March 21, 1994, 31 ILM 849.
26 The Protocol to the United Nations Framework Convention on Climate Change adopted on 1
December 1997, entered into force on 16 February 2005, 1155 UNTS 331.
27 In July 1997, the US Senate passed the Byrd-Hagel Resolution opposing the Kyoto Protocol, and
in 2001 the Protocol was rejected by the George W. Bush administration, see J. Hovi, D.F. Sprinz, G.
Bang,‘Why the United States Did Not Become a Party to the Kyoto Protocol: German, Norwegian,
and US Perspectives’ (2012) 18(1) Eur. J. Int. Relat., 129.
28 Massai (n 22), 54.
29 In details: the reduction of emissions was set by at least 20% in comparison to 1990 (or even 30%,
if an international agreement on climate had been adopted), the increase in the share of renewable
capacity had to grow by 20%, and the improvement of energy efficiency was to improve by 20%;
an exception concerned the reduction of emissions which could be decreased by 30%, see M.M.
Sokołowski, European Law on Combined Heat and Power (Routledge 2020), 99.
ing new directives on renewable energy and energy efficiency, which set a target
of at least a 32% share of renewable energy sources in the EU’s gross
final consumption30 and a reduction of the EU’s energy consumption by at least 32.5%
through improvements in energy efficiency by 2030.31
Then, in December 2015, the first multilateral agreement on climate change
covering almost all global emissions – the Paris Agreement – was signed.32 Under
this international regime, the EU’s nationally determined contribution (NDC)
was established under the wider 2030 climate and energy framework (the pre-
viously mentioned 40%, 32%, 32.5%) for emissions, renewables, and energy
efficiency, respectively. At the end of 2019, driven by climate strikes and calls
for declaring a climate emergency, the European Commission announced an
initial roadmap of key policies and measures needed to achieve the European
Green Deal.33 The Deal, set as a growth strategy, brings ambitious EU goals of ‘a
fair and prosperous society, with a modern, resource-efficient and competitive
economy where there are no net emissions of greenhouse gases in 2050 and
where economic growth is decoupled from resource use’.34 Furthermore, in
March 2020, a proposal of a regulation on the European Climate Law, aimed
at creating a basis for achieving EU climate neutrality, was presented.35
Finally, the framework of the European Green Deal is a result of a multi-
layer approach focused on rethinking policies for clean energy supply in differ-
ent fields. The Deal covers economy, industry, production and consumption,
large-scale infrastructure, transport, food and agriculture, construction, taxa-
tion, and social benefits.36 In all these areas, digital technologies are ‘a crucial

30 According to Eurostat, the share of energy from renewable sources in gross final energy consump-
tion in the European Union (EU) reached 18.0% in 2018, up from 17.5% in 2017 and more than
doubling the share in 2004 (8.5%), the first year for which data are available, Eurostat, ‘Share of
Renewable Energy in the EU up to 18.0%’ (23 January 2020) <https://ec.europa.eu/info/news/
share-renewable-energy-eu-180-2020-jan-23_en> accessed 27 May 2021.
31 With the withdrawal of the United Kingdom, the EU-27 energy consumption for 2020 and 2030
after adjustment accounts for primary energy consumption of no more than 1 312 million tonnes
of oil equivalent (Mtoe) in 2020 and 1 128 Mtoe in 2030 and a final energy consumption of no
more than 959 Mtoe in 2020 and 846 Mtoe in 2030; in 2019, primary energy consumption in the
EU-27 reached 1 352 Mtoe (3.0% above the efficiency target for 2020 and 19.9% away from the
2030 target) while final energy consumption reached 984 Mtoe (2.6% above the efficiency target for
2020 and 16.3% away from the 2030 target) - when compared with 2018, primary energy consump-
tion decreased by 2% at EU level and final energy consumption by 1%, Eurostat,‘Primary and Final
Energy Consumption Slowly Decreasing’ (28 January 2021) <https://ec.europa.eu/eurostat/web/
products-eurostat-news/-/ddn-20210128-1?redirect=%2Feurostat%2Fweb%2Fenergy%2Fpublica-
tions> accessed 27 May 2021.
32 Paris Agreement Under the United Nations Framework Convention on Climate Change adopted
on 12 December 2015, entered into force 4 November 2016.
33 Commission,‘The European Green Deal’ (Communication) COM (2019) 640 final.
34 ibid., 2.
35 Commission,‘European Climate Law’ (Proposal) COM (2020) 80 final.
36 ibid., 4.
for attaining the sustainability goals of the Green Deal’.37 Here, AI is listed as
one of those digital technologies (along with 5G, cloud and edge computing,
the IoT, etc.), which will be explored due to their potential for accelerating
and maximising the effects of policies to combat climate change and protect
the environment.38 Apart from technologies, an emphasis is put on accessible
and interoperable data, which, combined with digital infrastructure based on
supercomputers, cloud computing, and networks as well as artificial intelli-
gence solutions, provide a framework for evidence-based decisions and expand
the opportunity to recognise, analyse, understand, and challenge environmen-
tal issues.39
Nevertheless, just as in the cases of emissions, renewables, and energy effi-
ciency, which were addressed by the EU well before the 21st century, the
issue of energy-related AI has been a topic of European policies for several dec-
ades. AI applications in energy already have a presence in Europe. At the turn
of the 1970s and 1980s, the Commission’s science and knowledge institution
– the Joint Research Centre (JRC) – initiated studies on expert systems and
man–machine communication, run as part of the nuclear safety research pro-
grammes and used for monitoring fissile materials management.40 As anticipated,
this could have led to further improvements in robotics, a field in which, in the early
1980s, Europe was ‘fairly well-placed as regards the micro-mechanics aspect,
but somewhat behind in terms of “intelligent” robots’.41 Studies have shown
that AI could be applied in the management of the nuclear industry, covering
decision making, diagnostic systems, and modelling of operator’s behaviour.42
The last two elements were further addressed in a light-water reactor (LWR)
safety programme providing research on human factors and man–machine
interaction.43
In the 1990s, the research priorities under the European framework followed
the possibilities of improving sustainability by applying innovative and clean
technologies to production systems.44 This concerned the ‘development of clean

37 ibid,. 9.
38 ibid.
39 ibid., 18.
40 Commission, ‘The future activities of the Joint Research Centre’ (Communication) COM (1983)
107 final, 22.
41 ‘This research work should make it possible to improve basic knowledge of operator behaviour from
the standpoint of mechanisms for information acquisition by taking into account factors, circum-
stances and the environment which influence them … Such theoretical knowledge will then be
applied to the definition of the models that can be used for probabilistic risk assessment and for the
improvement and rationalization of procedures; it will also make it possible to direct development
along the desired lines and to validate new sophisticated diagnostic aid systems’, ibid.
42 Proposal for a Council Decision adopting a research programme on reactor safety (1984–1987)
[1983] OJ C250/6.
43 ibid.
44 Commission, ‘The S and T content of the specific programmes implementing the 4th Framework
programme for community research and technological development (1994–1998) and the Frame-
production’ by, e.g. using AI to increase productivity, improve energy efficiency,
or reduce waste.45 Examples of the implemented actions show that these results
have been achieved.46 In the 2000s, in addition to research, various European
institutions gradually began to notice the benefits of implementing AI in energy-
related areas. This applies, inter alia, to situations where robots, advanced auto-
mation, and AI could improve workplace safety and cost-effectiveness.47
In the 2010s, this move accelerated, and different incentives appeared. They
included, e.g. EU plans to create digital industrial platforms, i.e. multi-sided
market gateways of interactions between several groups of economic actors.
They had several aims, including addressing the challenges of digital technolo-
gies, IoT, big data, and autonomous cloud systems, AI, 3D printing, as well
as investments in pilot and lighthouse initiatives, such as smart cities and smart
living environments.48 The adjective – ‘smart’ – has, over the years, become a
crucial element of modern energy systems. Apart from smart cities and smart
homes, one may list such notions as smart manufacturing, smart mobility, smart
farming, or smart energy, with smart metres and smart grids.49 The latter has
become the current paradigm of development of the energy sector, where
real-time smart grid data helps with the efficient operation of the system and
facilitates electricity services.50 Nevertheless, within all of these ‘smart fields’,
‘energy’ plays an important role. This refers to the optimisation of energy
usage using various methods and technologies to lower it, adapting energy
consumption based on dynamic tariffs, improving the management of self-
consumption, storage and feed-in to the grids, or changing behaviour and
consumption patterns.51
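To make the notion of adapting consumption to dynamic tariffs slightly more concrete, the following toy sketch shifts a flexible load (for instance, charging a battery or running an appliance) into the cheapest hours of a hypothetical day-ahead price curve. The prices, the size of the load, and the greedy strategy are assumptions for illustration only; real home, building, or village energy management systems are considerably more sophisticated.

```python
# Toy load-shifting under a dynamic tariff: run a flexible 4-hour load in the
# cheapest hours of a hypothetical day-ahead hourly price curve.
hourly_prices = [0.32, 0.30, 0.28, 0.27, 0.26, 0.25, 0.27, 0.31,  # EUR/kWh, invented
                 0.35, 0.33, 0.24, 0.18, 0.12, 0.11, 0.13, 0.20,
                 0.28, 0.34, 0.38, 0.40, 0.37, 0.33, 0.31, 0.30]
hours_needed = 4          # the flexible load must run for 4 hours in total
load_kw = 2.0             # constant draw while running

# Greedy choice: pick the cheapest hours of the day.
cheapest = sorted(range(24), key=lambda h: hourly_prices[h])[:hours_needed]
cost_shifted = sum(hourly_prices[h] * load_kw for h in cheapest)
cost_naive = sum(hourly_prices[h] * load_kw for h in range(18, 18 + hours_needed))

print(f"Run in hours {sorted(cheapest)}: EUR {cost_shifted:.2f} "
      f"instead of EUR {cost_naive:.2f} in the evening peak")
```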

work programme for community research and training for the European Atomic Energy Commu-
nity (1994–1998)’ (Working document) COM (1993) 459 final.
45 Council Decision 94/571/EC adopting a specific programme for research and technological devel-
opment, including demonstration, in the field of industrial and materials technologies (1994–1998)
[1994] L222/23.
46 For instance,AI used in the production of timber and pulp (to predict the quality of the end product
from raw material data and progress of the production process) in one project resulted in a drop
of the electricity, water, and starch consumption of industrial users as well as the amount of waste
produced. See Commission, ‘Research and technological development activities of the European
Union’ (Annual report) COM (1998) 439 final, 32.
47 Opinion of the European Economic and Social Committee on ‘The Perspectives of European Coal
and Steel Research’ [2005] OJ C294/7.
48 The Commission underlines the convergence of these technologies (IoT, big data and cloud, robotics
and AI) as the driver of the digital change which responds ‘to major aspirations of today’s customers’
which, apart from personalization or higher safety and comfort, also have energy dimension (like
advanced sensors and big data used in industrial processes which can improve energy and resource
efficiency), see Commission, ‘Digitising European Industry: Reaping the full benefits of a Digital
Single Market’ (Communication) COM (2016) 180 final, 4, 10.
49 see Commission, ‘Advancing the Internet of Things in Europe’ (Working document) SWD (2016)
110 final, 26.
50 Commission (n 14), 25.
51 see Commission (n 45), 31–38.
Although this technological shift, of which AI is a part, can help the energy
transition and take a step (or even a leap) towards a zero-emission economy,
there are some concerns about the ecological aspects of this move. The grow-
ing use of AI results in the increased use of natural resources, as well as elevated
demand for energy and problems with waste disposal.52 While AI, cloud com-
puting, or IoT can accelerate and optimise the effect of policies on climate
change and environmental protection, their development and operation must
be climate-neutral and environmentally friendly.53 Therefore, ‘[a]t the same
time, Europe needs a digital sector that puts sustainability at its heart’.54

AI and climate-energy policy of Japan: From fifth generation computer system to Society 5.0
Since 1974, several initiatives in the field of energy and the environment have been
established in Japan. In 1974, the ‘Sunshine Project’ was initiated, offering research and
development activities in five main areas: solar energy, geothermal energy,
coal energy, hydrogen energy, and comprehensive research.55 At the time of
the 1970s energy crisis, much attention was given to solar energy as a new
alternative to conventional fossil fuels.56 Apart from encouraging the growth in
solar cell production,57 the Sunshine Project formed the basis for further pub-
lic and private activities to develop new energy technologies.58 Another pro-
environmental policy initiative started in 1978. It was the ‘Moonlight Project’
established in Japan in order to steer the development of energy conservation
technologies.59 This programme undertook joint research and development
work by the government and the industry sector in the field of energy-saving
technologies, which were too expensive and risky for the private sector (e.g.
advanced gas turbines, waste heat utilization technology, and fuel cell power
generation).60
Furthermore, in response to the 1990s stagnation trends with respect to
energy research and development and to facilitate the potential of renewable
energy in Japan, the Sunshine Project evolved into the New Sunshine Program

52 STOA (n 2), 3.
53 Commission (n 25), 9.
54 ibid., 9.
55 K.Takahashi,‘Sunshine Project in Japan – Solar Photovoltaic Program’ (1989) 26(1–2) Sol. Cells, 87.
56 Y. Hamakawa,‘Present Status of Solar Photovoltaic R&D Projects in Japan’ (1979) 86 Surf. Sci., 444.
57 In 1983 this production accounted for around 5,000 kW, while in 1986 it reached 12,500 kW,
being used for the needs of consumer appliances including calculators, watches, radios, and toys, see
Takahashi (n 52), 96.
58 S. Chowdhury, U. Sumita, A. Islam, I. Bedja, ‘Importance of Policy for Energy System Transforma-
tion: Diffusion of PV Technology in Japan and Germany’ (2014) 68 Energy Policy, 285, 288.
59 M. Tatsuta, ‘New Sunshine Project and New Trend of PV R&D Program in Japan’ (1996) 8(1–4)
Renew. Energy, 40.
60 Y. Fukasaku,‘Energy and Environment Policy Integration:The Case of Energy Conservation Policies
and Technologies in Japan’ (1995) 23(12) Energy Policy, 1063, 1067.
of 1993.61 This was enabled by reorganisation, which included combining the
‘Sunshine Project’ with the ‘Moonlight Project’ and the ‘Global Environmental
Technology Program’ in 1993.62 With the objective of developing innovative
technology for the needs of sustainable growth, the new programme included
the development of low-cost photovoltaic (PV) technologies.63 Along with the
incentive programmes such as ‘Residential PV System Dissemination Program’,
as well as its predecessor ‘Residential PV System Monitoring Program’ Japan
was able to build up a self-supporting market for PV.64
As a result of these incentives, during the 1980s and 1990s, many demon-
stration projects and basic research and development work were supported in
Japan, creating the necessary demand for solar cells and continued improve-
ment of conversion efficiency and PV economics.65 Thanks to these projects,
Japan has been a long-time world leader in solar energy;66 however, with the
reduction and then discontinuation of solar subsidies in 2005, the PV market
in Japan has stagnated.67 To change this, in 2009, the national goal of increas-
ing PV capacity by a factor of 20 by the year 2020 (taking into account the
2005 levels), and by a factor of 40 by 2040 was announced.68 In the early 2010s
this was enhanced by a new feed-in tariff for electricity production in solar
installations.69 A further discussion on the energy policy in Japan was driven by
the 2011 Fukushima nuclear accident.70
In June 2019, Japan approved its ‘Long-term Strategy under the Paris
Agreement’ (Strategy 2019).71 It brought a vision of a ‘decarbonised society’
potentially to be achieved by Japan in the second half of the 21st century (or
earlier) by reducing greenhouse gas emissions by 80%.72 With seven key

61 Ch. Watanabe, ‘Identification of the Role of Renewable Energy: A View from Japan’s Challenge: The New Sunshine Program’ (1995) 6(3) Renew. Energy, 237, 238.
62 ibid., 237.
63 Tatsuta (n 56), 40.
64 Chowdhury (n 58), 289.
65 ibid., 286.
66 Mainly due to the success of ‘New Sunshine Program’ under which the policies to grow the solar
PV industry were established and growth in PV capacity was achieved – over 930 MW from 1992
to 2005, ibid 289.
67 ibid.
68 Accordingly, this would amount to a deployment of 28 GW in 2020, and 56 GW in 2040, ibid.
69 ibid.
70 W-M. Chen, H. Kim, H.Yamaguchi,‘Renewable Energy in Eastern ASIA: Renewable Energy Policy
Review and Comparative SWOT Analysis for Promoting Renewable Energy in Japan, South Korea,
and Taiwan’ (2014) 74 Energy Policy, 319, 324.
71 According to Article 4(19) of the Paris Agreement:‘[a]ll Parties should strive to formulate and com-
municate long term low greenhouse gas emission development strategies, mindful of Article 2 taking
into account their common but differentiated responsibilities and respective capabilities, in the light
of different national circumstances’.
72 Government of Japan, ‘The Long-Term Strategy Under the Paris Agreement’ (11 June 2019)
<https://unfccc.int/sites/default/files/resource/The%20Long-term%20Strategy%20under%20the
%20Paris%20Agreement.pdf> accessed 27 May 2021, 15.
areas,73 many climate-related operations will be carried out by the business sec-
tor as it has both the financial resources and technologies which Japan plans to
utilise in the country’s climate action.74 In this regard, the country will create
an innovative environment, promoting and utilising ‘innovation for decar-
bonization’ implemented at many different levels, with the participation of
various actors – companies, investors, financial institutions, consumers, and
local governments.75 Aside from its business element, the innovation for
decarbonisation is also meant to address the social issues connected with the
proposed transition. Apart from general actions such as utilising the Sustainable
Development Goals and existing agendas like 'Society 5.0' ('the super-smart
society'),76 Japan proposes the creation of a 'Circulating and Ecological
Economy'. This is based on the cooperation of regional communities utilising
their resources in a sustainable way to become self-reliant (as much as possible),
connected in a network of communities aimed at reaching decarbonisation
and sustainable development.77 Under this framework, ‘carbon-neutral com-
munities’ will utilise renewable generation in smart grids, and to encourage the
smooth installation of renewables, will apply, inter alia, blockchain technology.78
Moreover, the Japanese model of this type of energy community79 also con-
siders a disaster prevention element, i.e. the community’s self-sufficiency with
the help of smart grids, energy storage (batteries), fuel cells, as well as cogen-
eration.80 Other elements of these communities include demand response and
virtual power plants, which strengthen the position of energy prosumers, along
with the idea of a 'Village Energy Management System' (VEMS).
This last concept is aimed at optimising the use of local energy resources as well
as ‘farming-photovoltaics’, i.e. installing PV panels over arable fields, including
those abandoned,81 in a way that enables the cultivation of crops.82 One should
also note the idea of using a combination of drones, sensing technology, and AI to

73 The key areas for decarbonisation in Japan include: hydrogen; carbon dioxide capture and storage
(CCS); carbon dioxide capture and utilization (CCU); renewable energy; storage batteries; nuclear
energy; and, finally, other issues such as challenges and collaboration (both internal and external),
see ibid., 16.
74 ibid., 15–16.
75 ibid., 17.
76 F. Shimpo, ‘The Principal Japanese AI and Robot Law. Strategy and Research toward Establishing
Basic Principles’ (2018) 3 J. Law. Info. Syst., 44, 49.
77 ‘Long-Term Strategy’ (n 68), 18.
78 ibid., 54–56.
79 M.M. Sokołowski, ‘European Law on the Energy Communities: A Long Way to a Direct Legal
Framework’ (2018) 27(2) EurEnergyEnvironLawRev 60; M.M. Sokołowski, ‘Renewable Energy
Communities in the Law of the EU, Australia, and New Zealand’ (2019) 28(2) Eur. Energy. Environ.
Law Rev., 34.
80 ‘Long-Term Strategy’ (n 68) 54-56.
81 A.Visvizi, M.D. Lytras, G. Mudri,‘Smart Villages: Relevance,Approaches, Policymaking Implications’
in Anna Visvizi, Miltiadis D. Lytras, György Mudri (eds) Smart Villages in the EU and Beyond (Emerald
2019), 2.
82 ‘Long-Term Strategy’ (n 68), 51, 55, 58.
reduce nitrous oxide emissions by controlling the amount of fertilisers applied
as well as using AI to monitor greenhouse gas emissions.83 Moreover, Japan
will promote a broader usage of new energy-efficient products employing AI,
IoT, and big data, the market for which will be created before 2040.84
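To give a concrete (and purely illustrative) sense of what 'optimising the use of local energy resources' in such a community might involve, the following minimal sketch dispatches hourly demand against local PV generation and a shared battery before importing from the grid. It is not drawn from Strategy 2019 or any actual Japanese system; all names and figures are hypothetical.

# Purely illustrative dispatch routine for a village-scale energy community
# (hypothetical values): local PV is used first, then the shared battery,
# and only the remainder is imported from the grid.

def dispatch(demand_kwh, pv_kwh, battery_kwh, battery_capacity_kwh=500.0):
    """Return (grid_import_kwh, new_battery_level_kwh) for a single hour."""
    residual = demand_kwh - pv_kwh
    if residual < 0:
        # Surplus PV charges the shared battery, up to its capacity.
        battery_kwh = min(battery_capacity_kwh, battery_kwh - residual)
        return 0.0, battery_kwh
    # Cover the remaining demand from the battery first, then from the grid.
    from_battery = min(battery_kwh, residual)
    battery_kwh -= from_battery
    grid_import = residual - from_battery
    return grid_import, battery_kwh

# Hypothetical six-hour profile for a small community (kWh).
demand = [120, 110, 150, 170, 160, 140]
pv_output = [0, 40, 180, 200, 90, 10]

battery = 100.0
for hour, (d, pv) in enumerate(zip(demand, pv_output)):
    grid, battery = dispatch(d, pv, battery)
    print(f"hour {hour}: grid import {grid:.0f} kWh, battery level {battery:.0f} kWh")

A real VEMS would add forecasting, price signals, demand response, and the resilience constraints described above, but the basic balancing logic is of this kind.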
Nevertheless, Strategy 2019 is not the first reference to AI as a potential
solution for Japan and its energy sector. At the beginning of the 1980s, Japan
launched the Fifth Generation Computer System project, a bold plan to rev-
olutionise computing hardware and software. The project, besides its major
technical goals, introduced various social aims, including the improvement
of energy efficiency.85 Also, in the 1980s, applications of AI in the operation
and maintenance of nuclear power plants were promoted under the long-term
national programme for the development and utilisation of nuclear energy in
Japan.86 Moreover, in the early 1980s, the Japanese electricity sector embarked
on a large effort to employ AI tools for improved system operation, planning,
diagnosis,87 and control.88 The need for these solutions was driven by different
reasons, including: shortages of qualified staff due to the retirement of post-
World War II era workers, the desire to run the system with lower margins,
increased efficiency, to speed up fault clearing and service restoration, enhance
customer relations, and provide the utility engineer with more efficient design
and planning tools.89 Additionally, at that time, various research platforms were
used to promote AI applications in Japan. The Society of Electrical Cooperative

83 ibid., 58.
84 ibid., 52.
85 C. Shunryu Garvey, ‘“AI for Social Good” and the First AI Arms Race Lessons from Japan’s Fifth
Generation Computer Systems (FGCS) Project’ (Proceedings of the 34th Annual Conference of
the Japanese Society for Artificial Intelligence, Kumamoto, June 2020) 1–2; R.P. van de Riet, ‘An
Overview and Appraisal of the Fifth Generation Computer System Project’ (1993) 9(2) Future Gener.
Comput. Syst., 83.
86 This included the creation of an autonomous AI-supported system which was used, inter alia, for the
diagnosis and corrective action of plant anomalies as well as the development of an inspection and
maintenance robot capable of making sophisticated decisions and operating in a highly radioactive
environment. See: M. Itoh, I. Tai, K. Monta, K. Sekimizu, ‘Artificial Intelligence Applications for
Operation and Maintenance: Japanese Industry Cases’, in M.C. Majumdar, D. Majumdar, J. I. Sackett
(eds) Artificial Intelligence and Other Innovative Computer Applications in the Nuclear Industry (Springer
1988), 41.
87 S. Rahman, ‘Artificial Intelligence in Electric Power Systems: A Survey of the Japanese Industry’
(1993) 8(3) IEEE Trans. Power Syst., 1211, 1212.
88 For instance, the hybrid systems using fuzzy control techniques built in Fukuchiyama and Kongosan
tunnels in 1988 have proved to be successful in precisely regulating ventilation and providing energy
savings of up to 30%. See S.J. Biondo, ‘CI Controls for Energy and Environment’ (Computational
Intelligence and Its Impact on Future High-Performance Engineering Systems: Proceedings of a
Workshop Sponsored by the National Aeronautics and Space Administration and the University of
Virginia, Hampton 27–28 June 1995), 46.
89 Rahman (n 87).
Research, the R&D Group for AI in Power Utilities, and the Institute of Electrical
Engineers of Japan (IEEJ).90
However, in the late 1980s, Japan started losing its advantages. Although
emerging technologies had become more dynamic and cross-disciplinary, the
Japanese administration had not been well suited to play a constructive role.91
The Fifth Generation Computer System project ended in 1992 without a clear
sign of success.92 Bureaucratic rivalry, poor coordination, and strong interven-
tionism undermined the advantageous position of Japanese companies in the
race for leadership in an information technology-driven business.93 Even so,
various types of research and work on AI in the energy sector were still carried
out. One example is the experimental knowledge-based system Japanese Energy
Supply Security Expert (JESSE),94 a model of decision-making in Japanese energy
policy.95 Beyond such systems, work continued on developing fuzzy logic, neural
networks, and machine reasoning algorithms for commercial applications, also as
part of the discussion on the prospects of Japan's economic performance in the
post-bubble era.96
Nevertheless, despite vast experience in technologies, electronics, and
robotics, some problems were encountered in the growth of the AI industry
in Japan.97 As highlighted in the ‘Artificial Intelligence Technology Strategy’
of 2017,

90 For example, in 1987, IEEJ launched a research committee on AI applications in the electricity sec-
tor with the intention of summarising the data on expert systems, see ibid.
91 According to Lehmann, the Japanese ‘ministries have “succeeded” in protecting domestic players
from foreign competition but have failed to promote global players’ by being wrongly painted ‘with
the same industrial-policy brush’. See: J-P. Lehmann,‘Japan and Pacific Asia: From Crisis to Drama’,
in Gerald Segal, David S. G. Goodman (eds) Towards Recovery in Pacific Asia (Routledge 2000) 98;
C.J. McMillan,‘Going Global: Japanese Science-Based Strategies in the 1990s’ (1991) 12(2) Manage
Decis. Econ., 171.
92 Y. Mikanagi, Japan’s Trade Policy:Action or Reaction? (Routledge 1996), 138n11.
93 Lehmann (n 91).
94 D.A. Sylvan, A. Goel, B. Chandrasekaran, ‘Analyzing Political Decision Making from an Informa-
tion-Processing Perspective: JESSE’ (1991) 34(1) Am. J. Pol. Sci., 74.
95 Aimed at proposing policies based on a collection of stored plans, JESSE, by asking users about the
current energy situation in Japan, integrated multiple decision-makers and the cognitive, interpre-
tive processes forming a part of those decisions to create (from the decision-makers’ responses) a
knowledge base describing Japanese energy security. However, one should notice the inflexibility of
its integrated classification schemes, since the categories reflected the worldview of the coder, which
may not align with the worldviews of actual decision-makers. See: G. Duffy, S.A. Tucker, ‘Political
Science:Artificial Intelligence Applications’ (1995) 13(1) Soc. Sci. Comput. Rev., 1, 6.
96 J. Lee,‘Overview and Perspectives on Japanese Manufacturing Strategies and Production Practices in
Machinery Industry’ (1997) 37(10) Int. J. Mach.Tools. Manufact., 1449, 1450.
97 This concerns, inter alia, deep learning. Japan is significantly behind the US in this regard, but trying
to catch up, see Jolley (n 13).
[w]hen looking at … papers related to AI … the number of Japanese
papers falls below the number of papers in the US and China, and it is clear
that there is insufficient investment in research and development by both
the public and private sectors.98

To provide the environment for this development, some improvements in
the structure of the Japanese administration were made. One of them was
the establishment of the Strategic Council for AI Technology. The Council
was created in 2016 with the primary responsibility for managing and coor-
dinating activities conducted by five National Research and Development
Agencies, including the New Energy and Industrial Technology Development
Organization (NEDO).99 NEDO, which focuses on energy systems, energy
conservation, and the environment, as well as industrial technology with
advancements in robotics, AI, and IoT, funds those projects on energy saving
and energy demand that apply or investigate various AI solutions.100
Finally, the issues of energy management, energy supply, energy consump-
tion, and energy saving have been addressed, among others, in the recent
Japanese ‘AI Strategy 2019’.101 The Strategy includes such incentives as the
development of an independent and distributed, disaster-resistant energy sys-
tem, collection and review of energy usage data, feeding customised messages
to each user, promotion of energy-saving actions through the combination of
behavioural insights like nudge and boost (with advanced technologies such
as AI and IoT), or the provision of infrastructure and platforms for energy big
data.102 Moreover, these types of AI solutions can help to achieve the goal of
overcoming environmental energy constraints by using Japan’s cutting-edge
technologies.103

Smart regulatory models


Apart from the policy implications of AI for climate and energy issues, AI has
also had an impact on legislation. In 2016, the European Parliament’s Science
and Technology Options Assessment (STOA) Panel presented a report on the EU
laws that would be affected by developments in the field of robotics and AI,
energy being one of the areas concerned.104
98 Strategic Council (n 10).


99 ibid., 2–3.
100 See NEDO, ‘Projects in the Robotics and Artificial Intelligence Fields’ (December 2019) <www
.nedo.go.jp/content/100885591.pdf> accessed 27 May 2021, 3, 12–13, 18–19.
101 Integrated Innovation Strategy Promotion Council, ‘AI Strategy 2019. AI for Everyone: People,
Industries, Regions and Governments’ (11 June 2019) <www.kantei.go.jp/jp/singi/ai_senryaku/
pdf/aistratagy2019en.pdf> accessed 27 May 2021.
102 see ibid., 43, 46, 51.
103 ‘Revitalization Strategy’ (n 11), 9.
energy being one of them.104 Areas of concern associated with energy include,
inter alia, a potential misuse or capture of the robotics infrastructure created for
electricity supplies, or the review of labelling (energy efficiency, eco-design,
and standard product information).105 In addition, some legal instruments
that might need to be reviewed or updated were also listed, e.g., the Energy
Efficiency Directive106 or Energy Labelling Directive.107
Another field that could benefit from AI applications is regulation in the
electricity sector. While the literature presents many different definitions and ways
of construing regulation,108 and the way in which regulation is interpreted var-
ies among languages109 and legal cultures,110 when discussing regulation one
may find it has both narrow and broad applications. A narrow application
refers to a set of rules adopted under binding legislation, while a broad appli-
cation could be defined by any mechanism of social control and influence.111
Moreover, regulation in its narrow application could also be defined by a ref-
erence to an activity of the regulatory authorities, where the regulation is the
regulator,112 appointed to address the market deficiencies in meeting the goals
of public (or collective) interest.113 Apart from the environment, climate, and
sustainability, presented in this chapter, these goals may be (or are traditionally)
oriented to competition and affordability (energy market and energy consum-
ers), as well as energy security.114 As discussed in this chapter, AI can help – and
already is helping – to reach these goals.

104 STOA, ‘Legal and ethical reflections concerning robotics’ (Policy Briefing) PE (2016)
563.501 <https://www.europarl.europa.eu/RegData/etudes/STUD/2016/563501/EPRS_
STU(2016)563501(ANN)_EN.pdf> accessed 27 May 2021.
105 ibid., 6–7.
106 Directive 2012/27/EU of the European Parliament and of the Council of 25 October 2012 on
energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and repealing Directives
2004/8/EC and 2006/32/EC [2012] OJ L315/1 (Energy Efficiency Directive).
107 Directive 2010/30/EU of the European Parliament and of the Council of 19 May 2010 on the
indication by labelling and standard product information of the consumption of energy and other
resources by energy-related products [2010] OJ L153/1 (Energy Labelling Directive).
108 B. Morgan, K. Yeung, An Introduction to Law and Regulation: Text and Materials (CUP 2007); Barry
Barton, ‘The Theoretical Context of Regulation’, in Barry Barton, Alastair Lucas, Lila Barrera-
Hernández,Anita Rønne (eds) Regulating Energy and Natural Resources (OUP 2006) 11; S.P. Croley,
Regulation and Public Interests:The Possibility of Good Regulatory Government (PUP 2008), 81–101.
109 For instance ‘regulation’ means both a concrete legal act – like an EU Regulation or a regulation
adopted by a relevant authority – and a wide action conducted by a government and/or its entities
to steer, adjust, or influence a process or an issue, etc., see M.M. Sokołowski,‘Regulatory Dilemma:
Between Deregulation and Overregulation’ in J. Jagielski, D. Kijowski, M. Grzywacz (eds) Prawo
administracyjne wobec współczesnych wyzwań. Księga jubileuszowa dedykowana profesorowi Markowi Wier-
zbowskiemu [Administrative Law Facing Contemporary Challenges: Jubilee Anniversary Publica-
tion Dedicated to Professor Marek Wierzbowski] (CH Beck 2018) 592.
110 For instance Anglo-American vs Continental European.
111 Barton (n 108), 12.
112 Sokołowski (n 17), 87.
113 A. I. Ogus, Regulation: Legal Form and Economic Theory (OUP 1994), 1–2.
114 Sokołowski (n 17), 222–223.
Nevertheless, what is left for further analysis is the activity of an energy
regulator using AI. In reality, despite many examples of technological advances
in AI, public administration (including regulatory authorities) still provides
services in an old-fashioned way, even though the deployment of new tech-
nologies and solutions in public administration could be very beneficial.115 It
could increase both government effectiveness and efficiency, improve citizen
satisfaction, facilitate connectivity and interactivity, and reduce administrative
burdens.116 In the electricity sector, such improvements could help to deliver
greater changes, and much faster, than those introduced by the administration
responsible for energy acting alone.
History shows that certain regulations (or a lack thereof) may cause problems.
A good example is the deregulation of the Californian electricity market in the
early 2000s, which affected both the quality of services and prices.117 This may
also occur in other parts of the world, including Japan, if the energy market
reform fails. In order to avoid this, a solution could be the so-called model of
the ‘day-watchman’ regulator.118 The day-watchman bridges the gap between
total subordination and complete release.119 Under this model, the state nei-
ther actively participates in the market, nor is it completely absent (balance
between the market and the state, and between public and private interests).120
To provide this balance, the regulator is a vigilant, curious, and active observer
(but not an active participant). The regulator is the day-watchman who, acting
under the state’s power, may clarify the rules of the market, provide informa-
tion to the market players, and enforce the rules through the use of sanctions.121
However, in reality this model can prove to be inadequate because the
regulator is just a human, and the wrong person in the wrong position may

115 M. Wierzbowski, R. Galán Vioque, E. Gamero Casado, M. Grzywacz, M.M. Sokołowski, 'Chal-
lenges and Prospects of E-Governance in Poland and Spain' (2021) 17(1) Electron. Gov. Intl. J., 1.
116 W. Gomes de Sousa, E. Regina Pereira de Melo, P.H. De Souza Bermejo, R. Araújo Sousa Farias, A.
Oliveira Gomes,‘How and Where is Artificial Intelligence in the Public Sector Going? A Literature
Review and Research Agenda’ (2019) 36(4) Gov. Inf. Q., 101392 <www.sciencedirect.com/science
/article/pii/S0740624X18303113> accessed 27 May 2021.
117 The liberalisation (deregulation) of the electricity sector in California caused a sharp increase in
final prices and led to a significant decline in the quality of services offered to energy users in the
early 2000s. However, this miscalculation turned out to be an important lesson for the EU and its
Member States, as well as for other countries, which led to the conclusion that the market challenge
in the electricity sector should not be left unattended, see Sokołowski (n 17), 113.
118 M.M. Sokołowski, ‘Rozważania o istocie współczesnej regulacji’ [Considerations on the Essence
of Modern Regulation] in A. Walaszek-Pyzioł (ed) Regulacja: innowacja w sektorze energetycznym
[Regulation: Innovation in the Electricity Sector] (CH Beck 2013), 318; M.M. Sokołowski, 'Aks-
jologia europejskiego prawa energetycznego’ [Axiology of European Energy Law], in J. Zimmer-
man (ed), Aksjologia prawa administracyjnego [Axiology of administrative law] vol. 2 (Wolters Kluwer
Polska 2017), 642.
119 M.M. Sokołowski,‘Balancing Energy Regulation: A Day-Watchman Approach’ in R.t Grzeszczak
(ed) Economic Freedom and Market Regulation: In Search of Proper Balance (Nomos 2020), 174.
120 ibid., 175.
121 Sokołowski (n 17), 89.
lead to inadequate and ineffective regulation.122 Every good solution needs
floodgates, e.g. collegiate bodies in regulatory systems (commissions, boards,
etc.), which can correct the mistakes of the regulator.123 Nevertheless, as past
experience shows, the appointment of three, five, seven, or any other number of
unsuitable candidates as commissioners of such a regulatory commission can never
be entirely ruled out.124
When it comes to alternatives, one solution could be to apply
AI to support the day-watchman model. New technologies and solutions can
make this model work by making it smart enough to regulate a smart electricity
sector. This, however, introduces other issues. If this alternative is feasible, how
can this model be developed within the legal system? How can AI enhance tra-
ditional regulatory tools in the electricity sector? Could AI be treated as a sepa-
rate tool or a point of regulatory validation? What kinds of risks could emerge?
Can these risks be mitigated and, if so, what is required to mitigate them? These
questions need further research125 on energy regulators and the application of
AI, both in the electricity sector and in their regulatory duties.

Conclusions
In October 2020, the European Council decided to devote at least 20% of the funds
under the Recovery and Resilience Facility to the digital transition. This, along
with the amounts under the MFF, should help to advance
EU environmental and climate objectives by ‘unleashing the full potential of
digital technologies’.126 Moreover, the new European Research Area (ERA)
will improve Europe’s recovery and help its green and digital transformations
by fostering innovation-based competitiveness and promoting technological
sovereignty in key strategic areas like AI, data, microelectronics, quantum
computing, 5G, energy storage, renewable energy, hydrogen, and zero-emission
and smart mobility.127

122 Sokołowski (n 118), 182.


123 M.M. Sokołowski, ‘Kolegialny model ustrojowy organu jako droga do pełnej niezależności pol-
skiego regulatora sektora energetycznego’ [Collective System Model of the Administrative Body
as a Way to Full Independence of the Polish Electricity Sector’s Regulator] (2010) 13(1) Polityka
Energetyczna [Energy Policy], 99.
124 Sokołowski (n 118), 182.
125 B.W.Wirtz, J.C.Weyerer, C. Geyer,‘Artificial Intelligence and the Public Sector – Applications and
Challenges’ (2019) 42(7) IntJPublAdmin 596;T. Qian Sun, R. Medaglia,‘Mapping the Challenges
of Artificial Intelligence in the Public Sector: Evidence from Public Healthcare’ (2019) 36(2) Gov.
Inf. Q., 368; G.D. Sharma, A.Yadav, R. Chopra,‘Artificial Intelligence and Effective Governance: A
Review, Critique and Research Agenda’ (2020) 2 Sustainable Futures, 100004 <www.sciencedirect
.com/science/article/pii/S2666188819300048> accessed 27 May 2021.
126 European Council, ‘Special meeting of the European Council (1 and 2 October 2020)’ (Conclu-
sions) EUCO 13/20.
127 Commission, ‘A new ERA for Research and Innovation’ (Communication) COM (2020) 628
final, 4.
Apart from being an element of the strategic approach in the field of energy
and the environment, which can be seen in the adopted documents, such
as strategies, policies, or programmes, AI is becoming the focus of legisla-
tive action. While the Energy Efficiency Directive and the Energy Labelling
Directive have already been amended128 or repealed,129 the adopted amend-
ments did not mention AI. It was, however, mentioned under the EU regime
on eco-design. Commission Regulation (EU) 2019/424,130 which provides
eco-design specifications for servers and data storage products, introduces the
definition of a ‘High Performance Computing (HPC) server’, which addresses
AI131 and establishes the rules on the efficiency of these types of servers. As
expressed in the preamble to this Regulation,

[e]codesign requirements should harmonise energy consumption and
resource efficiency requirements for servers and data storage products
throughout the Union, for the internal market to operate better and in
order to improve the environmental performance of those products.132

In this context, it must not be overlooked that issues related to energy con-
sumption are a permanent part of the discussion on the use of AI in the EU.133
Propelled by climate action, and steered by the growth of renewable energy
sources, the reduction of emissions, and the enhancement of energy efficiency,
European energy policy is moving towards climate neutrality, set to be reached
by 2050.134 In Japan, Prime Minister Suga's administration promised to cut the
country's greenhouse gas emissions to net zero by the same year.135
However, this transition will not happen by itself, and much effort is still
needed. For instance, by setting up a new agency in 2021, Japan plans to
enhance the digitalisation of government functions.136 AI, robotics, IoT, cloud

128 Changes were introduced, inter alia, by Directive (EU) 2018/2002 of the European Parliament
and of the Council of 11 December 2018 amending Directive 2012/27/EU on energy efficiency
[2018] OJ L328/210, see Sokołowski (n 28), 198–200.
129 Regulation (EU) 2017/1369 of the European Parliament and of the Council of 4 July 2017 setting
a framework for energy labelling and repealing Directive 2010/30/EU [2017] OJ L198/1.
130 Commission Regulation (EU) 2019/424 of 15 March 2019 laying down ecodesign requirements
for servers and data storage products pursuant to Directive 2009/125/EC of the European Parlia-
ment and of the Council and amending Commission Regulation (EU) No 617/2013 [2019] OJ
L74/46.
131 ‘[A] server which is designed and optimized to execute highly parallel applications, for higher
performance computing or deep learning artificial intelligence applications’, see ibid, annex I (11).
132 ibid., recital 4.
133 see STOA (n 2), 28–29.
134 European Council,‘European Council meeting (12 December 2019)’ (Conclusions) EUCO 29/19.
135 S. Sasaki,‘Japan PM Suga Vows Goal of Net Zero Emissions by 2050’ Kyodo News (Tokyo, 26 Octo-
ber 2020) <https://english.kyodonews.net/news/2020/10/7a5539cd0324-japan-pm-suga-vows
-goal-of-net-zero-emissions-by-2050.html> accessed 27 May 2021.
136 ibid.
computing, big data, and many other technological innovations have a great
potential for improving this process, making the transition smart, sustainable,
green, and efficient. Incentives similar to the Green Deal or post-COVID-19
pandemic actions are the right framework for not only unlocking the capac-
ity of AI for this transformation, but also for managing and regulating it in a
smart way.137 This also applies to the ongoing COVID-19 pandemic, where
AI is finding a variety of applications to manage changes in electricity demand
(increased residential – decreased industrial/commercial) due to shifts in job
patterns and lifestyles.138 These applications cover, inter alia, real-time monitoring of energy
production and consumption from various sources, including renewables, data
analytics, and the development of predictive methods for analysing volatile
energy use trends.139
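As a purely illustrative aside, not drawn from the studies cited above, the predictive methods referred to here can be as simple as fitting a trend to recent consumption data. The sketch below fits an ordinary least-squares line to a short, hypothetical series of daily residential demand readings and projects the next day's load; all figures are invented.

# Purely illustrative trend-based forecast of daily electricity demand
# (hypothetical data): fits y = a + b*t by ordinary least squares and
# projects the next day's load.

def fit_trend(values):
    """Return (intercept, slope) of a least-squares line through the series."""
    n = len(values)
    t_mean = (n - 1) / 2
    y_mean = sum(values) / n
    numerator = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(values))
    denominator = sum((t - t_mean) ** 2 for t in range(n))
    slope = numerator / denominator
    return y_mean - slope * t_mean, slope

# Hypothetical daily residential demand (MWh) during a lockdown period.
daily_demand = [310, 318, 325, 331, 336, 344, 351]

intercept, slope = fit_trend(daily_demand)
forecast = intercept + slope * len(daily_demand)
print(f"estimated trend: +{slope:.1f} MWh/day; next-day forecast: {forecast:.0f} MWh")

Operational forecasting would, of course, draw on much richer features (weather, calendar effects, sectoral breakdowns) and more robust models, but the underlying idea of learning a pattern from past consumption is the same.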
In this light, AI should also be clearly recognised as a tool for achieving
climate neutrality, so its development must be low-emission, renewable-
energy-driven, and energy-efficient. This also concerns the recent proposal for a
regulation laying down harmonised rules on AI (the Artificial Intelligence
Act) in the EU.140 Despite addressing AI as a key competitive advantage in
supporting socially and environmentally beneficial outcomes, in areas such as energy
efficiency and climate change mitigation and adaptation, and despite briefly covering
environmental sustainability, the proposed Artificial Intelligence Act does not
include specific provisions on enhancing climate and energy action through the
use of AI.141 If we really want to prepare for the realities of tomorrow, we must
act now and fast, since yesterday has become today in many areas related to AI.

137 M.M. Sokołowski,‘Regulation in the Covid-19 Pandemic and Post-Pandemic Times: Day-Watch-
man Tackling the Novel Coronavirus’ (2020) 15(2) TGPPP, 206 <DOI: 10.1108/TG-07-2020-
0142>.
138 R. Madurai Elavarasan, G.M. Shafiullah, K. Raju,V. Mudgal, M.T.Arif,T. Jamal, S. Subramanian,V.S.
Sriraja Balaguru, K.S. Reddy, U. Subramaniam,‘Covid-19: Impact Analysis and Recommendations
for Power Sector Operation’ (2020) 279 Applied Energy, 115739.
139 Ch. Ghenai, M.r Bettayeb, ‘Data Analysis of the Electricity Generation Mix for Clean
Energy Transition During Covid-19 Lockdowns’ (2021) Energy Source Part A <DOI:
10.1080/15567036.2021.1884772>.
140 Commission ‘Regulation of the European Parliament and of the Council Laying Down Harmo-
nised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union
legislative acts’ (Proposal) COM (2021) 206 final.
141 See Article 63(3), ibid.
11 The regulation of militarised
artificial intelligence
Protecting civilians through legal reviews
of new weapons and precautions
Tsvetelina van Benthem

Introduction
According to the International Court of Justice, the legal principles of the law
of armed conflict (LOAC) apply ‘to all forms of warfare and to all kinds of
weapons, those of the past, those of the present and those of the future.’1 One
of the most difficult and complex discussions on the application of international
law to novel technologies is that on autonomous weapons systems
(AWS). For years, this discussion has been unfolding across the pages of academic
journals and in the halls of the Palais des Nations in Geneva, where the Group
of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on
LAWS) regularly convenes.2 This Group has not only greatly contributed to
the sophistication of the debate on these weapons, but it has also facilitated a
dynamic exchange of State positions on the scope of international legal rules
and their application to emergent military capabilities with varying degrees of
autonomy. The Group has adopted, by consensus, eleven Guiding Principles,
the first of which affirms that ‘[i]nternational humanitarian law continues to
apply fully to all weapons systems, including the potential development and use
of lethal AWS.’3 While there is agreement that international humanitarian law4

1 Legality of the Threat or Use of Nuclear Weapons (ICJ Advisory Opinion 1996) at 86.
2 This Group was established within the framework of the Convention on Certain Conventional Weap-
ons and has been convening since 2017. Before the establishment of the Group, the Meeting of High
Contracting Parties, in 2013, agreed on a mandate on lethal AWS to be carried out by the Meet-
ing of Experts on Lethal Autonomous Weapons Systems. The Meeting of Experts held meetings in
2014, 2015 and 2016. More information on the sessions of the Group can be found on this website:
<https://unog.ch/80256EE600585943/(httpPages)/5535B644C2AE8F28C1258433002BBF14?Op
enDocument>.
3 Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous
Weapons Systems, Report of the 2019 session of the Group of Governmental Experts on Emerg-
ing Technologies in the Area of Lethal Autonomous Weapons Systems (25 September 2019) CCW/
GGE.1/2019/3, Annex IV, available at: <https://documents.unoda.org/wp-content/uploads/2020
/09/CCW_GGE.1_2019_3_E.pdf> accessed 5 May 2021 (2019 GGE on LAWS Report).
4 In this Chapter, ‘international humanitarian law’ and ‘the law of armed conflict’ will be used inter-
changeably.

DOI: 10.4324/9781003246503-13
applies to all weapons systems, the difficult question that remains is how exactly
it applies to those systems. This question of ‘how’ is related to the scope of the
rules of international humanitarian law, including the precise contours of the
duties imposed on parties to armed conflict.
The aim of this Chapter is to provide an overview of State positions on the
regulation of autonomous military capabilities relying on AI, and to outline
three obligations of particular importance in ensuring the safe deployment of
weapons with autonomy in decision-making: the obligation to conduct legal
reviews of new weapons, the obligation to take precautions in attack, and the
obligation to take precautions against the effects of attacks.

Autonomy and the means of warfare


To achieve the goal of weakening the military potential of the enemy, belliger-
ents employ means of warfare. This category encompasses ‘weapons, weapons
systems or platforms employed for the purposes of attack in armed conflict’.5
Under International Humanitarian Law, ‘the right of the Parties to the con-
flict to choose […] means of warfare is not unlimited’,6 and the limitations
on this right exist in the form of (1) general prohibitions related to classes of
harm – means to cause superfluous injury or unnecessary suffering;7 means
which are intended, or may be expected, to cause widespread, long-term and
severe damage to the natural environment;8 means that cannot be directed at
a specific military objective;9 means the effects of which cannot be limited,10
and (2) specific prohibitions on certain types of weapons, such as the prohibi-
tion on anti-personnel mines11 and laser weapons specifically designed to cause
permanent blindness,12 among others.
Quite naturally, states have every interest in developing means of warfare
that are reliable and effective. Among the many considerations that drive tech-
nological developments in the sphere of weaponry is the desire to overcome

5 ICRC,‘How does Law Protect in War? – Means of Warfare’, available at: <https://casebook.icrc.org
/glossary/means-warfare> accessed 27 May 2021.
6 Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection
of Victims in International Armed Conflicts (adopted 8 June 1977, entered into force 7 December
1978) 1125 UNTS 3 (‘Additional Protocol I’),Art. 35(1).
7 ibid.,Art. 35(2).
8 ibid.,Art. 35(3).
9 ibid.,Art. 51(4)(b).
10 ibid.,Art. 51(4)(c).
11 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel
Mines and on their Destruction, done in Oslo on 18 September 1997, entered into force 1 March
1999, 2056 UNTS 211.
12 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW)
which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (as amended
on 21 December 2001) 1342 UNTS 137, Protocol IV.
human fallibility, to protect one’s own forces, and to minimise costs.13
Autonomy has, in recent years, come to be seen as an attractive feature of
weapons, not least because it promises high levels of precision achieved from
a safe distance. It is therefore unsurprising that many states have made signifi-
cant advances in the research and development of such weapons and weapon
systems, and that these advances only signal the beginning of battlefield auto-
mation. In a recently updated report prepared for Members and Committees
of the United States Congress entitled ‘Artificial Intelligence and National
Security’, the US Congress confirmed that the US is already integrating AI
into combat through Project Maven, and that it does not consider that LOAC
imposes a prohibition on the development of lethal AWS.14 Russia has also
demonstrated its commitment to the harnessing of AI across sectors through
its 2019 AI Strategy.15 On the military front, the Kalashnikov company, a sub-
sidiary of the Russian state-owned Rostec corporation, acknowledged its cur-
rent research and development of AI-based weapons,16 and President Vladimir
Putin, in his 2018 address to the Federal Assembly, announced that Russia had
developed unmanned submersible vehicles that could carry either conventional
or nuclear warheads.17 China is also seeking to modernise its military by opera-
tionalising AI.18
But what exactly is the autonomy that these and other states are seeking
to achieve? Autonomy in weapons can manifest itself in a multitude of ways:
autonomy in the gathering of information, autonomy in identifying a target,
autonomy in deploying force towards a person or object. Of these types of

13 For an overview of the arguments in favour of deploying autonomous weapons systems, see G.P. Noone and D.C.
Noone, ‘The Debate over Autonomous Weapons Systems’ [2015] 47 Case Western Reserve Journal of
International Law, 25.
14 Congressional Research Service,‘Artificial Intelligence and National Security’ (updated 10 Novem-
ber 2020), available at: <https://fas.org/sgp/crs/natsec/R45178.pdf> accessed 27 May 2021, 1, 15.
The importance the US places on the development of AI for military purposes was also outlined
in the Summary of the ‘2018 Department of Defence Artificial Intelligence Strategy: Harnessing
AI to Advance our Security and Prosperity’, available at: <https://media.defense.gov/2019/Feb/12
/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF> accessed 27 May 2021.
15 ‘Указ Президента РФ от 10 октября 2019 г. № 490 “О развитии искусственного интеллекта
в Российской Федерации”’ (translation by author: Proclamation by the President of the Russian
Federation from 10 October 2019, No 490 ‘On the development of artificial intelligence in the Rus-
sian Federation), available at: <http://www.kremlin.ru/acts/bank/44731>, accessed 27 May 2021.
16 ‘В России тестируют оружие с искусственным интеллектом’ (translation by author: ‘Weapons
equipped with AI being tested in Russia’), Tass, 26 February 2019, available at: <https://tass.ru/
armiya-i-opk/6157919> accessed 27 May 2021.
17 Presidential Address to the Federal Assembly, 1 March 2018, available at: <http://en.kremlin.ru/
events/president/news/56957> accessed 27 May 2021.
18 E.B. Kania,‘Chinese Military Innovation in Artificial Intelligence’,Testimony before the U.S.-China
Economic and Security Review Commission Hearing on Trade, Technology, and Military-Civil
Fusion (7 June 2019), available at: <https://s3.us-east-1.amazonaws.com/files.cnas.org/documents
/June-7-Hearing_Panel-1_Elsa-Kania_Chinese-Military-Innovation-in-Artificial-Intelligence.pdf
?mtime=20190617115242&focal=none> accessed 27 May 2021.
autonomy, the one that has been singled out as particularly problematic is the
autonomy that relies on sensor suites and computer algorithms to indepen-
dently identify a target and employ force towards that target without manual
human control and concrete human decision-making for the engagement of
the particular target.
AI is likely to play a key role in the development of such autonomy for the
direct engagement of persons and objects.19 This is because AI is seen as par-
ticularly well-suited to address the needs of modern battlefields: an algorithm
that can learn from its environment and adapt to changing circumstances even
in communications-degraded or -denied environments,20 and that can further
enable precision strikes in areas that may be inaccessible to ground troops. In
particular, advances in machine learning have made it possible to envisage an
entity that is capable of learning, of improving with experience by finding
statistical patterns in past data.21 Such learning-based military applications
would allow states to bypass detailed hand programming, which can be time-
consuming and research-heavy.22 Boothby gives the example of a machine
which ‘may observe the pattern of life in the area of interest and may then
adjust a target list fed into it in advance of the mission to take account of the
observations that it has made’.23
That the introduction of AI in decision-making on the direct engagement
of targets will necessitate changes in the procedures for training, legal reviews,
and deployment is uncontested.24 What is, however, contested is whether the
introduction of such AI capabilities will lead to a fundamental reappraisal of

19 V. Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk,Volume I: Euro-
Atlantic Perspectives (SIPRI 2019), 14.
20 Congressional Research Service,‘Artificial Intelligence and National Security’ (updated 10 Novem-
ber 2020), available at: <https://fas.org/sgp/crs/natsec/R45178.pdf> accessed 27 May 2021, 15.
21 SIPRI 2019 (n 19), 16.
22 ibid., 15–16.
23 W.H. Boothby,‘Highly Automated and Autonomous Technologies’ in W.H. Boothby (ed.), New Tech-
nologies and the Law in War and Peace (CUP 2018), 150.
24 According to the 2020 Statement of Sweden to the GGE on LAWS, ‘Weapon systems need to be
sufficiently predictable and reliable allowing for the operators to be certain that the systems will
function in accordance with the intention of the operator. It is essential that any complex system
has rigorous handling regulations, including manuals, procedures of use and methods for training.
These must all form part of the essential toolbox required for human-machine interaction and for
lethal AWS to be used in compliance International Law. Furthermore, legal advisors specialized
in international law can play a valuable advisory role in military decision-making related to the
interpretation and application of International Humanitarian Law’, Statement by Sweden at the
CCW GGE on Lethal Autonomous Weapons (LAWS), Geneva 21–25 September 2020, available at
<https://documents.unoda.org/wp-content/uploads/2020/09/200921-Anforande-LAWS-human
-control_.pdf> accessed 27 May 2021. France acknowledged the importance of technical certifica-
tions, training on the system and of the system itself, training on using AI-based command systems
(CCW GGE on LAWS, Operationalization of the 11 guiding principles at national level – Com-
ments by France (2020), available at: <https://documents.unoda.org/wp-content/uploads/2020/07
/20200610-France.pdf> accessed 27 May 2021.
the ways in which weapons are conceptualised and regulated.25 To consider
this question, it is worth comparing these new military applications to previous
weapons. This study focuses on landmines and guided missiles as examples.
Landmines produce a blast upon exertion of a certain kinetic force.26 After
their placement in a given location, there is no need to make decisions over
specific impacts. In this sense, landmines have a degree of autonomy vis-à-vis
those using them to kill, injure, damage, or destroy enemy forces or mate-
riel. Precisely for that reason, anti-personnel landmines have been banned by
the Convention on the Prohibition of the Use, Stockpiling, Production and
Transfer of Anti-Personnel Mines and on Their Destruction.27 However, the
lack of concrete decision-making on specific impacts by armed forces does
not mean that landmines make their own 'decisions'. Certainty as to the conditions
for engagement is the key characteristic of this weapon: the exact condition that
triggers the blast is known precisely in advance.
Guided missiles can be deployed across vast geographical distances, without
physical proximity to the target.28 What characterises these weapons is, first, the
fact that they are guided throughout their course, and second, the fact that there
is no independence in the choice of targets. That said, geographical distance
and temporal lapses between launch and impact can expose the weapon to
intervening factors, including sudden changes in weather conditions. As with
landmines, there is a certain understanding of the factors that may affect the
performance of these weapons, including their relation to the environment.
What is it, then, that makes us uneasy about the deployment of weapons
that can engage in the direct targeting of persons and objects without specific
targeting decisions made by humans? There are two factors that can explain
this unease: the first is the acceptance that these weapons can learn, thereby
acting in ways that are not specifically predicted prior to target engagement,
and the second is the potential for unexplainable AI decision-making. These
two factors are closely related.

25 H.-Y. Liu considers that ‘the prospect for weapons systems autonomy is clearly something genuinely
new, even if human soldiers can be considered as close analogues or historical precedents’ – see H.-Y.
Liu,‘From the Autonomy Framework towards Networks and Systems Approaches for ‘Autonomous’
Weapons Systems’ [2019] 10 Journal of International Humanitarian Legal Studies 89, 93.
26 See T. Krupiy,‘Of Souls, Spirits and Ghosts:Transposing the Application of the Rules of Targeting to
Lethal Autonomous Robots’ [2015] 2(16) Melbourne Journal of International Law, 1, 34–35.
27 The Anti-Personnel Mines Convention (n 11).The impact of these weapons on innocent civilians is
highlighted in the first preambular paragraph of the Convention:‘Determined to put an end to the
suffering and casualties caused by anti-personnel mines, that kill or maim hundreds of people every
week, mostly innocent and defenceless civilians and especially children, obstruct economic develop-
ment and reconstruction, inhibit the repatriation of refugees and internally displaced persons, and
have other severe consequences for years after emplacement’.
28 S.J. Freedberg, ‘Army Says Long Range Missiles Will Help Air Force, Not Compete’ (Breaking
Defense, 16 July 2020), available at: <https://breakingdefense.com/2020/07/army-says-long-range
-missiles-will-help-air-force-not-compete/> accessed 27 May 2021.
With regard to the first point, all weapons carry some degree of unpredict-
ability in concrete engagements. For instance, rifles can misfire, and laser beams
can get deflected due to weather conditions. It is nonetheless predictable that
inclement weather conditions can affect laser beams, even if we cannot fully
predict rapid changes in weather conditions for particular attacks. With AI
weapons, there may be a point where it will become accepted that the weap-
on’s method of target engagement need not be fully predictable to the armed
personnel deploying it if it can nevertheless be predicted that it will stay within
certain acceptable pre-programmed parameters. Spatial and temporal distance
are not, as such, determinative of increased risk of unpredictability. One could
deploy a long-range missile with a full prediction of the circumstances that
can affect its trajectory. However, as with traditional weapons, the spatial and
temporal distance could increase the risk of factors interfering with the opera-
tion of the weapon.
With regard to the second point, the possibility of accepting a degree of
unpredictability in the method of operation carries additional risks of ‘unex-
plainability’. A lack of full understanding of the ways in which an AI weapon
can interact with its environment may inhibit the detection of warning signs
by those fielding it, as well as subsequent efforts to understand the reason for
particular unintended target engagements. The need to ensure ‘full under-
standing of the weapons’ capabilities and limitations’ has been affirmed by state
delegations.29
With more tasks and decision-making transferred to weapons, the distinc-
tion between means of warfare and human agents becomes harder to sustain.
This is because a weapon is traditionally understood as an ‘extracorporeal
instrument’,30 ‘a device, system, munition, implement, substance, object, or
piece of equipment’,31 a ‘mechanism to achieve purpose’.32 In essence, this
seems to imply no more than an instrument to channel human will. Even
though AI is impacted by the will of teams of programmers, legal advisors, and
commanders who set the weapon’s purpose, tasks, and parameters, an emerg-
ing space of independent decision-making can be noticeable. In this emerging
space, there can be decisions on optimal flight patterns to a particular target,
decisions sent to human operators on target identification, or even independ-
ent decisions to directly engage a particular person or object. According to
Liu, ‘the autonomy framework foregrounds the liminal status of AWS between

29 CCW GGE on LAWS, Joint 'Commentary' on Guiding Principles A, B, C and D submitted by
Austria, Belgium, Brazil, Chile, Ireland, Germany, Luxembourg, Mexico, and New Zealand (2020),
available at: <https://documents.unoda.org/wp-content/uploads/2020/09/GGE20200901-Austria
-Belgium-Brazil-Chile-Ireland-Germany-Luxembourg-Mexico-and-New-Zealand.pdf> accessed
27 May 2021.
30 Hugo Grotius, On the Law of Prize and Booty (De Jure Praedae), quoted in M. Newton and L. May,
Proportionality in International Law (OUP 2014), 263.
31 W.H. Boothby, Weapons and the Law of Armed Conflict (2nd ed, OUP 2016), 4.
32 R.L. O’Connell, Of Arms and Men:A History of War,Weapons and Aggression (OUP 1990), Ch. 1.
agent and object’,33 and these autonomy-based similarities have even driven
some to speak of ‘robo-combatants’34 and ‘wartime software agents’.35
Whatever factual similarities there may be between the decision-making
power conferred on AI-based weapons and that typically possessed by human
agents, the normative line under LOAC remains unchanged. This is because
states consistently refer to autonomous military applications as weapons,36 and
affirm that LOAC ‘imposes obligations on states, parties to armed conflict and
individuals, not machines’.37 There are certainly very good reasons for main-
taining this line, perhaps most importantly those rooted in ensuring that respon-
sibility will not be deflected from states and human commanders through the
introduction of machine agency. It is, however, important to bear in mind that
the choice of legal paradigm between weapon and agent is not consequence-
free. When members of armed forces violate the rules of targeting in a way that
is, from the perspective of the official chain of command, unpredictable (for
instance, acts of targeting in contravention of instructions), their acts will still
give rise to state responsibility. According to Art. 91 of Additional Protocol
I, a party to the conflict ‘shall be responsible for all acts committed by persons
forming part of its armed forces’.38 However, the picture that emerges from a
weapon ‘acting’ unpredictably is significantly more complex. Unpredictability
could be an indicator that the weapon is inherently indiscriminate, and there-
fore prohibited, as a type of weapon, under LOAC. Moreover, a weapon could
sometimes ‘act’ in a manner unpredictable to its operators and without being
inherently indiscriminate. It seems well established that missiles can veer off

33 Liu (n 25) 97. For an early examination of this blurring of boundaries between weapons and com-
batants, see H.Y. Liu, ‘Categorization and legality of autonomous and remote weapons systems’
[2012] 94 IRRC 627 636.
34 T. Chengeta, ‘Are Autonomous Weapon Systems the Subject of Article 36 of Additional Protocol I
to the Geneva Conventions?’ [2016] 23(1) UC Davis Journal of International Law and Policy, 65, 77.
35 R. Michalczak,‘Animals’ Race against the Machines’ in V.A.J. Kurki and T. Pietrzykowski (eds), Legal
Personhood:Animals,Artificial Intelligence and the Unborn (Springer 2017), 96.
36 This is evidenced by the very name of the Group of Governmental Experts on Lethal Autonomous
Weapons Systems.
37 2019 GGE on LAWS Report (n 3), para. 17(b). In the 2020 Statement of the United States to the
GGE on LAWS, we read that ‘anthropomorphizing emerging technologies in the area of LAWS can
lead to legal and technical misunderstandings that could be detrimental to the efficacy of potential
policy measures. From a technical perspective, anthropomorphizing emerging technologies in the
area of LAWS can lead to mis-estimating machine capabilities. From a legal perspective, anthropo-
morphizing emerging technologies in the area of LAWS can obscure the important point that IHL
imposes obligations on States, parties to a conflict, and individuals, rather than machines. “Smart”
weapons cannot violate IHL any more than “dumb” weapons can’, Agenda item 5(b) Characteriza-
tion of the systems under consideration in order to promote a common understanding on concepts
and characteristics relevant to the objectives and purposes of the Convention, UNCLASSIFIED//
FINAL// 09 22 2020, available at: <https://documents.unoda.org/wp-content/uploads/2020/09
/LAWS-GGE-TPs-AGenda-item-5b-Characteristics-09-22-2020-FINAL-FOR-TRANSLA-
TORS.pdf> accessed 27 May 2021.
38 Additional Protocol I (n 6),Art. 91.
course39 and bullets can ricochet,40 and that such accidents will not be captured
by the prohibitive rules of the LOAC. AWS, like all weapons, will be sus-
ceptible to malfunctions.41 To take the prohibition of directing attacks against
civilians,42 it is conceivable that a commander could aim to direct an attack
against a lawful military objective and yet, through malfunction, the act of
violence mediated through the weapon will hit a target different from the one
sought by the commander. The consequences of the attack will not, as such,
requalify the initial act of direction. In such cases, to determine whether there
had been a breach of International Humanitarian Law, it would be necessary to
examine the reasonableness of fielding this particular capability, all risks consid-
ered, in the circumstances of the attack.
Drawing from the above, certain degrees of unpredictability may indicate
that a weapon is prohibited per se, as it is incapable of being directed at specific
military objectives. At the same time, all weapons carry a certain degree of
unpredictability of malfunction, and what is unpredictable in particular deploy-
ments (for instance, the ricochet of a bullet) is, in fact, a predictable feature of
the weapon. Whether a weapon is inherently prohibited under the LOAC falls
within the questions asked by ‘weapons law’.43 However, the real difficulty in
determining whether a particular employment of AI-based weapons has been
conducted in breach of humanitarian law will lie in the area of uncertainty
between weapons that are indiscriminate and weapons that operate within the
regular and tolerated boundaries of expected malfunctions.
Unless it can be shown that there is something about weapons relying on
AI for targeting that places them, as a category, in a certain regulatory box
under International Humanitarian Law,44 the analysis would have to be more
fine-grained and based on the characteristics of specific models of AI weapons.
Just as within the broader category of aircraft bombs, which are not per se indis-
criminate, we have modified air bombs that are indiscriminate. It is therefore
necessary to assess each weapon system on its own merit. And even if a given

39 J. Hruska, ‘US Patriot Missile Defense System Malfunctions, Crashes in Saudi Arabia’s Capital’
(ExtremeTech, 28 March 2018) <https://www.extremetech.com/extreme/266523-us-patriot-mis-
sile-defense- system-malfunctions-crashes-saudi-arabias-capital> accessed 27 May 2021.
40 ICTY, Prosecutor v Stanislav Galić, IT-98-29,Trial Chamber Judgement, 5 December 2003, para. 251.
41 M.N. Schmitt,‘Autonomous Weapon Systems and International Humanitarian Law:A Reply to the
Critics’ [2013] Harvard National Security Journal Features, 7.
42 According to Art. 51(2) of Additional Protocol I, ‘The civilian population as such, as well as indi-
vidual civilians, shall not be the object of attack’ and, according to the ICRC Customary International
Humanitarian Law Study, Rule 1, ‘The parties to the conflict must at all times distinguish between
civilians and combatants. Attacks may only be directed against combatants. Attacks must not be
directed against civilians’.
43 B.T.Thomas,‘Autonomous Weapon Systems:The Anatomy of Autonomy and the Legality of Lethal-
ity’ [2015] 37(1) Houston Journal of International Law, 235, 247.The author provides a good overview
of the differences between ‘weapons law’ and ‘targeting law’.
44 This would be the case if they are found to be, as a class of weapons, inherently indiscriminate or
otherwise failing the test for means of warfare permitted under international humanitarian law.
AWS is not inherently indiscriminate, it may well be that such a weapon can
only be fielded in non-urban areas. AI-based weapons will not emerge as a
monolithic group. In the future, the careful consideration of different types of
autonomous capabilities and their intended use will become a crucial factor for
ensuring meaningful constraints in deployment, including in the further clari-
fication of procedural guarantees for the mitigation of risk.

Regulating autonomous weapons: The current
state of the debate at the GGE on LAWS
In recent years, the debate on the regulation of AWS has acquired a greater
degree of sophistication. This is largely due to the meetings of the Group of
Governmental Experts (GGE) on LAWS. This group created a platform for the
dynamic exchange of views of states and non-state actors, including via work-
ing papers submitted to the group. The result of this is 11 Guiding Principles
adopted by consensus and outlining a common denominator of agreement.45
At the same time, delegations continue to diverge on certain matters, and these
divergences are highlighted in the reports adopted by the GGE.46 Despite the
importance of the Guiding Principles as an affirmation of common ground,
they do not settle many of the sharp-end controversies and do not preclude
distinct understandings of the content of the LOAC in relation to AWS. For
instance, while delegations agree on the key role of the human element in the
use of such weapons, there is no consensus on the contours of the element,
its concrete application, and the structure of ‘human–machine interaction’.47
A particularly difficult topic is that of the degree of human involvement nec-
essary, its place in the operational chain,48 and whether human input at the
development (programming) stage would be sufficient to allow compliance
with International Humanitarian Law.49 Moreover, there is significant disa-
greement on the possibility of ensuring International Humanitarian Law (IHL)

45 Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, Final Report (13 December 2019) CCW/MSP/2019/9, Annex III.
46 See, for example, para. 24(a) of the 2019 GGE on LAWS Report.
47 2019 GGE on LAWS Report (n 3), para. 22(a), (b).
48 For instance, Merel Ekelhof has argued that a thicker understanding of distributed control should find its way in the discussions on AWS. According to Ekelhof, 'Whereas meaningful human control theoretically assumes the existence of an ultimate decision (e.g., the trigger pull moment), military practice shows that discretion is typically distributed' – see M. Ekelhof, 'Autonomous weapons: Operationalizing meaningful human control' (Humanitarian Law and Policy, 15 August 2018), available at: <https://blogs.icrc.org/law-and-policy/2018/08/15/autonomous-weapons-operationalizing-meaningful-human-control/> accessed 27 May 2021.
49 2019 GGE on LAWS Report (n 3), para. 22(c). For a detailed overview of the positions of States on the human element expressed at the GGE on LAWS, see E.T. Jensen, 'The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict' [2020] 96 International Law Studies, 26.
compliant targeting without direct human input on specific targets. Some delega-
tions have considered precautionary measures – such as testing, training, and
establishing procedures – to be an adequate way of ensuring an operation of
the weapon consistent with the rules of IHL; others have raised serious doubts
about the possibility of operating in complex operational environments with-
out ‘human judgment and context-based assessments’.50
At a very high level of generality, it can be said that all parties at the GGE
on LAWS agree that AWS stand in a special relation to battlefield risks. But
while some delegations see these weapons as a way to avoid traditional risks,
others consider the weapons themselves a significant and unprecedented risk
to civilians. For instance, in the 2020 report ‘Commonalities in National
Commentaries on Guiding Principles’, the then Chair of the Group of
Governmental Experts, His Excellency Mr Jānis Kārkliņš, noted that ‘several
commentaries argued that emerging technologies in the area of lethal AWS
could support the implementation of International Humanitarian Law due to,
inter alia, the reduction of human-related errors and risks, improved preci-
sion and accuracy, and the ability to incorporate self-destruct, self-deactivation,
or self-neutralisation mechanisms. Others argued that this outcome was not
assured and should not be assumed’.51
Moreover, the International Committee of the Red Cross (ICRC) and the Stockholm International Peace Research Institute (SIPRI) published an important report on the concept of meaningful human control,52 which was subsequently cited by state delegations.53 It explored the degree of human control that needs to be ensured in order to mitigate the risks posed by AWS. As means of mitigating their inherent unpredictability, the report considered the impor-
tance of controls on the weapon system’s parameters of use (for example, the
existence of fail-safe mechanisms), controls on the environment (for example,
temporal and spatial constraints in deployment), and controls through human–
machine interaction (for example, human supervision and capacity to inter-
vene in the operation of the weapon).
Concerns over unacceptable levels of risk lie at the core of these discus-
sions. This risk, manifested through the introduction of yet another element
of battlefield uncertainty and unpredictability, may generate new dangers for
protected persons and objects. Some delegations at the GGE consider that
international law, and in particular LOAC, provides significant constraints on
the development and deployment of AWS. These views have been fleshed
out in a range of statements and working papers submitted by participants in

50 2019 GGE on LAWS Report (n 3), para. 24(a).
51 GGE on LAWS, Commonalities in national commentaries on guiding principles (2020) para. 17.
52 V. Boulanin, M.P. Carlsson, N. Goussac, N. Davison, 'Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control' (SIPRI, June 2020), available at: <https://www.sipri.org/publications/2020/other-publications/limits-autonomy-weapon-systems-identifying-practical-elements-human-control-0> accessed 27 May 2021.
53 See, for instance, Intervention from Costa Rica to the CCW GGE LAWS, 21 September 2020.
the debates.54 The following section will focus on three protective obligations
arising under LOAC, and will examine the guarantees they provide in the
development and deployment of AWS.

Protective obligations
Under Art. 51(1) of Additional Protocol I, ‘[t]he civilian population and indi-
vidual civilians shall enjoy general protection against dangers arising from mili-
tary operations’55 and under Art. 57(1), ‘[i]n the conduct of military operations,
constant care shall be taken to spare the civilian population, civilians and civil-
ian objects’.56 Civilians are exposed to a range of threats in times of conflict:
being intentionally terrorised57 or used as human shields,58 or becoming collat-
eral damage in the context of attacks against lawful military objectives,59 among

54 Recent examples are the Working paper by the Bolivarian Republic of Venezuela on behalf of the Non-Aligned Movement (NAM) and Other States Parties to the Convention on Certain Conventional Weapons (CCW), submitted in 2020, available at: <https://documents.unoda.org/wp-content/uploads/2020/09/G2022909.DOC4.pdf>; Statement by Costa Rica submitted on 22 September 2020, available at: <https://documents.unoda.org/wp-content/uploads/2020/09/Intervencion-22-seti.-2020-ESP-PDF.pdf> (in Spanish) accessed 27 May 2021.
55 Additional Protocol I, Art. 51(1). According to the 1987 Commentary to Additional Protocol I, this first paragraph of Art. 51 acknowledges 'that armed conflicts entail dangers for the civilian population, but these should be reduced to a minimum'. This reduction of dangers is considered to be the aim of the remaining paragraphs of that article.
56 Additional Protocol I, Art. 57(1). This rule is also reflected in the ICRC Study on Customary International Humanitarian Law – see Rule 15. Principle of Precautions in Attack, available at: <https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule15> accessed 27 May 2021.
57 ICTY, Prosecutor v. Stanislav Galić, IT-98-29-T, Judgement and Opinion, 5 December 2003, para. 769. At para. 597, the Majority affirmed that 'a campaign of sniping and shelling was conducted against the civilian population of ABiH-held areas of Sarajevo with the primary purpose of spreading terror'.
58 According to Rule 97 of the ICRC Study on Customary International Humanitarian Law, 'the use of human shields is prohibited', available at: <https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule97>. A particular instance of condemnation of the practice of using civilians as human shields can be found in the Fifth periodic report on the situation of human rights in the territory of the former Yugoslavia submitted by Mr. Tadeusz Mazowiecki, Special Rapporteur of the Commission on Human Rights, pursuant to paragraph 32 of Commission resolution 1993/7 of 23 February 1993, para. 84: 'Other Muslims, Croats and Roma (gypsies), have been arrested to provide a labour force in conflict zones, or to act as "human shields"'.
59 The protections of international humanitarian law do not guarantee that civilians will not be
harmed. According to Rule 14 of the ICRC Study on Customary International Humanitarian
Law, ‘Launching an attack which may be expected to cause incidental loss of civilian life, injury to
civilians, damage to civilian objects, or a combination thereof, which would be excessive in rela-
tion to the concrete and direct military advantage anticipated, is prohibited’. A contrario, when the
expected loss, injury and damage of an attack launched against a lawful military objective is not
excessive, the attack will not be deemed a violation of the law under the proportionality principle.
The difficulties of assessing the rule on proportionality were highlighted in ICTY, Final Report to
the Prosecutor by the Committee Established to Review the NATO Bombing Campaign Against
others. Not only are the dangers faced by civilians wide-ranging, but they are
also continuously evolving. The urbanisation of conflict is an obvious example
of a changing landscape of dangers, and so are technological advancements in
the area of weaponry. In this context, certain obligations of a procedural char-
acter may gain particular significance, as they seek to guarantee the elimination
or at least the minimisation of battlefield risk. Recognising the importance of
such measures, states participating in the GGE on LAWS agreed by consensus
to Guiding Principle (g), which provides that ‘[r]isk assessments and mitigation
measures should be part of the design, development, testing and deployment
cycle of emerging technologies in any weapons systems’.60 In the Commonalities
Report mentioned above, His Excellency Mr Jānis Kārkliņš noted that:

It was suggested that the GGE LAWS should catalogue potential risks and
mitigation measures that should be considered in the design, development,
testing and deployment of weapon systems based on emerging technolo-
gies in the area of lethal AWS.61

Since Chapter 12 of this book explores the element of meaningful human control, the focus of this part is on three protective obligations of states under
LOAC, namely the obligation to conduct legal reviews of new weapons, the
obligation to take precautions in attack and the obligation to take precautions
against the effects of attacks. These are of particular importance in the deploy-
ment of weapons that display a risk of unpredictability, as they are designed to
ensure procedural guarantees for the avoidance or minimisation of risk.

Legal reviews of new weapons


States parties to Additional Protocol I are, according to Art. 36, under an obli-
gation to conduct legal reviews of new weapons:

In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obliga-
tion to determine whether its employment would, in some or all circum-
stances, be prohibited by this Protocol or by any other rule of international
law applicable to the High Contracting Party.

the Federal Republic of Yugoslavia: ‘The main problem with the principle of proportionality is not
whether or not it exists but what it means and how it is to be applied. It is relatively simple to state
that there must be an acceptable relation between the legitimate destructive effect and undesirable
collateral effects. For example, bombing a refugee camp is obviously prohibited if its only military
significance is that people in the camp are knitting socks for soldiers. Conversely, an air strike on
an ammunition dump should not be prohibited merely because a farmer is ploughing a field in the
area. Unfortunately, most applications of the principle of proportionality are not quite so clear cut’.
60 2019 Report GGE on LAWS, Guiding Principle (g).
61 GGE on LAWS, Commonalities in national commentaries on guiding principles (2020), para. 16.
The ICRC, in its 2006 ‘Guide to the Legal Review of New Weapons, Means
and Methods of Warfare’, highlighted that this obligation flows ‘logically from
the truism that states are prohibited from using illegal weapons, means and
methods of warfare or from using weapons, means and methods of warfare in
an illegal manner’, and thus it applies irrespective of whether a state is a party
to Additional Protocol I.62 The GGE on LAWS provides a reference to the
wording of Art. 36 by stating that

[i]n accordance with states’ obligations under international law, in the study, development, acquisition, or adoption of a new weapon, means or
method of warfare, a determination must be made whether its employment
would, in some or all circumstances, be prohibited by international law.63

Additionally, the same report acknowledges that states are free to ‘indepen-
dently determine the means to conduct legal reviews’.64 The benefit of shar-
ing best practices was also acknowledged, not just in the report,65 but also
in the submissions of individual delegations.66 As a step in the direction of
standardising state practices in the area of legal reviews, Argentina submitted
a ‘Questionnaire on the Legal Review Mechanisms of New Weapons, Means
and Methods of Warfare’.67 On substantive matters, the questionnaire raises
important questions on whether states take into account examinations already
done by other countries, whether they conduct their own tests, and whether
they have ever rejected the acquisition of a new weapon.68
According to the Australian National Commentary submitted to the GGE
on LAWS in 2020, ‘strengthening compliance with existing IHL, including
through Art. 36 reviews, is the most effective way to manage new weapons
systems, including the potential development of LAWS’.69 Due to the distance between direct human input on concrete targeting decisions and the final
act of targeting, legal reviews need to check whether the weapon is capable of
operating within the parameters of the LOAC.70 When it comes to the mate-
rial scope of legal reviews under Art. 36, the ICRC considers that it is broad

62 ICRC,‘A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures
to Implement Article 36 of Additional Protocol I of 1977’ (Geneva, 2006), p. 4.
63 2019 Report GGE on LAWS, Guiding Principle (e).
64 ibid.,, para. 17(i).
65 ibid.
66 Working Paper Submitted by Argentina to the GGE on LAWS,‘Strengthening of the review mecha-
nisms of a new weapon, means or methods of warfare’ (4 April 2018), CCW/GGE.1/2018/WP.2.
67 Argentina,‘Questionnaire on the Legal Review Mechanisms of New Weapons, Means and Methods
of Warfare’ submitted to the GGE on LAWS (29 March 2019), CCW/GGE.1/2019/WP.6.
68 ibid, p. 4.
69 CCW GGE on LAWS, National commentaries on the 11 guiding principles of the GGE on
LAWS – Australia, available at: <https://documents.unoda.org/wp-content/uploads/2020/08
/20200820-Australia.pdf> accessed 27 May 2021.
70 Boothby (n 23) 147.
and that it encompasses ‘an existing weapon that is modified in a way that
alters its function, or a weapon that has already passed a legal review but that
is subsequently modified’.71 For AWS relying on algorithmic decision-making,
it can be expected that there will be updates to their parameters. An important boundary will be that of ‘modification’ – which updates qualify as a ‘modification’ of the weapon, and which fall short of that threshold.
Legal reviews therefore are, and should be, not only an initial but a continuous check on the development and fielding of military capabilities. In addi-
tion to providing the necessary framework for the responsible introduction of
new weapons, this process is a source of information about the system, with all
its attendant risks and limitations. Being information-generating in nature, it
creates a pool of knowledge that can then feed into the understanding of those
using it on the battlefield. In this way, the obligation to undertake reviews
impacts our assessment under the prohibitive rules of LOAC, as the informa-
tion available to the decision-maker is crucial in judging the lawfulness of their
conduct.72

Precautions in attack
An important aspect of the protection of civilians is the obligation to take
precautions in attack.73 An aspect of this obligation applies to all military opera-
tions74 and requires the attacking party to take constant care to spare the civilian
population and civilian objects. Additionally, we have a list of specific precau-
tionary obligations in attack, including: (a) to verify that the target is a military
objective and that the attack is compliant with the principle of proportionality;
(b) to take precautions in choosing the means and methods of attack in view of
avoiding, or minimising, civilian harm; (c) to cancel or suspend an attack if it
becomes apparent that the target is a protected one or that the incidental harm
will be excessive to the military advantage anticipated; (d) to give effective
advance warning; and (e) in cases of choice between several military objectives

71 ICRC Guide Legal Reviews (n 64), citing Australian Instruction, section 2 and subsection 3(b) and
footnote 3 thereof; Belgian General Order, subsection 5(i) and (j); Norwegian Directive, subsec-
tion 2.3 in fine; US Air Force Instruction, subsections 1.1.1, 1.1.2, 1.1.3; and US Army Regulation,
subsection 6(a)(3).
72 See, for instance, the declarations made by states under the heading of ‘Rule for decision-making by
commanders’ in J. Gaudreau, ‘The reservations to the Protocols additional to the Geneva Conven-
tions for the protection of war victims’ [2003] 849 IRRC, p. 14.
73 Additional Protocol I, Art. 57, also considered a rule of customary international law – ICRC Study
on Customary International Humanitarian Law, Rule 15.
74 As explained by Quéguiner, the obligation of ‘constant care’ has a broader scope than the specific
obligations of Art. 57.This is because the concept of ‘military operations’ includes troop movements,
manoeuvres and other deployment or retreat activities carried out by armed forces before actual
combat, and is thus broader than the term ‘attack’ – see J-F. Quéguiner, ‘Precautions under the law
governing the conduct of hostilities’ [2006] 88 International Review of the Red Cross, 797.
yielding similar military advantage, the obligation to choose the one expected
to cause the least civilian harm.75
By taking precautions, attacking forces create the conditions for observing
the principle of distinction between civilians and civilian objects, on the one
hand, and combatants and military objectives, on the other.76 In this sense, they
are an important ingredient in the protective architecture of LOAC. Despite
the linkages between precautionary obligations and other targeting rules, the
regime on precautions has autonomous significance, as it sets its own con-
straints on the conduct of attacks.77
A potential for tension between the obligations of the parties to the conflict
and the deployment of AI-based weapons for direct targeting arises through
the possibility of the independent decision-making of the weapon on the basis
of its own assessment of the changing circumstances. For instance, if an attack is
launched against a building deemed to be used for the storage of ammunition
by the opposing side, new information on the use of the building could arrive,
indicating its civilian rather than military use. The attacking party may have the
capacity to cancel the attack or, depending on the means employed, to redi-
rect the weapon post-deployment.78 With AI-based weapons, the assessment
of new information and any respective decision may ultimately be left to the
algorithm. Any possibility to intervene in the operation of the weapon could
also be limited. Therefore, some states, including Austria and Costa Rica, have
emphasised the need for a human override over the autonomous system.79 In
the area of precautions, the United States has been consistent in affirming that
AWS can provide a whole new range of measures to protect civilians. For
example, according to the 2019 Working Paper submitted by the United States
to the GGE, ‘[e]ven if the risk of civilian casualties was not expected to be
excessive in relation to the military advantage expected to be gained, it would
be important to take further feasible precautions. For example, warnings, mon-
itoring, and self-destruct, self-deactivation, or self-neutralisation mechanisms
are all precautions that have been usefully employed to reduce the risk of

75 The specific obligations can be found in Art. 57(2), (3).


76 W.J. Fenrick, ‘The Law Applicable to Targeting and Proportionality after Operation Allied Force: A
View from the Outside’ [2000] 3 Yearbook of International Humanitarian Law, 53, 57.
77 This independent existence is affirmed by the specific rules on precautions, but also through the last
paragraph of Art. 57, which provides that ‘No provision of this Article may be construed as author-
izing any attacks against the civilian population, civilians or civilian objects’.
78 See, for instance, A.A. Haque, 'The "Shift Cold" Military Tactic: Finding Room Under International Law' (Just Security, 20 February 2018), available at: <https://www.justsecurity.org/52713/shift-cold-military-tactic-finding-room-under-international-law/> accessed 27 May 2021.
79 CCW GGE on LAWS, Contribution of Austria to the Chair's request on the Guiding Principles on emerging technologies in the area of LAWS (2020), available at: <https://documents.unoda.org/wp-content/uploads/2020/09/20200901-Austria.pdf>; Statement by Costa Rica submitted on 22 September 2020, available at: <https://documents.unoda.org/wp-content/uploads/2020/09/Intervencion-22-seti.-2020-ESP-PDF.pdf> (in Spanish) accessed 27 May 2021.
civilian casualties in relation to the use of land or naval mines’.80 In the view of
the United States, the ‘use of the autonomous system might even be regarded
as an additional precaution that should be taken, consistent with IHL’.81
There is also the question of whether states are capable of discharging their
protective obligations even when an AWS is used to carry out the attack. States
should be able to take constant care to spare civilians, and civilian objects, even
when the weapon deployed has its own margin of discretion in making target-
ing decisions. To accept the opposite may make the regime of precautionary
obligations less meaningful. Of course, it may well be that these weapons will
have the capacity to channel state obligations: for instance, an algorithm that
carries out continuous verification of new information and that is also capable
of issuing an effective warning to civilians. In this sense, it may be that such
weapons become the instrument through which the duty to take constant care
and the specific precautionary obligations become operationalised. While the
question of whether AI technology will evolve in such a way as to allow this
to happen is an empirical one, some difficulties bear mentioning. First, even
though precautionary obligations do not require the result of eliminating all
civilian harm, most of them are conditioned through a ‘feasibility’ criterion,
i.e. states must ‘do everything feasible to verify’82 their objectives and ‘take all
feasible precautions in the choice of means and methods of attack’.83 The spec-
trum of feasible measures may be difficult to programme in advance. Second,
changing circumstances will require changes in the appropriate precautionary
measures. For instance, an advance warning must be ‘effective’, and the effec-
tiveness of such warnings will depend on a variety of factors, including the past
experiences of the civilian population, the messaging issued by the other side,
the conditions on the ground, and the timing and possibility of ensuring safe evacu-
ation.84 Third, in discharging their obligations to take precautions, attacking

80 CCW GGE on LAWS, Implementing International Humanitarian Law in the Use of Autonomy
in Weapon Systems, submitted by the United States of America, CCW/GGE.1/2019/WP.5, 28
March 2019, available at: <https://undocs.org/en/CCW/GGE.1/2019/WP.5> accessed 27 May
2021, 8(c).
81 ibid. In the US Department of Defence Law of War Manual, there is a section on requirements
to take precautions regarding specific weapons (5.2.3.4), but the weapons listed do not include weapons relying on artificial intelligence. The line of argumentation that autonomy in weapons can
contribute to the implementation of international humanitarian law is also present in the Law of
War Manual, where we read that ‘in many cases, the use of autonomy could enhance the way law of
war principles are implemented in military operations. For example, some munitions have homing
functions that enable the user to strike military objectives with greater discrimination and less risk
of incidental harm. As another example, some munitions have mechanisms to self-deactivate or to
self-destruct, which helps reduce the risk they may pose generally to the civilian population or after
the munitions have served their military purpose’ (6.5.9.2).
82 Additional Protocol I, Art. 57(2)(a)(i). Emphasis added.
83 Additional Protocol I, Art. 57(2)(a)(ii). Emphasis added.
84 Théo Boutruche,‘Expert Opinion on the Meaning and Scope of Feasible Precautions under Inter-
national Humanitarian Law and Related Assessment of the Conduct of the Parties to the Gaza Con-
flict in the Context of the Operation “Protective Edge”’ (2015), available at: <https://www.diakonia
forces may deploy weapons that retain an element of supervision as a guarantee
against unintended engagements. While this may go some way towards alle-
viating fears of unchecked autonomous targeting, one must not lose sight
of the difficulties that remain: the need to ensure meaningful understanding
between the supervisor and the system.85 Of particular importance is the need
to ensure that those tasked with supervision have the ‘necessary knowledge of
risks’.86 The need for understanding within the human–machine dynamic was
made explicit by the United Kingdom in its 2020 Working Paper submitted
to the GGE:

those held responsible [need to] have a sufficient understanding of the capabilities and limitations of the weapon system, and of the environment
in which it is to be deployed; a point which again highlights the impor-
tance of the appropriate form of human-machine interaction along with
broader considerations throughout the wider lifecycle such as training of
military personnel.87

As argued by Sassòli and Quintin, ‘feasibility evolves through experience’.88 Learning from the successes or failures of previously employed precautionary
measures is the key to the fulfilment of these obligations. This is particularly
important for the interplay between the obligation to take precautions in attack
and the obligation to conduct legal reviews of weapons. The obligation to
undertake legal reviews is not confined to ‘new’ weapons, but it also applies
to modifications of existing technologies. That is why there needs to be a
meaningful feedback loop between the conduct of specific attacks, the lessons
learned from the effectiveness of the precautionary measures employed, the
possible changes to the algorithm or the features of the human–machine inter-
action model, and any subsequent legal reviews.

Precautions against the effects of attacks


Art. 58 of Additional Protocol I contains the obligation to take precau-
tions against the effects of attacks, which requires parties to a conflict to (1)

.se/globalassets/blocks-ihl-site/ihl-file-list/ihl--expert-opionions/precautions-under-international
-humanitarian-law-of-the-operation-protective-edge.pdf> accessed 27 May 2021, 28.
85 N. Sharkey, 'Guidelines for the human control of weapons systems' (2018) International Committee for Robot Arms Control, available at: <https://www.icrac.net/wp-content/uploads/2018/04/Sharkey_Guideline-for-the-human-control-of-weapons-systems_ICRAC-WP3_GGE-April-2018.pdf> accessed 27 May 2021, 2-3.
86 M. Sassòli, A. Quintin, ‘Active and Passive Precautions in Air and Missile Warfare’ [2014] 44 Israel
Yearbook on Human Rights, 69, 111.
87 CCW GGE on LAWS, United Kingdom Expert paper: The human role in autonomous warfare,
CCW/GGE.1/2020/WP.6, 18 November 2020, para. 10.
88 ibid., 87.
endeavour to remove the civilian population, individual civilians, and civilian
objects under their control from the vicinity of military objectives; (2) avoid
locating military objectives within or near densely populated areas; and (3) take
the other necessary precautions to protect the civilian population, individual
civilians, and civilian objects under their control against the dangers result-
ing from military operations. All three obligations are subject to the standard
of ‘to the maximum extent feasible’ found in the chapeau of the provision.89
According to the ICRC, the obligation to take precautions against the effects
of attacks is part of customary International Humanitarian Law.90 This obliga-
tion is seen as complementary to the one binding on attacking forces, and the
parallel operation of these two obligations provides the foundation for imple-
menting the principle of distinction.91
However, some of these obligations could place a very heavy burden on
parties to the conflict, which explains the language conditioned through a fea-
sibility standard. For instance, the obligation to avoid locating military objec-
tives within or near densely populated areas could be particularly difficult for
densely populated states or states with particular geographic features.92 The
term ‘feasibility’ is undefined in Additional Protocol I and has to be assessed
depending on the individual circumstances of each case. It is nevertheless clear
that feasibility is not to be appraised in hindsight, and that the decisions of the
defending forces are to be judged by reference to the circumstances and infor-
mation at the time.93
Additional difficulties arise when one considers the interplay between the
obligations of attacking forces and the obligations of defenders. These obliga-
tions have an independent existence, and the violation of one party does not
relieve the other from its own obligations. It may nevertheless be the case that
the way in which one party carries out its precautionary obligations will affect
the effectiveness of those taken by the other party, or the strategy that it should
adopt. Such questions of interplay are likely to gain particular prominence in
the context of technological developments. In his article ‘Precautions Against
the Effects of Attacks in Urban Areas’, Eric Talbot Jensen persuasively argues
that new technologies ‘will allow the defender greater situational awareness

89 Additional Protocol I, Art. 58, first sentence.


90 ICRC Study on Customary International Humanitarian Law, Rules 22–24.
91 On the relationship between these two obligations and the principle of distinction, see M. Bothe,
K.J. Partsch, W.A. Solf, New Rules for Victims of Armed Conflicts: Commentary on the Two 1977 Protocols
Additional to the Geneva Conventions of 1949 (2nd ed., Martinus Nijhoff 2013), 413.
92 Such concerns were expressed by the representatives of Italy, France and Switzerland during the
debates at the Working Group elaborating on the wording of what is now Art. 58 AP I – see E.T.
Jensen, ‘Precautions against the effects of attacks in urban areas’ [2016] 98(1) International Review of
the Red Cross, 147, 164.
93 ibid., 166, quoting from United States Military Tribunal, Nuremberg, United States v. Wilhelm List et al., Case No. 47, Judgment (Military Tribunal V), 19 February 1948, in Trials of War Criminals Before the Nuremberg Military Tribunals Under Control Council Law No. 10, Vol. 11, 1950, 1296.
as to where the civilians are, but will also increase the defender’s ability to
both segregate military forces from civilians and protect those who cannot be
segregated’.94 Among other potential benefits of the introduction of such tech-
nology, he argues that it could be used for marking protected areas:

[a]dditionally, such markings could be used to denote protected areas, particularly in a fluid battlefield. If displaced civilians were being gathered in
an ad hoc building, drone-delivered markings could be used to notify an
attacker of the protected nature of the building, even in a temporary way.95

It can also be used to warn civilians through ‘visual signals, such as flashing
lights or brightly coloured spray paint of some pre-designated colour’.96 While
it may be true that technological advancements could assist in minimising the
risks faced by civilians, a new risk could also emerge through the interaction
of measures taken by attacking and defending forces. As an example, let us consider an AI-based missile trained on data from the neighbourhood towards which it is being deployed, including the habits of resident civilians and the previously employed tactics. If the defender decides to employ signalling or marking measures that are new, i.e. not part of the information the algorithm was trained on, then these measures could increase the risk to civilians rather than decrease it. That is because there would be a new environmental factor to which the weapon may react in unintended ways. This example highlights the need for
at least a basic level of understanding of how different measures may affect the
functioning of AI systems. Otherwise, certain precautionary measures could
become not only ineffective but detrimental to the safety of civilians.
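Purely as an illustration of this point, and not as a description of any actual weapon system, the following minimal Python sketch shows the underlying mechanics under invented assumptions: the features, labels and the nearest-centroid classifier are all hypothetical stand-ins chosen for brevity. It shows how such a system assigns one of its known labels to any input it receives, including an input lying far outside its training data, without giving any indication that the input is unfamiliar – which is the sense in which a new, untrained-for environmental factor can produce behaviour that was never validated.

import math

# Hypothetical training data: (feature_1, feature_2) -> pattern label.
# The labels are illustrative stand-ins, not categories from any real system.
training = [
    ((0.2, 0.1), "civilian_pattern"), ((0.3, 0.2), "civilian_pattern"), ((0.1, 0.3), "civilian_pattern"),
    ((0.8, 0.9), "military_pattern"), ((0.9, 0.8), "military_pattern"), ((0.7, 0.9), "military_pattern"),
]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {
    label: centroid([feats for feats, lab in training if lab == label])
    for label in {lab for _, lab in training}
}

def classify(features):
    # Distance to each class centroid; the nearest centroid wins.
    dists = {lab: math.dist(features, c) for lab, c in centroids.items()}
    label = min(dists, key=dists.get)
    # Note: nothing here flags that an input may lie far outside the training data.
    return label, round(dists[label], 2)

print(classify((0.25, 0.2)))  # familiar, in-distribution input
print(classify((7.0, 7.0)))   # novel marking signal the system was never trained on

Even this toy classifier returns a label for the second, entirely novel input; whether its behaviour there is appropriate has simply never been tested, which is the risk the defender's new precautionary measure would introduce.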

Conclusions
Risk is inherent in armed conflict. Its effect on civilians can be felt in many
ways, and their vulnerability requires robust legal protection. Under LOAC,
civilians are protected through a complex web of interrelated obligations. With
the introduction of AI, these well-established rules have come under careful
scrutiny. A lot more work is needed to go past the stage of general affirmation
that international law applies to AWS, and into the concrete ways in which
it applies. The discussions at the GGE on LAWS have been instrumental in
narrowing down the most challenging questions and in laying bare the disa-
greements between the interpretations that states give to specific rules of inter-
national law. While there is common ground on a general level, that ground
gets pixelated when one enters the concrete questions of determining the per-
sons who should exercise control over these weapons, the stage of that control,

94 Jensen (n 92), 170.


95 ibid., p. 173.
96 ibid.
the potential constraints over their use, and the robustness of the affirmative
protective obligations pre-dating and accompanying attacks.
What transpires from the analysis of the protective safeguards is the fact that
there is insufficient specificity in the ways in which these protective obliga-
tions are to be carried out. While this flexibility may be necessary, a degree of
concretisation would bolster the protection afforded by the regime. In addi-
tion to the questions regarding the specification of legal obligations, there are
further challenges in assessing the interplay between protective obligations, in particular the way in which the measures taken by one party could influence those that the other should take.
All these questions are key to the responsibility of states. A state is respon-
sible for an internationally wrongful act when conduct consisting of an action
or omission is attributable to the state under international law and constitutes a
breach of an international obligation of the state. As there is uncertainty on the
scope of certain rules of LOAC, the question of whether a state has breached
LOAC becomes a difficult one to answer. According to Guiding Principle (d)
of the 2019 GGE report,

accountability for developing, deploying and using any emerging weapons system in the framework of the CCW must be ensured in accordance with
applicable international law, including through the operation of such sys-
tems within a responsible chain of human command and control.

While this statement affirms the importance of accountability, the reference to ‘in accordance with applicable international law’ takes us right back to the
question of specification of rules. These discussions on autonomy have high-
lighted areas of the LOAC that have never been fully determined, in particular
those pertaining to human errors, malfunctions, and, more generally, targeting
in conditions of risk. There is a window of opportunity to clarify these rules.
Knowing what LOAC expects of parties to the conflict is an important step
to ensure that civilians are effectively protected from the dangers of military
operations even amidst the changing landscape of conflict.
12 The use of Artificial Intelligence in armed conflicts – implications for state responsibility
Dominika Iwan

Introduction
As discussed in Chapter 11, the debate over AWS has revealed many defects in
the regulations of the LOAC and state responsibility. AWS may also affect the
balance between considerations of humanity and military necessity. However,
mens rea of LOAC violations is constructed on the basis of the intended behav-
iour of the perpetrator, which would be difficult to prove in the context of
AWS. Consequently, the law on state responsibility cannot address all the major
challenges posed by AWS. This chapter presents state liability regimes adopted
by NATO and the United States, which, arguably, increase the effectiveness of
accountability mechanisms, transparency in international law, and the victims’
sense of justice. These models allow individuals to claim compensation for damage and harm that occurred in the conduct of military operations.
This chapter is divided into two main parts. The first examines the impact
of the law on state responsibility on the secondary rules of LOAC, in particular
the Articles on Responsibility of States for Internationally Wrongful Acts of
20011 (ARSIWA). The second part covers the liability regime for the effects of
hostilities on the civilian side, based on the analysis of regulations from NATO
and the United States.

State’s responsibility for violations of the law of armed conflicts
Internationally wrongful acts
According to art. 2 of the ARSIWA, an internationally wrongful act occurs
if the following two elements are fulfilled: (1) the conduct of a state violates
any international obligations of that state, and (2) the conduct can be attrib-
uted to that state. The violation of a LOAC primary norm can lead to the
responsibility for an internationally wrongful act. That is because the law on

1 UN General Assembly, Resolution 56/83: Articles on responsibility of States for internationally wrongful acts (adopted 28.01.2002), UN Doc. A/RES/56/83 (2002).

state responsibility is of a secondary character to the LOAC regulations, and it
applies once the primary obligation has been violated. However, a violation of
the LOAC does not necessarily equal an international crime.2
The character of state responsibility – either civil or criminal – is, however,
not entirely clear. For example, in the Fifth Report on State Responsibility,
Arangio-Ruiz concluded that international responsibility should be perceived
from both a civil and criminal perspective. The civil aspect of the responsibility
means that in some cases it can lead to compensation, while the criminal aspect
may lead to prosecution.3 However, it is important to note that, principally,
states are not under the compulsory jurisdiction of international courts. Even if
such jurisdiction were applicable, international courts do not provide enforce-
ment mechanisms. Although there is a mechanism for sanctions, it is based on
the UN Charter, and it is oriented toward international peace and security.
The emergence of new regimes of international responsibility (namely inter-
national liability and individual criminal responsibility) has shed new light on
state responsibility, with the two former perceived as a function of the lat-
ter.4 Nevertheless, increased interdependence and the rise of global issues have a strong impact on the nature of state responsibility. The emergence of new technologies, AWS among them, challenges the universal values of humankind, thus calling for a global response.5
The nature of state responsibility further depends on the obligation violated.
Pursuant to art. 42 of the ARSIWA, the international obligations are divided
into the following categories: (1) owed to an injured state individually; (2)
owed to a group of states (obligations erga omnes partes); and (3) owed to the
international community as a whole (obligations erga omnes). An independent
category of international obligations arises under art. 40 of ARSIWA, which
refers to peremptory norms of general international law and the corresponding serious breach of the obligation in question. Therefore, the character of the viola-
tion determines the subject and scope of admissible reaction to the violation.6

2 See art. 25(4) of the Rome Statute of the International Criminal Court (ICC), as well as Application of
the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia and Herzegovina v. Serbia
and Montenegro) (Judgment) [2007] ICJ Rep., 43, 173.
3 ILC,‘Fifth Report on State Responsibility by Mr Gaetano Arangio-Ruiz, Special Rapporteur’ [1993]
II ILC Yearbook 1, UN Doc.A/CN.4/453 and Add. 1–3, 253–256. See also:A. Pellet,‘The Definition
of Responsibility in International Law’, in J. Crawford, A. Pellet, S. Olleson (eds), The Law of Interna-
tional Responsibility (OUP 2010), 13.
4 K. Creutz, State Responsibility in the International Legal Order:A Critical Appraisal (Cambridge University
Press 2020) 31.
5 Crawford refers to communitarian norms entailing obligations erga omnes that enable invocation of responsibility in the public interest. See: J. Crawford, 'Responsibility for Breaches of Communitarian Norms: an Appraisal of Article 48 of the ILC Articles on Responsibility of States for Internationally Wrongful Acts', in U. Fastenrath, R. Geiger, D.E. Khan, A. Paulus, S. von Schorlemer, Ch. Vedder (eds), From Bilateralism to Community Interest: Essays in Honour of Bruno Simma (OUP 2011), 224–240.
6 A. Cassese,‘The Character of the Violated Obligation’, in: J. Crawford, A. Pellet, S. Olleson (eds), The
Law of International Responsibility (OUP 2010), 415–416.
In cases where the obligation is owed to an injured state individually, only an
injured state possesses the legal interest in the realisation of the responsibility.
If there is a violation of an obligation owed to the international community
as a whole, other states (not only the injured one) can invoke the responsi-
bility.7 Consequently, it is suggested that the use of certain technologies in an
armed conflict, such as ‘weapon systems that select and engage targets based
on sensor processing’,8 is a matter of concern to the international community
as a whole.9 Examples of such weapons are: the Israeli-made Harpy system
(described as an AWS ‘designed to detect, attack and destroy radar emitters’10),
the US Phalanx Close-In Weapon System for Aegis-class cruisers,11 the UK
Taranis jet-propelled combat drone, or the Samsung Techwin.12 The list of
LOAC obligations amounting to obligations owed to the international com-
munity can be found in art. 85(1) of the Additional Protocol I, describing the
grave breaches of the I–IV Geneva Conventions. They cover, for example, the wilful killing of protected persons, wilfully causing great suffering or serious injury to body or health, and the extensive and unjustified destruction and appropriation of property. Pursuant to the 2006 report of the ILC, a significant part of the LOAC consists of erga omnes obligations because of their non-reciprocal character and the protection performed in the interest of all states.13 However, the ILC gives no examples of such obligations in the field of the LOAC. In several cases the ICJ referred to ‘certain’ or a ‘great many’ rules of the LOAC as creating erga omnes obligations.14 None of these cases provides an exhaustive list of exactly which LOAC rules create obligations of this kind when violated. This situation gives rise to increased ambiguity and vagueness of obligations
owed to the international community as a whole.15

7 ILC, ‘Report of the Study Group of the International Law Commission: Fragmentation of Inter-
national Law: Difficulties Arising From the Diversification and Expansion of International Law’ (13
April 2006) A/CN.4/L.682, 395.
8 Human Rights Watch, ‘New Weapons, Proven Precedent. Elements of and Models for a Treaty on
Killer Robots’ (Human Rights Watch, 20 October 2020) <https://www.hrw.org/report/2020/10/20/
new-weapons-proven-precedent/elements-and-models-treaty-killer-robots> accessed 27 May 2021.
9 It is not clear who can be considered a member of the international community as a whole. This community exists in legal terms, i.e. in the context of the general law of state responsibility. See: A.L. Vaurs-Chaumette, 'The International Community as a Whole', in: J. Crawford, A. Pellet, S. Olleson (eds), The Law of International Responsibility (OUP 2010), 1023–1024.
10 Human Rights Council, ‘Report of the Special Rapporteur on extrajudicial, summary or arbitrary
executions, Christof Heyns’, 9 April 2013, UN Doc A/HRC/23/47, 45.
11 Phalanx systems have been deployed by the U.S. since the 1980s. They are defence systems that automatically detect, evaluate and engage anti-ship missiles and high-speed aircraft threats.
12 Human Rights Council Report (n 10), 45.
13 ILC Report (n 3) 391-392.
14 Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion) [1996] ICJ Rep., 257 [79]. See also:
Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory (Advisory Opinion)
[2004] ICJ Rep., 136, 155.
15 According to Crawford, IHL obligations are characterised as erga omnes partes and not erga omnes. This is due to consequences flowing from their violation, as opposed, for example, to violation of the
Nonetheless, such LOAC obligations are derived from art. 1 of the I–IV Geneva Conventions, stipulating the obligation to ensure respect for the LOAC. This obligation was initially considered a duty directed at the state’s own armed forces and public authorities. Nowadays, erga omnes obligations are considered to impose a duty to demand performance by other states.16 In some instances, this obligation would involve the duty to ensure respect by another state.17 In the case of AWS, the obligation to ensure respect by another state would involve lawful measures such as abstaining from transferring AWS to a state violating the LOAC, or imposing economic sanctions against the violating state.18 A state is therefore obliged not to encourage persons or
groups to act in violation of the LOAC.19

Obligations of states in relation to the use of AWS in armed conflicts


The obligations of states under the LOAC can be divided into two categories. First, there are positive obligations to respect the LOAC, arising irrespective of the existence of armed conflict.20 Second, there are negative obligations that must be fulfilled when an armed conflict occurs. A breach of the second category may amount to a grave breach of the LOAC.21
Pursuant to art. 1 of the I–IV Geneva Conventions, as well as art. 1(1)
of Additional Protocol I, states are obliged to respect and to ensure respect
for these treaties in all circumstances. This obligation emanates not only from

right to self-determination. The latter should be of interest not only to all states but to the international community as a whole, whereas a violation of an obligation erga omnes partes entails reactions of state-parties to a particular treaty (in this sense to the I-IV Geneva Conventions). See: Crawford (n 5), 233.
16 K. Zemanek, 'New Trends in the Enforcement of erga omnes Obligations' [2000] Max Planck UNYB, 4, 1, 5.
17 Palestinian Wall Case (n 14) 159. Similarly, the content of the obligation to ensure respect has been confirmed by the ICJ in Military and Paramilitary Activities In and Against Nicaragua (Nicaragua v. USA) (Merits) [1986] ICJ Rep., 14, 220. The Court stipulated that this obligation derived not only from the I-IV Geneva Conventions, but from the general principles of IHL.
18 Diakonia International Humanitarian Law Resource Centre, 'Accountability for violations of International Humanitarian Law. An introduction to the legal consequences stemming from violations of international humanitarian law' (Diakonia, October 2013) <https://www.diakonia.se/globalassets/documents/ihl/ihl-resources-center/accountability-violations-of-international-humanitarian-law.pdf> accessed 27 May 2021, 4.
19 The judgment included references to violations of Art. 3 common to the I-IV Geneva Conventions.
See: Nicaragua case (n 17), 220.
20 The ICJ in the Corfu Channel case concluded that the elementary considerations of humanity are 'more exacting in peace than in war'. Corfu Channel Case (UK v. Albania) (Merits) [1949] ICJ Rep., 4, 22.
21 M. Bothe, 'Rights and obligations of the EU and its Member States to ensure compliance with IHL and IHRL in relation to the situation of the occupied Palestinian territory. Legal Expert Opinion' (Diakonia, June 2018) <https://www.diakonia.se/globalassets/blocks-ihl-site/ihl-file-list/ihl--expert-opionions/bothe.-july-18.-eu-obligations-under-il.pdf> accessed 27 May 2021, 11.
treaties, but also from customary international law.22 A state shall ensure com-
pliance with the LOAC by preventing violations, and by prosecuting and
punishing LOAC violations committed by its citizens, or committed on the
territory of that state. In the context of AWS, the guarantee of compliance
can be achieved by conducting a weapons review under art. 36 of Additional
Protocol I,23 which states that:

in the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obliga-
tion to determine whether its employment would, in some or all circum-
stances, be prohibited by this Protocol or by any other rule of international
law applicable to the High Contracting Party.

Although this obligation constitutes a necessary safeguard for LOAC compliance with respect to the development and use of AWS, there is no institution-
alised mechanism enabling a verification of whether a state fulfils the obligation
(except for art. 84 of Additional Protocol I concerning an obligation to com-
municate24). Moreover, art. 36 of the Protocol does not prevent private entities
from developing weaponised technologies since its obligations are imposed on
states only. Nonetheless, in discussions on the legality of AWS, many states
presented review procedures relating to new means and methods of warfare.25
AWS are directly referenced in US legislation. The Guidelines for Review

22 See art. 1 of the Basic Principles and Guidelines on the Right to a Remedy and Reparation for Vic-
tims of Violations of International Human Rights and Humanitarian Law, adopted and proclaimed
by General Assembly resolution 60/147 of 16 December 2005.
23 Some scholars suggest that there is a corresponding, albeit not identical, customary rule in relation to the weapons review. Jevglevskaja indicates that under the Harvard Manual and the Tallinn Manual 2.0, the core of the obligation seems to have become a customary rule. The core there is the obligation to conduct a pre-deployment analysis of weapons. N. Jevglevskaja, 'Weapons Review Obligation under Customary International Law' [2018] ILS, 94, 186, 213–214. In support of the implied character of the obligation, Boothby recalls the practice of certain States (namely the U.S. and Sweden) concerning weapons reviews that preceded the adoption of Additional Protocol I, and the existence of some weapons law provisions that are in fact customary. Therefore, in the pre-deployment phase a non-State party to the Protocol has to review the legality of using a weapon by verifying compliance with its customary obligations. See: W.H. Boothby, Weapons and the Law of Armed Conflict (OUP 2009), 341.
24 Pursuant to art. 84 of the Additional Protocol I, State parties shall communicate to one another not
only their official translations, but also the laws and regulations adopted to ensure the application
of the Protocol. However, this is rather a rudimentary mechanism for ensuring IHL compliance
concerning new weapon systems.
25 Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapon Systems, 12–16 December 2016, UN Doc CCW/CONF.V/2 (10 June 2016), 48–51. Furthermore, the Stockholm International Peace Research Institute prepared a case study concerning several weapons reviews. See: V. Boulanin, M. Verbruggen, 'SIPRI Compendium on Article 36 Reviews' [2017] SIPRI Background Paper, passim. For publicly available national legislation see, for example, U.S. Air Force, Legal Reviews of Weapons and Cyber Capabilities, 27 July 2011, Instruction 51-402; Belgium, Défense, Etat-Major de la Défense, Ordre Général – J/836, establishing La Commission d'Evaluation Juridique des nouvelles armes, des nouveaux moyens et des nouvelles méthodes de guerre, 18 July 2002; New
of Certain Autonomous or Semi-Autonomous Weapon Systems provide a
two-fold duty of assessing AWS legality.26 A double-check is required in both
the development and deployment phases. That said, states are not willing to present the results of their review procedures concerning AWS, even though the procedures themselves are publicly communicated.
States are under an obligation to introduce procedural guarantees applicable
to LOAC violations. These guarantees shall involve: (1) effective penal sanc-
tions; (2) prosecution or extradition of alleged perpetrators; and (3) universal
jurisdiction for serious LOAC violations.27 With regard to the first obligation,
effective penal sanctions shall be introduced into domestic legislation. If the state
develops or possesses AWS, which it plans to use in armed conflict, it should
consider passing relevant legislation that would regulate the use of such tech-
nologies. In the context of AWS, states developing and deploying such tech-
nologies in military operations include Israel and the USA.28
The second point refers to the aspect of prosecution and extradition that
imposes an obligation erga omnes partes, i.e. established for the protection of col-
lective interests.29 A good example is the case of a group of Israelis charged with designing, manufacturing, and selling AWS to an unknown state.30 Nevertheless, extradition is a challenging aspect because of several excep-
tions that can be applied, as well as the need for an extradition treaty. It is therefore
suggested that the obligation to prosecute violations shall prevail over extradi-
tion.31 These problems are even deeper in cases of alleged war crimes with AWS
involved in conduct constituting a breach of the LOAC. This aspect has been
discussed in the previous chapter. Practices relating to other weapons reveal the
inefficiency of international protection against the illegal use of weapon systems.
For example, despite attacks with incendiary and chemical weapons in Syria, there is no information on attempts to hold the state responsible or to try the

Zealand, Manual of Armed Forces Law.Volume 4 Law of Armed Conflict, New Zealand Defence Force
DM 69 (2nd ed., 2019), Chapter 7, section 4.
26 USA, Department of Defense, Directive 3000.09:Autonomy in Weapon Systems, 21 November 2012, as
changed 8 May 2017, Department of Defense Documents No. 3000.09.
27 J.M. Henckaerts,‘The Grave Breaches Regime as Customary International Law’ [2009] 7 Journal of
International Criminal Justice, 4, 683, 693.
28 States like Germany, Spain, and the United Kingdom informed that they do not plan to use weapons
wholly uncontrolled by humans. However, they do not opt for a prohibition of AWS. See: R. Livoja,
‘Why It’s so Hard to Reach an International Agreement on Killer Robots’ (The Conversation, 12
September 2018) <https://theconversation.com/why-its-so-hard-to-reach-an-international-agree-
ment-on-killer-robots-102637> accessed 27 May 2021.
29 Obligation erga omnes partes, contrary to the obligation erga omnes, imposes a duty based on a
multilateral treaty. In international criminal law it is e.g. art. 90 of the Rome Statute of the ICC.
30 T. Nedwick,‘A Group of Israelis Secretly Built And Tested Suicide Drones For An Unknown Asian
Customer’ (The Drive, 11 February 2021) <https://www.thedrive.com/the-war-zone/39209/a
-group-of-israelis-secretly-built-and-tested-suicide-drones-for-an-unknown-asian-customer>
accessed 27 May 2021.
31 J. Nowakowska-Małusecka,‘Indywidualna odpowiedzialność karna za zbrodnie popełnione w byłej
Jugosławii i Rwandzie’ (Wydawnictwo Uniwersytetu Śląskiego 2000), 27.
alleged perpetrators.32 This leaves victims with no tools against the state organs responsible for deciding whether a case should be investigated.
The final point, the rule of universal jurisdiction, covers the exercise of jurisdiction over serious LOAC violations33 beyond a state's boundaries,34 irrespective of any relationship between that state and the location, the perpetrator, or the act. It is exercised in the common interest of all states. Universal jurisdiction has been incorporated not only into the four Geneva Conventions (arts. 49, 50, 129, and 146, respectively35), but also into other treaties that apply in armed conflict.36 It therefore also covers acts committed with the use of AWS in armed conflicts. Some of these acts would amount to grave breaches or serious LOAC violations, provided they were committed intentionally and directed against civilian populations or objects; the rationale is that the use of AWS constitutes a use of the means and methods of warfare. The exercise of universal jurisdiction contributes to a culture of responsibility by closing off avenues for avoiding responsibility through procedural inefficiencies in the national legal order.37 It also serves as a preventive tool against accidents involving AWS operated by inadequately trained personnel or deployed in an environment in which the system itself has not been sufficiently tested.
The law on state responsibility does not specify precisely which LOAC violations give rise to state responsibility and what consequences should

32 Human Rights Council, 'Report of the Independent International Commission of Inquiry on the Syrian Arab Republic', 9 August 2018, UN Doc A/HRC/39/65. See also: Organisation for the Prohibition of Chemical Weapons, 'Note by the Technical Secretariat, Second Report by the
OPCW Investigation and Identification Team Pursuant to Paragraph 10 of Decision C-SS-4/Dec.3
<Addressing the Threat from Chemical Weapons Use>’, 12 April 2021, OPCW Official Series
S/1943/2021. K. Hessey, ‘Illegal weapons: a global guide. Understanding the weapons, the treaties
and the violators’ (The New Humanitarian, 3 April 2017) <https://www.thenewhumanitarian.org/
analysis/2017/04/03/illegal-weapons-global-guide> accessed 27 May 2021.
33 On the distinction between grave breaches and serious violations of the LOAC see: O. Gross, ‘The
Grave Breaches System and the Armed Conflict in the Former Yugoslavia’ [1995] 16 Mich. J. Int. L.,
3, 784.
34 ILC, ‘Final Report of the International Law Commission:The obligation to extradite or prosecute
(aut dedere aut iudicare) of its 66th session’ [2014] YILC, vol. II (Part Two), 18.
35 State obligations imposed under these articles cover penalisation, prosecution and suppression of acts
contrary to I-IV Geneva Conventions.
36 Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment
(adopted 10 December 1984, entered into force 26 June 1987), 1645 UNTS 85; Convention on
the Safety of United Nations and Associated Personnel (adopted 9 December 1994, entered into
force 15 January 1999), 2051 UNTS 363; Inter-American Convention on Forced Disappearance
of Persons (adopted 6 September 1994, entered into force 28 March 1996) <https://www.oas.org/
juridico/english/treaties/a-60.html> accessed 27 May 2021.
37 Two types of universal jurisdictions have been proposed, namely criminal and civil. The goal of
universal criminal jurisdiction, being a subject of the erga omnes obligation, is to avoid impunity at
the international level. The focus of the latter relates to victims of violations and aim at ensuring
reparation for international crimes.This is not considered an erga omnes obligation. See: B.I. Bonafe,
‘Universal Civil Jurisdiction and Reparation for International Crimes’, in S. Forlati, P. Franzina (eds),
Universal Civil Jurisdiction (Brill Nijhoff 2020), 99, 104.
be set in motion. Although state responsibility for the use of AWS was raised early in the discussion, the argument seems to serve as a makeshift response to LOAC violations. It is, however, not the only possible answer to the effects of a state's behaviour in armed conflict. Some clarification can
be found in para. 3 of the 2005 Basic Principles and Guidelines, which empha-
sises that a state has a duty to ‘provide effective remedies to victims, including
reparation’,38 which should be adequate, effective, and prompt. Nevertheless,
this legal instrument addresses only the category of serious LOAC violations
‘which, by their very grave nature, constitute an affront to human dignity’. In
other words, it does not cover other LOAC violations when an act by AWS
does not fall into this category. These acts assume intentional and direct human
behaviour, thus excluding unintentional or unpredicted AI mistakes or errors.
There is no single answer: because there are many different kinds of AWS, each case requires individual examination.
Another element of an internationally wrongful act is the attribution of
the conduct to the state. The conduct shall be committed by an individual
(agent or representative) whose link with the state allows their conduct to be
attributed to that state.39 The fundamental question for the rule of attribution
is therefore the establishment of an institutional or factual link between a state
and an individual person or entity who commits the conduct. A state bears the
responsibility for the behaviour of not only its armed forces, but also of other
state organs and, exceptionally, private individuals.40 Arts. 4–11 of ARSIWA list the entities whose behaviour counts as the conduct of a state, namely: (1) organs of a state; (2) persons or entities exercising elements of governmental authority; (3) organs placed at the disposal of a state by another state; (4) a person or group of persons directed or controlled by a state; (5) an insurrectional or other movement; and (6) persons or entities whose conduct
is acknowledged by a state as its own. In the case of AWS it can sometimes be
difficult to establish a link with the state.
With respect to armed forces, under the LOAC a state is responsible for the
conduct of members of its armed forces. This is particularly important when
their acts were committed contrary to orders or instructions.41 Moreover, the

38 Basic Principles and Guidelines (n 22).


39 Certain Questions Relating to Settlers of German Origin in the Territory Ceded by Germany to Poland (Advisory Opinion) [1923] PCIJ Publications Series B. No. 6, 22.
40 Exceptional situations allowing the attribution of a private person's conduct to a State concern individuals acting on the instruction of, or under the direction or control of, a State. As indicated by de Stefano, privatisation and outsourcing of State functions may shift attribution of conduct committed by private contractors towards the category of 'instructions' under art. 8 ARSIWA. States are also subject to an obligation of due diligence. This means that they shall prevent and punish delinquencies
against foreigners and acts contrary to the rights of other states. See: C. de Stefano, Attribution in
International Law and Arbitration (OUP 2020), 65, 74–75. See also: Corfu Channel case (n 20), 22.
41 M. Sassòli, ‘State responsibility for violations of international humanitarian law’ [2002] IRRC, 84,
401, 406.
state is also responsible for the conduct of persons or entities acting under
that state’s instructions, direction, or control.42 This rule in armed conflict
refers, for instance, to armed groups supported by one state and fighting against
another state. The legal test for attribution can be found in the Nicaragua case,43
where the ICJ stated that it must prove that the state effectively controlled
military and paramilitary operations of an armed group. The test is not satis-
fied if the state only finances, organises, trains, arms, delivers equipment, plans
operations, and chooses targets. Consequently, delivery of AWS to an armed
group or choice of targets pre-programmed into the AWS by a delivering
state would not be enough to meet the attribution test. The state would have
to effectively instruct and direct or control the armed group that committed
LOAC violations.44 The scope of the delivering state's control over the AWS would contribute to its responsibility, provided that this control was in fact effective.45 The
category of instructions given to private entities or individuals who develop or
control AWS is particularly interesting. A state purchasing AWS from private
individuals would provide the AWS specifications, which can then be assessed
from the perspective of responsibility. In 2018, the Israeli defence contractor Aeronautics Ltd reportedly demonstrated the capabilities of its Orbiter 1K suicide drone by striking Armenian armed forces on Azerbaijan's behalf. Israel declared that it would charge the company's executives and several other employees involved. Moreover, Israel suspended the company's export licences.46
The test from the Nicaragua case was later developed by the ICJ in the Genocide
case,47 where the court stated that there has to be a relationship between the
conduct of the organ of a state or armed group factually acting as the organ of

42 Art. 8 of ARSIWA.
43 Nicaragua Case (n 17).
44 Nicaragua Case (n 17) 115-116. See also:A. Cassese,‘The Nicaragua and Tadić Tests Revisited in Light
of the ICJ Judgment on Genocide in Bosnia’ [2007] 18 EJIL, 4, 649, 660–661.
45 Interestingly, a different perspective was taken by the International Criminal Tribunal for the Former Yugoslavia in Prosecutor v. D. Tadić. For the attribution of the conduct of a group to a State, the Tribunal was satisfied with the overall control of a State over an armed group, irrespective of factual control over, or the demand or imposition of, the particular conduct. However, this conclusion was reached in the examination of the specific situation in the Former Yugoslavia, and not of the responsibility of a State for the conduct of an armed group in general. See: Tadić Case (Judgment) ICTY-94-1-A (15 July 1999), 120-121. See also The Application of the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia and Herzegovina v. Serbia and Montenegro) (Judgment) [2007] ICJ Rep., 43, at 400-404.
46 J. Magid,‘Israeli dronemaker said to have bombed Armenians for Azerbaijan faces charges’ (Times of
Israel, 29 August 2018) <https://www.timesofisrael.com/israeli-dronemaker-said-to-have-bombed
-armenians-for-azerbaijan-faces-charges/> accessed 27 May 2021; J. Ari Gross,‘Licences suspended
for dronemaker accused of bombing Armenia for Azerbaijan’ (Times of Israel, 27 January 2019)
<https://www.timesofisrael.com/licenses-suspended-for-dronemaker-accused-of-bombing-arme-
nia-for-azerbaijan/> accessed 27 May 2021.
47 The Genocide Case (n 2).
that state and the responsibility of the state.48 As a consequence, an act of an
individual is attributable to the state if the individual, in fact, acted in complete
dependence on the state.49 There is therefore a gap in the provisions on state responsibility where a state merely delivers AWS but exercises no effective control over those equipped with them. The borderline between non-effective and effective control over AWS ceded to an armed group would be difficult to prove. The control test adopted by the ICJ narrows state responsibility exclusively to the specific conduct of the individual over which the state exercised effective control, meaning that each instance of conduct requires a separate examination.50 The use of AWS by a non-state armed group would therefore
be assessed from the effective control perspective. Another challenge is related
to private military and security companies that would deliver or train armed
forces on how to use AWS. The argument for attributing the conduct of indi-
viduals who were instructed by the state to create or control AWS is that states
cannot avoid responsibility by actually delegating their functions to private
individuals.51 If contractors are functionally linked to a state, this state bears an
obligation to ensure respect for the LOAC by the contractors.52 Finally, pursu-
ant to art. 16 of ARSIWA, a state is responsible for aiding or assisting in the
unlawful conduct of another state. The attribution in this case arises if the state
is aware of the conduct and the aid or assistance is intended to facilitate the
conduct.53 This provision can be of considerable relevance to the transfer of AWS to another state or to AI-related surveillance sharing. The UN Register of Conventional Arms records transfers of weapon systems among states, including the category of combat aircraft and unmanned aerial vehicles.54 According to the UN, the Register captures over 90% of the global arms trade.55 Nonetheless, none of its categories refers to AI-weapon technologies
specifically.

48 The Genocide Case (n 2), 402–404.


49 The Genocide Case (n 2), 392. See also: P. Palchetti, 'De facto Organs of a State' [2017] MPEPIL, 10.
50 A. Kees,‘Responsibility of States for Private Actors’, [2011] MPEPIL 14.
51 P. Palchetti (n 49), 13.
52 The Montreux Document on pertinent international obligations and good practices for States
related to operations of private military and security companies during armed conflict (adopted 17
September 2008) (International Committee of the Red Cross,August 2009) <https://www.icrc.org/en/
doc/assets/files/other/icrc_002_0996.pdf> accessed 27 May 2021, 9.
53 N. Melzer, ‘Human rights implications of the usage of drones and unmanned robots in warfare,
Directorate-General for External Policies of the Union Study’ (European Parliament, May 2013)
<https://www.europarl.europa.eu/RegData/etudes/etudes/join/2013/410220/EXPO-DROI
_ET%282013%29410220_EN.pdf> accessed 27 May 2021, 38.
54 The reports on the category covering the U.S. show that a State heavily involved in the transfer of these technologies is Israel. Nonetheless, state reports are not always coherent and differ from one another.
55 Conventional arms transfer reports are available at the United Nations Register of Conventional
Arms visit the UNROCA website (https://www.unroca.org/).
The liability regime
The reasons for liability
Liability entails the obligation to pay compensation for damage caused by acts not prohibited in international law. It is established by one of three means: (1) the adoption of an international agreement introducing an obligation of compensation; (2) the emergence of a customary rule imposing an obligation to compensate; or (3) the acceptance of the progressive development of a rule of international law stipulating liability for risk.56 There are no treaties directly regulating liability for the effects of using any weapon systems; such cases are regulated domestically. The customary nature of liability is also controversial.57 Some authors suggest that liability is an emerging customary rule, but only in relation to a particular regime, for example transboundary harm.58
Since there are no international regulations concerning reparation and com-
pensation for the effects of hostilities that are not prohibited under interna-
tional law, states themselves enact relevant legislation concerning war torts.
The latter do not result from prohibited acts, but are harm- or damage-orient-
ed.59 NATO has provided an institutional background for victims of the effects
of NATO-led military operations. The United States also adopted legislation
concerning compensation for the effects of military operations. Whereas both
regulations cover harm or damage not directly resulting from the conduct of
hostilities, they deal with acts of negligence. This can have a potential effect on
the use of AWS, if the effects were not predicted and not limited to military
targets. Due to the changing and dynamic character of contemporary armed
conflicts, it is sometimes difficult to distinguish between the direct effects of
hostilities and the effects of other military operations.

The NATO model


Although NATO has been granted international legal personality, the respon-
sibility or liability of international organisations remains vague.60 In the context of AWS, responsibility may rest either with NATO or with the state contributing its military forces. In order to resolve this problem, art. 7 of the Draft Articles on the Responsibility of International Organisations proposes the test of 'effective control', under which it

56 J.M. Sorel, 'The Concept of «Soft Responsibility»', in J. Crawford, A. Pellet, S. Olleson (eds), The Law of International Responsibility (OUP 2010), 167.
57 M. Montjoie, 'The Concept of Liability in the Absence of an Internationally Wrongful Act', in J. Crawford, A. Pellet, S. Olleson (eds), The Law of International Responsibility (OUP 2010), 503–504.
58 Boyle argues for an emerging principle of 'polluter pays'. See: A. Boyle, 'Polluter Pays' [2009] MPEPIL, passim. See also: A. Duhan, 'Liability for Environmental Damage' [2019] MPEPIL, 8.
59 R. Crootof, ‘War Torts: Accountability for Autonomous Weapons’ [2016] U. Pa. L. Rev., 164, 1347,
1388-1389.
60 D.M. Grütters, ‘NATO, International Organisations and Functional Immunity’ [2016] International
Organisations Law Review, 13, 211, 212.
must be proved that the organisation exercised effective control over the con-
duct of an organ or an agent placed at the disposal of that organisation.61 This test could apply to the use of AWS in a NATO-led operation if, for example, a state delivers AWS that are then used and controlled by the NATO-led forces.
A case study of NATO involvement in Afghanistan illustrates the matter of
liability well.62 In order to avoid any implication of responsibility, the term 'compensation' was very often replaced with terms such as 'ex gratia', battle damage, honour, or solatium payments. Consequently, the International Security Assistance Force
(ISAF) in Afghanistan established the so-called Civilian Casualty Tracking Cell.
Its aim was to collect and maintain data on harm caused to civilians (but not to
their property). Due to the lack of transparency in military procedures leading
to the identification of the responsible unit, civilians were usually not aware
of the role of the Cell.63 Due to its ineffectiveness, the Cell was later replaced
by the Civilian Casualty Mitigation Team.64 The Team was responsible for the
coordination of subject-specific studies and recommendations to the chain of
command, as well as for the coordination of working groups that addressed
the establishment of guidelines and standard operating procedures concern-
ing civilian harm tracking processes.65 The working groups were composed of
ISAF members and of international and Afghan organisations. Additionally,
NATO states adopted a set of non-binding Guidelines on Monetary Payments
to Civilian Casualties in Afghanistan.66 Pursuant to the Guidelines, when deter-
mining the appropriate response to a particular civilian casualty, armed forces
should take into account local customs and norms, including potential ex gratia
payments. The Guidelines were useful in Afghanistan, but were not adopted in
later conflicts, e.g. in Libya.67

61 International Law Commission, Draft Articles on the Responsibility of International Organisations, UN Doc A/66/10 [2011] Yearbook of the International Law Commission, vol. II, Part Two.
62 Human Rights Watch, ‘Letter to NATO to Investigate Compensation for Civilian Casualties in
Afghanistan’ (Human Rights Watch, 2 April 2009) <https://www.hrw.org/news/2009/04/02/letter
-nato-investigate-compensation-civilian-casualties-afghanistan> accessed 27 May 2021.
63 Center for Civilians in Conflict, ‘Civilian Harm Tracking: Analysis of ISAF Efforts in Afghanistan’
(Civilians in Conflict, 2014) <https://civiliansinconflict.org/wp-content/uploads/2017/09/ISAF
_Civilian_Harm_Tracking.pdf> accessed 27 May 2021, 10-11.
64 See: S. Muhammedally,‘Minimizing civilian harm in populated areas: Lessons from examining ISAF
and AMISOM policies’ [2016] 98 IRRC, 1, 225, 232.
65 Center for Civilians in Conflict (n 63), 7.
66 NATO, ‘NATO Nations Approve Civilian Casualty Guidelines’ (NATO, 6 August 2010) <https://
www.nato.int/cps/en/SID-9D9D8832-42250361/natolive/official_texts_65114.htm> accessed 27
May 2021.
67 S. Hill, A. Manea,‘Protection of Civilians: A NATO Perspective’ [2018] 34 Utrecht Journal of Interna-
tional and European Law, 2, 146, 155.
The US practices addressing effects of hostilities
The US regulations can be found in the Foreign Claims Act (FCA),68 which
was originally adopted to improve relations between the US armed forces and
civilians abroad.69 It introduces compensatory claims of up to $50,000 (paid
in local currency). An individual can claim compensation for personal injury,
death, or property damage caused by the non-combat related activities of the
US armed forces. The FCA authorises the Secretary of Defence to appoint
ad hoc Foreign Claims Commissions to deal with the cases. The Commission
applies the local laws and customs of the country in which the claim arose or,
if the claimant resides elsewhere, the country of residence. The judgements of
the Commission are final.70
Although this system provides the possibility for compensation, it has some
disadvantages. First, the system is ad hoc, which means the Commissions oper-
ate on a non-permanent basis. Second, under § 2734(b)(3) of the FCA, the
US compensation model provides the so-called ‘combat exclusion’, meaning
compensation cannot be awarded for harm or damage that resulted directly or
indirectly from a combat situation or from immediate preparations for combat.
The definition of ‘combat’ in practical terms is a challenging issue as well, even
for the Foreign Claims Commissions.71 However, it is noteworthy that many
FCA-based claims in Iraq dealt with damages caused by a negligent discharge
of weapons,72 which is important in the context of AWS.
In addition to the FCA, there is an alternative source of compensation,
namely the ad hoc system of solatia payments and condolence payments under
the Commander’s Emergency Response Programme.73 These are solatia pay-
ments based on a pre-determined sum, for death, injury, or property damage
caused by the US armed forces during combat. An individual (direct victim or
the victim’s family) can apply. The payment is determined in accordance with

68 USA, Foreign Claims Act: 10 U.S. Code par. 2734. Property loss; personal injury or death: incident to non-
combat activities of the armed forces; foreign countries, 10.8.1956, <https://uscode.house.gov/statviewer
.htm?volume=70A&page=154> accessed 27 May 2021.
69 C.V. Daming, ‘When in Rome: Analyzing the Local Law and Custom Provision of the Foreign
Claims Acts’ [2012] Wash. U. J. L. & Pol’y, 39, 311.
70 USA, Foreign Claims Act: 10 U.S. Code par. 2735. Settlement: final and conclusive, 10 August 1956
(Office of the Law Revision Counsel) <https://www.govinfo.gov/app/details/USCODE-2011-title10
/USCODE-2011-title10-subtitleA-partIV-chap163-sec2735/context> accessed 27 May 2021.
71 J.Walerstein,‘Coping with combat claims: an analysis of the Foreign Claims Act’s combat exclusion’,
[2009] 11 Cardozo Journal of Conflict Resolution 1, 319, 345.
72 United States Government Accountability Office, ‘Report to Congressional Requesters: Mili-
tary Operations. The Department of Defense’s Use of Solatia and Condolence Payments in Iraq
and Afghanistan’, 27 May 2007, GAO-07-699 (US Government Accountability Office, 31 May 2007)
<https://www.gao.gov/assets/gao-07-699.pdf> accessed 27 May 2021, 50.
73 U.S. Army, The Judge Advocate General’s Legal Center & School, National Security Law Depart-
ment,‘Operational Law Handbook’, Charlottesville 2018 (Library of Congress, 2018) <https://www
.loc.gov/rr/frd/Military_Law/pdf/operational-law-handbook_2018.pdf> accessed 27 May 2021, at
243-234.
the local customs. Moreover, solatia payments and FCA-based claims are mutually exclusive. As the Operational Law Handbook of 2018 indicates, 'the indi-
vidual or unit involved in the damage has no legal obligation to pay’.74 This
type of compensation is only an expression of goodwill. The Commander’s
Emergency Response Program funds have been used to make condolence pay-
ments in Afghanistan and in Iraq as a recognition of loss.75 Unpredicted effects of using AWS in hostilities that do not amount to LOAC violations could be addressed under this regime. The term 'unpredicted' refers, for example, to a situation in which the AWS made a decision autonomously or in which the circumstances forming the basis for the AWS' decision changed.

Conclusions
The enforcement of the LOAC is mainly focused on individual criminal
responsibility. However, victims, whom the LOAC is supposed to protect, possess few effective measures addressing harm or damage caused by the use of AWS in armed conflicts. Although the LOAC provides for state responsibility through the obligation to pay compensation for some LOAC violations, that responsibility would be difficult to invoke in addressing the use of AWS, mainly due to the problem of attributing AWS' actions to the state.
The autonomous behaviour of AWS can be unpredictable and ultra-haz-
ardous at the same time. States' acceptance of this risk should be accompanied by the acceptance of liability for some acts not prohibited by international law. Compensation programmes for the effects of hostilities contribute to the general success of, and civilian support for, military operations. At the domestic level, such programmes, albeit not expressly addressing the use of AWS, have already been in place for more than 60 years. They lead to differentiated, and sometimes divergent, protection systems for victims, since they depend on the character of the conflict and the parties involved. Moreover, compensation programmes encounter significant challenges in the implementation of national laws, including the interpretation of the combat exclusion or of combat zones. All of this
further limits their application to AWS cases.

74 ibid 320.
75 Amsterdam International Law Clinic, ‘Monetary Payments for Civilian Harm in International and
National Practice’ (Nuhanovic Foundation, 2013) <http://www.nuhanovicfoundation.org/user/file
/2013_civic_report_on_monetary_payments.pdf> accessed 27 May 2021, at 13-15.
13 The problematisation of human
control over lethal autonomous
weapons
A case study of the US Department of Defence
Mikolaj Firlej

Introduction
The requirement of human control over the use of autonomous weapons sys-
tems (AWS) is a highly discussed topic, yet still elusive. Many policy advocates
and academics argue that AWS could potentially comply with the law of armed
conflict (LOAC) only if guided by human control, and this has been discussed
in the preceding chapters.1 They call for the requirement of human control to be recognised as on a par with other key principles of the LOAC, namely the principles of distinction, proportionality, humanity, and military necessity.
The US government has adopted Directive 3000.09 on AWS, which states
that autonomous weapon systems ‘shall be designed to allow commanders and
operators to exercise appropriate levels of human judgment over the use of force’.2
The US formulation of the role of humans over the use of AWS has gained
significant traction, and authors started to equate this concept with the require-
ment of ‘human control’.3 However, the US government did not support such
description and opposed any international effort to regulate AWS on the basis
of the concept of ‘human control’. This chapter explores in more depth the
difference between the concepts of ‘human judgement’ and ‘human control’
by studying the assumptions that underpin these seemingly similar policy con-
cepts. It also identifies key points that have been left out of US Department of
Defence (DoD) policy problematisation but are nonetheless important consid-
erations that can inform a critical assessment of the US DoD approach.

1 T. Chengeta, ‘Defining the emerging notion of “meaningful human control”’ (2016) NYU J. Int’l L.
& Pol., 839.
2 DoD, Directive 3000.09 Autonomy in Weapon Systems (2012).
<https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf> accessed
27 May 2021.
3 ICRC, ‘Ethics and autonomous weapon systems: An ethical basis for human control?’ (Report)
(2018), 2, 8.

DOI: 10.4324/9781003246503-15
The problem addressed by US DoD policy
The US policy reflected in Directive 3000.09 delineates three types of robotic
systems that get the ‘green light’ for approval in the policy.4 These are: (1)
semi-autonomous weapons, such as homing munitions; (2) defensive super-
vised autonomous weapons, such as the ship-based Aegis weapon system; and
(3) non-lethal, non-kinetic autonomous weapons, such as electronic warfare systems used to jam enemy radars.5 These three classes of AWS are in wide use today, and
the policy confirms that developers can use autonomy according to existing
practices.6 These weapons are subject to normal acquisition rules and do not
require any additional approval. However, any future weapons that would use
autonomy in a novel way outside those three types get a ‘yellow light’. Those
systems are subject to a lengthy review process, focusing primarily on tests and
evaluations, both before development and fielding.
A potential novel way of using autonomy that falls outside the specified
instances refers to any kind of autonomous weapon used for offensive purposes.7
While offensive AWS are not prohibited by Directive 3000.09, the additional
restrictions aim to mitigate risks associated with the potential development
of such weapons. The reason why these restrictions are in place is to ‘mini-
mise the probability and consequences of failures in autonomous and semi-
autonomous weapon systems that could lead to unintended engagement’.8 An
unintended engagement is defined as ‘the use of force resulting in damage to
persons or objects that human operators did not intend to be the targets of US
military operations’.9 The result could lead to ‘unacceptable levels of collateral
damage’ that go against the law of war or Rules of Engagement (ROE).10 The
US delegation to the UN explains, by way of example, that accidental attacks killing civilians or friendly forces would be considered 'unintended engagements' under DoD Directive 3000.09.11
Failures are defined as ‘an actual or perceived degradation or loss of intended
functionality or inability of the system to perform as intended or designed’.12

4 P. Scharre, Army of None (W.W. Norton & Company 2018), 89.


5 DoD (n 2); Scharre (n 4), 89.
6 Currently, the US military fields defensive AWS, such as the Aegis at sea and the Patriot on land, both designed to shield against missile attacks. These two systems, in addition to others, are meant to counter an incoming threat to US forces. See Lockheed Martin, Aegis
<https://www.lockheedmartin.com/en-us/products/aegis-combat-system.html> accessed
27 May 2021;
Patriot <https://www.army-technology.com/projects/patriot/> accessed 27 May 2021.
7 M. Cummings,‘The Human Role in Autonomous Weapon Design and Deployment’ (2014), 2.
8 DoD (n 2).
9 DoD (n 2) Glossary Part II Definitions.
10 ibid.
11 US Government, ‘Human-Machine Interaction in the Development, Deployment and Use of
Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ (Statement CCW/
GGE.2/2018/WP.4) (2018).
12 DoD (n 2) Glossary Part II Definitions.
Directive 3000.09 states that failures can result from various causes, e.g.
human–machine interaction failures or cyber-attacks. The Defense Science
Board (DSB) specifies that the major challenge for broader adoption of such
technology is an issue of trust.13 The report states that even if weapons’ manu-
facturers implement trustworthiness attributes such as high levels of compe-
tence, reliability, and integrity at the level of design, there is still a problem
with operational trustworthiness associated with the use of such weapons.14
The main challenge is that an AWS may have different sensors and data sources from those of its human teammates, and may therefore be operating on different contextual assumptions about the operational environment. Further,
because such systems are self-learning, their capabilities may vary across differ-
ent environments, and their ‘reasoning process’ may take a different path than
that of a human decision-maker.15
However, despite these dangers, Directive 3000.09 does not prohibit offen-
sive AWS. On the contrary, it states that if an offensive AWS met all the nec-
essary criteria, such as reliability under realistic conditions, then in principle it
could be authorised.16 Explicitly, Directive 3000.09 does not set any limits on the development and use of autonomy in weapon systems. The DoD has not yet approved any autonomous weapons that would fall outside the three types of robotic systems granted a 'green light'. Indeed, Frank Kendall,
former Pentagon acquisition chief, said: ‘We have not had anything that was
even remotely close to autonomously lethal’.17 If, however, the US military
administration is confronted with such a situation, Kendall said that his concern
would be ensuring that the weapon would comply with the laws of war and
that the weapon allowed for ‘appropriate human judgement’.18

‘… judgment can be implemented through the use of automation’


The term ‘appropriate human judgement’ is the cornerstone of US policy on
AWS, introduced early in the text in the following context: ‘It is DoD policy
that autonomous and semi-autonomous weapon systems shall be designed to allow commanders and
operators to exercise appropriate levels of human judgment over the use of
force’.19 The concept of ‘human judgement’ appears to be similar to the con-
cept of ‘human control’, which has been included in the widely discussed
report by the Campaign to Stop Killer Robots (Campaign), published just two

13 DSB, Summer Study on Autonomy (2016), 1.


14 ibid., 14.
15 ibid.
16 DoD (n 2); Scharre (n 4), 90.
17 F. Kendall, Interview (7 November 2016) in Scharre (n 4), 91.
18 ibid.
19 DoD (n 2).
days before Directive 3000.09.20 The Campaign has been known as a vocal sup-
porter of prohibiting AWS. The report states:

Humans should therefore retain control over the choice to use deadly
force. Eliminating human intervention in the choice to use deadly force
could increase civilian casualties in armed conflict.21

And then in the summary:

At present, military officials generally say that humans will retain some
level of supervision over decisions to use lethal force, but their statements
often leave open the possibility that robots could one day have the ability
to make such choices on their own power.22

Interestingly, the US government concept of 'appropriate levels of human judgement' over the use of AWS has sometimes been credited as the first
legal exemplification of the concept of ‘human control’.23 However, the US
delegation distances itself from such a description due to the use of the word
‘control’. They argue that human judgement ‘over the use of force’ is distinct
from ‘human control’ over the weapon. They provide the following example:

an operator might be able to exercise meaningful control over every aspect of a weapon system, but if the operator is only reflexively pressing a but-
ton to approve strikes recommended by the weapon system, the operator
would be exercising little, if any, judgment over the use of force. On the
other hand, judgment can be implemented through the use of automation.
For example, the extensive automation of functions in a weapon system
could allow the operator to exercise better judgment over the use of force
by removing the need to focus on basic tasks and to give him or her more
time to understand the broader situation. Similarly, the use of algorithms
or even autonomous functions that take control away from human opera-
tors can better effect human intentions and avoid accidents.24

The US delegation distances itself from the notion of human control because it feels that framing the debate on the use of AWS around 'control' is

20 The concept of human control with the adjective ‘meaningful’ has been formulated in more struc-
tured way later in 2013 by Article 36 and then adopted by HRW.Article 36,‘Killer Robots: UK Gov-
ernment Policy on Fully Autonomous Weapons’ (2013) <http://www.article36.org/wp-content/
uploads/2013/04/Policy_Paper1.pdf.> accessed 27 May 2021.
21 HRW and others, ‘Losing Humanity. The Case Against Killer Robots’ (Report) (April 2015) 978-
1-6231-32408, 37.
22 ibid., 1.
23 ICRC (n 3).
24 US Government (n 11).
too restrictive and may imply a so-called direct human control. Humans histori-
cally exercised ‘direct control’ over weapons because weapons have been seen
merely as a tool in the hands of fighters. In a sense, humans were ‘masters’ of
their weapons.25 The concept of ‘direct control’ of weapons by humans can be
traced to the 1949 Geneva Conventions and their 1977 Additional Protocols,
whose provisions invoke the idea that without human control or use, a weapon
is nothing but a mere tool.26 As an example, in armed conflict, participating
in hostilities is shown by the ‘bearing of arms’. Thus, persons ‘who have laid
down their arms’ are considered to be ‘taking no active part in the hostilities’.27
Such an interpretation of the LOAC would suggest that all types of weapons should be guided by 'direct control' to be fully compliant with the law.28 This view is reflected by the ICRC and HRW, who argue that, without direct human control, AWS would violate the principle of proportionality, among others.29
The US delegation does not agree with this argument, and they cite various
examples to support a broader understanding of human factors on the bat-
tlefield. One of the examples is the Automatic Ground Collision Avoidance
System developed by the US Air Force (USAF) that has helped prevent so-
called ‘controlled flight into terrain’ accidents. The system assumes control
of the aircraft when an imminent collision with the ground is detected, and
returns control back to the pilot when the collision is averted.30 Therefore,
they prefer to place emphasis on the human commander or operator and their
capacity to judge the likely effect of using AWS, rather than on the notion of
‘control’, which is often limited and interpreted too rigidly.31
Let us highlight this distinction between human control and human judge-
ment in the most radical context, that is, in the context of offensive AWS,
weapons that fall outside the three types of robotic systems that get the 'green light' for approval in Directive 3000.09. According to the requirement of human control, humans must always retain control over life and death decisions; delegating lethal authority to a machine to make its own decisions is thus not permissible. That contrasts with the requirement of human judgement, under which the development of such weapons, while subject to additional scrutiny, is in principle allowed. It is worth emphasising how such a subtle semantic difference plays a transformative role. By using the word
‘judgement’, Directive 3000.09 steers the focus from direct control at the level
of engagement and targeting to the design requirement of weapons that allows

25 Chengeta (n 1) 839.
26 Chengeta (n 1) 840.
27 Art. 3, Geneva Convention Relative to the Treatment of Prisoners of War (Third Geneva Conven-
tion), 12 August 1949, 75 UNTS 135.
28 Chengeta (n 1), 840.
29 HRW, ‘Shaking the Foundations. The Human Rights Implications of Killer Robots’ (Report)
(2014), 12.
30 US Government (n 11).
31 ibid.
humans to make informed decisions about the potential deployment of weap-
ons. The appreciation of design requirements generates two positive obligations: (1) that humans deploying the systems must understand how the weapons operate in realistic environments, so that they can make informed decisions regarding their use; and (2) that, to satisfy this obligation, AWS require adequate levels of operational testing, verification, validation, and evaluation. The design-oriented requirement does not, however, generate an obligation not to delegate lethal authority to a machine to make its own decisions, as long as the two positive obligations are satisfied. By contrast, the requirement of human control explicitly states that 'humans should retain control over the choice to use deadly force'.
To restate, both policies – the concept of human control and human judge-
ment – recognise the challenges posed by lethal autonomy, yet they come with different propositions. Both approaches affirm that the development of robotic weapons has arrived at the point where militaries are able to delegate
lethal authority to machines, and that this represents a novel challenge for
policy-makers. Yet one approach argues for the prohibition of weapons that
make their own targeting decisions, while the second approach leaves the door
open for the potential development of such weapons. Does this mean that both the Campaign and the DoD identified the same problem, and that only the remedies
differ? In order to investigate this further, one has to consider what presupposi-
tions underlie both representations of the ‘problem’.

The assumptions behind the US DoD policy on AWS


The term 'presuppositions' refers to background 'knowledge' – assumptions that are usually taken for granted and not questioned. It is important to note that this analy-
sis does not attempt to elicit the assumptions or beliefs held by policy-makers
or identify their biases. Rather, the goal is to uncover ‘deep-seated presupposi-
tions that lodge within problem representations'.32 In order to do so, one sometimes has to go deeper than the level of public discourse and also investigate
the opinions of key architects of Directive 3000.09.
It can be argued that there are three core assumptions that underpin the US
DoD’s construction of AWS as a policy problem. First, the DoD representa-
tives do not specify any hard limits of the weapons’ autonomy development.
They justify this approach by referring to the principle of equivalent retalia-
tion – the US government does not want to be in the position where their
adversaries develop offensive AWS superior to the US weaponry. Second,
DoD authorities set the potential development limits of AWS, if ever, very
low relative to other organisations, specifically the Campaign. The potential
development bar is artificial general intelligence (AGI). Third, the DoD does

32 C. Bacchi, Analysing policy:What’s the problem represented to be? (Pearson 2009), 5.


not consider offensive AWS to be weapons that necessarily pose excessive risks, but instead argues that any risks can be mitigated by a 'technical solution'.

Balancing operational safety with asymmetric combat advantage


Directive 3000.09 does not explain why the US DoD leaves the door open to
build and deploy offensive AWS. This information is, however, present in other
governmental documents. The most recent National Security Strategy was the
first in history to specifically emphasise the importance of AI and autonomous
technologies for the future of the American military.33 Accordingly, the DoD
National Defense Strategy committed to ‘invest broadly in the military applica-
tion of autonomy, AI, and machine learning (…) to gain competitive military
advantages’.34 In the FY2019 Budget Request, AI and autonomous systems are
designated as administration R&D priorities.35 AWS specifically are considered
as a vital element of the DoD’s ‘Third Offset Strategy’.36 The objective of the
Third Offset Strategy is to ensure a continued asymmetric combat advantage for
the US. In recent years, the US’s competitors have developed innovative tech-
nologies that may undermine US security interests.37 Consider, for instance,
collaborative autonomy technologies such as swarms. Collaborative autonomy
is a type of machine–machine teaming whereby multiple systems are capable
of coordinating their actions to achieve a common objective.38 A 'swarm' is one such form of teaming, comprising a group of small, homogeneous aerial systems that operate collectively as a coherent unit. Research and development
projects in China have already shown that it is possible to develop workable
control algorithms for various types of swarm missions.39
While Directive 3000.09 (2012) was announced before the Third Offset
Strategy (2014), it nonetheless provides useful insights into the DoD’s early
thinking about the use of autonomy in weapon systems. The Directive’s word-
ing has been chosen carefully to balance additional safety considerations for increased autonomy in weapon systems against the absence of rigid restrictions on further development work on military autonomy, reflecting the objectives later set out in the Third Offset Strategy. An example is the cornerstone of the policy: the guidance on appropriate levels of human judgement over the use of AWS. The word 'guidance' may incline readers to think that

33 The White House, National Security Strategy (2017).


34 DoD, National Defense Strategy (2018), 7.
35 The White House,An American Budget (2019).
36 DoD,The Defense Innovation Initiative (Memorandum) (2014)
See <https://archive.defense.gov/pubs/OSD013411-14.pdf> accessed 27 May 2021.
37 ibid 8.
38 V Boulanin and M Verbruggen, ‘Mapping the Development of Autonomy in Weapon System’
(Report) (2017), 30.
39 E. Feng, Ch. Clover,‘Drone swarms vs conventional arms: China’s military debate’ (4th August 2017)
Financial Times.
Directive 3000.09 does not establish a new legal obligation for the US military
administration. Indeed, according to the US Administrative Procedure Act
(APA), policy statements are considered as ‘non-legislative rules’, which means
that they fall within the definition of ‘rules’ but are not required to be prom-
ulgated through legislative rulemaking procedures.40 Thus, policy statements, like interpretative rules, do not have the force of law. What differentiates interpretative rules from policy statements is that the former are issued to clarify or explain existing laws, while the latter are issued to 'advise the public of the manner in which the agency proposes to exercise a discretionary power'.41 Legislative rules, in contrast to interpretative rules and policy statements, have the 'force and effect of law' and may be promulgated only if they have gone through a public notice and comment procedure – a process in
which the public is given an opportunity to comment on a proposed version
of the rule and the agency responds to the comments.42 However, rules that
involve ‘military functions’ are exempt from the APA43 notice and comment
procedure, which sometimes makes it difficult to determine whether a par-
ticular rule constitutes a new law, clarifies an existing law or is merely advice
about the exercise of an agency’s discretionary power. Courts in the US focus
on the particular language used in the document when making this determina-
tion. For instance, mandatory language delineating an agency’s obligations in
a policy statement can serve as strong evidence of intent to bind the agency
itself.44 Agency statements ‘couched in terms of command’ may be read to
eliminate agency discretion when applying a policy, transforming the state-
ment into a legislative rule.45 Following this approach, if a ‘so-called policy
statement is in purpose or likely effect one that narrowly limits administrative
discretion, it will be taken for what it is—a binding rule of substantive law’.46
Interestingly, Directive 3000.09 does contain some mandatory language. For
instance, it requires offensive AWS to go through a detailed review process
before development and fielding.47 However, it is uncertain how to interpret
the main issue at stake, which is the concept of human judgement. Directive
3000.09 states that AWS ‘shall be designed (…) to exercise appropriate levels of
human judgment over the use of force’.48 Although various parts of the Code
of Federal Regulations (CFRs) that govern federal departments use the word

40 5 U.S.C. § 553(b)(A).
41 Attorney General’s Manual on the Administrative Procedure Act (1947), 30.
42 5 U.S.C. §553(b), (c).
43 ibid (a) (1).
44 J. Cole, T. Garvey (Congressional Research Service), ‘General Policy Statements: Legal Overview’
(2016), 9.
45 American Bus Ass'n v. United States 627 F.2d 531 (D.C. Cir. 1980).
46 Guardian Federal Savings & Loan Ass'n v. Federal Savings & Loan Insurance Corp. 589 F.2d 658,
666 (D.C. Cir. 1978)
47 DoD (n 2), 4d.
48 ibid., 4a.
‘shall’ to establish mandatory requirements, the US Supreme Court held that
‘shall’ could also mean ‘may’.49 Thus, the wording of Directive 3000.09 does
not clearly indicate whether the requirement of human judgement over the use of AWS is a legislative rule or a soft policy intent that leaves the US military departments with a wide degree of discretion. Given that the topic of AWS is still a nascent area of research and it is uncertain how to regulate such weapons, the ambiguity may well be purposeful and strategic. This is because the policy of human judgement seeks to balance at times competing interests: ensuring the operational safety of weapons about which we still know relatively little, while not restricting their potential use for combat advantage. The policy of human control, on the other hand, leaves the military competition among countries and the desire to achieve asymmetric combat advantage out of its problem representation.

The potential development limits of AWS


Directive 3000.09 does not stipulate any specific ‘red-lines’ for AWS develop-
ment. However, Robert Work, former Deputy Secretary of Defence, argued
that the red line of AI development in the US military could lie in the develop-
ment of artificial general intelligence. ‘The danger is if you get a general AI
system and it can rewrite its own code [MF emphasised] (…) We don’t see ever
putting that much AI power into any given weapon’.50 That said, Work rec-
ognised that when other countries start to use general AI on the battlefield,
the US military may need to rethink its approach. ‘The only way that we
would go down that path is if turns out our adversaries do and it turns out
that we are at an operational disadvantage (…)’.51 Work’s opinion is not iso-
lated among high-ranking DoD officials. Dana Deasy, DoD Chief Information
Officer, commenting on the new DoD AI strategy, said that the most central
theme throughout the document is ‘the need to increase the speed and agil-
ity, which is how we will deliver AI capabilities across every DoD mission [MF
emphasised]’.52
DoD representatives’ words help us to emphasise the difference between the
assumptions regarding the development of AWS in two policies – one favour-
ing the notion of human control and the other arguing for the requirement
of human judgement. According to the policy of human control, the real prob-
lem is the mere phenomenon of offensive AWS, not AGI lethal autonomy.
That is a stark difference. Offensive AWS, as discussed earlier, is a phenomenon

49 Gutierrez de Martinez v. Lamagno, 515 U.S. 417, 434 n.9 (1995) (‘Though ”shall” generally means
“must,” legal writers sometimes use, or misuse,“shall” to mean “should,”“will,” or even “may.”’)
50 R.Work, Interview (22 June 2016) in Scharre (n 4), 98.
51 ibid 99.
52 Fedscop, ‘Pentagon unveils strategy for military adoption of artificial intelligence’ <https://www
.fedscoop.com/artificial-intelligence-pentagon-military-unclassified/> accessed 27 May 2021.
of delegating an autonomous machine to make an offensive, lethal decision.
This is an existing phenomenon; in other words, there are already existing
weapons that are capable of making their own lethal decisions. This research
does not refer here to the three types of weapons that receive the ‘green light’
from Directive 3000.09, i.e. semi-autonomous weapons, defensive supervised
autonomous weapons, and non-lethal, non-kinetic autonomous weapons –
they are all in use today. It refers to weapons that fall beyond the three categories
and use autonomy capabilities in a novel way – weapons that have been occa-
sionally in use before and are in potential use today. Examples include advanced loitering munitions such as the Israeli Harpy, where no human approves the specific target before engagement.53 The Harpy weapon is in the arsenal of various
countries today, including China, India, South Korea, Chile, and Turkey. It
is also reported that the Chinese have reverse engineered their own variant.54
Harpy loitering munitions were used a number of times in 2018 and 2019 by
the Israel Defence Forces to destroy Syrian Pantsir-S1 SAM batteries. The US
DoD currently owns a miniature, high-precision loitering munition called the
AeroVironment Switchblade.55 The Switchblade was used by the US Army
in Afghanistan to target ‘high value targets’, such as insurgent leaders, mortar
teams, or insurgents travelling in vehicles.56 The Switchblade still keeps humans in the loop via a functioning radio link used to approve targets before engagement, making it a semi-autonomous weapon, but it could potentially be deployed without direct human intervention. A more contested example of delegating lethal decision-making to a machine is the Long-Range
Anti-Ship Missile AGM-158C (LRASM). The LRASM is a stealthy anti-ship
cruise missile released by Lockheed Martin and based on the Joint Air-to-
Surface Standoff Missile (AGM-158B JASSM-ER) that incorporates a multi-
mode radio frequency sensor, a new weapons datalink and altimeter, and an
uprated power system.57 These enable the LRASM to have a significant degree
of independence from a human operator. The LRASM is initially directed by
pilots, but then halfway to its destination, it severs communication with its
operators. The weapon itself decides which of the selected targets to attack
among the given pool.58 It is thus capable of selecting its own target based
on processing information and a continuous stream of data. The LRASM
is already available to the US military, and in February 2020 the US State

53 IAI, ‘Harpy’ <https://www.iai.co.il/p/harpy> accessed 27 May 2021.


54 Scharre (n 4), 47.
55 AeroVironment, ‘Switchblade’ <https://www.avinc.com/uas/adc/switchblade/> accessed 27 May
2021.
56 S. Atlamazoglou, ‘This new technology may be the future of close air support’ (2019) <https://sofrep.com/news/this-new-technology-may-be-the-future-of-close-air-support/> accessed 27 May 2021.
57 Lockheed Martin, ‘LRASM Press Release’ <https://www.lockheedmartin.com/en-us/products/
long-range-anti-ship-missile.html> accessed 27 May 2021.
58 ibid.
Department approved a possible sale of LRASM to Australia.59 Both the Israeli
Harpy and the LRASM are based on advanced autonomy capabilities; however,
they are both far from what is considered AGI.
AGI is a different concept from the currently fielded AWS. While the term
AGI has no accepted definition, there are several widely used characteristics.
First, AGI refers to the potential future technology which may greatly exceed
human capabilities in all dimensions. Second, AGI is set to display intelligence
that is not tied to a highly specific set of tasks. In other words, it generalises
what it has learned, including generalisation to contexts qualitatively different
than those it has experienced before, or generally interprets its tasks in the
context of the world at large.60 Such an engineering system is not yet technically
available, and its eventual arrival is not universally accepted by the wider AI community.61
Regardless of the practical feasibility of achieving AGI, the relationship
between existing AI developments in weapon systems and AGI sheds light on
the different problem representations in two camps favouring either human
control or human judgement. The DoD authorities set far fewer limits on the
potential development of AWS than the Campaign would. Subject to regulatory
precautions, they are in principle open to delegating to machines the making of
offensive, lethal decisions. This is directly confirmed by the US delegation
during the UN GGE meetings:

DoD Directive 3000.09’s requirements that weapons be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force reflect a deliberate decision to permit weapons that are programmed to make ‘decisions’ that relate to targeting.62

Moreover, some of the key architects of DoD Directive 3000.09, such as Work,
go even further, arguing that if adversaries start to develop AGI, the
US will be at an operational disadvantage and the DoD will need to rethink its
position.63 This means that the architects of Directive 3000.09 leave open the
possibility of developing and using AGI if there is ‘an urgent military need’.
This is not to say that the DoD’s goal is to develop and deploy military
AGI. None of the official strategies evokes this concept, and even among the
architects of Directive 3000.09 there is a degree of scepticism towards such

59 Navy Recognition, ‘US approves a sale to Australia for 200 AGM-158C Long Range Anti-Ship
Missiles LRASM’ <http://www.navyrecognition.com/index.php/news/defence-news/2020/feb-
ruary/8029-us-approves-a-sale-to-australia-for-200-agm-158c-long-range-anti-ship-missiles-lrasm
.html> accessed 27 May 2021.
60 See e.g. D. Lewis, G. Blum, N. Modirzadeh, ‘War-Algorithm Accountability’ (Report) (2016), 18.
61 J. Searle, ‘Minds, Brains and Programs’ [1980] 3 Behavioral and Brain Sciences 417–424; S. Beckers, ‘AAAI: An Argument Against Artificial Intelligence’ in V. Müller (ed), Philosophy and Theory of Artificial Intelligence 2017 (Springer 2018), 235–247.
62 US Government (n 11).
63 R. Work, Interview (22 June 2016) in Scharre (n 4), 99.
advancements.64 However, it seems that the DoD leaves the door open for fur-
ther significant improvements of already existing AI applications, including in
the targeting and engagement phase, and that the global competition between
countries further accelerates this process, which does not currently have any
hard red-lines.

Weapons rendering excessive risk


The DoD’s third key assumption behind its policy of human judgement is
that the technical analysis of risk should be at the centre of the debate about
the potential restrictions of AWS. The potential legal challenges are subjugated
to pure technical assessment. Specifically, two issues stand out when discussing
the risk factor of AWS: (i) whether AWS can meet the LOAC principle of
distinction and (ii) whether AWS creates a ‘responsibility gap’.
The LOAC principle of distinction prohibits weapons that cannot be
directed at a specific military objective, or whose effects cannot be limited as required
by international law, where the result in either case is that the weapon is of a
nature to strike targets without distinction.65 According to Sharkey,
existing AWS lack three of the technology components required to distinguish
between legitimate and illegitimate targets. These requirements are: (1) ade-
quate sensory processing systems for distinguishing between combatants and
civilians; (2) programming language to define a non-combatant or person hors
de combat; and (3) battlefield awareness or common-sense reasoning to assist in
discrimination decisions.66 Therefore, existing AWS are not able to conform
with the principle of distinction in a complex conflict situation where they
will face the choice of identifying and engaging with legitimate targets hidden
among the civilian population.
US military experts point out that a great many existing weapons, not just the
Harpy, would not satisfy Sharkey’s conditions and thus would need to be
removed from modern arsenals.67 The DoD agrees that all weapons should
comply with the principles of LOAC, including the principle of distinc-
tion, but they argue that existing AWS are not ‘inherently indiscriminate’ and
in fact they often enhance compliance with the law rather than violate it.68
For the DoD, it is persons who must comply with the LOAC by employing
weapons in a discriminate and proportionate manner. For instance, even if the
weapon autonomously selects and engages targets, ‘its use would be precluded
when expected to result in incidental harm to civilians or civilian objects that

64 R. Work, Interview (22 June 2016) in Scharre (n 4), 99.


65 The principle of distinction arises from international customary law. It is also codified in art. 48 of
the Protocol I, with supplementary rules in art. 51 and art. 52. See IHL Database <http://tinyurl
.com/IHLdata> accessed 27 May 2021.
66 N. Sharkey, ‘The evitability of autonomous warfare’ (2012) 94 IRRC, 788–789.
67 Scharre (n 4).
68 US Government, ‘Autonomy in Weapon Systems’ (Report CCW/GGE.1/2017/WP.6) (2017).
is excessive in relation to the concrete and direct military advantage expected to
be gained’.69 Thus, the DoD delineates AWS from ‘inherently indiscriminate’
weapons such as mines or cluster munitions. This means that AWS should not
be automatically rendered unlawful. One should not confuse the prohibition
on weapons that are indiscriminate – because they cannot be aimed at a lawful
target – with the prohibition on the use of discriminate weapons in an indis-
criminate fashion. In fact, existing AWS by their very nature are designed to
be discriminating: they can only attack specifically designated targets that meet
set criteria determined by an algorithm, drawn from among pre-defined
targets.
There are already instances of AWS being used without violating LOAC principles.
There are situations in which AWS could satisfy this rule even with a considerably
low level of ability to distinguish between civilian and military targets.70
First, AWS could be used in operations in well-defined circumstances ‘without placing civilians at
excessive risk’. An example of such a situation is a battlefield where combatants
are strictly separated from civilians and occupy only a specific territory. Second,
existing AWS might target only enemy weapons, as opposed to the individuals
operating them, unless those individuals pose a direct threat. Lastly, such
weapons might also operate where no civilians are present. In such a scenario,
AWS would only select and attack other hostile machines. Drawing on the above
considerations, there are at least three potential sets of circumstances under
which existing AWS might be able to comply with the principle of distinction.
There are two important points in the DoD argumentation. The first is that
existing AWS are not ‘inherently indiscriminate’. The second is that, while emerging
AWS may today pose certain challenges related to the technical ability to
distinguish civilians from combatants, further improvements in the technology will ‘fix’
these challenges. This idea of ‘technology fixing’ is echoed by the statement of
the US representatives during the GGE meeting in Geneva:

Emerging technologies are difficult to regulate because technologies continue to change as scientists and engineers develop advancements. A best practice today might not be a best practice in the near future. Similarly, a weapon system that, if built today, would risk creating indiscriminate effects, might, if built with future technologies, prove more discriminating than existing alternatives by reducing the risk of civilian casualties.71

Interestingly, the Campaign’s view about the future ‘technology fixing’ of
existing weapons’ deficiencies is less clear. In the Losing Humanity report, the

69 ibid.
70 J. Thurnher, ‘The Law that Applies to Autonomous Weapon Systems’ (2013) 17 ASILI; M. Schmitt, J. Thurnher, ‘“Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict’ (2013) HNSJ, 243.
71 US Government (n 69).
Campaign authors argue in one place that AGI, ‘which would try to mimic
human thought’, is a potential option to promote autonomous weapons’ com-
pliance with LOAC.72 Later in the report, however, they seem to cast doubt
on whether technology can indeed truly emulate human action:

Even if the development of fully autonomous weapons with humanlike cognition became feasible, they would lack certain human qualities, such as emotion, compassion, and the ability to understand humans. As a result, the widespread adoption of such weapons would still raise troubling legal concerns and pose other threats to civilians.73

However, in order to move beyond the debate about the technical feasibility of
designing a weapon to comply with LOAC, the Campaign turned to another
important problem – the issue of responsibility for a weapon’s wrongdoing.

The problem of a responsibility gap again shifts attention towards risk analysis
There may be situations where no one is held responsible for a
machine’s wrongdoings. This is the so-called ‘responsibility gap’.74 Some
argue that the more autonomous weapon systems become, the less it will
be possible properly to hold those who designed them or ordered their use
responsible for their actions.75 At the same time, the impossibility of punishing the artificial
agent means that one cannot hold the machine itself responsible.76 The responsibility
gap arises when, in the execution of a targeting decision, an AWS has
done something that the operator did not directly programme it to do,
and thus the operator cannot be blamed for misapplications of force.77 On the
one hand, this element of unpredictability often guarantees a machine’s flex-
ible adjustment to a dynamic environment. Programmers purposefully design
these systems to allow them to respond to changes in real-time, rather than to
anticipate every possible eventuality that may arise. On the other hand, this
element of unpredictability appears to be particularly problematic in the appli-
cation of force to a target.78 The problem is that AWS deliberately distance
operators from the enforcement of targeting decisions. Operators appear in the
first stages of the causal chain that leads to the application of force to a target
but not the final stage.

72 HRW and others (n 21).


73 ibid.
74 ibid., 42.
75 ibid.
76 ibid.
77 A. Leveringhaus, Ethics and Autonomous Weapons (Palgrave Macmillan 2016), 79.
78 ibid.
The problem of the responsibility gap can be framed as a legal problem.79 It
can be argued that existing mechanisms for legal responsibility are ill-suited and
inadequate to fully address the unlawful harms that AWS might cause and that as
a result humans involved with the production or use of AWS would ‘escape lia-
bility for the suffering caused by such weapons’.80 DoD representatives engaged
with the legal argument, but they narrowed it down to purely technical con-
siderations regarding risk analysis. This process occurred in the following steps.
The DoD’s initial strategy was to downplay the relevance of the ‘responsi-
bility gap’. They argued that the ‘responsibility gap’ does not occur because,
in principle, LOAC deals primarily with states, not individuals. Unlike indi-
vidual criminal responsibility, state responsibility is not based on the concept of
personal culpability, but on the attribution to the state of the (mis)conduct of
its organs or agents.81 Individual criminal responsibility can only be attributed
under the LOAC to the most serious breaches of the law, such as war crimes,
crimes against humanity, aggression, and genocide.82 The Campaign argued,
however, that AWS could be used to commit such crimes, and
thus the responsibility gap appears.83 According to the concept of individual
criminal responsibility, criminal offences are caused either intentionally or by
negligence. When an artificial agent is intentionally directed to harm, or harm
is caused by negligence, the human operator is criminally liable. When
the operator’s intent (mens rea) cannot be established, the
responsibility gap arises.84
The DoD disagrees and claims that in the entire chain of command there is
always a human element in place that can be found either at the programming
level or at the operating level.85 Thus, the programmer may be held
responsible if he or she programmed the AWS in a way that intentionally
breached the law of armed conflict, or the operator may be responsible if he
or she decided to operate AWS in an unlawful manner.86 In situations where

79 In the debate about AWS the words ‘responsibility’ and ‘accountability’ are often used interchange-
ably. Both the UN GGE and the Campaign refer to the term ‘accountability’ as an umbrella term
to describe various forms of legal responsibility, i.e. state responsibility, administrative proceedings
undertaken in response to violations of IHL, civil liability, and individual criminal responsibility. For
the purpose of this article, I refer to the broad notion of ‘responsibility gap’, but I discuss both state
and individual responsibility. See: Campaign, ‘Mind the Gap: The Lack of Accountability for Killer
Robots’ (Report) (2015); UN GGE, ‘Report of the 2018 session of the Group of Governmental
Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ (Report
CCW/GGE.1/2018/3) (2018).
80 Campaign (n 79).
81 US Government (n 69); See also: R. Arnold, ‘Legal Challenges Posed by LAWS: Criminal Liability
for Breaches of IHL by (the Use of) LAWS’ in R Geiß, Lethal Autonomous Weapon Systems (2016), 10.
82 Arnold (n 81), 10.
83 Campaign (n 79); UN GGE (n 80).
84 Campaign (n 79).
85 US Government (n 69).
86 Arnold (n 81), 10.
neither the programmer nor the commander can initially be identified, the impossibility
of ascribing criminal responsibility to a person is not caused by the fact
that the harmful conduct was committed by AWS; rather, the problem is
the impossibility of collecting evidence allowing for the proper identification of the
relevant human element responsible for the machine’s wrongdoing.87
Thus, the discussion about responsibility started as a legal discourse but then
developed into a debate about reasonable risk thresholds and risk standards.
The risk analysis, in turn, is largely dominated by technical knowledge of the
current and potential future performance of AWS and their testing and training
specifications. Here again, as with the legal problem of distinction, the DoD
and their experts shift the focus from legal and moral arguments towards almost
entirely technical considerations. ‘The standard of care or regard that is due in
conducting military operations with regard to the protection of civilians is a com-
plex question to which the law of war does not provide a simple answer’,88 we
read. The DoD further states that this standard must be assessed based on the gen-
eral practice of states and common standards of the military profession in conduct-
ing operations. In particular, the best measures to promote ‘accountability’ are:

training on the weapon system and rigorous testing of the weapon system
can help commanders be advised of the likely effects of employing the
weapon system.89

Taking the above into account, while the DoD recognises the increased risks
associated with the current use of autonomy, it does not necessarily consider
AWS as weapons that inherently render excessive risks. The potential limit on
military development is AGI, although not without reservation. That said,
a significant leap of technical progress is required to close the gap
between existing weapons utilising narrow AI and weaponised AGI. This means
that the US military, subject to all precautions, leaves itself open
to further advancements of AWS, fuelled by international competition that
only accelerates in pace. The DoD adopts a qualitatively different view of the
legal problems of discrimination and responsibility relative to the Campaign.
It recasts the legal ramifications of these problems as technical issues and builds
confidence in its view on the basis of its technological prowess and relevant
testing and monitoring capabilities. Thus, the potential novel legal and ethical
challenges for human–machine controls arising from AWS are predominately,
if not exclusively, subjugated to technical expertise and military ‘know-how’.

87 ibid.
88 US Government (n 69).
89 ibid.
90 See Congressional Research Service, ‘Renewed Great Power Competition: Implications for
Defence—Issues for Congress’ (Report) (2020).
What has been left out of the US DoD
problem representation?
In this part, it is argued that by shifting from ‘human control’ to ‘human judgement’,
the policy challenges the notion of an active human role in the operation
of AWS. The response to the problem of increasing risk associated with the use of
AWS is ‘more autonomy, less human’, but this assumption has been largely left
out of the problem representation discourse, hidden behind the vague notion
of ‘human judgement’. Further, Directive 3000.09 exempts the cyber domain
from its scope and thus creates a legal and policy vacuum when it comes to
the potential development and use of offensive autonomous cyber weapons – a
threat arguably more persistent than the still incidental uses of kinetic AWS. Finally,
the problem formulated by the DoD is framed primarily as technical in nature.
While the US policy recognises specific challenges associated with the develop-
ment and use of AWS, it leaves unproblematic considerations that such weap-
ons, even if they meet all necessary technical reviews, may still be considered as
unacceptable solely on an ethical basis. It is further argued that these three argu-
ments may inform an alternative problem and policy representation of AWS.

Increasing importance of control by design and the silent acceptance of the removal of ‘human control’
This section identifies three important, although not widely acknowledged,
transformations that appeared between 2005 and 2011 and culminated in
DoD Directive 3000.09. All of them led to the incremental replacement of
the notion of human control with the concept of human judgement.
First, while DoD strategies since 2005 emphasised the role of human factors,
over time one can observe in the military documents the decreasing impor-
tance of direct human control and increasing relevance of indirect control,
particularly at the stage of design. The Air Force 2009 Flight Plan identifies
the shifting role of humans from active operators to monitoring agents. The
Plan explains:

Increasingly humans will no longer be ‘in the loop’ but rather ‘on the
loop’ – monitoring the execution of certain decisions. Simultaneously,
advances in AI will enable systems to make combat decisions and act
within legal and policy constraints without necessarily requiring human
input.91

We can see how the description of AI-based systems prepares the setting for the
removal of direct human engagement. The Plan also emphasises the changing

91 US Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (Report) (2009) <https://fas.org
/irp/program/collect/uas_2009.pdf> accessed 27 May 2021.
role of human operators moving from being in the loop to being on the loop.
Their role is to ‘monitor the execution of operations’ and retain ‘the ability
to override the system during the mission’.92 In the same Plan we read about
the embedding of human qualities within machines at the level of design: ‘the
systems programming will be based on human intent’.93
The second important transformation was the removal of reference to direct
human control after DoD Directive 3000.09. In 2009 the USAF asked for
policy to guide the development of future weapons capabilities, including fully
autonomous systems. The office of Defence Policy started to receive questions
about legal and ethical issues associated with the use of lethal autonomy, while
different military branches responded without coordination. The US Army
initially claimed that it ‘will never delegate use-of-force decisions to a robot’.94
The USAF had a different perspective.95 After two years, in 2011, the DoD
responded with a temporary Unmanned Systems Roadmap stating:

Policy guidelines will especially be necessary for autonomous systems that involve the application of force (…) For the foreseeable future, decisions over the use of force and the choice of which individual targets to engage with lethal force will be retained under human control [MF emphasised] in unmanned systems.96

In the same document, however, we read that the DoD envisions unmanned
systems to operate with manned systems ‘while gradually reducing the degree of
human control [MF emphasised] and decision making required for the unmanned
portion of the force structure’.97 In DoD Directive 3000.09 from 2012 the
notion of human control is replaced by ‘human judgement’, signalling a wider
shift towards the controls at the level of design relative to direct human con-
trol. In the Unmanned Systems Roadmap from 2013 the notion of human
control is used in the following way:

research and development in automation are advancing from a state of automatic systems requiring human control toward a state of autonomous systems able to make decisions and react without human interaction.98

92 ibid.
93 ibid.
94 P. Scharre, ‘Interview with Dan Saxon’. Qtd. in D. Saxon, ‘A human touch: autonomous weapons,
DoD Directive 3000.09 and the interpretation of ‘appropriate levels of human judgment over the
use of force’ in N. Bhuta and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (CUP
2016), 195.
95 US Air Force (n 92), 41, 51.
96 DoD, Unmanned Aircraft Systems Integrated Roadmap FY2011-2036 (Report) (2011).
97 ibid.
98 DoD, ‘Unmanned Aircraft Systems Integrated Roadmap FY2013-2038’ (Report) (2013), 15.
Military practitioners argue today that DoD Directive 3000.09 was transformational
because it was the first policy on AWS. The most significant change,
however, went unnoticed – the DoD removed the notion of ‘human control’
and, by including the phrase ‘appropriate levels of human judgement over the
use of force’, left the door open for the exercise of no direct human control
at all over the use of AWS, including in offensive settings.
The third aspect of changing human–machine controls between 2005 and
2012 is related to the introduction of ‘levels of autonomy’ by the DoD as a frame-
work to inform the interaction with human operators.99 In the 2005 Roadmap,
the DoD defined ten levels of autonomy ranging from remotely guided sys-
tems to fully autonomous.100 The 2009 Roadmap specifically asked opera-
tors and commanders to ‘retain the ability to refine [MF emphasised] the level
of autonomy’.101 And later: ‘The level of autonomy should be dynamically
adjusted [MF emphasised] based on workload and the perceived intent of the
operator’.102 This guidance attempted to aid the development process and the
use of weapons by grouping functions for generalised scenarios. These levels assume
a separation of duties between humans and machines: humans are expected to
respond to dynamic situations and shift to the right ‘mode of automation’
rather than co-actively interface with machines to achieve the best capabilities.
The concept of levels of autonomy has been criticised by practitioners
who argue that the framework only represents situations
where increased automation must come with less human control, which
creates an unnecessary trade-off and does not accurately represent the
complexity of human–machine interfaces embedded in AWS. The criticism
has also been levied inside the DoD. A few months after the introduction of
Directive 3000.09, which reaffirms the idea of levels of autonomy, the DSB
published a report that strongly criticised this concept. The DSB argued for
the replacement of ‘levels of autonomy’ with a framework that focuses on the
explicit allocation of cognitive functions and responsibilities between humans
and machines to achieve specific capabilities.103 However, despite these criti-
cisms, the concept of ‘levels of autonomy’ is still in use within the US military,
although the DSB report suggested a different approach.
To sum up, in the academic debate, many authors equate the concept of
human control with the notion of human judgement, although the DoD
distances itself from such claims. The DoD established Directive 3000.09 as
a response to the problem of the increasing risk associated with the use of
AWS, but grounded the response in the wider approach of ‘unmanning’ and

99 Saxon (n 94).
100 DoD, Unmanned Aircraft Systems Roadmap 2005-2030 (Report) (2005), 48.
101 US Air Force (n 92).
102 ibid.
103 DSB, ‘The Role of Autonomy in Weapon Systems’ (Report) (2012) <https://fas.org/irp/agency/
dod/dsb/autonomy.pdf> accessed 27 May 2021.
disavowing human control. Directive 3000.09 even leaves the door open for
the exercise of no human judgement at all. The current DoD conviction of
‘more autonomy, less human’ as a response to operational risks is particularly
problematic but has been largely left out of the problem representation. An
alternative way to see the problem is to appreciate the necessity of human factors
by arguing that the more advanced a weapon is, the more complex the control
system is, and thus paradoxically the contribution of human operators is more
crucial.104 This alternative approach could stem from the Human-Centred AI
framework, which calls for designing technologies that offer both high levels
of human control and high levels of computer automation.105 According
to this perspective, irrespective of the system’s level of autonomy, a human
should always have the possibility to override the machine’s action. Moreover,
humans should also conduct active oversight of weapon systems in order to
avoid ‘algorithmic hubris’, which equates to unreasonable expectations regard-
ing the machine’s performance.106 These are only the most common exam-
ples, but the key argument is that there is an alternative to the DoD problem
formulation of delegating lethal autonomy to machines that complicates the
seemingly ‘natural technology development’ towards more autonomy and less
human engagement.

The exclusion of cyber weapons and their complexities


Directive 3000.09 explicitly excludes autonomous or semi-autonomous sys-
tems for cyberspace operations.107 However, cyber weapons may actually rep-
resent an even greater threat than the isolated uses of kinetic AWS because they
could generate malicious effects across the internet.108 The most significant
challenge comes from autonomous offensive cyber weapons, which have been
in use since the Stuxnet operation. Stuxnet was not only the first cyber weapon
that caused physical damage; it was also a weapon that autonomously carried
out its attack, although its autonomy was still largely constrained. Stuxnet
operated in a similar fashion to homing munitions, searching for a specific target
(in this case, programmable logic controllers used in industrial
applications); when the target was found, Stuxnet changed its settings and
took control. However, Stuxnet had a number of safeguards in place to limit
its spread and effects; for instance, it could not modify itself autonomously
and had a self-termination date.109 While new software updates still generally

104 See L. Bainbridge, ‘Ironies of automation’ (1983) 19/6 Automatica 775–779; J. Bradshaw et al., ‘The seven deadly myths of autonomous systems’ (2013) 28(3) IEEE Intelligent Systems, 54–61.
105 B. Shneiderman, ‘Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy’ (2020) 36/6 International Journal of Human–Computer Interaction, 495–504.
106 ibid., 497.
107 DoD (n 2).
108 F. Delerue, Cyber Operations and International Law (CUP 2020) 160.
109 Scharre (n 4) 224.
originate with humans, security experts already warn about more advanced
autonomous offensive cyberweapons with the ability to self-replicate.110
As is the case with kinetic weapons, here again the DoD representatives
fear that the adversaries may prompt the US to launch offensive AWS, despite
the lack of clear rules regarding their use. We read in the National Security
Strategy that US competitors have not only achieved significant progress in
integrating data analytic capabilities in the military operations, but they are also
developing advanced weapons that could threaten the US current command-
and-control architecture.111 Advanced command-and-control instructions are used
to identify attackers, correlate attacks, and disrupt ongoing malicious activity.
However, there are at least four operational challenges to maintaining an effective
command-and-control architecture in the cyber domain. First, there is the problem
of offensive unpredictability.112 An intelligent malware agent with self-learning
capacities can learn and override defensive acts in order to exploit any potential
vulnerabilities of a system.113 The second challenge concerns the undetectability
of the offence, known in cyber studies as ‘the attribution problem’.114 Complex malware
is difficult to detect and, even when recognised, one can only account
for known intrusions. The challenge is to confront the prospect of permanent
intrusion into the defender’s infrastructure, where the scale and scope of
infiltration can raise significant security issues. The third challenge relates to the
complexity of the defence system.115 While the attacker usually only needs to
understand the procedures of entry, the defender must protect the entire network
against many interrelated points of vulnerability. Lastly, traditional command-
and-control architecture is under pressure from supply chain risks, such as manu-
facturers who introduce vulnerabilities into the specific components of a system.116
That said, Directive 3000.09 exempts the cyber domain and thus creates a
legal and policy vacuum when it comes to the potential development and use
of offensive autonomous cyber weapons. Hence, an alternative way to see the
formulated problem is to consider currently separate frameworks for the cyber
domain and kinetic AWS jointly and to explore whether existing controls from
Directive 3000.09 are congruent with autonomous cyber weapon operations.
If not, it is worth exploring what makes kinetic weapons so special that they
require increased regulatory oversight. A potential answer is that kinetic AWS
can be lethal, while cyber weapons cannot. However, cyber weapons
could either cause lethal or related harm as a side effect, or they could cause
significant damage even if not lethal. Thus, one can elevate these insights, expose

110 P. Scharre, interview with Bradford Tousley (27 April 2016), qtd. in Scharre (n 4), 228.
111 DoD, National Security Strategy (2017) 8.
112 L. Kello, The Virtual Weapon and International Order (Yale University Press 2017), 68–69.
113 M. Brundage and others, ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’ (Report) (2018), 20.
114 Kello (n 112), 69–72, 129–132.
115 ibid., 72–73.
116 ibid., 73–74.
complexities related to cyber weapons, and challenge the DoD’s representation
of the problem in its current form.

Beyond legal and technical considerations – human dignity and moral engagement
What is left unproblematic in the DoD approach towards AWS is that such
weapons, even if they pass all necessary legal and technical reviews, may still
be considered unacceptable solely on an ethical basis. In other words, one can
argue that delegating lethal decision-making to a machine is unethical by
its very nature. It is worth noting that these types of arguments have started
to gain more traction now that the debate on AWS has matured. For instance, in
the first report published by the Campaign, the word ‘dignity’ does not appear,
while ethical considerations are discussed primarily in the context of the poten-
tial compliance with international humanitarian law.117 This is not to say that
the argument that AWS violate human dignity, regardless of legal and techni-
cal considerations, was not advanced by the Campaign. The Campaign reports
refer, for instance, to the ‘profound effects’ on the ‘impersonalisation of battle’
that may remove some of the instinctual objections to killing.118 However,
despite these expressions, there are still calls from the academic community to
further de-emphasise the legal and technical frame in favour of a deeper ethical
argument that the use of AWS would infringe on human dignity.119
The issue of human dignity is a complex matter, but it is often understood
as a special status attributed to humans from which certain rights and duties
arise.120 In the field of warfare, the question is whether something morally
valuable that constitutes the special status of humans is lost when a human is
replaced by a machine in the use of force. Leveringhaus argues that the replace-
ment of human agency with artificial agency at the point of force delivery is
not morally desirable because it leads to moral disengagement.121 The argument
does not suggest that human operators are entirely disengaged. They have
to ensure that the use of AWS does not lead to excessive risks. They are not,
however, fully morally engaged. This is because being a fully morally engaged
human involves more than merely respecting someone else’s rights. It is to act for reasons
that are not entirely rights-based, such as recognition of common humanity, a
concern for the vulnerable or pity and mercy. Thus, the replacement of human
agency with artificial agency leads to at least partial moral disengagement.122

117 HRW and others (n 21).


118 ibid.
119 E. Rosert, F. Sauer, ‘Prohibiting Autonomous Weapons: Put Human Dignity First’ (2019) 10(3) Global Policy 371.
120 P. Carozza, ‘Human Dignity’, in D. Shelton (ed), The Oxford Handbook of International Human Rights
Law (OUP 2015), 345–359.
121 Leveringhaus (n 77), 90–94.
122 ibid.
Conclusions
The US DoD introduced its policy on AWS as a response to the problem of
the potential use of AWS for offensive purposes, which increases the operational
risk of unintended consequences. Directive 3000.09 introduced the
requirement of human judgement over the use of such weapons to mitigate the
associated risks, but by using the word ‘judgement’, the policy steered the focus
from direct control at the level of engagement and targeting to the design
requirements of weapons.
The US DoD approach is based on the assumption that the development of
lethal autonomy is justified by the global competition between countries, and
as such there are no specific barriers to development. It thus fits with the wider
contours of the Third Offset Strategy aimed at building superior technology
capabilities relative to countries such as China and Russia. Contrary to the
Campaign, the DoD does not consider offensive AWS as weapons that neces-
sarily render excessive risks. Legal arguments against AWS – the inability
to distinguish civilians from combatants and the potential responsibility gap resulting
from AWS wrongdoings – are subjugated primarily to ‘technical’ discourse and
military know-how.
It is argued that US DoD policy on AWS challenges the notion of an active
human role in the operation of such weapons. The DoD’s response to the problem
of operational risk associated with the use of AWS is ‘more autonomy, less
human’, but this assumption has been largely left out of the problem repre-
sentation discourse. Thus, one can build an alternative problem representa-
tion that appreciates the necessity of direct human involvement by arguing
that the more advanced a weapon is, the more complex the control system
is, and thus the contribution of the human operators is more crucial. Further,
Directive 3000.09 exempts the cyber domain from their scope and thus cre-
ates a legal and policy vacuum when it comes to the development and use of
offensive autonomous cyber weapons. One alternative problem representation
is to consider the currently separate frameworks for cyber domain and kinetic
AWS jointly and to explore whether existing controls from Directive 3000.09
are congruent with those on autonomous cyber weapon operations and, if not,
why this is the case. Finally, the problem formulated by the DoD is framed pri-
marily as technical in nature. The DoD does not problematise considerations
that such weapons, even if they meet all necessary technical reviews, may still
be considered unacceptable on an ethical basis. An alternative problem rep-
resentation may challenge the development and use of AWS due to concern
about human dignity in warfare.
Summary

In recent years, public attention has increasingly focused on the implementation
of AI-related systems without proper legal and regulatory analysis. This
publication has therefore aimed to fill this gap by guiding the reader not only through
the different applications of AI but also through the related legal aspects.
The research in this volume revealed that AI still has a long way to go before
it can be considered safe and trusted. The degree of development of AI tools
varies widely, and their strengths and weaknesses have been discussed in the
different chapters. Even though these technologies allow their users to perform
certain tasks more quickly and often more accurately, the common “scepticism”
towards AI among different regulators consistently centres on certain
key vulnerabilities, including accountability, transparency, explainability, and
protection of fundamental rights. Nonetheless, the COVID-19 pandemic has
become a powerful catalyst for many industries to embrace information, com-
munication, and AI technologies more closely. With a large number of busi-
nesses deploying or exploring AI options, it is imperative that AI designers,
regulators, and businesses work together to build and continuously improve
systems by implementing a sound regulatory and ethical framework.
Even though the EU has some of the toughest privacy and security regula-
tions in the world, a general legal framework for AI is still under development.1
At present there are no comprehensive regulations on AI in any jurisdiction.
The national legislation of some countries introduces regulations for the use of
certain technologies, but, in general, the legal aspects of regulating the use of
AI are still under discussion or development. There is also little harmonisation
of regulatory matters on AI among different jurisdictions.
Certain regulatory frameworks that apply to specific industries try to cope
with the rapid technological advancement by creating a variety of sui generis

1 It is worth noting that on 21 April 2021, the European Commission published its proposal for a
comprehensive regulatory framework for AI in the EU. See: European Commission, Proposal for a
Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on
Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, Brussels, 21 April 2021, COM (2021) 206 final, 2021/0106 (COD).
rights. A good example of this is the copyright system that applies to many
creative industries, such as music. However, as presented in Chapter 7, it
does not seem to deal with some of the fundamental problems of that industry.
Any attempts at the future harmonisation of regulatory matters of AI
should aim at providing the legal certainty necessary to facilitate innova-
tion and investment in AI, while also safeguarding fundamental rights and
ensuring that AI applications are used safely. Moreover, they should provide
supervisory and enforcement mechanisms that would allow for national
or regional harmonisation, and implementation of corrective actions and
sanctions. For any regulatory framework to be effective, its impact and
effectiveness would also need to be systematically reviewed. This will require
greater collaboration between engineers, researchers, scholars, regulators,
and industry representatives. Such diverse teams are more likely to contrib-
ute to a meaningful regulatory framework for AI in industry.
Bibliography

Table of Cases
China
Feilin v Baidu (2018) Beijing Internet Court <https://www.bjinternetcourt.gov.cn/cac/zw
/1556272978673.html> accessed 27 May 2021.
Shenzhen Tencent v Yingxun Tech (2019) Shenzhen Nanshan District People’s Court, Yue
Min Chu No. 14010. zhongguo caipan wenshuwang (China Judgements Online)
<https://wenshu.court.gov.cn/website/wenshu/181107ANFZ0BXSK4/index.html
?docId=30ba2cab36054d80a864ab8000a6618a> accessed 27 May 2021.

European Union
Case 120/78 Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein (‘Cassis de Dijon’)
[1979] ECR 649.
Case C-470/93 Verein gegen Unwesen in Handel und Gewerbe Köln.e.V. v Mars GmbH [1995]
ECR I-1923.
Case C-55/94 Reinhard Gebhard v Consiglio dell’Ordine degli Avvocati e Procuratori di Milano
[1995] ECR I-4165.
Case C-210/96 Gut Springenheide and Tusky [1998] ECR I-4657.
Case C-604/10 Football Dataco Ltd and Others v Yahoo! UK Ltd and Others [2012] ECJ 115.
Case C-112/00 Eugen Schmidberger v Austria [2003] ECR I-5659.
Case C-322/01 Deutscher Apothekerverband eV v 0800 DocMorris NV and Jacques Waterval
[2003] ECLI:EU:C:2003:664.
Case C-110/05 Commission v Italy (‘Trailers’) [2009] ECR I-519.
Joined Cases C-402/07 and C-432/07 Sturgeon and Others [2009] ECR I-10923.
Case C-5/08 Infopaq International A/S v Danske Dagblades Forening [2009] ECJ 465.
Case C-497/13 Froukje Faber v Autobedrijf Hazet Ochten BV [2015] ECLI:EU:C:2015:357.
Case C-99/16 Jean-Philippe Lahorgue v Ordre des avocats du barreau de Lyon [2017]
ECLI:EU:C:2017:107, Opinion of AG Wathelet.
Case T-48/19 Smart Things Solutions GmbH v European Union Intellectual Property Office
(EUIPO) [2020] ECLI:EU:T:2020:483.
Case C-410/19 The Software Incubator Ltd v Computer Associates UK Ltd [2020]
ECLI:EU:C:2020:1061, Opinion of AG Tanchev.
International Courts and Tribunals
Application of the Convention on the Prevention and Punishment of the Crime of
Genocide (Bosnia and Herzegovina v. Serbia and Montenegro) (Judgment), [2007] ICJ
Rep 43.
Certain Questions Relating to Settlers of German Origin in the Territory Ceded by
Germany to Poland (Advisory Opinion) 1923, [1923] PCIJ Publications Series B. No. 6.
Corfu Channel Case (UK v. Albania) (Merits), [1949] ICJ Rep 4.
Military and Paramilitary Activities in and Against Nicaragua (Nicaragua v. USA) (Merits) (Judgment), [1986] ICJ Rep 14.
Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory
(Advisory Opinion), [2004] ICJ Rep 136.
Legal Status of Eastern Greenland (Judgment) 1933, [1933] PCIJ Publications Series A/B
No 53.
Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 257.
Military Tribunal V, 19 February 1948, in Trials of War Criminals Before the Nuremberg
Military Tribunals Under Control Council Law No. 10, Vol. 11, 1950.
Prosecutor v. Stanislav Galić, IT-98-29-T, ICTY Judgement and Opinion, 5 December 2003.
Prosecutor v Tadić, ICTY-94–1-A (Judgement 15 July 1999).
United States Military Tribunal, Nuremberg, United States v. Wilhelm List et al., Case
No. 47, 1948.

Israel
Israel, Supreme Court of Justice, Adalah Legal Center for Arab Minority Rights in Israel
et al v. Minister of Defense et al., judgment, HCJ 8276/05, 12 December 2006, [2006]
Israel Law Reports 2, 352, 353.

United Kingdom
Montgomery v Lanarkshire Health Board (Scotland) [2015] UKSC 11.
NHS Trust v T [2004] EWHC 1279 Fam.
Nova Productions Ltd v Mazooma Games Ltd & Ors (CA). Reference: [2007] EWCA Civ 21.
R (On the Application of Edward Bridges) v The Chief Constable of South Wales [2019] EWHC
2341.
Re AK (Medical Treatment: Consent) [2001] 1 FLR 129.
Re T (Adult: Refusal of Medical Treatment) [1992] 4 All ER 649.
W Healthcare NHS Trust v KH [2004] EWCA Civ 1324.

United States
American Bus Ass’n v. United States 627 F.2d 531 (D.C. Cir. 1980).
Guardian Federal Savings & Loan Ass’n v. Federal Savings & Loan Insurance Corp. 589 F.2d 658,
666 (D.C. Cir. 1978).
Gutierrez de Martinez v. Lamagno, 515 U.S. 417, 434 n.9 (1995).
Naruto v Slater 888 F3d 418 (9th Cir 2018).
State v Loomis 881 N.W.2d 749 (Wis. 2016).
Table of International Instruments
International Treaties
Additional Protocol to the Geneva Conventions of 12 August 1949, and Relating to the
Protection of Victims in International Armed Conflicts (adopted 8 June 1977, entered
into force 7 December 1978) 1125 UNTS 3.
Berne Convention for the Protection of Literary and Artistic Works, signed on 9 September 1886, entered into force 5 December 1887.
CEPEJ, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems
and their Environment, adopted at the 31st plenary meeting of the CEPEJ, Strasbourg,
30 December 2018.
Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or
Punishment (adopted 10 December 1984, entered into force 26 June 1987), 1645
UNTS 85.
Convention for the Protection of Human Rights and Fundamental Freedoms (European
Convention on Human Rights, as amended), opened for signature in Rome on 4
November 1950, came into force on 3 September 1953.
Convention for the Protection of Individuals with Regard to Automatic Processing of
Personal Data, ETS No. 108 as amended by the CETS amending protocol No. 223, 1981.
Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons
(CCW) Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate
Effects (as amended on 21 December 2001) 1342 UNTS 137.
Convention on the International Liability for Damage Caused by Space Objects (adopted 29
March 1972, entered into force 1 September 1972) 961 UNTS 187.
Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-
Personnel Mines and on Their Destruction, done in Oslo on 18 September 1997,
entered into force 1 March 1999, 2056 UNTS 211.
Convention on the Safety of United Nations and Associated Personnel (adopted 9 December
1994, entered into force 15 January 1999), 2051 UNTS 363.
Geneva Convention Relative to the Treatment of Prisoners of War (Third Geneva
Convention), 12 August 1949, 75 UNTS 13.
ICAO Convention on International Civil Aviation (Chicago Convention), 7 December
1944, (1994) 15 UNTS 295.
Inter-American Convention on Forced Disappearance of Persons (adopted 6 September
1994, entered into force 28 March 1996).
International Convention for the Safety of Life at Sea of 1974, entered into force on 25 May
1980, 1184 UNTS 2.
International Convention on Civil Liability for Oil Pollution Damage (adopted 29
November 1969, entered into force 19 June 1975) 973 UNTS 3.
OECD Council Recommendation on Artificial Intelligence, adopted on 22 May 2019, C/
MIN(2019)3/FINAL.
Paris Agreement Under the United Nations Framework Convention on Climate Change
adopted on 12 December 2015, entered into force on 4 November 2016.
Protocol to the United Nations Framework Convention on Climate Change adopted on 11 December 1997, entered into force on 16 February 2005, 2303 UNTS 162.
Rome Statute of the International Criminal Court (adopted 17 July 1998, entered into force 1 July 2002), 2187 UNTS 38544.
United Nations Convention on the Elimination of Discrimination against Women,
18 December 1979, 1249 UNTS 13.
United Nations Framework Convention on Climate Change adopted 9 May 1992, entered
into force March 21, 1994, 31 ILM 849.
Vienna Convention on Civil Liability for Nuclear Damage (adopted 21 May 1963, entered
into force 12 November 1977), 1063 UNTS 266.

EU Law
Commission Directive on the Provisions of Article 33 (7), on the abolition of measures
which have an effect equivalent to quantitative restrictions on imports and are not
covered by other provisions adopted in pursuance of the EEC Treaty (1969)OJ
L13/29.
Consolidated Version of the Treaty on the Functioning of the European Union (2012) OJ
C326/01.
Consolidated Version of the Treaty on the European Union (2012) OJ C326/15.
Council Decision (EEC) 93/389 of 24 June 1993 for a monitoring mechanism of
Community CO2 and other greenhouse gas emissions (1993) OJ L167/31.
Council Decision 94/571/EC adopting a specific programme for research and technological
development, including demonstration, in the field of industrial and materials
technologies (1994 – 1998) (1994) L222/23.
Council Directive 85/374/EEC on the approximation of the laws, regulations and
administrative provisions of the Member States concerning liability for defective products
(1985) OJ L210/29.
Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on
the legal protection of databases (1996) OJ L77/20.
Directive (EU) 2018/2002 of the European Parliament and of the Council of 11 December
2018 amending Directive 2012/27/EU on energy efficiency (2018) OJ L328/210.
Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on
the legal protection of computer programs (2009) OJ L 111/16.
Directive 2009/103/EC of the European Parliament and of the Council of 16 September
2009 relating to insurance against civil liability in respect of the use of motor vehicles, and
the enforcement of the obligation to insure against such liability, OJ L 263, 7 October
2009.
Directive 2010/30/EU of the European Parliament and of the Council of 19 May 2010 on
the indication by labelling and standard product information of the consumption of
energy and other resources by energy-related products (2010) OJ L153/1 (Energy
Labelling Directive).
Directive 2012/27/EU of the European Parliament and of the Council of 25 October
2012 on energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and
repealing Directives 2004/8/EC and 2006/32/EC (2012)OJ L315/1 (Energy Efficiency
Directive).
Directive 2014/104/EU of the European Parliament and of the Council of 26 November
2014 on certain rules governing actions for damages under national law for infringements
of the competition law provisions of the Member States and of the European Union,
OJ L 349, 5.12.2014.
Regulation (EC) 725/2004 of the European Parliament and of the Council of 31 March
2004 on enhancing ship and port facility security.
Rome II Regulation (EC) 864/2007 of the European Parliament and of the Council of
11 July 2007 on the law applicable to non-contractual obligations (Rome II), OJ L 199,
31.7.2007.
Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing
of personal data and on the free movement of such data, and repealing Directive 95/46/
EC (General Data Protection Regulation) (2016) OJ L119.
Regulation (EU) 2017/1369 of the European Parliament and of the Council of 4 July
2017 setting a framework for energy labelling and repealing Directive 2010/30/EU
(2017) OJ L198/1.
Regulation (EU) 2018/1807 of the European Parliament and of the Council of
14 November 2018 on a framework for the free flow of non-personal data in the
European Union.

EU/EC Documents
Commission ‘Regulation of the European Parliament and of the Council Laying Down
Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending
Certain Union Legislative Acts’ (Proposal) COM (2021) 206 final.
Commission Regulation (EU) 2019/424 of 15 March 2019 laying down ecodesign
requirements for servers and data storage products pursuant to Directive 2009/125/EC
of the European Parliament and of the Council and amending Commission Regulation
(EU) No 617/2013 (2019) OJ L74/46.
Commission, ‘A New ERA for Research and Innovation’ (Communication) COM (2020)
628 final.
Commission, ‘A Policy Framework for Climate and Energy in the Period from 2020 to
2030’ (Communication) COM (2014) 15 final.
Commission, ‘Advancing the Internet of Things in Europe’ (Working document) SWD
(2016) 110 final.
Commission, ‘An Overall View of Energy Policy and Actions’ (Communication) COM
(1997) 167 final.
Commission, ‘Artificial Intelligence for Europe’ (Communication) COM (2018) 237 final.
Commission, ‘Communication to the European Parliament, the Council, the European
Economic and Social Committee and the Committee of the Regions. Tackling Online
Disinformation: A European Approach’ COM (2018) 236 final.
Commission, ‘Digitising European Industry: Reaping the full benefits of a Digital Single
Market’ (Communication) COM (2016) 180 final.
Commission, ‘European Climate Law’ (Proposal) COM (2020) 80 final.
Commission, ‘Free Flow of Data and Emerging Issues of the European Data Economy’
(Working document) SWD (2017) 2 final.
Commission, ‘Proposal for a Regulation of the European Parliament and of the Council.
Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)
and Amending Certain Union Legislative Acts’ COM (2021) 206 final 2021/0106
(COD).
Commission, ‘Research and Technological Development Activities of the European Union’
(Annual report) COM (1998) 439 final.
Commission, ‘Report to the European Parliament, the Council and the European Economic
and Social Committee, on the Safety and Liability Implications of Artificial Intelligence,
the Internet of Things and Robotics’ COM (2020) 64 final.
Commission, ‘Strengthening Environmental Integration within Community Energy Policy’
(Communication) COM (1998) 571 final, Commission, ‘A sustainable Europe for a Better
World: A European Union Strategy for Sustainable Development’ (Communication)
COM (2001) 264 final.
Commission, ‘The Common Policy in the Field of Science and Technology’
(Communication) COM (1977) 283 final.
Commission, ‘The European Green Deal’ (Communication) COM (2019) 640 final.
Commission, ‘The Future Activities of the Joint Research Centre’ (Communication) COM
(1983) 107 final.
Commission, ‘The Greenhouse Effect and the Community’ (Communication) COM
(1988) 656 final.
Commission, ‘The S and T Content of the Specific Programmes Implementing the 4th
Framework Programme for Community Research and Technological Development
(1994 – 1998) and the Framework Programme for Community Research and Training
for the European Atomic Energy Community (1994 – 1998)’ (Working document)
COM (1993) 459 final.
Commission, ‘White Paper on Artificial Intelligence – A European Approach to Excellence
and Trust’ COM (2020) 65 final.
Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence
(Independent), European Commission (Brussels, 8 April 2019).
European Commission, ‘Investor Citizenship and Residence Schemes in the European
Union’ Report from the Commission to the European Parliament, The Council, the
European Economic and Social Committee and the Committee of the Regions, Brussels,
23 January 2019, COM (2019) 12 final.
European Council, ‘European Council Meeting (12 December 2019)’ (Conclusions)
EUCO 29/19.
European Council, ‘Special Meeting of the European Council (1 and 2 October 2020)’
(Conclusions) EUCO 13/20.
Eurostat, ‘Primary and Final Energy Consumption Slowly Decreasing’ (28 January 2021)
<https://ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20210128-1
?redirect=%2Feurostat%2Fweb%2Fenergy%2Fpublications> accessed 27 May 2021.
Eurostat, ‘Share of Renewable Energy in the EU up to 18.0%’ (23 January 2020) <https://
ec.europa.eu/info/news/share-renewable-energy-eu-180-2020-jan-23_en> accessed
27 May 2021.
Opinion of the European Economic and Social Committee on ‘The Perspectives of
European Coal and Steel Research’ (2005) OJ C294/7.
Proposal for a Council Decision adopting a research programme on reactor safety (1984 –
1987) (1983) OJ C250/6.
Public Consultation on AI White Paper: Final Report, European Commission, DG for
Communications Networks, Content and Technology (November 2020).
STOA, ‘Legal and Ethical Reflections Concerning Robotics’ (Policy Briefing) PE (2016)
563.501 <https://www.europarl.europa.eu/RegData/etudes/STUD/2016/563501/
EPRS_STU(2016)563501(ANN)_EN.pdf> accessed 27 May 2021.
STOA, ‘The Ethics of Artificial Intelligence: Issues and Initiatives’ (Study) PE (2020)
634.452.
Other Documents
Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims
of Violations of International Human Rights and Humanitarian Law, adopted and
proclaimed by General Assembly resolution 60/147 of 16 December 2005.
Human Rights Council, ‘Report of the Independent International Commission on Inquiry
on the Syrian Arab Republic’, 9 August 2018, UN Doc A/HRC/39/65.
Human Rights Council, ‘Report of the Special Rapporteur on Extrajudicial, Summary or
Arbitrary Executions, Christof Heyns’, 9 April 2013, UN Doc A/HRC/23/47.
ILC, ‘Fifth Report on State Responsibility, by Mr. Gaetano Arangio-Ruiz, Special
Rapporteur’ (1993) II ILC Yearbook 1, UN Doc. A/CN.4/453 and Add. 1–3.
ILC, ‘Final Report of the International Law Commission: The Obligation to Extradite
or Prosecute (aut dedere aut iudicare) of its 66th Session’ (2014) YILC, vol. II (Part
Two).
ILC, ‘Report of the Study Group of the International Law Commission: Fragmentation
of International Law: Difficulties Arising from the Diversification and Expansion of
International Law’ (13 April 2006) A/CN.4/L.682.
International Law Commission, ‘Draft Articles on the Responsibility of International
Organisations, UN Docs A/66/10’ (2011) Yearbook of the International Law
Commission II, Part Two.
NATO, ‘NATO Nations Approve Civilian Casualty Guidelines’ (NATO, 6 August 2010)
<https://www.nato.int/cps/en/SID-9D9D8832-42250361/natolive/official_texts
_65114.htm> accessed 27 May 2021.
NATO, ‘Military Decision on MC 362/1 – NATO Rules of Engagement (Military
Decision)’, 30 June 2003, NATO UNCLASSIFIED.
Organisation for the Prohibition of Chemical Weapons, ‘Note by the Technical Secretariat,
Second Report by the OPCW Investigation and Identification Team Pursuant to
Paragraph 10 of Decision C-SS-4/Dec.3 “Addressing the Threat from Chemical
Weapons Use”’, 12 April 2021, OPCW Official Series S/1943/2021.
Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapon Systems,
12–16 December 2016, UN Doc CCW/CONF.V/2 (10 June 2016).
UN General Assembly, Resolution 56/83: Articles on Responsibility of States for
Internationally Wrongful Acts, (adopted 28.01.2002), UN Doc. A/RES/56/83 (2002).

Table of Legislation
Automated and Electric Vehicles Act 2018 (England & Wales).
Belgium, Défense, Etat-Major de la Défense, Ordre Général - J/836, establishing La
Commission d’Evaluation Juridique des nouvelles armes, des nouveaux moyens et des
nouvelles méthodes de guerre, 18 July 2002.
Copyright, Designs and Patents Act 1988 (England & Wales).
Copyright Law of the People’s Republic of China 2010 (as amended).
‘Decree of the President of the Russian Federation on the Development of Artificial
Intelligence in the Russian Federation’ (CSET, 28 October 2019).
French Decree n° 2018-211 of 28 March 2018 on Experimentation with Automated
Vehicles on Public Roads.
French Justice Reform Act 2019.
German Road Traffic Act (Straßenverkehrsgesetz).
Israeli Civil Wrongs (Liability of the State) Law, 5712 (1952).
President of the Russian Federation, ‘Decree of 10.10.2019 No. 490 on the Development
of Artificial Intelligence in the Russian Federation’.
U.S. Algorithmic Accountability Act of 2019.
U.S. Administrative Procedure Act (5 U.S.C. Subchapter II).
U.S. DoD, Directive 3000.09 Autonomy in Weapon Systems (2012).
U.S. Foreign Claims Act: 10 U.S. Code par. 2734. Property loss; personal injury or death:
incident to noncombat activities of the armed forces; foreign countries, 10 August 1956.
U.S. Foreign Claims Act: 10 U.S. Code par. 2735. Settlement: final and conclusive, 10
August 1956 (Office of the Law Revision Counsel).
U.S. Malicious Deep Fake Prohibition Act of 2018.
U.S. National Defense Authorization Act for Fiscal Year 2020.

Table of Professional Standards and Regulations


Alabama State Bar, OGC Formal Opinion 2010-2: “Retention, Storage, Ownership,
Production and Destruction of Client Files”, available at: <https://www.alabar.org/
office-of-general-counsel/formal-opinions/2010-02/>
Alaska Bar Association, Ethics Opinion 2014-3: “Cloud Computing & the Practice of
Law”, available at: <https://alaskabar.org/wp-content/uploads/2014-3.pdf>
American Bar Association’s Model Rules of Professional Conduct 2020.
Connecticut Bar Association – Professional Ethics Committee, Informal Opinion 2013-7 on
Cloud Computing, available at: <https://www.ctbar.org/docs/default-source/publications
/ethics-opinions-informal-opinions/2013-opinions/informal-opinion-2013-07>
Council of Bars and Law Societies of Europe Guidelines on the Use of Cloud Computing
Services by Lawyers, adopted on 07 September 2012.
Illinois State Bar Association Professional Conduct Advisory Opinion no. 16-06: “Client
Files; Confidentiality; Law Firms”, available at: <https://www.isba.org/sites/default/
files/ethicsopinions/16-06.pdf>
Kentucky Bar Association, Formal Ethics Opinion KBA E-437 of 21 March 2014, available
at: <https://cdn.ymaws.com/www.kybar.org/resource/resmgr/Ethics_Opinions_
(Part_2)_/kba_e-437.pdf>
Louisiana State Bar Association, Public Opinion 19-RPCC-021: “Lawyer’s Use of
Technology”, available at: <https://www.lsba.org/documents/Ethics/EthicsOpinionLa
wyersUseTech02062019.pdf>
Massachusetts Bar Association, Ethics Opinion 12-03, available at: <https://www.massbar
.org/publications/ethics-opinions/ethics-opinion-article/ethics-opinions-2012-opinion
-12-03>
New York State Bar Association Committee on Professional Ethics, Ethics Opinion 842,
available at: <https://nysba.org/ethics-opinion-842/>
Oregon State Bar Opinion no. 2011-188 (revised 2015): “Information Relating to the
Representation of a Client: Third-Party Electronic Storage of Client Materials”, available
at: <http://www.osbar.org/_docs/ethics/2011-188.pdf>
Pennsylvania Bar Association Committee on Legal Ethics and Professional Responsibility,
Formal Opinion 2011-200, available at: <http://www.slaw.ca/wp-content/uploads
/2011/11/2011-200-Cloud-Computing.pdf>
Professional Ethics Commission of the Maine Board of Overseers of the Bar, Opinion
#207: “The Ethics of Cloud Computing and Storage”, available at: <https://www
.mebaroverseers.org/attorney_services/opinion.html?id=478397>
Solicitors Regulation Authority (SRA) for England and Wales, Standards and Regulations,
adopted on 30 May 2018.
UK Bar Standards Board (BSB) Handbook, version 4.6, adopted on 31 December 2020.
Virginia State Bar Standing Committee on Legal Ethics, Legal Ethics Opinion 8972:
“Virtual Law Office and Use of Executive Suites”, available at: <https://www.vsb.org
/docs/1872-final.pdf>

Books
Abeyratne R., Megatrends and Air Transport (Springer 2017).
Abeyratne R., Strategic Issues in Air Transport: Legal, Economic and Technical Aspects (Springer
2012).
Adams M., de Waele H., Meeusen J., Straetmans G. (eds), Judging Europe’s Judges (Hart
Publishing 2013).
Arvind T.T., Contract Law (2nd ed., OUP 2019).
Bacchi C., Analysing Policy: What’s the Problem Represented to Be? (Pearson 2009).
Barfield W., Pagallo U. (eds), Research Handbook on the Law of Artificial Intelligence (Edward
Elgar 2018).
Barton B., Lucas A., Barrera-Hernández L., Rønne A. (eds), Regulating Energy and Natural
Resources (OUP 2006).
Bazarkina D., Pashentsev E., Simons G. (eds), Terrorism and Advanced Technologies in
Psychological Warfare: New Risks, New Opportunities to Counter the Terrorist Threat (Nova
Science Publishers 2020).
Beauchamp T., Childress J., Principles of Biomedical Ethics (5th ed., OUP 2001).
Beck G., The Legal Reasoning of the Court of Justice of the EU (Hart Publishing 2012).
Bektaş T., Coniglio S., Martinez-Sykora A., Voß S. (eds), Computational Logistics (Springer
2017).
Bell J., Boyron S., Whittaker S., Principles of French Law (2nd ed., OUP 2008).
Bell S., McGillivray D., Pedersen O., Environmental Law (8th ed., OUP 2013).
Bernstein P.L., Against the Gods: The Remarkable Story of Risk (Wiley 1998).
Besson S., Tasioulas J. (eds), The Philosophy of International Law (OUP 2010).
Bhuta N., et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy (CUP 2016).
Boothby W.H. (ed.), New Technologies and the Law in War and Peace (CUP 2018).
Boothby W.H., Weapons and the Law of Armed Conflict (2nd ed., OUP 2016).
Bothe M., Partsch K.J., Solf W.A., New Rules for Victims of Armed Conflicts: Commentary on
the Two 1977 Protocols Additional to the Geneva Conventions of 1949 (2nd ed., Martinus
Nijhoff 2013).
Boulanin V. (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk,
Volume I: Euro-Atlantic Perspectives (SIPRI 2019).
Bronwen M., Yeung K., An Introduction to Law and Regulation: Text and Materials (CUP
2007).
Buyers J., Artificial Intelligence: The Practical Legal Issues (Law Brief Publishing 2018).
Cappelletti M., Seccombe M., Weiler J.H.H. (eds), Integration Through Law: Europe and the
American Federal Experience (Walter de Gruyter and Co 1986).
Clayton G., Firth G., Immigration & Asylum Law (8th ed., OUP 2018).
Corn G.S., Van Landingham R.E., Reeves S.R. (eds), U.S. Military Operations (OUP 2016).
Corrales M., Fenwick M., Forgo N. (eds), Robotics, AI and the Future of Law (Springer 2018).
Crawford J., Pellet A., Olleson S. (eds), The Law of International Responsibility (OUP 2010).
Creutz K., State Responsibility in the International Legal Order: A Critical Appraisal (CUP 2020).
Croley S.P., Regulation and Public Interests: The Possibility of Good Regulatory Government (PUP
2008).
Dawson M., de Witte B., Muir E. (eds), Judicial Activism at the European Court of Justice
(Edward Elgar Publishing 2013).
Delerue F., Cyber Operations and International Law (CUP 2020).
Dennett A., Public Law Directions (OUP 2019).
Dignum V., Responsible Artificial Intelligence, in Artificial Intelligence: Foundations, Theory, and
Algorithms (Springer 2019).
Embley J., Goodchild P., Shephard C., Slorach S., Legal Systems & Skills (4th ed., OUP 2020).
Evon A-T., El Sheikh A., Jafari M. (eds), Technology Engineering and Management in Aviation:
Advancements and Discoveries (IGI Global 2012).
Fastenrath U., Geiger R., Khan D.E., Paulis A., Schorlemer S. von, Vedder Ch. (eds), From
Bilateralism to Community Interest: Essays in Honour of Bruno Simma (OUP 2011).
Forlati S., Franzina P. (eds), Universal Civil Jurisdiction (Brill Nijhoff 2020).
Foster N., Sule S., German Legal System and Laws (4th ed., OUP 2011).
Gates K.A., Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance
(NYU Press 2011).
Giliker P., Vicarious Liability in Tort: A Comparative Perspective (CUP 2010).
Grzeszczak R. (ed.), Economic Freedom and Market Regulation: In Search of Proper Balance
(Nomos 2020).
Guldhal C.C., NATO Rules of Engagement. On ROE, Self-Defence and the Use of Force during
Armed Conflict (Brill Nijhoff 2019).
Herring J., Legal Ethics (2nd ed., OUP 2017).
Higgins R., Problems and Process: International Law and How We Use It (Oxford Clarendon
Press 1994).
Infantino M., Zervogianni E. (eds), Causation in European Tort Law (CUP 2017).
Kaplan J., Artificial Intelligence: What Everyone Needs to Know (OUP 2016).
Kearns M., Roth A., The Ethical Algorithm (OUP 2019).
Kello L., The Virtual Weapon and International Order (Yale University Press 2017).
Kindt E., Privacy and Data Protection Issues of Biometric Applications (1st ed., Springer 2013).
Kinsey C., Corporate Soldiers and International Security. The Rise of Private Military Companies
(Routledge 2006).
Koch B.A., Koziol H., Unification of Tort Law: Strict Liability (Kluwer International Publishing
2002).
Kötz H., Wagner G., Deliktsrecht (Franz Vahlen 2016).
Koulu R., Kontiainen L., How Will AI Shape the Future of Law? (Legal Tech Lab 2019).
Kurki V.A.J., Pietrzykowski T. (eds), Legal Personhood: Animals, Artificial Intelligence and the
Unborn (Springer 2017).
Kushan M.G. (ed.), Aircraft Technology (IntechOpen 2018).
Legg M., Bell F., Artificial Intelligence and the Legal Profession (Hart 2020).
Leveringhaus A., Ethics and Autonomous Weapons (Palgrave Macmillan 2016).
LexisNexis PSL TMT Team (ed.), An Introduction to Technology Law (LexisNexis 2018).
Lohsee S., Schulze R., Staudenmayer D. (eds), Liability for Artificial Intelligence and the Internet
of Things (Hart 2019).
Luger G.F., Stubblefield W.A., Artificial Intelligence: Structures and Strategies for Complex
Problem Solving (6th ed., Pearson 2008).
Macdonald E., Atkins R., Koffman & Macdonald’s Law of Contract (9th ed., OUP 2018).
Maglogiannis I., Iliadis L., Pimenidis E. (eds), Artificial Intelligence Applications and Innovations
(Springer 2020).
Majumdar M.C., Majumdar D., Sackett J.I. (eds), Artificial Intelligence and Other Innovative
Computer Applications in the Nuclear Industry (Springer 1988).
Markesinis B.S., Unberath H., The German Law of Torts: A Comparative Treatise (Hart 2002).
Marszałek-Kawa J. (ed.), Economic and Energy Stability in Asia: Perspectives and Scenarios
(Marszałek Publishing House 2016).
Massai L., European Climate and Clean Energy law and Policy (Earthscan 2012).
Matthews H., Pioneer Aviators of the World: A Biographical Dictionary of the First Pilots of
100 Countries (McFarland 2003).
Mikanagi Y., Japan’s Trade Policy: Action or Reaction? (Routledge 1996).
Morgera E. (ed.), The External Environmental Policy of the European Union: EU and International
Law Perspectives (CUP 2012).
Müller V. (ed.), Philosophy and Theory of Artificial Intelligence 2017 (Springer 2018).
Mustapha H. (ed.), Artificial Intelligence in Renewable Energetic Systems: Smart Sustainable Energy
Systems (Springer 2018).
Newton M., May L., Proportionality in International Law (OUP 2014).
Neznamov A. (ed.), Novye zakony robototehniki. Reguljatornyj landshaft. Mirovoj opyt
regulirovanija robototehniki i tehnologij iskusstvennogo intellekta (New Laws of Robotics. The
Regulatory Landscape. Global Experience in Regulating Robotics and Artificial Intelligence
Technologies) (Infotropic Media 2018).
Nowakowska-Małusecka J., Indywidualna odpowiedzialność karna za zbrodnie popełnione w byłej
Jugosławii i Rwandzie [Individual Criminal Responsibility for Crimes Committed in the Former
Yugoslavia and Rwanda] (Wydawnictwo Uniwersytetu Śląskiego 2000).
O’Connell R.L., Of Arms and Men: A History of War, Weapons and Aggression (OUP 1990).
Ogus A.I., Regulation: Legal Form and Economic Theory (OUP 1994).
Parpworth N., Constitutional and Administrative Law (11th ed., OUP 2020).
Paunio E., Legal Certainty in Multilingual EU Law: Language, Discourse and Reasoning at the
European Court of Justice (Ashgate Publishing 2013).
Pellegrino F., The Just Culture Principles in Aviation Law: Towards a Safety-Oriented Approach
(Springer 2019).
Posner R.A., Economic Analysis of Law (8th ed., Wolters Kluwer 2011).
Runco M., Pritzker S. (eds), Encyclopaedia of Creativity (3rd ed., Elsevier 2020).
Scharre P., Army of None (W. W. Norton & Company 2018).
Segal G., Goodman D.S.G. (eds), Towards Recovery in Pacific Asia (Routledge 2000).
Shelton D. (ed.), The Oxford Handbook of International Human Rights Law (OUP 2015).
Shmelova T., Sikirda Y., Sterenharz A. (eds), Handbook of Research on Artificial Intelligence
Applications in the Aviation and Aerospace Industries (IGI Global 2020).
Sokołowski M.M., European Law on Combined Heat and Power (Routledge 2020).
Sokołowski M.M., Regulation in the European Electricity Sector (Routledge 2016).
Sonnefeld R. (ed.), Odpowiedzialność państwa w prawie międzynarodowym [State Responsibility
in International Law] (Polski Instytut Spraw Międzynarodowych 1980).
Stefano C. de, Attribution in International Law and Arbitration (OUP 2020).
Twidell J., Weir T., Renewable Energy Sources (3rd ed., Routledge 2015).
Umit H. (ed.), Digital Business Strategies in Blockchain Ecosystems Transformational Design and
Future of Global Business (Springer 2020).
Van den Bergh R., The Roundabouts of European Law and Economics (Eleven International
Publishing 2018).
Visvizi A., Lytras M.D., Mudri G. (eds), Smart Villages in the EU and Beyond (Emerald 2019).
Waisberg N., Hudek A., AI for Lawyer: How Artificial Intelligence is Adding Value, Amplifying
Expertise, and Transforming Careers (Wiley 2021).
Weller M., The Oxford Handbook of the Use of Force in International Law (OUP 2015).
Wilson S., Rutherford H., Storey T., Wortley N., Kotecha B., English Legal System (4th ed.,
OUP 2020).
Zeben van J., Rowell A., A Guide to EU Environmental Law (University of California Press
2020).
Zimmerman J. (ed.), Aksjologia prawa administracyjnego [Axiology of Administrative Law] vol. 2
(Wolters Kluwer Polska 2017).

Table of Journals and Periodicals


Abeyratne R., ‘Key Legal Issues in ICAO: A Commentary and Review’ (2019)
44 Air&SpaceL 53–67.
Andreotti R.J., ‘Promoting General Aviation Safety: A Revision of Pilot Negligence Law’
(1992) 58 JAirL&Com 1089–1148.
Arnold R., ‘Legal Challenges Posed by LAWS: Criminal Liability for Breaches of IHL
by (the Use of) LAWS’ in R. Geis, Lethal Autonomous Weapon Systems: Technology,
Definition, Ethics, Law & Security (German Federal Foreign Office 2017).
Bachar G.J., ‘Collateral Damages: Domestic Monetary Compensation for Civilians in
Asymmetric Conflict’ (2019) 19 ChiJIntL 2.
Bahnes N., Kechar B., Hafid H., ‘Cooperation between Intelligent Autonomous Vehicles
to Enhance Container Terminal Operations’ (2016) 3 J. Innov. in Digital Ecosystems 1.
Bazarkina D., Pashentsev E., ‘Artificial Intelligence and New Threats to International
Psychological Security’ (2019) 17(1) Russia in Global Affairs, 147–170.
Blake V.K., ‘Regulating Care Robots’ (2020) 92 Temp L Rev 551.
Bo Y., Meifang Y., ‘Construction of the Knowledge Service Model of a Port Supply Chain
Enterprise in a Big Data Environment’ (2020) 33 Neural Computing & Applications 5.
Bradshaw J.M., Hoffman R.R., Johnson M., Woods D.D., ‘The Seven Deadly Myths of
Autonomous Systems’ (2013) 28/3 IEEE Intelligent Systems.
Bridy A., ‘Coding Creativity: Copyright and the Artificially Intelligent Author’ (2012)
5 Stanford Technology Law Review 1–28.
Brkan M., Bonnet G., ‘Legal and Technical Feasibility of the GDPR’s Quest for Explanation
of Algorithmic Decisions: Of Black Boxes, White Boxes and Fata Morganas’ (2020)
11(1) EJRR 18–50.
Brunn M., Diefenbacher A., Courtet P., et al., ‘The Future Is Knocking: How Artificial
Intelligence Will Fundamentally Change Psychiatry’ (2020) 44 Acad Psychiatry 461.
Buiten M.C., ‘Towards Intelligent Regulation of Artificial Intelligence’ (2019) 10 Eur J
Risk Reg 41.
Calabresi G., Malamed A.D., ‘Property Rules, Liability Rules, and Inalienability: One View
of the Cathedral’ (1972) 85 Harvard Law Review 1089–1128.
Caplan H., ‘Passenger Health – Who’s in Charge?’ (2001) 26 Air&SpaceL 203–217.
Cassese A., ‘The Nicaragua and Tadić Tests Revisited in Light of the ICJ Judgment on
Genocide in Bosnia’ (2007) 18 EJIL 4, 649.
Castets-Renard C., ‘Accountability of Algorithms in the GDPR and Beyond: A European
Legal Framework on Automated Decision-Making’ (2019) 30 Fordham Intell. Prop.
Media & Ent.L.J. 91.
Challen R., Denny J., Pitt M., et al., ‘Artificial Intelligence, Bias and Clinical Safety’ (2019)
28 BMJ Qual Saf 231.
Chan H.Y., ‘The Underappreciated Role of Advance Directives: How the Pandemic
Revitalises Advance Care Planning Actions’ (2020) 27(5) Eur J Health Law 451.
Channon M., ‘Automated and Electric Vehicles Act 2018: An Evaluation in Light of
Proactive Law and Regulatory Disconnect’ (2019) 10 European Journal of Law and
Technology 2, 1–36.
Chen D.L., ‘Judicial Analytics and the Great Transformation of American Law’ (2019) 27
Artificial Intelligence and Law 15–42.
Chen W.-M., Kim H., Yamaguchi H., ‘Renewable Energy in Eastern ASIA: Renewable
Energy Policy Review and Comparative SWOT Analysis for Promoting Renewable
Energy in Japan, South Korea, and Taiwan’ (2014) 74 Energy Policy 319–329.
Chengeta T., ‘Are Autonomous Weapon Systems the Subject of Article 36 of Additional
Protocol I to the Geneva Conventions?’ (2016) 23(1) UC Davis Journal of International
Law and Policy 65, 77.
Chengeta T., ‘Defining the Emerging Notion of “Meaningful Human Control”’ (2017)
49(3) NYU JILP 839.
Chiappetta A., ‘Hybrid Ports: The Role of IoT and Cyber Security in the Next Decade’
(2017) 2 Journal of Sustainable Development of Transport and Logistics 2.
Chiappetta A., ‘Toward Cyber Ports: A Geopolitical and Global Challenge’ (2017) 12
FormaMente 1.
Chowdhury S., Sumita U., Islam A., Bedja I., ‘Importance of Policy for Energy System
Transformation: Diffusion of PV Technology in Japan and Germany’ (2014) 68 Energy
Policy 285–293.
Coeckelbergh M., ‘Artificial Intelligence, Responsibility Attribution, and a Relational
Justification of Explainability’ (2020) 26 Science and Engineering Ethics 2051–2068.
Cohen T.H., Lowenkamp Ch.T., Hicks W.E., ‘Revalidating the Federal Pretrial
Risk Assessment Instrument (PTRA): A Research Summary’ (2018) 82 Federal
Probation 2.
Crawford K., ‘Time to Regulate AI that Interprets Human Emotions’ (2021) 592 Nature 167.
Crootof R., ‘War Torts: Accountability for Autonomous Weapons’ (2016) 164 UPaLRev.
Dalton-Brown S., ‘The Ethics of Medical AI and the Physician-Patient Relationship’ (2020)
29 CQHE 115–118.
Daming C.V., ‘When in Rome: Analyzing the Local Law and Custom Provision of the
Foreign Claims Act’ (2012) 39 Wash. U. J. L. & Pol’y 309.
Deeks A., ‘The Judicial Demand for Explainable Artificial Intelligence’ (2019) 119 Columbia
Law Review 7.
Dempsey P.S., ‘The Financial Performance of the Airline Industry Post-Deregulation’
(2008–2009) 45 HousLRev 421–485.
Devarapalli P., ‘Machine Learning to Machine Owning: Redefining the Copyright
Ownership from the Perspective of Australian, US, UK and EU Law’ (2018) 40
European Intellectual Property Review 11.
Duffy G., Tucker S.A., ‘Political Science: Artificial Intelligence Applications’ (1995) 13(1)
SocSciComputRev 1–20.
Duhan A., ‘Liability for Environmental Damage’ (2019) 3 MPEPIL 8.
Duran C.A., Cordova F.M., Yanine F., Carrillo E., ‘Fuzzy Knowledge to Detect
Imprecisions in Strategic Decision Making in a Smart Port’ (2020) 9 International
Journal of Advanced Trends in Computer Science and Engineering 3.
Duran C.A., Palominos F., Cordova F.M., ‘Applying Multi-Criteria Analysis in a Port
System’ (2017) 122 Procedia Computer Science, 478–485.
Feldman R.C., Aldana E., Stein K., ‘Artificial Intelligence in the Health Care Space: How
We Can Trust What We Cannot Know’ (2019) 30 Stan L & Pol’y Rev 399.
Fenrick W.J., ‘The Law Applicable to Targeting and Proportionality after Operation Allied
Force: A View from the Outside’ (2000) 3 Yearbook of International Humanitarian
Law 53.
Fukasaku Y., ‘Energy and Environment Policy Integration: The Case of Energy Conservation
Policies and Technologies in Japan’ (1995) 23(12) Energy Policy 1063–1076.
Garnier S., ‘Citizenship by Investment’ (2020) 41 Harvard International Review 1, 15–18.
Gaudreau J., ‘The Reservations to the Protocols Additional to the Geneva Conventions for
the protection of War Victims’ (2003) 849 IRRC 18.
Ginsburg J., ‘People Not Machines: Authorship and What It Means in the Berne Convention’
(2018) 49 International Review of Intellectual Property and Competition Law 131–135.
Giuffrida I., Treece T., ‘Keeping AI under Observation: Anticipated Impacts on Physicians’
Standard of Care’ (2020) 22 Tul J Tech & Intell Prop 111.
Gross O., ‘The Grave Breaches System and the Armed Conflict in the Former Yugoslavia’
(1995) 16 MichJIntL 3.
Grout A., Howard N., Coker R., Speakman E.M., ‘Guidelines, Law, and Governance:
Disconnects in the Global Control of Airline-Associated Infectious Diseases’ (2017)
17(4) LancetInfectDis e118–e122.
Grütters D.M., ‘NATO, International Organisations and Functional Immunity’ (2016)
13 International Organisations Law Review 211–222.
Guihot M., Matthew A.F., Suzor N.P., ‘Nudging Robots: Innovative Solutions to Regulate
Artificial Intelligence’ (2017) 20 Vand J Ent & Tech L 385.
Habli I., Lawton T., Porter Z., ‘Artificial Intelligence in Health Care: Accountability and
Safety’ (2020) 98 Bull World Health Organ 251.
Hamakawa Y., ‘Present Status of Solar Photovoltaic R&D Projects in Japan’ (1979)
86 SurfSci 444–461.
Harmon R., Junklewitz H., Sanchez I., ‘Robustness and Explainability of Artificial
Intelligence’ EUR 30040 EN (2020) EU Science Hub 11–13.
Heilig L., Voÿ S., ‘Information Systems in Seaports: A Categorization and Overview’ (2016)
18 Inf. Technol. Manage. 3, 179–201.
Henckaerts J.M., ‘The Grave Breaches Regime as Customary International Law’ (2009)
7 JICJ 4, 683.
Hill S., Manea A., ‘Protection of Civilians: A NATO Perspective’ (2018) 34 Utrecht Journal
of International and European Law 2.
Ho C.W.L., Soon D., Caals K., Kapur J., ‘Governance of Automated Image Analysis and
Artificial Intelligence Analytics in Healthcare’ (2019) 74 Clin Radiol 329.
Hoeren T., Niehoff M., ‘Artificial Intelligence in Medical Diagnoses and the Right to
Explanation’ (2018) 4 Eur Data Prot L Rev 308.
Hovi J., Sprinz D.F., Bang G., ‘Why the United States Did Not Become a Party to the
Kyoto Protocol: German, Norwegian, and US Perspectives’ (2012) 18(1) EurJIntRelat
129–150.
Howard J., ‘Artificial intelligence: Implications for the future of work’ (2019) 62 Am J Ind
Med., 917–926.
Introna L., Wood D., ‘Picturing Algorithmic Surveillance: The Politics of Facial Recognition
Systems’ (2002) 2 Surveill. Soc. 3–4.
Jayavardhana G., Rajkumar B., Marusica S., Palaniswami M., ‘Internet of Things (IoT):
A Vision, Architectural Elements, and Future Directions’ (2013) 29 Elsevier Future
Generation Computer Systems 7.
Jensen E.T., ‘Precautions against the Effects of Attacks in Urban Areas’ (2016) 98(1)
International Review of the Red Cross 147.
Jensen E.T., ‘The (Erroneous) Requirement for Human Judgment (and Error) in the Law
of Armed Conflict’ (2020) 96 International Law Studies 26.
Jevglevskaja N., ‘Weapons Review Obligation under Customary International Law’ (2018)
94 ILS 186.
Jiang F., Jiang Y., Zhi H., et al., ‘Artificial Intelligence in Healthcare: Past, Present and
Future’ (2017) 2 Stroke Vasc Neurol e000101. doi:10.1136/svn-2017-000101.
Jolley R., ‘Artificial Intelligence - Can Japan Lead the Way?’ (2015) 52(11) ACCJ
Journal 24.
Jones L.D., Golan D., Hanna S.A., Ramachandran M., ‘Artificial Intelligence, Machine
Learning and the Evolution of Healthcare: A Bright Future or Cause for Concern?’
(2018) 7 Bone Joint Res 223.
Kaloop M.R., Sayed M.A., Kim D., Kim E., ‘Movement Identification Model of Port
Container Crane Based on Structural Health Monitoring System’ (2014) 50 Structural
Engineering and Mechanics 1, 105–119.
Keitner Ch.I., ‘Categorizing Act by State Officials: Attribution and Responsibility in the
Law of Foreign Official Immunity’ (2016) 26 DukeJComp&IntlL. 451–478.
Kelsen H., ‘Collective and Individual Responsibility in International Law with Particular
Regard to the Punishment of War Criminals’ (1943) 31 Cal. L. Rev. 530.
Kelsen H., ‘The Legal Process and International Order’ (1935) The New Commonwealth
Research Bureau Publications Ser. A, No. 1.
Kenig-Witkowska M.M., ‘The Concept of Sustainable Development in the European
Union Policy and Law’ (2017) 1(1) JCULP 64–80.
Kim G., Lee H., Lee B., Lee J., Son W., ‘Fourth Industrial Revolution in Japan: Technology
to Address Social Challenges’ (2021) 11(2) WorldEconBrief 1–8.
Kosinski M.W., Wang Y., ‘Deep Neural Networks Are More Accurate than Humans at
Detecting Sexual Orientation from Facial Images’ (2018) 114 JPSP 246.
Kovacic M., ‘The Making of National Robot History in Japan: Monozukuri, Enculturation
and Cultural Lineage of Robots’ (2018) 50(4) CritAsianStud 572–590.
Kroll J.A., Huey J., Barocas S., Felten E.W., Reidenberg J.R., Robinson D.G., Yu H.,
‘Accountable Algorithms’ (2017) 165 U. Pa. L. Rev. 633.
Krupiy T., ‘Of Souls, Spirits and Ghosts: Transposing the Application of the Rules
of Targeting to Lethal Autonomous Robots’ (2015) 16(1) Melbourne Journal of
International Law 1.
Larsson S., ‘The Socio-Legal Relevance of Artificial Intelligence’ (2019) 103 Droit et
Societe 573.
Le H.M., Yassine A., Riadh M., ‘Scheduling of Lifting Vehicle and Quay Crane in
Automated Port Container Terminals’ (2012) 6 Intern. J. Intelligent Inform. Database
Syst. 516–531.
Lee J., ‘Overview and Perspectives on Japanese Manufacturing Strategies and Production
Practices in Machinery Industry’ (1997) 37(10) IntJMachToolsManufact 1449–1463.
Leloudas G., Haeck L., ‘Legal Aspects of Aviation Risks Management’ (2003)
28 AnnalsAirSpace.L 149–169.
Linz M., ‘Scenarios for the Aviation Industry: A Delphi-Based Analysis for 2025’ (2012) 22
JAirTranspManagement 28–35.
Liu H.-W., Lin Ch-F., Chen Y.-J., ‘Beyond State v Loomis: Artificial Intelligence,
Government Algorithmization and Accountability’ (2019) 27 International Journal of
Law and Information Technology 122–141.
Liu H.Y., ‘Categorization and Legality of Autonomous and Remote Weapons Systems’
(2012) 94 IRRC 627, 636.
Liu H.-Y., ‘From the Autonomy Framework towards Networks and Systems Approaches
for ‘Autonomous’ Weapons Systems’ (2019) 10 Journal of International Humanitarian
Legal Studies 89, 93.
Lu Ch.-t., Wetmore M., Przetak R., ‘Another Approach to Enhance Airline Safety: Using
Management Safety Tools’ (2006) 11(2) JAirTransp 113–139.
Lysaght T., Lim H.Y., Xafis V., et al., ‘AI-Assisted Decision-Making in Healthcare’ (2019)
11 ABR 299.
Macrae C., ‘Governing the Safety of Artificial Intelligence in Healthcare’ (2019) 28 BMJ
Qual Saf 495.
Madurai E.R., Shafiullah G.M., Raju K., Mudgal V., Arif M.T., Jamal T., Subramanian
S., Sriraja Balaguru V.S., Reddy K.S., Subramaniam U., ‘Covid-19: Impact Analysis
and Recommendations for Power Sector Operation’ (2020) 279 Applied Energy
115739.
McCutcheon J., ‘Curing the Authorless Void: Protecting Computer-Generated Works
Following ICETV and Phone Directories’ (2013) 1 Melbourne University Law Review
53–56.
McMillan Ch.J., ‘Going Global: Japanese Science-Based Strategies in the 1990s’ (1991)
12(2) ManageDecisEcon 171–181.
Mendes de Leon P., ‘The Fight against Terrorism Through Aviation: Data Protection
Versus Data Production’ (2006) 31 Air&SpaceL 320–330.
Mondoloni S., Rozen N., ‘Aircraft Trajectory Prediction and Synchronization for Air
Traffic Management Applications’ (2020) 119 ProgAerospSci 100640.
Mosechkin I., ‘Iskusstvennyj intellekt i ugolovnaja otvetstvennost’: problemy stanovlenija
novogo vida subyekta prestupleniya (Artificial Intelligence and Criminal Responsibility:
Problems of Formation of a New Type of Criminal Subject)’ (2019) 1 Vestnik Sankt-
Peterburgskogo universiteta. Pravo (Bulletin of St Petersburg University Law) 461–476.
Muhammedally S., ‘Minimizing Civilian Harm in Populated Areas: Lessons from Examining
ISAF and AMISOM Policies’ (2016) 98 IRRC 1.
Olofsson A., Öhman S., ‘Views of Risk in Sweden: Global Fatalism and Local Control – An
Empirical Investigation of Ulrich Beck’s Theory of New Risks’ (2007) 10(2) JRiskRes
177–196.
Othman M.K., Rahman N.S., Ismail A., Saharuddin H.A., ‘Factors Contributing to the
Imbalance of Cargo Flows in Malaysia Large-Scale Minor Ports Using a Fuzzy Analytical
Hierarchy Process (FAHP) Approach’ (2019) 35 The Asian Journal of Shipping and
Logistics 1, 13–23.
Pagallo U., ‘Algo-Rhythms and the Beat of the Legal Drum’ (2018) 31 PhilosTechnol
507–524.
Pellet A., ‘Can a State Commit a Crime? Definitely, Yes!’ (1999) 10(2) EJIL 425–434.
Powell J., ‘Trust Me, I’m a Chatbot: How Artificial Intelligence in Health Care Fails the
Turing Test’ (2019) 21(10) J Med Internet Res e16222 doi: 10.2196/16222.
Qian Sun T., Medaglia R., ‘Mapping the Challenges of Artificial Intelligence in the Public
Sector: Evidence from Public Healthcare’ (2019) 36(2) GovInfQ 368–383.
Quéguiner J.-F., ‘Precautions under the Law Governing the Conduct of Hostilities’ (2006)
88 International Review of the Red Cross 797.
Rahman S., ‘Artificial Intelligence in Electric Power Systems: A Survey of the Japanese
Industry’ (1993) 8(3) IEEE TransPowerSyst 1211–1218.
Rahmatian A., ‘Originality in UK Copyright Law: The Old “Skill and Labour” Doctrine
Under Pressure’ (2013) 44 International Review of Intellectual Property and
Competition Law 4–34.
Reddy S., Fox J., Purohit M.P., ‘Artificial Intelligence-Enabled Healthcare Delivery’ (2019)
112(1) J R Soc Med 22.
Reiling A.D., ‘Courts and Artificial Intelligence’ (2020) 11(2) IJCA 8.
Relly V.P., Aversa R., Akash B., Bucinell R., Corchado J., Apicella A., Tiberiu Petrescu
F.I., ‘History of Aviation – A Short Review’ (2017) 1(1) JAircrSpacecrTechnol 30–49.
Ricketson S., ‘People or Machines: The Berne Convention and the Changing Concept of
Authorship’ (1991–1992) 16 Columbia VLA Journal of Law & Arts.
Riet van de Reinder P., ‘An Overview and Appraisal of the Fifth Generation Computer
System Project’ (1993) 9(2) FutureGenerComputSyst 83–103.
Rong G., Mendez A., Assi E.B., Zhao B., Sawan M., ‘Artificial Intelligence in Healthcare:
Review and Prediction Case Studies’ (2020) 6 Engineering 291.
Rosert E., Sauer F., ‘Prohibiting Autonomous Weapons: Put Human Dignity First’ (2019)
10/3 Global Policy 371.
Sassòli M., ‘State Responsibility for Violations of International Humanitarian Law’ (2002)
84 IRRC 401–406.
Sassòli M., Quintin A., ‘Active and Passive Precautions in Air and Missile Warfare’ (2014)
44 Israel Yearbook on Human Rights 69.
Sauter W., ‘Proportionality in EU Law: A Balancing Act?’ (2013) 15 Cambridge Yearbook
of European Studies 439–466.
Schemmel M.L., de Regt B., ‘The European Court of Justice and the Environmental
Protection Policy of the European Community’ (1994) 17(1) Boston College
International and Comparative Law Review 53.
Scherer M.U., ‘Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies,
and Strategies’ (2016) 29 Harvard Journal of Law & Technology 353.
Schmidt R.A., Pioch E.A., ‘Pills by Post? German Retail Pharmacies and the Internet’
(2003) 105(9) British Food Journal 618–633.
Schmitt M.N., ‘Autonomous Weapon Systems and International Humanitarian Law: A
Reply to the Critics’ (2013) 2 Harvard National Security Journal Features 7.
Schmitt M.N., Thurnher J., ‘“Out of the Loop”: Autonomous Weapon Systems and the
Law of Armed Conflict’ (2013) 4 HNSJ 243.
Schmitz P., ‘On the Joint Use of Liability and Safety Regulation’ (2000) 20 International
Review of Law and Economics 3.
Schönberger D., ‘Artificial Intelligence in Healthcare: A Critical Analysis of the Legal and
Ethical Implications’ (2019) 27 Int J Law Info Tech 171.
Searle J., ‘Minds, Brains and Programs’ (1980) 3 Behavioral and Brain Sciences 417–424.
Sharkey N., ‘The Evitability of Autonomous Warfare’ (2012) 94 IRRC 788–789.
Shavell S., ‘The Judgement Proof Problem’ (1986) 6 International Review of Law and
Economics 45–58.
Shimpo F., ‘The Principal Japanese AI and Robot Law. Strategy and Research toward
Establishing Basic Principles’ (2018) 3 JLawInfoSyst 44–65.
Shinners L., Aggar C., Smith S.G.S., ‘Exploring Healthcare Professionals’ Understanding
and Experiences of Artificial Intelligence Technology Use in the Delivery of Healthcare:
An Integrative Review’ (2019) 26 J Health Inform 1.
Shneiderman B., ‘Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy’
(2020) 36/6 International Journal of Human–Computer Interaction 495–504.
Sokołowski J., Lewandowski P., Kiełczewska A., Bouzarovski S., ‘A Multidimensional Index
to Measure Energy Poverty: The Polish Case’ (2020) 15(2) EnergSourcePartB 92–112.
Sokołowski M.M., ‘Burning Out Coal Power Plants with the Industrial Emissions Directive’
(2018) 11(3) JWELB 260–269.
Sokołowski M.M., ‘Discovering the New Renewable Legal Order in Poland: With or
Without Wind?’ (2017) 106 Energy Policy 68–74.
Sokołowski M.M., ‘European Law on the Energy Communities: A Long Way to a Direct
Legal Framework’ (2018) 27(2) EurEnergyEnvironLawRev 60–70.
Sokołowski M.M., ‘Kolegialny model ustrojowy organu jako droga do pełnej niezależności
polskiego regulatora sektora energetycznego’ [Collective System Model of the
Administrative Body as a Way to Full Independence of the Polish Electricity Sector’s
Regulator] (2010) 13(1) Polityka Energetyczna [Energy Policy] 99–109.
Sokołowski M.M., ‘Regulation in the Covid-19 Pandemic and Post-Pandemic Times:
Day-Watchman Tackling the Novel Coronavirus’ TGPPP (2020). doi: 10.1108/
TG-07-2020-0142.
Sokołowski M.M., ‘Renewable and Citizen Energy Communities in the European Union:
How (Not) to Regulate Community Energy in National Laws and Policies’ (2020) 38(3)
JEnergyNatResourL 289–304.
Sokołowski M.M., ‘Renewable Energy Communities in the Law of the EU, Australia, and
New Zealand’ (2019) 28(2) EurEnergyEnvironLawRev 34–46.
Sokołowski M.M., ‘When Black Meets Green: A Review of the Four Pillars of India’s
Energy Policy’ (2019) 130 Energy Policy 60–68.
Su C., Xu Z., Pathak J., et al., ‘Deep Learning in Mental Health Outcome Research: A
Scoping Review’ (2020) 10 Transl Psychiatry 116.
Suau-Sanchez P., Voltes-Dorta A., Cugueró-Escofet N., ‘An Early Assessment of the
Impact of Covid-19 on Air Transport: Just Another Crisis or the End of Aviation as We
Know It?’ (2020) 86 JTranspGeogr 102749.
Surden H., ‘Artificial Intelligence and Law: An Overview’ (2019) 35 Ga St U L Rev
1305.
Sutrop M., ‘Should We Trust Artificial Intelligence?’ (2019) TRAMES 23(73/68), 4,
499–522.
Tatsuta M., ‘New Sunshine Project and New Trend of PV R&D Program in Japan’ (1996)
8(1–4) RenewEnergy 40–43.
Thomas B.T., ‘Autonomous Weapon Systems: The Anatomy of Autonomy and the Legality
of Lethality’ (2015) 37(1) Houston Journal of International Law 235.
Thurnher J., ‘The Law that Applies to Autonomous Weapon Systems’ (2013) 17 ASILI 4–10.
Tongzon J., Chang Y.-T., Lee S.-Y., ‘How Supply Chain Oriented is the Port Sector’
(2009) 122 Int J Prod Econ 1.
Tsakiridi S., ‘AI Ethics in the Post-GDPR World: Part 1’ (2020) 20(6) P. & D.P. 13–15.
Tsakiridi S., ‘AI Ethics in the Post-GDPR World: Part 2’ (2020) 20(7) P. & D.P. 6–10.
Tutt A., ‘An FDA for Algorithms’ (2016) 69 ALR 83–123.
Ufert F., ‘AI Regulation through the Lens of Fundamental Rights: How Well Does the
GDPR Address the Challenges Posed by AI?’ (2020) 5(2) European Papers 1087–1097.
Van Roy V., AI Watch – National Strategies on Artificial Intelligence: A European Perspective in
2019, EUR 30102 EN (Publications Office of the European Union 2020).
Veteto J., ‘The Alienability of Allegiance: An International Survey of Economic Citizenship
Laws’ (2014) 48 The International Lawyer 1, 79–103.
Vijay S., ‘The Indian Ocean and Smart Ports’ (2019) 14 Indian Foreign Affairs Journal 3.
Wachter S., ‘Data Protection in the Age of Big Data’ (2019) 2(1) Nature Electronics 6–7.
Wachter S., Mittelstadt B., Floridi L., ‘Transparent, Explainable, and Accountable AI for
Robotics’ (2017) 2 Sci. Robot 2–3.
Wadhwa V., ‘Laws and Ethics Can’t Keep Pace with Technology’ (2014) 4 Massachusetts
Institute of Technology: Technology Review 15.
Walerstein J., ‘Coping with Combat Claims: An Analysis of the Foreign Claims Act’s
Combat Exclusion’ (2009) 11 Cardozo Journal of Conflict Resolution 1, 319.
Watanabe C., ‘Identification of the Role of Renewable Energy: A View from Japan’s
Challenge: The New Sunshine Program’ (1995) 6(3) RenewEnergy 237–274.
Wierzbowski M., Galán Vioque R., Gamero Casado E., Grzywacz M., Sokołowski
M.M., ‘Challenges and Prospects of E-Governance in Poland and Spain’ (2021) 17(1)
ElectronGovIntlJ 1–26.
Wirtz B.W., Weyerer J.C., Geyer C., ‘Artificial Intelligence and the Public Sector –
Applications and Challenges’ (2019) 42(7) IntJPublAdmin 596–615.
Wittman D., ‘Prior Regulation versus Post Liability: The Choice between Input and
Output Monitoring’ (1977) 6 Journal of Legal Studies 193–212.
Yang Y., Zhong M., Yao H., Yu F., Fu X., Postolache O., ‘Internet of Things for Smart
Ports: Technologies and Challenges’ (2018) 21 IEEE Instrumentation and Measurement
Magazine 1.
Ye J., ‘The Role of Health Technology and Informatics in a Global Public Health
Emergency: Practices and Implications from the Covid-19 Pandemic’ (2020) 8(7) JMIR
MedInform e19866.
Zapusek T., ‘Artificial Intelligence in Medicine and Confidentiality of Data’ (2017) 11 Asia
Pacific J Health L & Ethics 105.
Zemanek K., ‘New Trends in the Enforcement of Erga Omnes Obligations’ (2000) 4 Max
Planck UNYB 1–5.
Index

Access to justice 10, 91, 133 Automated contract management 84


Accessibility of AI 33, 116 Automated decision-making 31, 74, 78
Accountability: AI ethics principles 9, 19, Automated facial recognition (AFR)
25, 33, 71, 73, 79, 204, 212; regulation 21 see also facial recognition
11, 13, 26–27, 35, 43, 89, 93, 175, 205 Automated vehicles 133
Administration of justice 74, 83, 85–88, Autonomous weapons systems (AWS) 156,
90, 96 164, 181, 190, 204
ADR see Alternative Dispute Resolution Autonomy: AI ethics principles 90, 171;
Advance healthcare 66, 68, 71, 76 decision-making 71, 75–76, 157;
Afghanistan: civilian harm and casualty human 19, 90; in healthcare 71, 75–76,
tracking 187; International Security 80–82; means of warfare 157–164; US
Assistance Force 187; NATO Department of Defense Policy 191–196,
intervention in 187; solatia and 205–209, 212
condolence payments 188 Aviation Safety Agency (EU) 117
AFR see automated facial recognition
AI judge 86, 92 Bar Standards Board (BSB) 90, 95, 223
Air Force (US) 169, 194, 206–208 Barristers 90
Air traffic management 116–117, 122, 124 Berne Convention for the Protection of
Algorithmic Accountability Bill 89 Literary and Artistic Works 104–105
Algorithmic decision-making 41, 169 Best interest 75
Algorithms: bias 25–26, 31, 34, 63–64; Bias: algorithmic 6–8, 31, 34, 63–64,
design and training 7, 101, 107, 159, 72, 92; gender 64; neutrality 6, 63;
169, 172, 174, 196; performance 121, prevention of 6, 89; racial 44; risk 71
169–171, 193, 202, 209; systems 6, Big data 31, 36, 103, 128, 139, 144, 148,
103, 149 150, 155
American Bar Association (ABA) 95 Biomedicine 79
Analytics (predictive) 38, 62, 83–85, 91, Biometric Information Privacy Act
116, 121 (US) 30
Anti-money laundering 58–59 Biometrics 8, 21–24, 28–31, 58, 61 ,64
Articles on Responsibility of States BRICS countries 49–50
for Internationally Wrongful Acts
(ARSIWA) 175–177, 183–185 Canada: Civil Resolution Tribunal 85;
Artificial General Intelligence 195, 198, Immigrant Investor Programme 54,
200, 203, 205 61–62
Augmenting tools 34, 67, 74 Carbon emission 128–129, 137
Authorship 102, 104–108, 111 CBI see Citizenship by Investment
Automated and Electric Vehicles Act 2018 CEPEJ see European Commission for the
(UK) 133–134 Efficiency of Justice
Automated container terminal 128 Chatbots 38, 60, 86, 92, 116
Chicago Convention see ICAO Data: Analytics 128, 155, 210; Collection
Convention on International Civil 31, 46, 50, 64; Personal 8–11, 23–25,
Aviation 27–33, 46, 49–50, 91, 95; protection 8,
China: copyright law 106, 111; Cyberspace 23–25, 28–34, 42, 90, 95, 132 see also
Administration (institution) 47; General Data Protection Regulation
investment and co-ordination 12, 13; Data Protection Authorities 28, 34
research and development 196; Wolters Decarbonisation 146–147, 154
Kluwer China Law & Reference 106 Deepfake 37–38, 41–44, 46, 51–52
Civil justice 86–88 Deep learning 68, 70–71, 82, 124, 138,
Civil Resolution Tribunal of British 149, 154
Columbia 85 Defense Advanced Research Projects
Climate-energy policy 140–145 Agency (DARPA) 102
Climate neutrality 154 Deranking technologies 38
Cloud computing 143, 145 Deregulation 152
Command-and-control 210 Digital technologies 9, 132–134,
COMPAS 86–88 142–144, 153
Compensation 133–134, 176–177, 186–189 Dignity 5, 42, 64, 90, 103, 183, 211–212
Consciousness 39, 51 Disclosure 31, 55, 93
Confidentiality 77, 80–81, 93–95 Discrimination 7–9, 34, 41, 63, 74, 89, 92,
Consent (legal) 9, 28, 30–33, 68, 80, 82 171, 201–202, 205, 212
Constitutional rights 44, 48 Disinformation 40, 42
Contract: analytics 83–85; drafting 84; Documents: Analysis 83–84; drafting 84
management 84; negotiation 59 Doctor-patient relationship 69, 71, 73,
Convention against Torture and Other 75–76, 80–82
Cruel, Inhuman or Degrading
Treatment or Punishment 182 End-of-life care 66–68
Convention on the Elimination of Energy: communities 147; efficiency
Discrimination Against Women 64 141–142, 144, 151; regulation 140, 151;
Convention on the Protection of Personal storage 147; tariffs 144
Data 89 England: Advance healthcare decision-
Convention on the Safety of United making 68–69, 80; NHS hospital 67
Nations and Associated Personnel 182 Equality 9, 42, 64, 90
Control: test of effective 185–187, 210; test EUROCONTROL see European
of overall 184 Organisation for the Safety of Air
Copyright, Designs and Patents Act 1988 Navigation
(UK) 106–108 European Commission 3–4, 34, 41–42, 49,
Correctional Offender Management 65, 77, 89, 117, 142
Profiling for Alternative Sanctions see European Commission for the Efficiency of
COMPAS Justice (CEPEJ) 89
Council of Bars and Law Societies of European Convention on Human Rights
Europe 94–95 (ECHR) 89
Court of Justice of the European Union European Data Protection Board (EDPB)
3, 108 28–29, 34
Courtal 85, 91 European Network and Information
COVID–19 pandemic 3, 13, 25, 32, 36, Security Agency (ENISA) 131, 134
51, 56, 68, 94–95, 115–118, 123–125, European Parliament 3, 14, 29, 40
129, 155, 213 European Parliament’s Science and
Crime predicting 45 Technology Options Assessment
Criminal activities 58 (STOA) 25, 139, 145, 150–151, 154
Criminal justice 83, 86–88 European Organisation for the Safety of
Cyber security 96, 131, 134, 137 Air Navigation (EUROCONTROL)
Cyber weapons 206 117, 126
Cyberspace 134, 209–212 European Union: Artificial Intelligence Act
Cyberspace Administration of China 47 155; Climate and Energy Package 141;
Declaration of Cooperation on Artificial House of Lords 78, 91
Intelligence 78; European Climate Human: control 43, 75, 159, 164–167,
Law 142; European Commission 190–195, 198, 200, 206–209; decision-
77; European Green Deal 142–143; making 7, 76, 112, 159, 192; error 84,
European Research Area 153; GDPR 129, 137, 175; judgement 165, 190–195,
see General Data Protection Regulation; 197–201, 206–209, 212; operator 96,
Lighthouse initiative 12, 13, 18; 122, 161, 191, 193, 199, 204, 207–211;
Recommendations to the Commission rights 40, 46–47, 51, 63–64, 80, 90
on Civil Law Rules on Robotics 108; Hybrid threats 41
Software Directive and the Database Hydrogen energy 145, 147, 153
Directive 107; White Paper see White
Paper on AI IATA see International Air Transport
European Union Agency for Law Association
Enforcement Cooperation (EUROPOL) ICAO see International Civil Aviation
40–41 Organisation
Explainability 3–9, 13, 19, 90–96, 213 ICAO Convention on International
Civil Aviation (Chicago Convention)
Facial recognition 6, 21–22, 25, 28, 30, 32, 114, 123
43–44, 58, 89 ICJ see International Court of Justice
Facial Recognition and Biometric ICRC see International Committee of the
Technology Moratorium Bill 89 Red Cross
Fake news 46 ICTY see International Criminal Tribunal
Federation of European Risk Management for the Former Yugoslavia
Associations (FERMA) 116, 120–125 IEEE see Institute of Electrical and
Fifth Generation Computer System 149 Electronics Engineers Standards
Foreign Claims Act (US) 188–189 Association
Foreseeability 135–136 India: citizenship 61; Punjab National Bank
France: predictive analytics in 91; statement 61; Supreme Court 86, 92
on military objectives 173 Inequality 26, 63
Fundamental rights 3, 9, 19, 35, 42, Information: asymmetries 175; client,
64, 88–96, 213–214; Agency 21–22; confidentiality 77, 80–81, 93–95;
Equality 42, 64, 90; respect for 42, 92 gathering 62, 113, 120, 128, 158
Inland seaport 129, 137
General Data Protection Regulation Institute of Electrical and Electronics
(GDPR) 7–11, 23–25, 28, 30–34, 74, Engineers (IEEE) Standards
78, 80, 89–95 Association 93
Germany: healthcare 17; use of AI in Insurance 14, 96, 126, 130, 132–133,
61–62, 64, 181; use of uncontrolled 136–137
weapons 181 International Air Transport Association
governance (IATA) 116–121, 126
Greenhouse gases emission 128, 139, International Civil Aviation Organisation
141–142, 146, 148, 154 (ICAO) 114, 120, 123–126
Group of Governmental Experts on Lethal International Committee of the Red Cross
Autonomous Weapons Systems (GGE) (ICRC) 163, 165–166, 168–169, 173,
156, 159, 162, 164–168, 170–172, 193–194
174–175 International Court of Justice (ICJ) 156,
177–179, 184–185
Harm Assessment Risk Tool (HART) International Criminal Tribunal for
86, 88 the Former Yugoslavia (ICTY) 163,
Healthcare: advance 66, 68, 71, 76; AI in 166, 184
66–71, 73, 82; decision-making 66–69, International Organisation for
71–82; future treatment 79–82; provider Standardisation (ISO) 33
68–69, 72, 74–75, 81–82 International Ship and Port Facility
Hostilities, effects of 176, 186–189, 194 Security (ISPS) 132, 137
internationally wrongful acts 175–179, 183 Machine learning 6, 40, 56–63, 70, 73, 82,
Internet Courts, China 106 89, 92, 100, 108, 112, 120, 124, 139,
Internet of Things (IoT) 127–129, 159, 196
138–139, 143–145, 148, 150, 154, 163 Medicine 6
Investigation 54, 63, 95, 113 Military advantage 166, 169–170, 202
Iraq: claims 188; solatia and condolence Military operation 166, 169, 171, 173,
payments 188–189 175–176, 181–191, 205, 210
ISO see International Organisation for Ministry of Science and Technology of the
Standardisation People’s Republic of China (MOST)
Israel: Aeronautics Ltd 184; Harpy 45–46
system 178, 199–201; transfer of Money laundering 54–55, 58–59, 64–65
weapons 185
National Health Service (UK) 67, 78
Japan: AI Strategy 2019 150; climate- Navigation 117, 123–124, 129
energy policy 145–150; Energy Supply Non-discrimination 16, 42, 49, 64,
Security Expert 149; Fukushima 89–90
nuclear accident 146; monozukuri North Atlantic Treaty Organization
139; Moonlight Project 145; New (NATO) 166, 176, 186–187
Energy and Industrial Technology
Development Organization (NEDO) Obligations: erga omnes 177–179,
150; New Sunshine Program 145–146; 181–182; to ensure respect 179, 185
photovoltaics 145–147; Society 5.0 Offensive purposes 191, 212
145, 147; Sunshine Project 145; Omissions 175
Village Energy Management System Online hearings 91
(VEMS) 147 Online tribunals 85
Judge analytics (profiling) 84–85, 91–92 Organisation for Economic Co-operation
Justice: access to 10, 91, 133; and Development (OECD) 49, 58,
administration of 74, 83, 85–88, 90, 90–91
96; civil 83, 85–86; criminal 83, 85–88; Originality 106–108, 110–112
system 86 Outsourcing 95, 183

Law enforcement 21, 28–29, 40, 44, 57 Patient: care 66–67, 73, 78–80; doctor-
Law firms 84 patient relationship 69, 71, 73, 75–76,
Law of Armed Conflict (LOAC) 156, 158, 80–82; end-of-life care 66–68
162–170, 174–183, 189, 194, 201–204 Pattern 6, 8, 59–60, 67, 70, 84–85, 103,
Lawful objectives 163, 166, 202 108, 120, 130, 144, 155, 159, 161
Lawful measures 25, 31, 179 Personal data 8, 9, 11, 23–24, 27–33, 46,
LAWS see Lethal Autonomous Weapons 49, 95
Systems Phishing 36, 39–40, 51
Legal: Advisors 159, 161; Aid 91; Piracy (copyright) 99
Framework 49, 64, 77, 213; Practice Police 29–30, 55, 57
83–88, 90; professional privilege; Pre-trial 86–87, 96
research 83–84; sector 18, 90, 91, 96; Pre-Trial Risk Assessment Instrument
services 95 (PTRA) 86–87
Lethal Autonomous Weapons Systems 156, Predictive analytics 38, 62, 83–85, 91,
180, 190 116, 121
LexisNexis 84 Principle of distinction 161, 170, 173, 190,
Liability: fault-based 133; product 10, 201–202, 205
132, 135; strict / risk-based 133, 136; Privacy 21, 25, 28–35, 42, 44–45, 77,
vicarious 133, 136 80–81, 89–92, 213
Litigation 55, 85, 93, 133 Privilege: travel 61; legal professional
Litigation analytics 84 93–95
Long-Range Anti-Ship Missile (LRASM) Proportionality 16–17, 166–167, 169,
199–200 190, 194
Psychological: security 36–37, 46; warfare STOA see European Parliament’s
37–39 Science and Technology Options
Public interest 30, 32, 177 Assessment
Public safety assessment (PSA) 86–87 Stockholm International Peace Research
Institute (SIPRI) 159, 165
Racial bias 8, 24, 41, 44 Stop Killer Robots Campaign 192–193,
Ranking technologies 38–39 195, 200, 202–205, 211–212
Real-time 27, 60, 121, 131, 139, 144, Stuxnet 209
155, 203 Supreme Court of India 86, 92
Reasoning 18, 62, 104, 149, 192, 201 Supreme Court of the United States 198
Regulatory: analytics 213; compliance Surveillance 60, 120, 185
5–7, 25, 30–34, 54–59, 78, 89, 95, 115, Sustainable development 52, 79, 129,
136, 159, 164, 168, 180, 201–203, 211; 140, 147
framework 9–11, 41, 77, 80, 122–125, Swarm 196
134, 214; precautions 200 Sweden: Analytic Imaging Diagnostic
Reliability 32, 71–75, 80, 91, 192 Arena 79; healthcare in 77–79; National
Renewable energy sources 139, 141–147, Strategy for AI 78; statement on lethal
153–155 autonomous weapon systems 159;
Responsibility gap 201, 203–204, 212 weapons review 179
Risk: assessment 35, 64, 86–87, 143; in Syria: attacks in 181; Pantsir-S1 SAM
healthcare 70–82; management 115, batteries 199
118–124, 129
Robotics 40–41, 78, 108, 112, 138–139, Tax 55–56, 64, 142
143, 149–151, 154 Technology: audio-visual 42, 46, 91;
Rule of attribution 183 blockchain 147; communications 50,
Rule of law 42, 90–91, 96 96; facial recognition 21–28, 31–35;
Rules of engagement 191 information 130, 149
Russia: AI strategy 158, 212; legal response Templates 21–22, 24, 26, 30, 32,
to AI 45–49; Ministry of Defence 47 34–35, 111
Third-party suppliers 131
Safeguards 8, 33, 48, 73–74, 77–81, 91, 96, Tort 132–134, 136 see also war torts
175, 180, 209, 214 Trade-off 26–27, 208
Safety standards 118, 123, 136–137 Trade secrets 55
SARP see Standards and Recommended Transparency (principle) 3–13, 19, 25–33,
Practices 42, 49, 57, 79, 88–90, 92–93, 95–96,
Seaport activities 127 187, 213
Secrecy 55, 95 Trial 41, 86–87, 89, 91–92, 109
Sensitive: data 24, 96; information
86–87, 93 Unintended engagements (military) 161,
Sentencing 74 172, 191
Smart manufacturing 144 United Kingdom (UK): AI Committee of
Smart port ecosystem 129–134 the House of Lords 78; Centre for Data
Smart ships 129, 133 Ethics and Innovation 78; Copyright,
Social engineering 40 Designs and Patents Act 1988, CDPA
Social media 101 page 106–108; Immigrant Investor
Soft law 33 Programme 54; use of uncontrolled
Solatia payments 188 weapons 181; Taranis combat drone 178
Solicitors 90, 95–96 United Nations: Convention against
Solicitors Regulations Authority (SRA) for Torture and Other Cruel, Inhuman or
England and Wales 90, 95–96 Degrading Treatment or Punishment
Standards and Recommended Practices 182; Convention on the Elimination
(SARP) 124 of Discrimination Against Women 64;
State: Liability 176, 186; Responsibility Convention on the Safety of United
162, 176–177, 182–183, 185, 189, 204 Nations and Associated Personnel 182;
Framework Convention on Climate Third Offset Strategy 196, 212; weapons
Change 141–142; High Commissioner review 179
for Human Rights 36; Register of Universal jurisdiction 181–182
Conventional Arms 185; Sustainable Unlawful acts 30, 132, 185, 204
Development Goals 52, 147 Unmanned aerial vehicle (UAV) 36, 46
United States (US): Air Force 169, 194,
206–208; Army 207; Congress 44, 158; Virtual agents 116
copyright law 105, 109; Copyright Virtual reality and AI 46, 60
Office 105; Defense Science Board 192, Voice recognition 70, 115
208; Foreign Claims Act 188; Immigrant
Investor Programme 54; Legal Reviews War torts 186
of Weapons and Cyber Capabilities Weapons review 180
180; National Cyber Security Centre Westlaw 84
96; National Defense Strategy 196; White Paper on AI 3–4, 7–19, 41, 78, 89,
Operational Law Handbook 188; 93, 116
Phalanx systems 178; policy statements Wireless 128–129
197–198; solatia and condolence World Intellectual Property Organisation
payments 188–189; Supreme Court 198; (WIPO) 99, 104–106, 112
