REGULATING ARTIFICIAL
INTELLIGENCE IN INDUSTRY
Edited by
Damian M. Bielicki
Damian M. Bielicki is Senior Lecturer in Law and Director of the Law &
Technology Research Group at Kingston University London. He is also a
lecturer in Space Law and Cyber Law at Birkbeck, University of London. He
is a Senior Fellow of the Higher Education Academy, a member of the
International Law Association, and a member of the International Institute of
Space Law.
Routledge Research in the Law of Emerging Technologies
Contributors vii
Preface x
List of acronyms and abbreviations xii
PART I
Horizontal AI applications 1
Summary 213
Bibliography 215
Index 234
Contributors
1 For the evolution of the term and some of the technologies see, for example: W. Barfield and U. Pagallo, Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018), 7–38; M. Corrales, M. Fenwick, N. Forgo (eds), Robotics, AI and the Future of Law (Springer 2018), 1–7.
2 J. Kaplan, Artificial Intelligence: What Everyone Needs to Know (OUP 2016), 1.
3 For instance, Legg and Hutter identified over 70 definitions of this term and its variations. See: S. Legg
and M. Hutter, 'A Collection of Definitions of Intelligence' (arXiv, Technical Report IDSIA-07-07, 15 June 2007) <https://arxiv.org/pdf/0706.3639.pdf> accessed 27 May 2021.
4 This definition is based on Microsoft’s publication in which it is further specified that AI is ‘making
predictions using previously gathered data, and learning from errors in those predictions in order to
generate newer, more accurate predictions about how to behave in the future'. See T. van Kraay, 'What
is Artificial Intelligence?’ (Microsoft Azure, 9 August 2018) <https://azure.microsoft.com/en-gb/blog/
what-is-artificial-intelligence/#:~:text=%E2%80%9CThe%20ability%20of%20a%20digital,displayed
%20by%20humans.%E2%80%9D%20%E2%80%93%20Wikipedia> accessed 27 May 2021.
Preface xi
behaviour.5 This definition seems particularly appropriate to this book in that it
focuses on what AI systems are doing nowadays in the different sectors covered
in this publication.
This book is divided into two main parts. Part I includes sectors with hori-
zontal AI applications, and Part II concerns sectors with vertical AI applica-
tions. Accordingly, Part I focuses on AI applications that are generally more
acquirable because the technology can fit into different disciplines and fulfil
a variety of similar needs. The sectors that use it face similar types of legal
problems, including data protection, transparency, explainability, accountabil-
ity, and others. On the other hand, Part II focuses on AI that is applied to a specific problem in a specific industry and is highly optimised for that industry.
A vertical AI is therefore designed to solve very targeted needs in a particular
sector.
Any work that sought to cover all the sectors using AI would run to many
volumes. Therefore, this book is not intended to be comprehensive, and the
industries covered in this book do not encompass every industry currently
using AI. Nevertheless, it provides a solid overview of how AI is being used
and regulated across a wide range of sectors, including aviation, energy, gov-
ernment, healthcare, legal, maritime, military, music, security, supply chain,
and others. Wherever possible, it includes the impact of the COVID-19 pan-
demic on the use of AI in industry and on corresponding regulatory matters.
Finally, it offers a set of recommendations for optimal regulatory interventions.
This book collects the efforts of a diverse group of scholars and practitioners,
each of whom is credited in the list of contributors.
Damian M. Bielicki
5 The second part of this definition comes from the publication by Luger and Stubblefield who defined
this term as ‘the branch of computer science that is concerned with the automation of intelligent
behaviour’. See: G.F. Luger and W.A. Stubblefield, Artificial Intelligence: Structures and Strategies for Com-
plex Problem Solving (6th ed., Pearson 2008), 1.
List of acronyms and abbreviations
Horizontal AI applications
1 Artificial intelligence and its
regulation in the European Union
Gauri Sinha and Rupert Dunbar
Background
There is an acknowledgement that the European Union (EU) is behind key
competitors in the race to attract, create and nurture AI companies and invest-
ment.1 On this basis, the European Commission has published a White Paper,
and a public consultation process on this has now been undertaken.2 Legislative proposals are expected to follow imminently.3
With some uncertainty concerning the precise future regulation of AI in
the EU, this chapter seeks nonetheless to establish from the White Paper and
consultation process what indication there is for companies on key points of
concern: transparency, explainability and accountability. It contextualises these
concepts and highlights the challenges inherent in them. These, of course, are
challenges common to all jurisdictions.
Problems more specific to the EU also remain, not least an underlying scepticism concerning AI not just among the EU citizenry, but also among public authorities and businesses, with 90% of the consultation respondents expressing concern that AI may breach fundamental rights.4 There have also been interesting developments: the European Parliament has published its own positions on AI, and nationalism has emerged both in the consultation and in the wake of the COVID-19 pandemic, providing a far from unified picture.
The reality is that regulation on the ground is likely to be years away.
Given that action through legislation (‘positive harmonisation’) is far from
immediate, it is important to draw attention to the Court of Justice of the
European Union’s probable role in developing this area through its case law
1 White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, COM (2020)
65 Final (Brussels, 19 February 2020), 4.
2 Public Consultation on AI White Paper: Final Report, European Commission, DG for Communica-
tions Networks, Content and Technology (November 2020).
3 European Parliament (Press Releases), ‘Parliament leads the way on first set of EU rules for Arti-
ficial Intelligence’ (20 October 2020) <https://www.europarl.europa.eu/news/en/press-room
/20201016IPR89544/parliament-leads-the-way-on-first-set-of-eu-rules-for-artificial-intelligence>
accessed 27 May 2021.
4 Public Consultation on AI (n 2), 7.
DOI: 10.4324/9781003246503-2
4 Regulating artificial intelligence in industry
(so-called ‘negative harmonisation’). Overall, companies will be encouraged
by the Court’s mood music, even if the specific tune is difficult to identify at
this stage.
Ultimately, concerning the future of regulation for AI in the EU, it must
be said that whilst there is ambition to achieve much in the field, the collec-
tive vision and philosophy for what AI can do, and how it can benefit society,
remains fragmented.
Introduction
The White Paper defines AI as ‘a collection of technologies that combine
data, algorithms and computing power’.5 Its applications are multiple and the
Commission recognises that, in a positive light, it could improve healthcare,
help combat climate change, increase the efficiency of production and improve
security (amongst others).6 But in order to achieve these ambitions it needs to
overcome certain challenges, including the fact that citizens are anxious con-
cerning AI’s capacity to do them harm, both intentionally and unintentionally,
and that businesses are seeking legal certainty. A unified way forward is sought.
How can this be achieved? The answer is not straightforward and depends
on a number of subjective factors that have their roots in trust and ethics. For
the EU there is an increasing realisation that making decisions quickly or more
efficiently is not what defines the success of AI. What is more important is how
the benefits are realised. Are we ready to leave decisions that impact human
lives in the hands of machines? Perhaps the answer to this ‘readiness’ lies some-
where in the corridors of trust and ethics, and it is to these concepts that the
chapter now turns.
In April 2019, the European Commission High Level Expert Group on AI
adopted the Ethics Guidelines for Trustworthy AI, stressing that human beings
will only be able to confidently and fully reap the benefits of AI if they can trust
the technology.7 The broad principle that AI initiatives should not be realised
if they entail compromising ethics is abundantly clear.8 Compliance with ethics, however, is a complexity of a different order that merits discussion. The 'creative'
interpretation of ethical principles should not be used as a shield to achieve
‘box-ticking’ compliance where corporations only seem to be complying.9
The relationship between ethics and law adds a further layer of intricacy. As
highlighted by the European Data Protection Supervisor, ethics in the EU is
5 White Paper (n 1), 2. See also A Definition of AI: Main Capabilities and Disciplines, High-Level Expert
Group on Artificial Intelligence (Independent), European Commission B-1049 (Brussels, 8 April 2019).
6 White Paper (n 1), 1.
7 Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence (Independent),
European Commission B-1049 (Brussels, 8 April 2019).
8 S. Tsakiridi, 'AI ethics in the post-GDPR world: Part 1' (2020) P. & D.P. 20(6), 13–15.
9 Compliance in several other areas, for example, financial crime laws, has been reduced to ‘box-ticking’
where organisations seem to be complying to avoid regulatory action.
AI and its regulation in the EU 5
not conceived as an alternative to compliance with the law, but as the under-
pinning values that protect human dignity and freedom.10
What follows in this chapter is an attempt to unravel the complexities
behind three ethical principles that form the cornerstone of an effective AI
model—transparency, explainability and accountability.
Transparency
In the context of AI, transparency indicates the capability to describe, inspect
and reproduce the mechanisms through which AI systems make decisions.
Transparency, however, is closely linked to trust, which is also about being
explicit and open about choices and decisions concerning data sources and
development processes and stakeholders.11 In essence, it would mean having
a complete view of the system on three levels.12 The first is the implementation level, where the AI model acts on the data that it is fed to produce a known output, including the technical principles of the model and the associated parameters. The Commission considers this as the standard 'white-box'
model, in contrast to the ‘black-box’ model where this principle is unknown.
Second, at the specification level, all the information that resulted in the imple-
mentation, such as objectives, tasks and relevant datasets, should be open and
known. The third level is interpretability, which represents the understanding
of the underlying mechanisms of the model. For example, what are the logi-
cal principles behind the processing of data and what is the rationale behind a
certain output? The Commission believes that these questions are the hardest
to answer and the third level of transparency is not achieved in current AI sys-
tems.13 This is particularly complex as the third level of transparency is closely
linked to fairness in decisions, often impacting human lives. Fairness itself is a
subjective and contextual concept, influenced by multiple social, cultural and
legal factors.
With the level of subjectivity present, it is crucial to remember that AI
models are not designed by machines and potentially reflect the biases and
prejudices of the designers choosing the features, metrics and structures of
a model.14 The concept of trust also needs further clarification to navigate
through the effectiveness of AI. Trust is primarily used to speak about the trust
or distrust of individuals and institutions who are responsible for developing,
deploying or using AI. However, trust could also be directed at those who are
Explainability
Whilst transparency is about understanding how an AI model makes its deci-
sions, explainability goes a step further. It adds the need for justification, which
in the AI context would require an explanation as to why a particular decision
was reached. For example, if a loan application is rejected, a consumer may
rightly ask for the reasons or justification behind the rejection. Resolving these
issues definitively is especially pressing as there is ongoing academic debate
concerning the current extent of legal requirements under the General Data
Protection Regulation ('GDPR') in this regard.21
Explainability has also been linked to responsibility, in that AI experts should
know what they are doing, and should be able and willing to communicate,
explain and give reasons for the impact that AI models are having on human
and non-human subjects. Reading between the lines, morality, again, is closely
linked, including the obligation to gain greater awareness of unintended conse-
quences and the moral significance of what the models do. This awareness has
been seen as an essential requirement for the effective working of AI.22
Explainability could take various forms, the most obvious being the justifi-
cation of key features that come into play in AI decisions. Conversely, explain-
ing the requirements that are not met may also help in explaining a decision,
particularly in customer-facing scenarios such as loan applications, where it
might be explained why the customer has not met certain criteria to be success-
ful.23 Regardless of the justification provided, the risk of subjectivity reflecting
human biases still creeps in. It is the Commission’s view that bias and dis-
crimination are inherent risks of any societal or economic activity that involves
human decision-making. However, AI amplifies this risk by impacting a large
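The 'unmet criteria' style of explanation described above can be sketched as a simple rule check. The sketch below is purely illustrative: the criteria names and thresholds are invented for the example and are not drawn from the chapter, the GDPR, or any real lender.

```python
# Hypothetical loan criteria: each maps a human-readable description to a test.
LOAN_CRITERIA = {
    "minimum annual income of 20,000": lambda a: a["income"] >= 20_000,
    "no defaults in the last 3 years": lambda a: a["recent_defaults"] == 0,
    "credit score of at least 600": lambda a: a["credit_score"] >= 600,
}

def explain_rejection(applicant: dict) -> list[str]:
    """Return the human-readable criteria the applicant failed to meet."""
    return [name for name, test in LOAN_CRITERIA.items() if not test(applicant)]

applicant = {"income": 18_000, "recent_defaults": 0, "credit_score": 640}
print(explain_rejection(applicant))  # ['minimum annual income of 20,000']
```

A rejected applicant then receives the failed criteria rather than a bare refusal, which is the customer-facing form of explainability that the chapter contrasts with justifying the key features driving a model's decision.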
Accountability
Transparency, explainability and accountability may often be used as syno-
nyms, but it is important to highlight the fundamental differences between
them. Whilst transparency and explainability refer to the explanation of deci-
sions before the AI model has taken its decision, accountability refers to the
ability to explain the role of an AI model after the action is done, or not done.39
In simple terms, the goal is to determine where the liability lies.
developers of AI may be best placed to address risks arising from the devel-
opment phase, [but] their ability to control risks during the use phase
may be more limited. In that case, the deployer should be subject to the
relevant obligation.52
This is far from developed and appears to anticipate that some potential influ-
ence is not sufficient to carry forward any obligation for the developer into
the deployment stage. It may be preferable to treat the endeavour as a shared
enterprise, rather than a baton (or hot potato) to be passed. This is the approach
for issues of liability in the proposals (where a developer could theoretically be
accountable for the harm caused in the given example). This, it is submitted,
confuses matters by envisioning single party obligations with multiple party
liabilities. This is in part attributable to the fact that issues of liability, as dis-
cussed above, also rest within Member State rules.
46 Tsakiridi (n 17), 8.
47 Coeckelbergh (n 22), 2055.
48 White Paper (n 1), 10.
49 ibid., 25.
50 Tsakiridi (n 17), 9.
51 White Paper (n 1), 22.
52 ibid.
The level of human oversight which will be required is not yet fully evident.
The White Paper explores—non-exhaustively—review prior to deployment,
review after AI action, monitoring throughout or integrating operational con-
straints (e.g. automatic stop).53 The EU Parliament has been clearer in stating
that for high-risk AI applications human oversight should be possible at all
stages.54
53 ibid., 21.
54 European Parliament (n 3).
55 JRC Technical Reports, ‘AI Watch TES analysis of AI Worldwide Ecosystem in 2009-2018’ EUR
30109 EN (2020) EU Science Hub, 5.
56 90% considered this very important, Public Consultation on AI (n 2), 5.
57 88% considered this very important, ibid.
58 87% considered this very important, ibid.
59 White Paper (n 1), 6.
60 ibid., 5.
61 J. D’Onfro, ‘AI 50: America’s Most Promising Artificial Intelligence Companies’ (Forbes, 17 Sep-
tember 2019) <https://www.forbes.com/sites/jilliandonfro/2019/09/17/ai-50-americas-most
-promising-artificial-intelligence-companies/?sh=735e2a86565c> accessed 27 May 2021—The
remaining breakdown is Massachusetts (5) (Boston (4) and Cambridge (1)); New York City, NY (5); Seattle, WA (5); Ann Arbor, MI (1); Austin, TX (1); Chicago, IL (1).
Whilst China’s experience is currently more diffuse, it too plans to develop a
central hub model.62 The reluctance to pool talent and focus innovation in the
EU ‘lighthouse’ (reflected by 86% suggesting it was very important to support
current centres and networks with ‘only’ 64% suggesting a lighthouse scheme
was very important63) was perhaps to be expected given national self-interest,
but it does mean that a more organic and less turbo-charged ascent may result.
There is also a broader concern. One needs to be tentative in drawing
conclusions too soon from the coronavirus pandemic, but it is fair to say that
even between nations and institutions within the EU the experience has not
always been edifying concerning united action. The coordinated day planned
to simultaneously begin mass COVID vaccinations across all EU Member
States, in what the EU termed ‘a touching moment of unity’, was undermined
by some nations acting unilaterally in advance.64 The Commission’s threat to
invoke Article 16 of the Northern Ireland protocol to prevent vaccines from
arriving in the UK via the ‘back door’ was met with incredulity in both the
Northern Ireland and the Republic of Ireland (an EU Member State that was
frustrated by the lack of any consultation).65 The Commission’s overseeing of a
slow vaccination rollout has also been acknowledged by Commission President
von der Leyen and has led some to call for the President’s resignation.66 Any
perceived shortcomings in EU leadership concerning the crisis could have
spill-over effects, impacting on coordinated EU projects.
62 This would comprise nine cities from the Guangdong province and two Special Administrative
Regions (Hong Kong and Macau), which collectively form the ‘Greater Bay Area’—H. Bork,‘Made
in China:The Pearl River Delta Area is experiencing a growth spurt in ambition and tech’ (Roland
Berger, 5.12.2019) <https://www.rolandberger.com/en/Insights/Publications/China's-government
-plan-for-its-own-Silicon-Valley.html> accessed 27 May 2021.
63 Public Consultation on AI (n 2), 5.
64 M. Eddy,‘Germany and Hungary Begin Vaccinations a Day Early’ (The New York Times, 26 December
2020) <https://www.nytimes.com/2020/12/26/world/Germany-Hungary-vaccinations-begin
.html> accessed 27 May 2021.
65 S. Harrison, 'EU Vaccine Export Row: Irish Government "In Talks" with European Commission: Analysis' (BBC, 9 February 2021) <https://www.bbc.co.uk/news/world-europe-55986492>
accessed 27 May 2021.
66 A. Zorzut, ‘EU president faces fresh calls to resign over ‘disastrous’ Covid vaccine programme’ (The
New European, 16 April 2021) <https://www.theneweuropean.co.uk/brexit-news/europe-news/vdl
-faces-calls-to-resign-over-vaccine-programme-7903690> accessed 27 May 2021.
voluntarily opting into the scheme and would then receive a ‘quality label’ for
their AI applications as a result.67
‘High risk’ would be defined based on cumulative criteria. The first concerns
the nature of the sector and whether it is one where ‘given the characteristics of
the activities typically undertaken, significant risks can be expected to occur’.68
For the sake of clarity, it was proposed that an exhaustive list be provided69 and
that this be periodically reviewed. The second cumulative criterion requires that
the activity also be one in which the AI’s specific use is such that ‘significant
risks are likely to arise’.70 This, it is suggested, could be based on the impact
likely to result, particularly death, injury and immaterial damage, legal effects
or similar, and outcomes that cannot reasonably be avoided by subjects.71
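The cumulative test described above is, in effect, a conjunction: an application is 'high risk' only if the sector appears on the proposed exhaustive list and the specific use is one in which significant risks are likely to arise. A minimal sketch, with invented placeholder sectors (the White Paper leaves the actual list to be specified):

```python
# Hypothetical stand-in for the exhaustive, periodically reviewed sector list.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "policing"}

def is_high_risk(sector: str, significant_risk_likely: bool) -> bool:
    """Both cumulative criteria must be met: a listed sector AND a use in
    which significant risks (death, injury, legal effects, ...) are likely."""
    return sector in HIGH_RISK_SECTORS and significant_risk_likely

# A listed sector alone is not enough, e.g. a hospital staff-rota tool:
print(is_high_risk("healthcare", significant_risk_likely=False))  # False
print(is_high_risk("healthcare", significant_risk_likely=True))   # True
```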
It might be thought that such a distinction between high-risk (regulated)
and other (unregulated) applications might form a secure basis around which to
structure future planning (or perhaps a chapter in an edited collection discuss-
ing future planning). However, only 43% of respondents favoured limiting the
regulations to high-risk applications in the proposal.72 There was also a lack of
engagement from the public consultation concerning whether the definition
of ‘risk’ was a satisfactory one.73 Accordingly, this aspect of the proposal seems
not to rest on especially solid ground.
The Court
The prospect of an extended period before the specific AI legislation is passed raises the question of what the interim shape of 'regulation' will be. Whilst
internet buying may have certain advantages, such as the ability to place
the order from home or the office, without the need to go out, and to
have time to think about the questions to ask the pharmacists, and these
advantages must be taken into account.90
87 Beck notes that the Court, too, is drawn to areas of political fashion. See: G. Beck, The Legal Reasoning
of the Court of Justice of the EU (Hart Publishing 2012), 390.
88 Schemmel and de Regt note that ‘few areas of [EU] law and policy that have been shaped and influenced
more positively by the jurisprudence of the ECJ than the area of environmental protection’, M.L Schem-
mel, B. de Regt, ‘The European Court of Justice and the Environmental Protection Policy of the
European Community’ (1994) 17(1) Boston College International and Comparative Law Review, 53,
54. See similarly S. Bell, D. McGillivray, O. Pedersen, Environmental Law (8th ed., OUP 2013), 225;
E. Paunio, Legal Certainty in Multilingual EU Law: Language, Discourse and Reasoning at the European
Court of Justice (Ashgate Publishing 2013), 87–94. More proactive efforts to protect the consumer
in more recent case law can be seen in Joined Cases C-402/07 and C-432/07 Sturgeon and Oth-
ers [2009] ECR I-10923 and Case C-497/13 Froukje Faber v Autobedrijf Hazet Ochten BV [2015]
ECLI:EU:C:2015:357. Historically consumers had typically been expected to be reasonably circum-
spect and been afforded less protection, e.g. Case C-470/93 Verein gegen Unwesen in Handel und Gew-
erbe Köln.e.V. v Mars GmbH [1995] ECR I-1923, para. 24 and Case C-210/96 Gut Springenheide
and Tusky [1998] ECR I-4657, para. 31. For criticism that the Court’s approach to consumer protec-
tion has now gone too far see JHH Weiler,‘Epilogue: Judging the Judges – Apology and Critique’, in
Maurice Adams and others (eds), Judging Europe’s Judges (Hart Publishing 2013), 245.
89 Case C-322/01 Deutscher Apothekerverband eV v 0800 DocMorris NV and Jacques Waterval [2003]
ECLI:EU:C:2003:664.
90 ibid., para. 113.
91 ibid., paras. 120–123.
It is necessary to point out that this was a decision taken in 2003, before
the increasing ubiquity of online shopping. As such, it demonstrated (i) some
forward thinking from the Court concerning technological potential92 and (ii)
a disinclination to leave regulation to Member States. This is not to say that the
Court was a lone vanguard; indeed, there was already an EU Directive in place
concerning distance selling. But, notably, although the Directive permitted Member States to adopt stricter standards so as to protect the consumer, the Court recognised that such a right was still subject to the residual free movement provisions.
These cases are indications of the Court’s capacity to carry forward the EU’s
interests through integrating opportune sections of the market both prior to
and, indeed, after legislation has been passed. Accordingly, the Court’s view is
going to be of importance for a long time to come.
Hence, whilst specific references to artificial intelligence in case law have thus far been few and sporadic, it is notable that they have appeared welcoming.
Unsurprisingly, references to AI more frequently appeared in the Opinions
of Advocates General, advising the Court, where a more discursive approach
to reasoning is adopted than that taken by the Court itself. These so far indi-
cate acute awareness of the potential and strategic importance of AI for the
EU moving forward. Indeed, an Advocate General recently pointed out, ‘the
opportunities for virtual storage or artificial intelligence programmes inevitably
transform the way in which the profession and its practice are conceived’.93 The
case concerned the (somewhat traditionalistic) legal sector and is a reminder
of AI’s prospective reach, and judicial sensitivity to this. Another Advocate
General drew attention to the fact that the definition of the term ‘product’ may
need to be revisited in light of modern technologies and cited with approval
a Commission report on the matter.94 For its part, the General Court was
unwilling to limit the scope of the definition ‘smart’ to humans alone.95
Way forward
Practically, it appears likely that the development of AI in the EU will remain
more atomised than the Commission hopes, with Member States and par-
ticipants valuing established centres and networks above ‘lighthousing’. It
also appears that there is significant pressure to review the proposed approach
92 Writing shortly after the judgment, Schmidt and Pioch noted the 'limited' market at the time, with DocMorris receiving only 130 orders per day. R.A. Schmidt, E.A. Pioch, 'Pills by Post? German retail pharmacies and the internet' (2003) 105(9) British Food Journal 618–633, 618.
93 Case C-99/16 Jean-Philippe Lahorgue v Ordre des avocats du barreau de Lyon [2017] ECLI:EU:C:2017:107,
Opinion of AG Wathelet, para. 2.
94 Case C-410/19 The Software Incubator Ltd v Computer Associates UK Ltd [2020] ECLI:EU:C:2020:1061,
Opinion of AG Tanchev, para. 29.
95 Case T-48/19 smart things solutions GmbH v European Union Intellectual Property Office (EUIPO)
[2020] ECLI:EU:T:2020:483.
to risk (which will leave many applications outside of its remit unless those
responsible voluntarily participate). These revisions would also help to address
what remains a significant underlying apprehension regarding AI in the EU
generally.
More fundamentally, as more AI systems are integrated into everyday life, it is predicted that they will have a globally transformative effect on economic and
social structures similar to the effect of other general-purpose technologies,
such as steam engines, railroads, electricity, electronics, and the internet.96
Whilst it is easy to get caught up in the innovative aspects of AI, AI still has
a long way to go before it can be considered safe for all parties involved. The
wide use of data is a double-edged sword, allowing machines to make quicker
decisions but also exposing several vulnerabilities that may lead to ethical compromises. Analysing the different ethical challenges raised by AI, it
is clear that transparency, explainability and accountability are interconnected
and bound by a common theme of trust.97 The Commission in its White Paper
refers to this as the ‘ecosystem of trust’, which should give citizens the confi-
dence to use AI systems and companies and public bodies the legal certainty to
innovate using AI.
It is imperative that both regulators and businesses share this vision of build-
ing trust by implementing AI with a sound ethical framework. The consumers
are crucial too, and for AI to succeed the individuals impacted by it must be
sure that their fundamental rights are not being compromised. Placing trust as
the irreplaceable ingredient, businesses need to identify the specific benefits
that AI technology can bring and weigh them against the costs of deploying it
responsibly and ethically.
With a large number of businesses deploying or exploring AI options, it
may be mistakenly believed that the success of AI is a matter of financial profit.
What is far more significant, however, is how it connects directly to humans
impacted by it. Putting human well-being at the core of development sets both
a realistic goal as well as concrete means to measure the impact of AI.98 The
Commission is of the view that human agency and oversight are the key ele-
ments of a trustworthy AI system.99 AI should empower human beings, allow-
ing them to make informed decisions whilst protecting fundamental rights.
The ‘human’ element needs to be present in all aspects of AI, such as human-
in-the-loop, human-on-the-loop, and human-in-command.
The Commission is of the view that human involvement is key to ensuring
that human autonomy is not undermined, and the objective of trustworthy and
ethical AI remains intact. Human oversight is seen as essential in several aspects
96 J. Howard, ‘Artificial Intelligence: Implications for the Future of Work’ (2019) 62 Am J Ind Med
917–926.
97 Tsakiridi (n 17), 9.
98 Dignum (n 11), 49.
99 Ethics Guidelines for Trustworthy AI (n 7).
of its application, from monitoring and designing to the output of the AI system. One might question, though: if human review is necessary to achieve effective AI outcomes, are we really achieving the impact that AI promised to bring about?
Ironing out these issues will take time. In the interim, companies can be
confident that the Court will not allow divergent national regulations to
emerge. However, beyond a reluctance to allow Member States to regulate
this sector the Court will have — and should have — less to say and will likely
be guided by the legislative process. This is welcome as, at present, the last
thing AI in the EU needs is another dissonant voice.
2 The impact of facial recognition
technology empowered by artificial
intelligence on the right to privacy
Natalia Menéndez González
1 For the purposes of this chapter, FRT is defined as a technology capable of producing a template
of the face image of a subject and comparing it with photos of pre-existing facial portraits. Then,
software is used to construct a replica by processing photographs of the subject to identify or validate
its identity or to attribute a certain characteristic to the subject. See K. Hamann and R. Smith, 'Facial Recognition Technology: Where Will It Take Us?' (2019) 34 (1) CJM <https://www.americanbar.org
/groups/criminal_justice/publications/criminal-justice-magazine/2019/spring/facial-recognition
-technology/> accessed 27 May 2021. Further definitions of FRT: 'From a technical viewpoint, facial
recognition is a subcategory of the sphere of artificial intelligence known as “computer vision”.’ in
Consultative Committee of the Convention for the Protection of Individuals with regard to Auto-
matic Processing of Personal Data, Facial Recognition: current situation and challenges (T-PD(2019)05rev);
‘[FRT] allows the automatic identification of an individual by matching two or more faces from
digital images. It does this by detecting and measuring various facial features, extracting these from the
image and, in a second step, comparing them with features taken from other faces.’ See: Fundamental
Rights Agency, Facial recognition technology: fundamental rights considerations in the context of law enforcement
(FRA focus, 2019) 2. Moreover, ‘automatic processing of digital images which contain the faces of
individuals for identification, authentication/verification or categorisation of those individuals.’ See:
Article 29 Working Party, Opinion 02/2012 on facial recognition in online and mobile services (00727/12/
EN WP 192, 2012).
2 ‘The biometric template […] is a structured reduction of a biometric image’; see I. Iglezakis, ‘EU
Data Protection Legislation and Case-Law with Regard to Biometric Applications’ (2013), available
at <http://dx.doi.org/10.2139/ssrn.2281108> accessed 27 May 2021.
3 Fundamental Rights Agency (n 1).
4 R (on the application of Edward Bridges) v The Chief Constable of South Wales [2019] EWHC 2341. This
is the first case within the European continent regarding the use of FRT. In this case, FRT is named
as AFR.
DOI: 10.4324/9781003246503-3
22 Regulating artificial intelligence in industry
The second function is known as ‘verification’ or ‘authentication’, also referred to as
‘one-to-one comparison’. This application compares two biometric templates
to determine the likelihood that two images show the same person.5 If the like-
lihood is above a certain threshold, identity is verified. It is essentially an identification
operation performed not against all the facial templates stored within a database,
but against a single template (for instance, when a person unlocks their mobile
phone using facial recognition).6
Finally, FRT can also be used for so-called ‘categorisation’, i.e. face analysis. In
this application, the technology is not used to identify or match individuals, but
to obtain certain characteristics of individuals, which do not necessarily allow
for identification.7 Such characteristics might be sexual orientation, gender,
age, mental health, the potentiality to commit a crime, and many more.
These examples illustrate the possibilities that AI applications have opened up
for the categorisation function of FRT.
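The one-to-one and one-to-many comparisons described above can be sketched in a few lines of Python. This is a minimal illustration only: real systems compare high-dimensional templates produced by a neural network, and the 0.8 threshold here is an arbitrary assumption.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, stored, threshold=0.8):
    """Verification: one-to-one comparison against a single stored template."""
    return cosine_similarity(probe, stored) >= threshold

def identify(probe, database, threshold=0.8):
    """Identification: one-to-many comparison against every stored template."""
    best_id, best_score = None, threshold
    for subject_id, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = subject_id, score
    return best_id  # None when no template clears the threshold
```

Verification answers ‘is this the claimed person?’ with a single comparison, whereas identification repeats the same comparison across an entire database and returns the best match above the threshold, which is why the two operations carry different privacy implications.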
In principle, facial images might be considered personal data, as long as they allow
for the identification of a person. With only a facial image (a relatively easy thing
to obtain, for instance, from a social network) a vast amount of (extremely)
personal data can be acquired with the use of FRT. However, it is also pos-
sible that facial images can be processed and/or noise added to them so that it is
11 European Commission Horizon 2020, ‘Periodic Reporting for period 2 - iBorderCtrl (Intelligent
Portable Border Control System)’ (European Commission CORDIS EU Research Results, 31 March
2021) <https://cordis.europa.eu/project/id/700626/reporting> accessed 27 May 2021.
12 Article 3.1 General Data Protection Regulation [2016] OJ L 119/33.
13 Article 4.1 GDPR
impossible to deduce the identity of a specific person from them.14 If this is the
case, facial images would not be considered personal data but non-personal data.15
Another important, and frequently unnoticed, aspect that should be con-
sidered when analysing whether facial images are personal data or not is their
potential reversibility to allow identification. If we consider a facial image used
to identify a subject as personal data, we fall under the realm of data protection
and, therefore, the GDPR applies. Within the GDPR, not all personal data
are considered to be on the same ‘level’ and there is a special category called
‘sensitive data’. This data can potentially reveal ‘racial or ethnic origin, political
opinions, religious or philosophical beliefs, or trade union membership, […]
genetic data, biometric data, […] data concerning health or […] a natural per-
son’s sex life or sexual orientation’.16 Due to their sensitive nature, the processing
of these data is prohibited under the GDPR, except in specific cases contemplated
by the law.17 Biometric data (facial images used to identify a person) are
automatically considered sensitive data.18
Consequently, facial images can be considered ‘personal data’ when they
are obtained to perform identification and verification using FRT. Moreover,
many current FRT applications that perform a categorisation function would,
as a result of their performance, reveal extremely personal data, such as sex-
ual orientation, mental health conditions, and many more. These can also be
considered personal data. The use of FRT to perform categorisation has
received considerable criticism in the scholarship. Academics argue that this field
of study, known as ‘sentiment analysis’ (also referred to as ‘physiognomic AI’),
has no scientific basis and, therefore, lacks validity.19
Although FRT (and AI) is not mentioned explicitly within the GDPR,
many of the provisions apply to the technology. Sartor has argued that the
introduction of Internet-related terms within the GDPR is a reaction to the
14 ‘Noise is an unwanted component of the image [that] […] can degrade the accuracy of […] face
recognition’ See: I. Budiman, D. Suhartono, F. Purnomo, M. Shodiq, ‘The effective noise removal
techniques and illumination effect in face recognition using Gabor and Non-Negative Matrix Fac-
torization’ (International Conference on Informatics and Computing, Mataram, October 2016),
32–36
15 The Regulation (EU) 2018/1807 of the European Parliament and of the Council of 14 November
2018 on a framework for the free flow of non-personal data in the European Union applies to ‘data
other than personal data’ which, as already stated, is defined within Article 4.1 GDPR. Therefore, we
have a negative definition of non-personal data.
16 Article 9.1 GDPR
17 Article 9 GDPR. See also E. Kindt, Privacy and Data Protection Issues of Biometric Applications (Springer
2013), 124–144
18 See Article 4.14 GDPR.
19 A. Daub,‘The Return of the Face’ (Longreads, October 2018) <https://longreads.com/2018/10/03
/the-return-of-the-face/> accessed 27 May 2021; J. Emspak, ‘Facing Facts: Artificial Intelligence
and the Resurgence of Physiognomy’ (Undark, 11 August 2017) <https://undark.org/2017/11/08
/facing-facts-artificial-intelligence/> accessed 27 May 2021; K. Crawford, ‘Time to regulate AI that
interprets human emotions’ [2021] Nature 592.
The impact of FRT on the right to privacy 25
lack of these terms in the previous Data Protection Directive due to its his-
torical and social context.20 Therefore, it appears that the GDPR has failed
to include AI-related terms in the same fashion. It may be because of the
relatively new and innovative nature of the latest developments of AI, which
were not anticipated at the time the GDPR was drafted. It might also be because
the GDPR is not oriented to any technology exclusively, but rather intends to
address all threats to the right to data protection regardless of their origin.
In general, academics have argued that the GDPR
fails to provide guidance on how to deploy AI-enhanced technologies while
respecting its content. The main critiques focus on its broad and often vague
provisions.21
Fairness
From the algorithmic fairness point of view, FRT might be biased against cer-
tain characteristics, including race, gender, and others.22 It is well-known that
facial recognition algorithms are still struggling to identify people of colour.23
Further, the literature has brought to light the harms that the use of FRT for
identification/verification entails for transgender and non-binary people.24 This
20 G. Sartor, F. Lagioia, The impact of the General Data Protection Regulation (GDPR) on artificial intelligence
(2020 Panel for the Future of Science and Technology - STOA) 49.
21 S. Wachter, ‘Data protection in the age of big data’ [2019] 2(1) Nature Electronics 6; Sartor, Lagioia
(n 20) 7; S. Wachter, B. Mittelstadt, L. Floridi, ‘Transparent, Explainable, and Accountable AI for
Robotics’ [2017] Sci. Robot, 2–3.
22 J. Buolamwini,T. Gebru,‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender
Classification’ (1st Conference on Fairness,Accountability and Transparency, New York City, Febru-
ary 2018).
23 T. Simonite, ‘The best algorithms struggle to recognize black faces equally’ (WIRED, 22 July
2019) <https://www.wired.com/story/best-algorithms-struggle-recognize-black-faces-equally/>
accessed 27 May 2021.
24 N. Stevens, O. Keyes, ‘Seeing infrastructure: race, facial recognition and the politics of data’ (2021)
Cult. Stud. <https://ironholds.org/resources/papers/seeing_infrastructure.pdf> accessed 27 May 2021.
is caused by training datasets composed of facial images that belong mainly to
white people, especially men. Moreover, FRT still lacks a robust foundation for
recognizing faces altered by an accident or paralysis, or individuals who have
undergone facial surgical procedures.25 If training
datasets are the reflection of the society we are living in and that society is built
upon inequality, FRT performance is going to perpetuate that inequality.26
It should also be considered that FRT, even in its physical design, has
developed by taking an ‘average’ subject as its reference model. As a result,
there is little representation of, or research on, how people with disabilities or
with craniofacial differences relate to FRT. The field known as the ‘politics of
artefacts’ has scarcely studied FRT, but it could help to make the technology fairer.27
Some argue that FRT is often designed to accommodate certain interests and
dismiss others. As an example, Introna and Wood noted that ATMs
are designed without contemplating users in wheelchairs or those
unable to enter a PIN code due to a disability.28 Furthermore, the trade-off
between fairness and accuracy has been one of the main obstacles to improvement
in this respect. Whereas computer scientists focus on the accuracy of FRT
systems, without questioning the social inequality mentioned in the previous
paragraph, lawyers are more concerned with FRT reflecting diversity and the
aspiration for a fairer society.
Accountability
The question of whether, and how, accountability can be demanded from
algorithms is widely discussed in the literature.29 Who should be held account-
able? According to what criteria, and to whom? The European Data Protection
Supervisor has expressed his concerns about these questions and the literature
34 A.R. Aguiar, ‘La AEPD mantiene 6 meses después de su investigación contra Mercadona por sus
cámaras de reconocimiento facial, que terminará antes de julio de 2021’ (Business Insider, 29 Decem-
ber, 2019) <https://www.businessinsider.es/sigue-investigacion-mercadona-reconocimiento-facial
-781491> accessed 27 May 2021; J. Wakefield, ‘Co-op facial recognition trial raises privacy concerns’
(BBC News, 10 December 2020) <https://www.bbc.com/news/technology-55259179> accessed
27 May 2021.
35 EDPB ‘Dutch DPA issues Formal Warning to a Supermarket for its use of Facial Recognition Tech-
nology’ (EDPB, 26 January 2021) <https://edpb.europa.eu/news/national-news/2021/dutch-dpa
-issues-formal-warning-supermarket-its-use-facial-recognition_es> accessed 27 May 2021.
36 K. Hill, ‘The secretive company that might end privacy as we know it’ (New York Times, 18 January
2020) <https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition
.html> accessed 27 May 2021
identify perpetrators and victims of crimes and track down hundreds of
at-large criminals, including paedophiles, terrorists, and sex traffickers. It is
also used to help exonerate the innocent and identify the victims of crimes
including child sex abuse and financial fraud.37
The company claims to have built an impressive database of facial images using
only public information from the open web.38 Since it started to gain
popularity, it has faced a great deal of opposition over the lack of transparency
regarding the origin of the facial images in its (training) database, as well as
over its potential to become a tool for police abuse.
Clearview AI has also been subject to recent attention from law enforce-
ment agencies across the EU. Clearview’s activities have been the object of
several parliamentary questions on whether: (a) their services were in use by
anyone in the EU; (b) they hold any data of EU citizens and (in that case) how
they had been processed; and (c) they were consistent with the EU data pro-
tection regulation and the EU–US bilateral agreements on privacy.39 However,
several members of the European Parliament have shown their disagreement
with the European Commission’s ambiguous response.40 Clearview AI’s prac-
tices prompted the European Data Protection Board (EDPB) to issue a letter
on this matter.41 They concluded that ‘the use of a service such as Clearview
AI by law enforcement authorities in the European Union would, as it stands,
likely not be consistent with the EU data protection regime’.42 As a conse-
quence of this, in January 2021 Hamburg’s DPA deemed Clearview AI’s bio-
metric profiles of Europeans illegal. It also ordered the company to delete the
mathematical hash values representing the biometric profile of Matthias Marx,
a Hamburg resident and member of the Chaos Computer Club whose bio-
metric profile had been added to Clearview’s searchable database without his
knowledge.43 Finally, in February 2021, the Swedish DPA fined the Swedish Police Authority for its unlawful use of a facial recognition app.44
In the case of face categorisation, consent might not be free and informed
because both the uses and functioning of the technology will not be clearly
explained, given its innovative nature and the intrinsic difficulty of understanding
AI-empowered technologies. Therefore, the processing might be considered
unlawful.46
Furthermore, Article 7 of the GDPR establishes the necessary conditions
for consent. It must be given freely, be specific, informed, and unambiguous.
Similarly, Article 15 of Illinois’ Biometric Information Privacy Act (BIPA)
requires informed consent for biometric data processing. The different func-
tions that FRT might perform have to be taken into account in this respect.
For instance, consenting to FRT for airport check-in may entail a different
burden for the data subject than consenting to one of the applications
mentioned in the second section of this chapter. Some authors closely
link the need to clarify consent terms to the principle of purpose limitation.
Given that state-of-the-art FRT involves AI, the principle of purpose
limitation becomes all the more important.47 Consequently, data processors/
controllers should specify the data collected (facial image, template, name, etc.),
the functions performed by the FRT (identification/verification or categorisa-
tion), whether ulterior functions for the data collected are contemplated (such
as building a facial images database), and the data retention policy. Within US
law, the BIPA states in s. 15(b) that ‘no private entity may collect, store, or use
44 ‘Swedish DPA: Police unlawfully used facial recognition app’ (EDPB, 12 February 2021) <https://
edpb.europa.eu/news/national-news/2021/swedish-dpa-police-unlawfully-used-facial-recognition
-app_es> accessed 27 May 2021.
45 ibid.
46 E. Selinger,W. Hartzog,‘The Inconsentability of Facial Surveillance’ [2019] Loyola Law Rev. 66; Sar-
tor, Lagioia (n 20); S. Schiffner et al., ‘Towards a roadmap for privacy technologies and the General
Data Protection Regulation:A transatlantic initiative’ (Annual Privacy Forum, Barcelona, June 2018)
24-42.
47 Article 5.1.b GDPR: ‘Personal data shall be […] collected for specified, explicit and legitimate
purposes and not further processed in a manner that is incompatible with those purposes; further
processing for archiving purposes in the public interest, scientific or historical research purposes or
statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with
the initial purposes (‘purpose limitation’)’
biometric information without first giving notice to, and obtaining a written
release or consent from, the subject’.
Furthermore, due to the innovative nature of the technology, and the low
degree of trust it enjoys, people do not possess enough knowledge and power
to understand the true impact of what they are consenting to.48 As possible
‘countermeasures’, Articles 13 and 14 of the GDPR provide that the data sub-
ject has the right to be informed about the collection and use of their personal
data. This enables data subjects to exercise their rights when consent
has not been given. Moreover, Article 15 GDPR provides that:
The data subject shall have the right to obtain from the controller confir-
mation as to whether or not personal data concerning him or her are being
processed, and, where that is the case, access to the personal data and the
following information: (a) the purposes of the processing; (b) the catego-
ries of personal data concerned; (c) the recipients or categories of recipient
to whom the personal data have been or will be disclosed, in particular
recipients in third countries or international organisations; (d) where pos-
sible, the envisaged period for which the personal data will be stored, or,
if not possible, the criteria used to determine that period; (e) the existence
of the right to request from the controller rectification or erasure of per-
sonal data or restriction of processing of personal data concerning the data
subject or to object to such processing; (f) the right to lodge a complaint
with a supervisory authority; (g) where the personal data are not collected
from the data subject, any available information as to their source; (h) the
existence of automated decision-making, including profiling, referred to
in Article 22(1) and (4) and, at least in those cases, meaningful informa-
tion about the logic involved, as well as the significance and the envisaged
consequences of such processing for the data subject.
Further, an apparent incompatibility can be spotted between Big Data (FRT
databases are composed of vast numbers of facial images from diverse backgrounds)
and Article 9 of the GDPR. Using Big Data to train AI systems and allow them
to make inferences may conflict with the lawfulness of processing, in
the sense that some of the outcomes of the training cannot be anticipated
(such as the presence of algorithmic bias). Therefore, the GDPR principles
in Article 5, especially purpose limitation and data minimisation,49
should be applied in a way that does not collide with AI performance. Data
minimisation and storage limitation50 are also applicable to FRT because they
Many FRT systems fall within this definition, such as those monitoring mar-
keting preferences.52 Further, the COVID-19 health emergency has entailed
an increasing deployment of FRT for profiling uses such as monitoring tem-
perature (FRT thermal scanners),53 but also employees’ and students’ perfor-
mance and attention (including at home).54
50 Article 5.1.e GDPR: ‘Personal data shall be […] kept in a form which permits identification of data subjects for no longer than is necessary for the purposes for which the personal data are processed; personal data may be stored for longer periods insofar as the personal data will be processed solely for
archiving purposes in the public interest, scientific or historical research purposes or statistical pur-
poses in accordance with Article 89(1) subject to implementation of the appropriate technical and
organisational measures required by this Regulation in order to safeguard the rights and freedoms of
the data subject (‘storage limitation’).’
51 BBC News, ‘Facial recognition identifies people wearing masks’ (BBC News, 7 January 2021)
<https://www.bbc.com/news/technology-55573802> accessed 27 May 2021.
52 A. Lau, ‘Facial Recognition in Global Marketing’ (Towards data science, 25 April 2020) <https://
towardsdatascience.com/facial-recognition-in-global-marketing-8d0ca0b313c7> accessed 27 May
2021.
53 M. Van Natta, P. Chen, S. Herbek, R. Jain, N. Kastelic, E. Katz, M. Struble, V. Vanam, N. Vattikonda,
‘The rise and regulation of thermal facial recognition technology during the Covid-19 pandemic’
[2020] JLB 7
54 M. Andrejevic, N. Selwyn, ‘Facial Recognition Technology in Schools: Critical Questions And
Concerns’ [2020] Learn Media Technol., 45; A. Webber, ‘PwC Facial Recognition Tool Criticised For
Home Working Privacy Invasion’ (Personnel Today, 16 June 2020) <https://www.personneltoday
.com/hr/pwc-facial-recognition-tool-criticised-for-home-working-privacy-invasion/> accessed
27 May 2021.
Way forward
Law and technology are not independent from each other. In the current
world, a world that moves more and more towards ‘digital governance’, the
line between these two disciplines is increasingly blurring. FRT perfectly illus-
trates this conundrum. It is a tool that enables increasingly beneficial, interesting,
and unpredictable outcomes in a wide range of fields, yet it is contested
because of the potential threat it poses to the right to privacy, and specifically
to data protection, as well as from the algorithmic fairness, transparency, and
accountability points of view. This technology challenges a privacy/data
protection model that may no longer be adequate for the present
scenario, in which AI-empowered technologies are increasingly gaining strength.
The only way to keep up with this challenge is to involve in the regulatory
work the same professionals responsible for designing and deploying the
technology, putting the tools that make the technology ground-breaking
at the service of protecting the data subject’s rights.
One of the possible solutions to the consent conundrum might be establish-
ing a compliance standard for consent. This standard could include legal assess-
ment and possible certification (along the same line as conformity assessments
for product risks or ISO standards). It would act as an incentive for technology
suppliers to try to include privacy by design and default criteria55 in their designs
and deployments. Some authors favour these instruments because, among other
reasons, they demonstrate compliance with the legal requirements.56 Others warn of
the risk that such soft law provisions could increase what is expected from the
industry, up to the point that they eventually become formal requirements.57 In
55 Article 25 GDPR: ‘1. Taking into account the state of the art, the cost of implementation and the
nature, scope, context and purposes of processing as well as the risks of varying likelihood and sever-
ity for rights and freedoms of natural persons posed by the processing, the controller shall, both at
the time of the determination of the means for processing and at the time of the processing itself,
implement appropriate technical and organisational measures, such as pseudonymisation, which are
designed to implement data-protection principles, such as data minimisation, in an effective manner
and to integrate the necessary safeguards into the processing in order to meet the requirements of
this Regulation and protect the rights of data subjects. 2.The controller shall implement appropri-
ate technical and organisational measures for ensuring that, by default, only personal data which are
necessary for each specific purpose of the processing are processed. That obligation applies to the
amount of personal data collected, the extent of their processing, the period of their storage and
their accessibility. In particular, such measures shall ensure that by default personal data are not made
accessible without the individual's intervention to an indefinite number of natural persons. 3. An
approved certification mechanism pursuant to Article 42 may be used as an element to demonstrate
compliance with the requirements set out in paragraphs 1 and 2 of this Article’.
56 S. Chun, ‘Facial Recognition Technology: A Call for the Creation of a Framework Combining
Government Regulation and a Commitment to Corporate Responsibility’ [2020] 21 N.C. J.L. &
Tech., 99–135
57 G. Marchant, ‘“Soft Law” Governance of Artificial Intelligence’ (AI Pulse, 25 January 2019) <https://
aipulse.org/soft-law-governance-of-artificial-intelligence/> accessed 27 May 2021; L. Edwards, M.
Veale,‘Slave to the Algorithm? Why a “Right to an Explanation” Is Probably Not the Remedy You
Are Looking For’ [2017] 16(1) DLTR.
this case, the role of the Data Protection Authorities and the EDPB might be
crucial in monitoring compliance and assessing these certifications.
There is no international legally binding instrument that would regulate
FRT. The lack of specific regulation means that existing instruments, such as
the GDPR, must be adapted. This may lead to situations where the GDPR is wrongly inter-
preted and/or applied. Moreover, the aim of the GDPR is not to answer
the challenges posed by AI but to build the basis of the data protection regime
within Europe; thus, the regulation has a scope beyond AI, and its meaning
may be lost or distorted when applied to this field. There is a current proposal
for an AI Act by the European Commission but, given the duration of the EU
legislative process, we will have to wait some time to see an EU regulatory
instrument in the field.
In order to tackle the privacy problems of FRT it could be beneficial to
consider differential privacy solutions. The term denotes a mathematical
formalization of the idea that we should compare what someone
might learn from an analysis if a particular person’s data were included in the
dataset with what they might learn if it were not.58 In the FRT field, this
would entail applying a small ‘perturbation’ to the face templates and storing
only those perturbed templates, minimizing risks in case of a data breach.59
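A minimal sketch of this perturbation idea follows. The epsilon and sensitivity values and the list-of-floats template format are illustrative assumptions, not prescriptions from the literature cited above; a real system would calibrate the noise to the template’s actual value range.

```python
import math
import random

def laplace_noise(scale):
    """Draw zero-mean Laplace noise via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def perturb_template(template, epsilon=0.5, sensitivity=1.0):
    """Return a noisy copy of a face template; only the copy is stored.

    epsilon and sensitivity are placeholder values: a smaller epsilon means
    stronger privacy but a noisier (less accurate) stored template.
    """
    scale = sensitivity / epsilon
    return [value + laplace_noise(scale) for value in template]
```

The privacy/utility trade-off is explicit in the `epsilon` parameter: the noisy template leaks less about the underlying face image in the event of a breach, at the cost of some matching accuracy.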
Other technical tools, such as k-anonymization,60 homomorphic
encryption,61 or phenotypically or demographically diverse data augmentation
using GAN (Generative Adversarial Networks),62 might also play an important
role in this respect. GANs might be used to create ‘artificial’ facial images to
train FRT systems. This possibility would entail two advantages: on the one
hand, it would allow the introduction of phenotypically or demographically
diverse faces, mitigating algorithmic bias; on the other hand, it would limit
the privacy and data protection implications of a training-database breach.
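Of the technical tools just mentioned, k-anonymization is the simplest to illustrate in a few lines. A dataset is k-anonymous when every combination of quasi-identifying attributes is shared by at least k records; the attribute names and values below are invented for the example.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination appears in at least k
    records, so no record can be singled out on those attributes alone."""
    combos = Counter(tuple(record[q] for q in quasi_identifiers)
                     for record in records)
    return all(count >= k for count in combos.values())

# Hypothetical, generalised records (exact ages replaced by age bands).
rows = [
    {"age_band": "20-29", "city": "Madrid"},
    {"age_band": "20-29", "city": "Madrid"},
    {"age_band": "30-39", "city": "Seville"},
]
```

Here the full dataset fails the check for k = 2 because the Seville record is unique, while the first two rows alone satisfy it; in practice, attributes are generalised or suppressed until the check passes.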
One more option would be to create a brand-new regulation specifically
for this technology. However, this option is not necessarily feasible, as shown
63 Substitute Senate Bill 6280-2019-20 concerning the use of facial recognition services (effective July
1, 2021).
3 The malicious use of artificial
intelligence against government
and political institutions in
the psychological area
Evgeny Pashentsev and Darya Bazarkina
Introduction
The rapid development of AI and its widespread implementation poses new
challenges to government and public structures and to international organisa-
tions. One of these challenges is the prevention of crimes related to the mali-
cious use of AI (MUAI). For example, Russian legislation does not mention
any crime related to socially dangerous acts committed through the use of neural
networks or AI, or acts committed by AI itself.1 Another challenge is posed by the lack of
regulation concerning the use of AI to influence the psychological security
(PS) of a society.
Although the example of Russia was given above, this is not just a problem
for one country. The lack of regulation of the use of AI, including in the field
of PS, is recognised at the level of the United Nations as well.2 Meanwhile,
the issues of MUAI are becoming more and more urgent when discussing
the problems of the psychological destabilisation of political systems, especially
in the context of the crisis caused by the COVID-19 pandemic. For exam-
ple, during the period of quarantine and self-isolation, the number of cases
of phishing increased,3 and the harm caused by phishing can multiply when
fraudsters use AI. Phishing sites are often disguised as government sites, which
can become a serious threat to the reputation of state and supranational bodies.
Samples of terrorist propaganda that actively use AI-generated images are already being
distributed, and in terrorist practice there are already crimes involving the use
of unmanned aerial vehicles (UAV) or search tools based on big data (to track
DOI: 10.4324/9781003246503-4
The use of AI against government institutions 37
down victims).4 Such crimes can become a powerful factor in the destabilisa-
tion of political systems in cases where opposing political forces (rightly or
wrongly) accuse their opponents of using them.
A separate threat is presented by cases of MUAI, in which the very use of AI
technologies (such as deepfakes) is aimed at destabilising PS. In the near future,
the practice of MUAI may begin to outstrip not only the development of legal
mechanisms for preventing and responding to it, but also the very understand-
ing of new realities by civil and political institutions, as well as by the academic
community. Therefore, it is necessary today to search for ways to respond to
the crimes of the future.
The tasks of this chapter are to identify the extent of the threat to govern-
ment and political institutions from MUAI in the area of PS, to determine the
state of legal mechanisms for preventing MUAI and responding to it, and to
offer recommendations for improving this response.
Classifications of MUAI
It is possible to offer the following classification of MUAI according to its
degree of readiness: (1) existing MUAI practices; (2) existing MUAI capa-
bilities that have not yet been used in practice – these capabilities are associated
with a wide range of rapidly emerging new AI features, not all of which are
immediately included in the range of implemented MUAI features; (3) future
opportunities for MUAI based on ongoing developments and future research
(assessment should be given for the near, medium, and long terms); and (4)
unidentified risks, also known as ‘unknown unknowns’. Not all developments
in the sphere of AI can be accurately assessed. A willingness to meet unex-
pected latent risks of MUAI is crucial.
Another classification of MUAI is also possible: (1) by territorial coverage
(local, regional, global); (2) by the degree of damage (minor, significant, large,
catastrophic); (3) by propagation velocity (slow, fast, rapid); and (4) by propa-
gation form (open, latent).
The risk of MUAI is significantly increased by the integrated use of AI tech-
nologies by intruders. However, we should not forget that the threat of AI to
PS is primarily anthropogenic – caused by human activity, not AI specifics (at
least at the current stage of AI development).
7 For example, in 2017 Google stated that it would lower the ranking of reports by the Russian state-
run news agencies Russia Today (RT) and Sputnik. Eric Schmidt, then executive chairman of Alphabet (the company that
owns Google), said that the search engine needed to fight the distribution of misinformation, but
some media publications said this step was a form of censorship. See: BBC,‘Google to ‘derank’ Russia
Today and Sputnik’ (BBC News, 21 November 2017) <www.bbc.com/news/technology-42065644>
accessed 27 May 2021.
The legal response to MUAI in the EU and the USA
The development of legal means for the prosecution of MUAI is becoming
one of the main tasks for lawmakers who are developing mechanisms for regu-
lating AI. According to the legal and policy documents of the EU and the US
(currently leading in the field of AI regulation), it is possible to trace a change
in the attitude of governments to the problem of MUAI in the field of PS.
8 A. Neznamov (ed), Novye zakony robototehniki. Reguljatornyj landshaft. Mirovoj opyt regulirovanija robo-
totehniki i tehnologij iskusstvennogo intellekta (New Laws of Robotics. The Regulatory Landscape. Global
Experience in Regulating Robotics and Artificial Intelligence Technologies) (Infotropic Media 2018) 5.
9 European Parliament Resolution of 16 February 2017 with Recommendations to the Commission
on Civil Law Rules on Robotics (2015/2103(INL)) [2018] OJ C 252/239.
10 ibid.
11 Europol, Do Criminals Dream of Electric Sheep? How Technology Shapes the Future of Crime and Law
Enforcement (European Union Agency for Law Enforcement Cooperation (Europol) 2019) 10.
12 M. Brundage and others, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation
(Future of Humanity Institute, University of Oxford 2018) 27.
The use of AI against government institutions 41
is noted as a serious problem for the EU. The use of AI can further increase the impact of hybrid threats, since attackers with little or no technical knowledge can automate the mass production of false information. Deepfake technologies give attackers the ability to spread misinformation by impersonating others; criminals have already used voice deepfakes, posing as company executives in attempts to deceive an organisation’s employees.13 All of these threats may be addressed by expanded EU legislation, especially since this process of improvement is supported by new strategic documents in the field of AI.
The European Commission released its plans for the future of AI in the EU
on 19 February 2020. The strategy is set out in two main documents. These are
the ‘Report on the Safety and Liability Implications of Artificial Intelligence,
the Internet of Things and Robotics’14 and the ‘White Paper on Artificial
Intelligence – A European Approach to Excellence and Trust’.15 The latter
contains a section headed ‘An Ecosystem of Trust: Regulatory Framework for
AI’, which evaluates the tools available to the Union for regulating technology.
The European Commission notes: ‘Citizens fear being left powerless in defending their rights and safety when facing the information asymmetries of algorithmic decision-making, and companies are concerned by legal uncertainty’.16 It is extremely important that the Commission notes both the ability of AI to protect the security of citizens and to allow them to enjoy their basic rights, and the risks, including those caused by MUAI. A ‘lack of trust’ is identified as the main factor holding back the wider spread of AI. The EU strategy thus suggests that, against the background of the information warfare being waged around the world, first-level threats to PS (the desire of aggressive actors to play on citizens’ fears and distrust in order to discredit politicians who actively advocate the introduction of AI) may become extremely relevant.
According to the European Commission, the use of AI can lead to violations of the right to free assembly; to discrimination based on gender, racial or ethnic origin, religion or belief, disability, age or sexual orientation; and to violations of the right to effective protection and a fair trial, and of consumer rights. At the same time, the Commission points primarily to the anthropogenic nature of all the violations listed above,17 which can also serve as a starting point for developing standards for the prevention and prosecution of MUAI.
technology can also be misused and provide novel and powerful tools
for manipulative, exploitative and social control practices. Such practices
are particularly harmful and should be prohibited because they contradict
Union values of respect for human dignity, freedom, equality, democracy
and the rule of law and Union fundamental rights, including the right
to non-discrimination, data protection and privacy and the rights of the
child.18
The EU calls for greater public participation and greater transparency in identifying the origin of internet content, and an independent European network has been established to verify its sources and the process of its creation.
18 Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts’ COM (2021) 206 final, 2021/0106 (COD), preamble, recital 15.
19 Commission, ‘Communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions. Tackling Online Disinformation: a European Approach’ COM (2018) 236 final.
20 ibid., ch 2.
21 US National Science and Technology Council, The National Artificial Intelligence Research and Develop-
ment Strategic Plan (National Science and Technology Council, Networking and Information Tech-
strategies. The first of these is to ‘ensure the safety and security of AI systems’, which implies ensuring not only the safety of the production process of AI products but also the clarity of their operation at each stage. This strategy also aims to provide reliable protection of AI products from cyber-attacks.
The strategy for developing ‘law-abiding’ AI (‘understand and address the ethical, legal, and societal implications of AI’) is also closely related to the task of maintaining human control over AI. The authors of the Strategic Plan emphasise the importance of AI being able to make independent decisions in accordance with human ethical standards and current legislation. In this regard, the stated need for interdisciplinary research teams to work, at a minimum, on the question of which data will be used to train the AI is extremely important.22 In the case of MUAI, including MUAI aimed at destabilising PS, this approach is a convenient starting point for discussing the subjectivity of future offences.
Legislative initiatives in the field of AI are multiplying in the United States. In 2018, five draft laws were proposed to regulate AI; in 2019 there were eight; and during the first half of 2020 (until 19 June) there were ten.23 Deepfake technology is the MUAI technology aimed at destabilising PS that American lawmakers discuss most actively. The first federal bill to regulate deepfakes, the ‘Malicious Deep Fake Prohibition Act of 2018’,24 was introduced in December 2018, and the ‘Deep Fakes Accountability Act’25 followed in June 2019. Legislation against deepfakes has also been introduced in several states, including California, New York, and Texas.
Following intense discussion of MUAI issues in the United States, on 20 December 2019 President Donald Trump signed the nation’s first federal law related to ‘deepfakes’. The deepfake legislation was part of the National Defense Authorization Act for Fiscal Year 2020 (NDAA).26 In addition to deepfakes, a separate section of the law (section 5708) is dedicated to facial recognition technology and its use by the intelligence community. This is also an important step in preventing MUAI attacks against the national security of
27 National Defense Authorization Act for Fiscal Year 2020. Conference Report to Accompany S. 1790.
December 9, 2019. 116th Congress 1st Session, House of the Representatives. Report 116–333
(USA), section 5708.
28 A. Boyd, ‘Lawmakers Working on Legislation to “Pause” Use of Facial Recognition Technology’ (Nextgov, 15 January 2020) <www.nextgov.com/emerging-tech/2020/01/lawmakers-working-legislation-pause-use-facial-recognition-technology/162470/> accessed 27 May 2021.
29 A. Ng, ‘Lawmakers propose indefinite nationwide ban on police use of facial recognition’ (CNet, 25 June 2020) <www.cnet.com/news/lawmakers-propose-indefinite-nationwide-ban-on-police-use-of-facial-recognition/> accessed 27 May 2021.
30 National Defense Authorization Act for Fiscal Year 2020. Conference Report to Accompany S. 1790.
December 9, 2019. 116th Congress 1st Session, House of the Representatives. Report 116–333
(USA), section 5709.
31 See, for example, E. Pashentsev, ‘Malicious Use of Deepfakes and Political Stability’ In Florinda
Matos (ed) Proceedings of the European Conference on the Impact of Artificial Intelligence and Robotics ECI-
AIR 2020 Supported By Instituto Universitário de Lisboa (ISCTE-IUL), Portugal, 22 – 23rd October 2020
(Academic Conferences International Limited 2020) 100.
policy position. They fooled the Medicaid.gov administrators, who accepted them as genuine concerns from actual human beings’. Later, Weiss identified the comments and asked for them to be removed, ‘so that no actual policy debate would be unfairly biased’.32 The experiment was very instructive: it points to a possible practice of MUAI through the creation of fake texts of which we may as yet know nothing.
Legislative activity and the adoption of strategic planning documents in the
EU and the United States allow us to trace the development of AI regulation
alongside a gradual identification of AI issues, including in the field of PS.
Precedents in this area often stimulate the improvement of legal mechanisms
for the prosecution of MUAI, but today the important role of an interdiscipli-
nary approach to predicting future crimes is already evident (more than ever,
due to the rapid development of technology).
32 B. Schneier, ‘Bots Are Destroying Political Discourse As We Know It’ (The Atlantic, 7 February 2020) <https://www.theatlantic.com/technology/archive/2020/01/future-politics-bots-drowning-out-humans/604489/> accessed 27 May 2021.
33 China State Council, ‘Notice of the State Council Issuing the New Generation of Artificial Intelligence Development Plan’ [2017] 35; F. Sapio, W. Chen and A. Lo (trs) (The Foundation for Law and International Affairs, 2017) <https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-Artificial-Intelligence-Development-Plan-1.pdf> accessed 27 May 2021.
34 ibid.
Friendliness’, implies that ‘AI development … should conform to human val-
ues, ethics, and morality … it should be based on the premise of safeguarding
societal security and respecting human rights, avoid misuse, and prohibit abuse
and malicious application’.35 Thus, in China, the issue of countering MUAI,
including in the field of PS, is officially recognised as a priority.
MUAI issues are widely discussed in academic and public circles, which provides many opportunities for developing regulatory mechanisms. A group of researchers from Tsinghua University has highlighted a wide range of threats associated with MUAI. In particular, they pointed out that virtual online space and storage facilitate the collection and exchange of personal data and the analysis and exchange of information (including identification data, medical information, credit records, personal locations, and movement information), but at the same time make it difficult to determine the causes and extent of data leaks.36 From the point of view of PS, it is worth paying attention to conditions that will affect the human psyche, such as the ‘unpredictability and irreversibility’ of the blurring of time and space, and of virtual and objective reality, in which Chinese experts see potential risks.
Among the main MUAI threats noted in the Tsinghua University report are threats that we would classify as belonging to the second level: fraud (including fraud in social networks based on personal information obtained illegally), the hacking of AI-based authentication systems, and the malicious use of UAVs, autonomous vehicles, and robots equipped with AI.37 The problems of psychological security are discussed in closed, primarily military, structures, but it is important that these issues also be studied by civilian analytical centres dealing with the problems of AI implementation, especially in the context of the development of China’s cybersecurity initiatives.
The most well-known example of the regulation of MUAI in China is the
ban on the use of deepfakes to mislead audiences.38 In 2019, China announced
new rules governing video and audio content on the internet, including a
ban on the publication and distribution of ‘fake news’ created using AI and
virtual reality. Deepfake technologies can ‘endanger national security, disrupt
social stability, disrupt social order and infringe upon the legitimate rights and
35 Ministry of Science and Technology (MOST) of the People's Republic of China,‘Translation. Gov-
ernance Principles for a New Generation of Artificial Intelligence: Develop Responsible Artifi-
cial Intelligence. June 17, 2019’ (New America, 17 June 2019) <https://perma.cc/V9FL-H6J7>
accessed 27 May 2021.
36 China Institute for Science and Technology Policy at Tsinghua University, China AI Development
Report 2018 (China Institute for Science and Technology Policy at Tsinghua University 2018) 94.
37 ibid.
38 国家互联网信息办公室 文化和旅游部 国家广播电视总局 (State Internet Informa-
tion Office Ministry of Culture and Tourism State Administration of Radio and Television),
‘关于印发《网络音视频信息服务管理规定》的通知 (Notice on Issuing the ‘Regulations on the
Administration of Network Audio and Video Information Services’)’ 2019年11月18日 (18 Novem-
ber 2019) <http://www.law-lib.com/law/law_view.asp?id=671676> accessed 27 May 2021.
interests of others’, according to the transcript of a press briefing published
on the website of the Cyberspace Administration of China (CAC). Any use
of these technologies should be clearly marked and clearly visible to internet
users. Failure to comply with these rules can be considered a criminal offence,
according to the CAC website.39 It is significant that it is not the technology
that is prohibited, but the deliberate misleading of the audience with the help
of the technology.
Russia
MUAI is in the spotlight in Russia, as evidenced by President Vladimir Putin’s statement in September 2017: ‘Artificial intelligence is not only the future of Russia, it is the future of all mankind. There are huge opportunities and threats that are difficult to predict today’.40 In 2018, the Russian Ministry of
Defence hosted the first conference entitled ‘Artificial Intelligence: Problems
and Solutions’, at which the Defence Minister Sergei Shoigu set a task for
military and civilian specialists to join forces in developing AI technologies to
counter possible threats in the field of technological and economic security.41
Thus, the main focus of attention is threats of MUAI that are not directly related
to psychological impact but can ‘regenerate’ into second-level PS threats.
On 10 October 2019, Russia adopted the ‘National Strategy for the
Development of Artificial Intelligence for the Period up to 2030’.42 The points
on human rights, which can serve as a starting point for developing mecha-
nisms for preventing and prosecuting MUAI, are contained in two sections of
the document, one of which is ‘Basic principles for the development and use of
artificial intelligence technologies’ (section III). Among these principles, which
must be observed when implementing the Strategy, are:
39 Y. Yang, B. Goh and E. Gibbs, ‘China seeks to root out fake news and deepfakes with new online content rules’ (Reuters, 29 November 2019) <https://www.reuters.com/article/us-china-technology/china-seeks-to-root-out-fake-news-and-deepfakes-with-new-online-content-rules-idUSKBN1Y30VU> accessed 27 May 2021.
40 RIA ‘Novosti’, ‘Putin: the leader in the field of artificial intelligence will become the ruler of the
world’ (RIA ‘Novosti’, 1 September 2017) <https://ria.ru/20170901/1501566046.html> accessed
27 May 2021.
41 K. Fedorov, ‘Shoigu called on scientists to unite to work on artificial intelligence’ (Zvezda TV, 14
March 2018) <https://tvzvezda.ru/news/forces/content/201803141458-vp29.htm> accessed 27
May 2021.
42 President of the Russian Federation,‘Decree of 10.10.2019 No. 490 on the development of artificial
intelligence in the Russian Federation’ (Russia) (Official Legal Information Portal, 11 October 2019)
<http://publication.pravo.gov.ru/Document/View/0001201910110003> accessed 27 May 2021.
the digital economy; b) security: the inadmissibility of using AI for the
purpose of intentionally causing harm to citizens and legal entities, as well
as preventing and minimizing the risks of negative consequences of using
AI technologies.43
One of the main aims of creating a comprehensive system for regulating public
relations arising from the development and implementation of AI technologies
in the Russian strategy is the development of ethical rules for human interac-
tion with AI.44
The federal law entitled ‘On Conducting an Experiment to Establish Special Regulation in Order to Create the Necessary Conditions for the Development and Implementation of AI Technologies in the Subject of the Russian Federation – the City of Federal Significance Moscow’45 caused both positive and negative
reactions in the media. The law contains an indication that experiments must
not violate the rights of citizens:
The result of the establishment of a pilot legal regime may not be the
restriction of constitutional rights and freedoms of citizens, imposition of
additional duties, violation of the unity of economic space in the territory
of the Russian Federation or other depreciation of the safeguards guaran-
teeing protection of the rights of citizens and legal entities.46
43 ibid.
44 ibid. See also the English translation: M. Konaev, A. Vreeman and B. Murphy (trs), ‘Original CSET Translation of Decree of the President of the Russian Federation on the Development of Artificial Intelligence in the Russian Federation’ (CSET, 28 October 2019) <https://cset.georgetown.edu/research/decree-of-the-president-of-the-russian-federation-on-the-development-of-artificial-intelligence-in-the-russian-federation/> accessed 27 May 2021.
45 President of the Russian Federation, ‘Federal Law No. 123-FZ of 24.04.2020 on conducting an
experiment to establish special regulation in order to create the necessary conditions for the devel-
opment and implementation of artificial intelligence technologies in the subject of the Russian
Federation – the Federal City of Moscow and amending articles 6 and 10 of the Federal law on
personal data’ (Russia) (Official Legal Information Portal, 24 April 2020) <http://publication.pravo.gov
.ru/Document/View/0001202004240030?index=0> accessed 27 May 2021.
46 ibid., ch 1, art. 5.
47 Sberbank, Analiticheskij obzor mirovogo rynka robototehniki (Analytical review of the global robotics market)
(Sberbank 2019), 246.
implementation of an experimental legal regime. Given that Russian legislation in the field of AI is at the very beginning of its development, a great deal of work remains to be done. How will it respect the principle of transparency, make the workings of AI understandable, and specify in policy and federal law citizens’ non-discriminatory access to the results of AI? These are fundamental questions from the point of view of citizens’ rights, and of citizens’ own attitude to the implementation of innovation; today they are extremely important for the development of society and technology.
48 V. Van Roy, AI Watch – National Strategies on Artificial Intelligence: A European Perspective in 2019, EUR 30102 EN (Publications Office of the European Union 2020), 3.
49 CyberBRICS, ‘About Us’ (CyberBRICS, 2021) <https://cyberbrics.info/about-us/> accessed 27
May 2021.
Cooperation in Science, Technology and Innovation’. Since then, cooperation
in the field of ICT has been developing rapidly, as evidenced by the promotion
of the BRICS Digital Partnership in 2016, the adoption of the Declaration of
the BRICS Presidential Summit for a Collaboration for Inclusive Growth and
Shared Prosperity in the 4th Industrial Revolution, and the development of
the Enabling Framework for the Innovation BRICS Network.50 Also worth noting are new initiatives such as the BRICS Partnership on New Industrial Revolution (PartNIR), the Innovation BRICS Network (iBRICS Network),
and the BRICS Institute of Future Networks. The establishment of the first
BRICS Technology Transfer Centre (in Kunming) and the first BRICS
Institute of Future Networks (in Shenzhen) in China demonstrates not only
the country’s leadership in AI, but also its interest in developing technologi-
cal cooperation in BRICS.51 Liu Duo, the President of China’s Academy of
Information and Communications Technologies, has said that the Institute in
Shenzhen will focus on policy research on 5G, the industrial internet, AI, vehi-
cle internet, and other technologies. Additional efforts will be made to encour-
age the exchange of ideas between the five countries and to organise more
training events and exchange programmes.52 The iBRICS platform was estab-
lished by the 2019 summit in Brasilia53 and provides contacts between science
parks and technology clusters of the BRICS countries, specialised associations,
and supporting structures to encourage start-ups, including in the field of AI.
Unfortunately, the limited scope of this chapter does not allow us to describe more practical experience in the nascent field of MUAI regulation. However, it is already obvious that open access to strategic planning documents allows citizens (including researchers) to form a complete picture of the development of AI, as well as of the risks and threats associated with it. Currently, there is an active discussion of the fundamental principles that should govern the use of AI in the collection, processing, and use of data. The strategic documents of the European Union declare the need to ensure citizens’ personal control over their personal data. In the United States, for example, there was previously a proposal to revise this principle in favour of the effective management of personal data for the benefit of citizens.54
Conclusions
The materials studied, including those whose analysis is not included in this
chapter, show that, at both the national and the international levels, there are
no comprehensive mechanisms for the legal regulation of the use of AI, includ-
ing regulations to prevent acts of MUAI or to prosecute individuals and groups
who commit such acts. The national legislation of some countries introduces
regulations for the use of certain technologies, such as the creation of deep-
fakes, but, in general, the legal aspects of regulating the use of AI in the PS
system are still under discussion.
The ability of AI to make autonomous decisions raises the question of AI
subjectivity for lawmakers (to determine who is responsible for the offence:
the AI itself, its developer, or users who have trained the AI using distorted
data), and this question can only be resolved with the close cooperation of
AI specialists, experts in PS, psychologists, and lawyers at the national level.
The first attempts to regulate the use of AI are in fact only now being made. In these conditions, it is extremely difficult for international institutions to develop such measures until the countries with the highest indicators of AI development create their own norms and, on that basis, put forward proposals on international legal issues. It is significant, for example, that at the time this chapter is being written there are no UN resolutions with instructions for regulating deepfakes. It is all the more important to start a comprehensive
study of MUAI today before MUAI comprehensively damages and disorgan-
ises society. The COVID-19 pandemic has become a powerful catalyst for the
digitalisation of a wide variety of services, which inevitably leads to the growth
of MUAI (this is already proven, for example, by the increase in the number
of phishing attacks). In these circumstances, the legislative actors are forced
to work in the context of a double crisis – a pandemic and rising geopolitical
tension.
In the new environment, comprehensive research on MUAI in the context
of human rights is extremely important for the PS system, with a combina-
tion of the results of work in two areas: (1) the ‘technical’ direction – scenario
analysis of situations in which MUAI occurs, based on the technical capabilities
of AI itself; (2) the ‘social’ direction – analysis of social developments that takes
into account possible threats to civil rights and freedoms and, in this context,
the specific risks of using AI.
The synthesis of the results of work in these areas will allow us to formulate
both favourable and unfavourable scenarios for the implementation of AI in
the context of PS.
When developing a human-oriented approach to the problem of AI, it
is important to formulate the goals of technological development correctly.
It is most effective to set socially-oriented goals and objectives using specific
planned indicators (such as the UN Sustainable Development Goals55). It is
advisable to refine general policy formulations in accordance with social goals.
A socially-oriented approach avoids the objectification of citizens and preserves
their subjectivity when regulating the use of AI in the context of PS.
Introduction
Citizenship by investment (CBI) is a process whereby a country grants an
applicant citizenship in exchange for an investment in that country’s economy.1
CBI is different from other residency schemes because it grants an applicant not only residency and the right to work, but also a passport with unlimited stay, including the political rights and traditional obligations of the respective national domain.2
The required investment types and amounts involved vary between CBI host countries, although most nations offering CBI, such as Antigua and Barbuda or Malta, give applicants a choice between making a contribution to a government-run national development fund, a significant real estate purchase, or a corporate investment.3
CBI programmes are unique in that they do not fit into traditional models of naturalisation and citizenship. CBI is an expedited form of naturalisation through some means of investment, and many industry protagonists regard it as a unique route to second citizenship, as opposed to acquisition by birthright, whether by descent (jus sanguinis) or by birth in the territory (jus soli).4
The reasons why prospective applicants seek second citizenship may depend on many factors, such as visa-free access to other countries, education and healthcare for the applicant’s children, or economic benefits, including the free movement of capital without restriction.5 There may also be push factors driving applicants to acquire CBI. For instance, when countries begin
DOI: 10.4324/9781003246503-5
to face emergencies, economic crisis, and/or political turmoil, often wealthy
people are the first to migrate abroad in search of a safer environment.6
The term ‘high-net-worth individual’ is used by some segments of the financial services industry to describe a person whose investible wealth (assets such as stocks and bonds) significantly exceeds a given amount.7 For them, second citizenship is a compelling investment option. Such programmes have existed for decades; for instance, the United States, United Kingdom, and Canada have had ‘Immigrant Investor Programmes’ dating back to the mid-1980s and mid-1990s.8 Small states have offered a more direct route to citizenship without, or on the basis of very limited, residency requirements. These currently include nations such as Cyprus, Dominica, and St Kitts and Nevis; in the past they included Ireland and several Pacific islands. In this regard, CBI
programmes can be very different in scope and character, as every nation has
shaped its programme in consonance with its specific needs and priorities.9 All
of these programmes purport to stimulate growth and employment through
attracting more foreign capital and investment by way of offering expedited
citizenship to high net-worth individuals.10
Nevertheless, the facilitation of CBI programmes potentially creates serious corporate and immigration compliance risks for governments, including the potential for (a) money laundering and (b) border security risks.
In consideration of the above, corporate and immigration due diligence are areas that are currently being transformed by AI, but AI has yet to be introduced into CBI programmes uniformly. Uniform corporate and immigration due diligence, or the lack thereof, is the key risk area in CBI programmes, particularly as regards facilitating money laundering and border security risks. Before considering uniform due diligence bolstered by AI, it is important to illustrate the justifications for, and importance of, due diligence in CBI programmes.
Due diligence
In the context of CBI programmes, due diligence can be described as an
investigation, audit, or review of a CBI applicant to confirm the legitimacy of
6 R. Sharma, ‘The Millionaires Are Fleeing. Maybe You Should, Too’ (NY Times, 2018) <https://www.nytimes.com/2018/06/02/opinion/sunday/millionaires-fleeing-migration.html> accessed 27 May 2021.
7 A. Hayes, ‘High Net Worth Individual’ (Investopedia, 21 March 2021) <https://www.investopedia.com/terms/h/hnwi.asp> accessed 27 May 2021.
8 The number of US EB-5 investor visas increased five-fold from 2010, but this still represents only
2% of annual immigration to the U.S.
9 X. Xu, A. El-Ashram and J. Gold, ‘Too Much of a Good Thing? Prudent Management of Inflows Under Economic Citizenship Programs’ [2015] International Monetary Fund, Working Paper WP/15/93, 13–20.
10 Ibid.
Leveraging AI in citizenship by investment programmes 55
the application and assess the potential risks under consideration. At its most basic level, due diligence is a series of investigative and auditing processes that identify, corroborate, and verify information about an individual or a business entity so that CBI host states can be protected against money laundering and border security risks.11 CBI due diligence should specialise in thorough background checks, via police databases and similar sources, on criminal records, corruption, sanctions, litigation, frozen assets, concealment, tax fraud, financial crime, and embezzlement. Notwithstanding these risks, the justification for bolstering corporate and immigration due diligence with AI tools lies mainly in the number of applications, which indicates the rapid growth of the industry, and in the significant revenue generated for CBI host governments. Risk potential is amplified by the growing number of applicants and the possibility that one or more malfeasant applicants could circumvent current due diligence methods and ruin a programme’s reputation.
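The screening described above can be illustrated with a minimal, purely hypothetical sketch (the watchlist entries, threshold, and function names below are illustrative assumptions, not drawn from any actual CIU system): an AI-assisted due diligence pipeline typically begins with fuzzy name matching of an applicant against sanctions and watchlist data, flagging near-matches for human review.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist; a real system would query consolidated
# sanctions and police databases via a screening provider.
WATCHLIST = ["Ivan Petrov", "Maria Gonzalez", "John A. Smith"]

def name_similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity score between two normalised names."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def screen_applicant(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Flag watchlist entries whose similarity to the applicant's name
    meets the (hypothetical) threshold; hits are escalated to human review."""
    scores = [(entry, round(name_similarity(name, entry), 2)) for entry in WATCHLIST]
    return [(entry, score) for entry, score in scores if score >= threshold]

# A close spelling variant is flagged; an unrelated name returns no hits.
print(screen_applicant("Ivan Petrof"))
print(screen_applicant("Alice Wong"))
```

In practice such matching would draw on consolidated sanctions databases and far more sophisticated entity-resolution models; the point is that automated screening only flags candidates, with adjudication left to human officers.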
Statistically, in the Caribbean territories, applications had been on the rise in most cases until 2020. This is exemplified in the 2021 Budget Statement for Antigua and Barbuda, presented by Prime Minister the Honourable Gaston Browne, who outlined, among other things, the importance of revenues from the citizenship programme. In 2020, a total of 366 applications were received, representing a 22% decrease from the previous year.12 Nonetheless, the CBI generated $115.7 million in 2020 alone, and from its inception in 2012 Antigua and Barbuda has naturalised over 4,000 citizens through its programme.13 Just 188 km south, Dominica’s citizenship by investment unit (CIU) reported 2,100 applications to its CBI between July 2018 and June 2019,14 the highest application approval figure officially recorded in a single year among Caribbean CBIs. On this point, it is worth noting that St Kitts and Nevis, whose government declines to disclose precise annual data because it is considered a trade secret, may have rivalled Dominica in 2019.15 Regardless, Dominica issued 5,815 passports to investors and their family members between 2017 and 2020, bringing the country $1.2 billion. Finally, in St Lucia, according to news
11 Oxford Analytica, ‘Due diligence in Investment Migration: Best Approach and Minimum Standard
Recommendations’, (Report 1, Investment Migration Council, January 2020) <https://investment-
migration.org/wp-content/uploads/DD-in-IM-Best-Approach-and-Minimum-Standard-Recom-
mendations-January-2020.pdf> accessed 27 May 2021.
12 Hon. Gaston Browne, ‘Antigua and Barbuda Budget Statement 2021’ (January 2021) available at
<https://ab.gov.ag/pdf/budget/Budget_Speech_2021.pdf > accessed 27 May 2021.
13 Ch. Nasheim, ‘Antigua CIP Data: Applications Dip, 97% Pick Contribution Option’ (Investment
Migration Insider, 3 January 2020) <https://www.imidaily.com/caribbean/antigua-cip-h1-2019-data
-applications-dip-97-pick-contribution-option/> accessed 27 May 2021.
14 Ch. Nasheim, ‘Dominica CIP approved 2,100 Applications in Last 12 Months, Likely a World
Record’ (Investment Migration Insider, 21 September 2019) <https://www.imidaily.com/caribbean/
dominica-cip-approved-2100-applications-in-last-12-months-likely-a-world-record/> accessed 27
May 2021.
15 ibid.
56 Regulating artificial intelligence in industry
reports, over 1,000 individuals have been granted St Lucian citizenship as of
2020, its fifth year of operation, and this has raised $1.1 billion.16
In the EU, Bulgaria, Cyprus, and Malta are the only Member States operat-
ing CBI schemes.17 Malta understandably places a yearly cap on the number of
citizenship by investment applicants who can be naturalised. Their CBI naturalised a total of 3,673 applicants between its inception in 2013 and 2018, generating over €900 million, which represented 7.2% of Malta’s 2017 Gross Domestic Product.18 Comparatively, the Cyprus Ministry of Finance reported that, according to an audit covering June 2013 to December 2019, 2,855 investors had received Cypriot passports for investments, and in seven years the programme generated €9.7 billion.19
Globally, CBI has grown into an estimated $3 billion industry, with,
reportedly, 9,000 CBI applicants approved worldwide in 2018/2019.20 It is
estimated that in 2020 in Turkey alone there were about 13,000–14,000 appli-
cants, despite the COVID-19 pandemic.21 This figure is expected to grow
because programmes have become much more affordable and as a result of
CBI competition to attract more wealthy individuals.22 This drop in price also
presents a problem as it could give easier access to individuals with malfeasant
intentions. In view of the growth illustrated above, and the amplified risks that accompany it, it is suggested that current due diligence practices within the CBI industry should be supported with various machine learning tools to increase their effectiveness and efficiency.
In support of improving due diligence in CBI, the Investment Migration
Council,23 in coordination with BDO USA (assurance, tax, and advisory ser-
vices), Exiger (global regulatory and financial crime, risk, and compliance
16 J. St. Amiee, ‘188 Granted citizenship through St. Lucia CIP 2019–2020: Chinese lead the way’ (St
Lucia:The Star, 24 December 2020) <https://stluciastar.com/188-granted-citizenship-through-saint
-lucias-cip-in-2019-2020-chinese-lead-the-way-again/> accessed 27 May 2021.
17 European Commission, ‘Investor Citizenship and Residence Schemes in the European Union’, Report from
the Commission to the European Parliament, The Council, the European Economic and Social
Committee and the Committee of the Regions, Brussels, 23 January 2019, COM(2019) 12 final, 3.
18 Office of the Regulator (Individual Investor Programme), ‘Fifth Annual Report on the Individual
Investor Programme of the Government of Malta’ (ORiip Annual Report, November 2018) availa-
ble at <https://orgces.gov.mt/en/Documents/Reports/Annual%20Report%202018.pdf> accessed
27 May 2021.
19 European Commission Report (n 17), 19–20.
20 Oxford Analytica (n 11).
21 La Vida – Golden Visas, ‘Turkey’s Record Breaking Citizenship by Investment Programme’, 4 May
2021 <https://www.goldenvisas.com/turkeys-record-breaking-citizenship-program> accessed 27
May 2021.
22 Dominica and St Kitts and Nevis now offer CBI for as low as $100,000.
23 The Investment Migration Council, as illustrated on their website, is the worldwide association for
investor immigration and CBI, bringing together the leading stakeholders in the field.
Leveraging AI in citizenship by investment programmes 57
company), and Refinitiv (financial markets data and infrastructure institution),
formed a due diligence working group to examine the state of due diligence in
the entire investment migration sector and explore the potential for minimal
standards, greater transparency, and information sharing across the industry.
This working group produced two reports, the first of which provides a critical overview of due diligence processes and an assessment of whether current due diligence practices need improvement.24 The second report sets out recommendations for basic minimum standards for providers of due diligence within the industry.25 These recommendations make it apparent that introducing specific AI tools is increasingly realistic as a means of establishing minimum standards within the industry.
Currently, there is little to no due diligence coordination, and there are no uniform standards, among nations operating CBI programmes, and AI could potentially assist with this. At present, licensed agents are required to perform the initial administration of due diligence checks and to present their applications to government CIUs. As such, agents have the first opportunity to identify and reject applicants who do not meet the due diligence requirements, and whether an agent makes an adequate initial decision depends partly on the quality of due diligence they conduct.26
CIUs should, amongst other things, conduct verification of the validity of
the individual’s police and court records and check national databases and other
government records. Machine learning techniques and algorithms used in soft-
ware such as ‘Veriff’27 can help in this regard. Moreover, CIUs should liaise
with international law enforcement agencies to obtain details on outstanding
warrants or suspicions of international criminal activity. CIUs should focus on the integrity and security not just of individual programmes but of the industry as a whole. It is suggested that CIUs should incorporate the
most innovative and technically advanced tools for conducting multi-tiered
and multilaterally coordinated due diligence.
Due diligence tasks can create significant administrative strains on licensed agents and CIUs, as they are required to be not only thorough but also proactive, since there is currently no collaboration in CBI concerning corporate and immigration due diligence.
Implementation of AI is likely to offer great opportunities in this area.
Machine learning tools could assist with the manual workload of CIUs and
24 Oxford Analytica, ‘Due Diligence in Investment Migration: Current Applications and Trends’, Report 1, January
2020, <https://www.refinitiv.com/content/dam/marketing/en_us/documents/partners/due-dili-
gence-in-investment-migration-current-applications-and-trends.pdf> accessed on 27 May 2021.
25 Oxford Analytica, ‘Due diligence in Investment Migration: Best Approach and Minimum Standard
Recommendations’, Investment Migration Council Report 2, January 2020 <https://investment-
migration.org/wp-content/uploads/DD-in-IM-Best-Approach-and-Minimum-Standard-Recom-
mendations-January-2020.pdf> accessed 27 May 2021.
26 Oxford Analytica (n 11).
27 Veriff is an automated global identification service which simplifies compliance.
help agents with conducting preliminary due diligence. Moreover, automatic
facial recognition techniques, in the context of immigration biometrics and border security, could streamline currently manual due diligence processes.
The screening of applicants’ biometrics by CIUs is essential for maintaining
security and for safeguarding the credibility of individual CBI programmes.28
Beyond the potential macroeconomic risks, CBIs, if exploited, could help to hide or facilitate financial, economic, and/or organised crimes, including bribery, corruption, and money laundering. The last problem is significant. Money laundering can be defined as ‘a process criminals use to hide,
control, invest, and benefit from the proceeds of their criminal activities. It
occurs where criminals attempt to create a legitimate background for their
money’.32 Machine learning algorithms have been shown to be particularly useful in monitoring transactions and detecting suspicious activity. These two elements of the definition, concealment and the creation of a legitimate background, have been recognised by the Financial Action Task Force and the majority of OECD Member States as key characteristics of money laundering, and are thereby at the heart of national anti-money laundering standards.33
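To illustrate the transaction-monitoring point above, a minimal anomaly check is sketched below. It is not drawn from any vendor’s product: the account history, the deviation threshold, and the function name are all hypothetical, and real anti-money laundering systems combine far richer signals with trained models.

```python
# Minimal, hypothetical sketch of rule-style transaction monitoring: flag a
# new transaction whose amount deviates sharply from the account's history.
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag `amount` if it lies more than `threshold` standard deviations
    from the mean of the account's historical transaction amounts."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(amount - mu) / sigma > threshold

# A hypothetical account: routine payments, then one very large transfer.
routine = [120, 95, 130, 110, 105, 98, 125]
print(is_suspicious(routine, 50_000))  # → True (flagged for review)
print(is_suspicious(routine, 140))     # → False (consistent with history)
```

In practice, such statistical rules serve only as one layer; flagged transactions would be passed to a trained classifier and then to a human compliance officer.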
34 Deloitte,‘The case for artificial intelligence in combating money laundering and terrorist financing:
A deep dive into the application of machine learning technology’ (Deloitte 2018) <https://www2
.deloitte.com/content/dam/Deloitte/sg/Documents/finance/sea-fas-deloitte-uob-whitepaper
-digital.pdf > accessed 27 May 2021.
35 ibid.
36 ibid.
37 Luminance is the leading artificial intelligence platform for the legal profession, with over 250 cus-
tomers in more than 50 countries. Luminance’s machine learning technology reads and forms an
understanding of documents, helping lawyers to perform the most thorough and rapid document
reviews across practice areas including due diligence, contract negotiation, regulatory compliance
reviews, property portfolio analysis and eDiscovery.
38 B. El Nakib, ‘Understanding Artificial Intelligence to fight money laundering’ (Compliance Alert, 19
December 2017) <https://calert.info/details.php?id=1614> accessed 27 May 2021.
39 Institute of Chartered Accountants in England and Wales, ‘AI in Corporate Advisory – Investment,
M&A and transaction services’ (ICAEW) <https://www.icaew.com/technical/corporate-finance/
ai-in-corporate-advisory> accessed 27 May 2021.
40 Deloitte (n 34).
both onerous and manual, resulting in increased workload for compliance, as
well as potential gaps in surveillance and monitoring.41
Some other areas that are gaining traction include fraud detection, auto-
mated reporting, and enhanced surveillance, including voice, video, text, and
pattern-based transaction monitoring. Machine learning technology in fraud
detection currently allows banks to accurately predict if an account is at risk of
being compromised, and it is proposed that similar techniques be adapted in
CBI programmes to accurately predict if CBI applicants have financial risks.42
Companies such as ‘DataVisor’ provide specialist software that can track fraud-
ulent transactions. DataVisor combines applied machine learning capabilities
with powerful investigative workflows and an intelligence network of more
than 4 billion user accounts to provide real-time fraud signals, insights, and
protection to preserve vital trust and security. In 2020 the company claimed
that the software detected up to 30% more fraud with as much as 90% more
accuracy.43
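The kind of risk prediction described above can be sketched, in heavily simplified form, as a logistic-style score. The features, weights, and bias below are invented stand-ins for parameters a model would learn from labelled historical cases; nothing here reflects DataVisor’s or any other provider’s actual method.

```python
# Illustrative only: a logistic-style applicant risk score with invented
# features and hand-set weights standing in for a trained model.
import math

WEIGHTS = {
    "sanctions_list_match": 4.0,      # hypothetical strongest signal
    "offshore_shell_companies": 1.5,
    "unexplained_wealth": 2.0,
    "adverse_media_hits": 0.8,
}
BIAS = -3.0

def risk_probability(applicant):
    """Map weighted feature counts through a logistic function to (0, 1)."""
    score = BIAS + sum(w * applicant.get(f, 0) for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))

low = {"adverse_media_hits": 1}
high = {"sanctions_list_match": 1, "offshore_shell_companies": 2,
        "unexplained_wealth": 1}
print(risk_probability(low) < 0.2, risk_probability(high) > 0.9)  # → True True
```

A production system would learn the weights from data and calibrate the threshold at which an application is escalated for enhanced due diligence.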
CBI governments typically expend significant resources on gathering information about potential applicants, particularly in the context of programme investment and due diligence administration requirements. The introduction of an AI
chatbot to CBI could potentially minimise expenses and boost efficiency in this respect, thereby helping to increase programme earnings. A chatbot is a real-time
virtual online assistant that can simulate human conversational behaviour.44 In
online ‘conversations’ with potential applicants, chatbots can ask and answer
questions, and generally interact ‘intelligently’. Data from these online chats
can be collected and analysed in order to generate better results in the future.45
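The mechanics of such a chatbot can be sketched very simply. The FAQ entries below are invented; production chatbots use natural language processing rather than keyword matching, and would also log each exchange for the later analysis described above.

```python
# Deliberately simple, rule-based sketch of a hypothetical CBI enquiry
# chatbot: match a keyword in the visitor's message to a canned answer.
FAQ = {
    "investment": "Minimum qualifying investments vary by programme.",
    "documents": "Applicants typically submit a passport, police record, and proof of funds.",
    "processing": "Due diligence and processing times vary by jurisdiction.",
}

def reply(message):
    """Return the first FAQ answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "Let me connect you with a licensed agent."  # fallback to a human

print(reply("Which documents do I need?"))
```

The fallback branch reflects the design point in the text: the chatbot handles routine queries cheaply and escalates everything else to licensed agents.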
Ultimately, all monitoring systems and analytics, not just machine learning applications, depend on high-quality data. Given the unprecedented availability of corporate data globally, there is arguably a renewed widespread interest
in applying data-driven machine learning methods to problems for which the
development of conventional corporate solutions is challenged.46
41 ibid.
42 R. Shroff, ‘Artificial Intelligence for Risk Reduction in Banking: Current Uses’ (Towards Data Science, 16 January 2020) <https://towardsdatascience.com/artificial-intelligence-for-risk-reduction
-in-banking-current-uses-799445a4a152> accessed 27 May 2021.
43 ibid.
44 ibid.
45 ibid.
46 O. Simeone, ‘A Very Brief Introduction to Machine Learning with Applications to Communication Systems’ [2018] 4 IEEE Transactions on Cognitive Communications and Networking, 650–651.
and traffic across defined border areas or zones. AI is gradually being used to
perform certain tasks in this respect, including identity checks, border security
and control, and analysis of data about visa and asylum applicants.47 There are
known examples of such use of AI in Canada and Germany. In the context
of CBI programmes, certain unique security risks have been identified, including applicants fleeing justice in their home jurisdictions and abusing visa-free travel privileges granted through the purchased passport. The loss of such privileges is exemplified by Canada’s 2014 revocation of visa-free access for all St Kitts and Nevis citizens because of due diligence practices, or the lack thereof, in their CBI programme.48 In this regard,
it is argued that a single malfeasant applicant could expose not only domestic programmes but the CBI industry in general to serious risks and liabilities.
An interesting example of fleeing justice is the 2019 case of the Indian businessman Mehul Choksi, who was accused in India of fraudulent activity against the Punjab National Bank.49 Although he had held Antigua and Barbuda citizenship through their CBI since 2017, and there was an extradition treaty between the two states, Mr Choksi started legal proceedings in Antigua and Barbuda against his extradition to India and surrendered his Indian citizenship. As reported in 2021, the Prime Minister of Antigua and Barbuda, Gaston Browne, remains adamant that Mr Choksi must leave the country, as he is ruining the image of the country’s CBI programme. At the time of writing, the case is still being challenged by Mr Choksi in the Antigua and Barbuda courts.50
In best practice jurisdictions, the due diligence of travellers is informed
by document readers, which provide an efficient and accurate mechanism
to extract data from travel documents, automatically triggering watch-list searches, enabling biographic and biometric identity verification, and recording the entry of the traveller.51 In the ultimate expression of this automation,
travellers interact with self-service ‘eGates’ and kiosks on entry (or departure)
without processing input from border agency staff, thus releasing resources
for redeployment to other security or facilitation objectives.52 Currently, such
53 P. Molnar and L. Gill, Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System (University of Toronto 2018), 14–23.
54 Beduschi (n 48).
55 Migration Data Portal, ‘AI-enabled identification management of the German Federal Office for
Migration and Refugees’ <https://migrationdataportal.org/data-innovation-59> accessed 27 May
2021.
56 A. Chun, ‘An AI Framework for the Automatic Assessment of e-Government Forms’ [2007] AAAI,
1684–1690.
57 ibid.
AI. These may include algorithmic bias, data exploitation, and inequality with regard to whom citizenship is granted.58
As reported in 2018, internet giant Amazon used AI to build a resume-screening tool in the hope that it could make the process of evaluating applications more efficient. The company built an algorithm using resumes it had collected over a decade. Since those resumes came primarily from men, Amazon’s system taught itself that male candidates were preferable, and the company subsequently scrapped
the tool.59 This is an example of so-called ‘algorithmic bias’. It poses a serious
challenge to the use and governance of AI.60 This is because machine learning
in CBI programmes will involve ‘training’ the AI, using examples and data of
how humans have previously made administrative decisions. These data exam-
ples or sets may contain social biases and, consequently, this may be reflected
in the AI-based system’s decisions.61
Hypothetically, if the data used for training AI in CBI programmes contained only applicants from one ethnic background, together with the specific risks associated with applicants from that background, the AI’s decisions across the board would reflect the biases, attitudes, and opinions embedded in those applications. The same holds for other decisions, such as corporate investment administration and immigration considerations.
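This hypothetical can be reduced to a toy demonstration: a model that learns approval rates only from one group’s historical decisions has no fair basis for deciding about applicants from any other group. The groups, labels, and threshold below are invented purely for illustration.

```python
# Toy demonstration of training-data bias: learn per-group approval rates
# from historical human decisions, then apply them to new applicants.
def train(records):
    stats = {}  # group -> (approvals, total decisions)
    for group, label in records:
        ok, total = stats.get(group, (0, 0))
        stats[group] = (ok + (label == "approve"), total + 1)

    def predict(group):
        if group not in stats:
            return "reject"  # unseen backgrounds default to rejection
        ok, total = stats[group]
        return "approve" if ok / total >= 0.5 else "reject"

    return predict

# Training data drawn from a single background reproduces that background's
# pattern and leaves every other background without a basis for decision.
model = train([("group_a", "approve")] * 9 + [("group_a", "reject")])
print(model("group_a"), model("group_b"))  # → approve reject
```

The skew here is deliberately crude, but the mechanism — historical decisions becoming future defaults — is exactly the one the Amazon example illustrates.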
Wong illustrates that it is extremely difficult to create AI training data that are neutral and independent of all human biases.62 One approach to bias neutrality
is to use decision data examples from humans with a wide spread of cultural
and social backgrounds.63 There is an increasing emphasis on human rights in
the conversation about the ethical and social issues raised by AI technologies.64
In the context of CBI, gender discrimination is a key concern.
Gender, in the context of CBI nations, refers to the roles, behaviours,
activities, attributes, and norms that a given society at a given time considers
appropriate for men and women. These attributes, opportunities, and relation-
ships are socially constructed and learned through socialisation processes.65 The
objectives here are to identify gender factors that influence investment and
immigration behaviour and to specify the investment decision-making pro-
cess, particularly with respect to female applicants. Empirical investigations of
58 ibid.
59 J. Dastin, ‘Amazon scraps secret AI recruiting tool that showed bias against women’ (Reuters, 11
October 2018) <https://www.reuters.com/article/us-amazon-com-jobs-automation-insight
-idUSKCN1MK08G> accessed 27 May 2021.
60 P-H Wong, ‘Cultural Differences as Excuses? Human Rights and Cultural Values in Global Ethics
and Governance of AI’ [2020] 33 Philosophy & Technology, 705–715.
61 ibid.
62 ibid.
63 ibid.
64 ibid.
65 A. Mackay, Border Management and Gender (DCAF 2018), 7.
gender differences in risk taking do point in the direction of less risk taking by
women than by men.66
Relating to this, it is suggested that CBI programmes should consider the
investment and immigration compliance risks associated specifically with high-
net-worth females and sovereign border controls. The integration of gender
policy into AI border controls in CBI is essential to comply with international
and regional legal frameworks,67 instruments and norms concerning security,
gender equality, and human rights.68 Consequently, AI systems, not limited to
CBI, should be designed and operated so as to be compatible with the princi-
ples of human dignity, fundamental rights and freedoms, and cultural diversity,
thereby avoiding algorithmic bias. The introduction of AI presents novel social
challenges and risks that will require coordinated responses.69
Given its multidimensional character, AI inherently touches upon a full spectrum of legal fields: legal philosophy, human rights, contract law, tort law, labour law, criminal law, tax law, investment law, and procedural law, to name a few. Moreover, data collection also raises jurisdictional problems. It
would be impossible to address all these aspects in detail in one chapter. Some
of these issues are discussed in other chapters in this volume.
Conclusions
The current AI systems used in corporate and immigration compliance could
be applied to CBI programmes. Under corporate compliance, AI could
help identify malfeasant applicants, and those involved with the proceeds
of crime, including money laundering. Corporate due diligence in CIUs
would thereby prove to be more effective and efficient. For immigration, meanwhile, simple biometric administrative tasks and overall border security due diligence would vastly improve, much in line with the AI models already being used in the immigration sector in Canada and Germany. In this regard, data must be uploaded without algorithmic bias and must represent the true dynamics of each CBI programme’s specific considerations relating to gender bias.
It is important to formulate a solution for risk assessment and mitigation
while maintaining a balance between non-discrimination, border security, and
investment opportunities. CIUs should, amongst other things, focus on money
66 C. Eckel, P. Grossman, ‘Forecasting Risk Attitudes: An Experimental Study of Actual and Forecast
Risk Attitudes of Women and Men’ [2002] 23 Virginia Tech 205-207.
67 For instance with the United Nations Convention on the Elimination of Discrimination Against
Women, 18 December 1979, 1249 UNTS 13.
68 Mackay (n 65), 18–20.
69 M. Brundage et al., ‘The malicious use of artificial intelligence: Forecasting, prevention, and mitigation’ (Future of Humanity Institute and the Centre for the Study of Existential Risk 2018) <https://
arxiv.org/ftp/arxiv/papers/1802/1802.07228.pdf> accessed 27 May 2021.
laundering and border security to make sure that their programmes provide
sustainable long-term profits by using the most innovative and technically
advanced tools for due diligence. National governments operating CBI programmes must respect the global legal standards set by institutions such as the European Commission, as well as the relevant international legal standards, as aforementioned, designed to prevent money laundering and other risks.
5 Artificial intelligence
application in advance
healthcare decision-making
Potentials, challenges and
regulatory safeguards
Hui Yun Chan
Overview
Healthcare decision-making is one of the areas impacted by AI, where it is
continuously integrated into healthcare services to facilitate better and more
efficient delivery of patient care. AI in end-of-life care garnered attention when a recent study investigated the application of AI-assisted decision-making for predicting patients’ chances of survival and recovery.1 The study highlighted both the potential of AI applications to enhance doctors’ confidence in clinical assessments and the likelihood of harm to patients arising from the risks of using AI.2 Advance decision-making is an important part of healthcare for
guiding treatment decisions and particularly useful for planning future care and
treatment, ranging from long-term illnesses, such as progressive illnesses, to
terminal illnesses, such as cancer. It is often used in end-of-life care planning,
and thus it is valuable to examine AI’s deployment in this area.
AI in healthcare
AI is increasingly applied in healthcare provision across a range of disciplines,
such as cancer, neurology, cardiology, urology, patient systems, diagnostics,
and medical imaging.3 It is often utilised to manage and analyse data, make
decisions, transcribe information, and assist doctor–patient communication.
Examples of specific applications across the healthcare sector include image
DOI: 10.4324/9781003246503-6
interpretation in medical imaging,4 early stroke predictions, diagnoses and
prognoses,5 septic shock diagnoses and treatment of chronic obstructive pul-
monary disease,6 end-of-life decision-making,7 personalised healthcare and
augmented diagnoses8 (such as subgroups of diabetes or autism), and in the
realm of primary healthcare with predictive modelling on health data and busi-
ness analytics for primary care providers.9 Private providers have trialled AI in
NHS hospitals for analysing test results, determining the right treatment, and
ensuring the escalation of patient care to the right specialist immediately.10
In addition to being valuable for identifying treatment patterns and diag-
noses through mining troves of digital health records, AI is used to support
healthcare professionals in making better decisions through clinical decision
support systems. These systems analyse relevant patient data to suggest areas of concern or potential complications and propose care pathways, ranging from post-surgical patient care practices to medications (range, doses, allergies, and interactions with other prescriptions), monitoring, and intermittent follow-ups, within a cost-effective and safe environment.11 This application offers the
benefit of ‘reducing medical errors [and] increas[ing] healthcare consistency and
efficiency’,12 thus lessening the burdens on clinicians in diagnosing complex
cases. Its application in responding to public health threats is also known: in the Ebola epidemic, for example, it helped identify patient reports of illness, matched with patients’ travel histories, through the swift processing of vast information relating to diseases and treatments.13
AI is equally utilised in personalised medicine. An example of this is the
partnership between Auckland University of Technology and the Knowledge
4 L.D. Jones, D. Golan, S.A. Hanna, and M. Ramachandran, ‘Artificial Intelligence, Machine Learning and the Evolution of Healthcare: A Bright Future or Cause for Concern?’ (2018) 7 Bone Joint Res., 223.
5 F. Jiang, Y. Jiang, H. Zhi, et al., ‘Artificial intelligence in healthcare: past, present and future’ (2017) 2 Stroke Vasc Neurol e000101. doi:10.1136/svn-2017-000101.
6 S. Reddy, J. Fox and M.P. Purohit, ‘Artificial Intelligence-Enabled Healthcare Delivery’ (2019) 112(1)
J R Soc Med., 22, 23.
7 Lysaght (n 1).
8 Callaghan Innovation, ‘White Paper: Thinking Ahead: Innovation Through Artificial
Intelligence’ <https://www.callaghaninnovation.govt.nz/sites/all/files/ai-whitepaper.pdf> accessed
27 May 2021.
9 H. Liyanage, S.-T. Liaw, J. Jonnagaddala et al., ‘Artificial Intelligence in Primary Health Care: Perceptions, Issues, and Challenges Primary Health Care Informatics’ (2019) Working Group Contribution
to the Yearbook of Medical Informatics.
10 Royal Free London NHS Foundation Trust, ‘Our Work with Google Health UK: DeepMind’
<https://www.royalfree.nhs.uk/patients-visitors/how-we-use-patient-information/our-work-with
-deepmind/> accessed 27 May 2021; NHS England, ‘Artificial Intelligence in Health and Care
Award’ <https://www.england.nhs.uk/aac/what-we-do/how-can-the-aac-help-me/ai-award/>
accessed 27 May 2021.
11 Sloane (n 3), 561, 562.
12 Reddy (n 6), 23.
13 Sloane (n 3), 562.
Engineering Discovery Research Institute that has produced high success rates
for predicting strokes in patients: ‘95 per cent for one day ahead, and 70 per
cent for 7 and 11 days ahead of a stroke occurring’.14 This is beneficial for
preventive measures to manage long-term disability or pre-empt death, and for saving costs in the long term. Such remarkable accuracy has also appeared in other specific areas of healthcare: for example, up to 80% precision in heart disease detection, and 94% accuracy in detecting cancerous growths via endoscopy in Japan.15
AI has much to offer in healthcare in ordinary times, but what about its utility
in extraordinary times, such as the COVID-19 pandemic that swept the world in 2020? The pandemic created a profound sense of urgency in the population for attending to healthcare matters and for making decisions about medical treatment in the event that individuals became infected with COVID-19.16
nature of COVID-19, which primarily affects respiratory functions, has ampli-
fied difficult clinical decisions when deciding whether to resuscitate patients,
and when withdrawing or withholding ventilation support for patients who
are admitted into hospitals. AI could play an important role in assisting clini-
cians and healthcare providers make these decisions, for example, in rapidly
identifying the trajectory of health and illness based on patient profiles using
the latest available scientific evidence of how COVID-19 develops and affects
individuals. The vast network and deep learning features of AI are useful for
simultaneously processing multiple scientific databases relating to COVID-19
and treatment options conducted around the world. In addition to navigat-
ing these evidence-based databases, AI could be deployed to access broader
information, such as the patient’s travel and medical history and likelihood of
survival, to present a clearer profile of clinical outcomes and potential health
advice. The promise of AI appears to be boundless. The next section explores
its potential for supporting clinical decision-making, focusing on advance healthcare decision-making.
17 Re AK (Medical Treatment: Consent) [2001] 1 FLR 129; other cases where there were doubts render-
ing the decisions inapplicable include Re T (Adult: Refusal of Medical Treatment) [1992] 4 All ER 649;
NHS Trust v T [2004] EWHC 1279 Fam; W Healthcare NHS Trust v KH [2004] EWCA Civ 1324.
individuals afflicted by a range of mental illnesses, such as depression, suicidal
ideation or schizophrenia, and progressive disorders such as dementia.18
Patients need to be informed about their health prognoses to formulate their
future medical care. This would necessitate not only an understanding of the
person’s mental state but also health conditions, diagnoses, and prognoses if they
are suffering from an illness. It is very likely that AI-assisted application would
be beneficial in making future treatment plans for individuals suffering from
terminal or progressive illnesses, such as dementia, Parkinson’s or Alzheimer’s.
Machine learning and deep learning functionalities enable the wide, in-depth cross-referencing of health records and databases to form patterns and patient profiles, presenting doctors with the most likely outcomes. Additionally, the
datasets for these recognised illnesses can be drawn from disease databases and matched with the particular patient profile to produce a personalised, initial assessment regarding the way forward. This helps doctors and patients formulate decisions that are closest to the patient’s preferences based on previous
outcomes or behaviours. A comprehensive outlook for such illnesses would
enable patients to have a better comprehension of their long-term health tra-
jectories and to undertake anticipatory remedial measures or propose treatment
strategies to address their health conditions, including quality of life and care
considerations. This diversity of information enables patients and doctors to
make better-informed decisions when planning their future care and treat-
ment. Doctors can identify the nuances of the medical prognoses produced
by AI surrounding the illness, prognosis, and projected life span, which can
then be mapped against individual health conditions, background, values, and
quality of life considerations. In offering specific and contextualised advice to
patients, the doctors would need to make this information comprehensible to
the patients in order for them to formulate their decisions. This is because the
information may risk becoming overwhelming if not carefully presented.
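The profile-matching idea in this paragraph can be reduced to a very small sketch. Everything below, from the features to the outcomes, is invented; it shows only the nearest-neighbour principle of matching a patient profile against historical records, not any clinical system or dataset.

```python
# Hedged sketch of profile matching: a nearest-neighbour lookup over a tiny
# invented dataset of (age, symptom-severity score) -> recorded outcome.
# Clinical systems use far richer features, validated models, and clinician
# oversight.
RECORDS = [
    ((82, 9), "poor prognosis"),
    ((45, 2), "full recovery"),
    ((70, 6), "partial recovery"),
    ((50, 3), "full recovery"),
]

def most_similar_outcome(profile):
    """Return the recorded outcome of the historical profile closest to
    `profile` by squared Euclidean distance."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, outcome = min(RECORDS, key=lambda rec: distance(rec[0], profile))
    return outcome

print(most_similar_outcome((48, 3)))  # → full recovery
```

As the surrounding text stresses, such an output is only an initial assessment: the doctor must still map it against the individual patient’s conditions, values, and quality-of-life considerations.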
Once patient understanding and mental capacity are established, the next step
is to record preferences regarding future medical treatment and care. Natural
language processing helps in searching and locating medical records, while
voice recognition can capture decisions or take notes during consultations.
These records, however, need to be cross-checked with doctors to verify their
accuracy before a final decision is made.
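The record-searching role of natural language processing can be illustrated with a deliberately naive keyword search. The notes and field names are invented for illustration, and every match is flagged for doctor verification, as the text requires:

```python
import re
from datetime import date

# Hypothetical consultation notes; in practice these would come from an
# electronic health record system, not an in-memory list.
RECORDS = [
    {"date": date(2021, 3, 2), "note": "Patient reaffirmed refusal of ventilation."},
    {"date": date(2021, 4, 9), "note": "Discussed palliative care preferences."},
]

def search_records(records, query):
    """Naive keyword matching standing in for NLP retrieval: return matching
    records, each flagged for doctor verification before being relied on."""
    terms = set(re.findall(r"\w+", query.lower()))
    hits = []
    for rec in records:
        words = set(re.findall(r"\w+", rec["note"].lower()))
        if terms & words:
            hits.append({**rec, "requires_doctor_verification": True})
    return hits

hits = search_records(RECORDS, "ventilation refusal")
```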
The second aspect of decision-making is the implementation stage, where
three important considerations arise: whether there has been any change in the
person's thinking about the decision, in their personal circumstances, or in the
scope of the treatment. While the scope of the treatment and personal circum-
stances may be determinable, a change of mind is more complicated. This is
because, by the time the decision falls to be applied, the patient is usually
18 M. Brunn, A. Diefenbacher, P. Courtet et al., 'The Future is Knocking: How Artificial Intelligence
Will Fundamentally Change Psychiatry' (2020) 44 Acad Psychiatry, 461; C. Su, Z. Xu, J. Pathak, et al.,
'Deep learning in mental health outcome research: a scoping review' (2020) 10 Transl Psychiatry, 116.
AI application in advance healthcare decision-making 71
already unconscious, thus unable to re-confirm the decision. This is particu-
larly challenging when families contest the decision or where it is in conflict
with the doctors’ clinical assessment regarding treatment. AIapplication may
not be sufficiently nuanced to assess whether a particular patient would have
changed their mind as it is very much subject to the individual’s thought pro-
cess at that time. This aspect would be better confirmed by the families of the
patient rather than AI.
The enthusiasm for AI applications and their potential in healthcare delivery
may have eclipsed concerns generated by their uses and limitations. Cautions
have been raised regarding its inherent risks arising from bias, inaccuracies,
agency, responsibility, accountability, and intelligibility.19 Some have expressed
restrained inclinations for its adoption, being cautious of the implications aris-
ing from data access and governance,20 while others have questioned the likeli-
hood of AI replacing human functions, which appears to lend weight to the
claim that AI capabilities are unduly inflated.21 Deep learning mechanisms
present socio-technical concerns affecting the decision-making process, leading
to common issues such as biased or erroneous outcomes, questionable correlations,
and doubts about transparency.22 The topical concerns relevant to advance
healthcare decision-making, where AI integration potentially generates risks and
liability, are explored in the next section. The challenges fall into two broad
aspects: the first concerns risks to reliability and trust; the second concerns
autonomy in decision-making and its implications for doctor–patient relationships.
19 S. Larsson, ‘The Socio-Legal Relevance of Artificial Intelligence’ (2019) 103 Droit et Societe, 573;
Liyanage (n 9); Dalton-Brown (n 15); C. Macrae, ‘Governing the Safety of Artificial Intelligence in
Healthcare’(2019) 28BMJ Qual. Saf., 495; D Schönberger, ‘Artificial Intelligence in Healthcare: A
Critical Analysis of the Legal and Ethical Implications’ (2019) 27 Int. J. Law Info.Tech., 171.
20 Jones (n 4); Reddy (n 6).
21 Reddy (n 6).
22 Schönberger (n 19), 175, 177.
23 Liyanage (n 9), 45; see also I. Giuffrida and T. Treece, 'Keeping AI Under Observation: Anticipated
Impacts on Physicians' Standard of Care' (2020) 22 Tul. J. Tech. & Intell. Prop., 111.
24 Larsson (n 19), 588, 592.
72 Regulating artificial intelligence in industry
These confluences of factors often impinge on the decision-making process
and outcomes, where preferences may change over time, as in the case of
advance decision-making, presenting uncertainties with the reliability of the
decision. The issue of reliability is similarly influenced by the characteristics of
AI in terms of bias, when datasets are replicated across similar patient groups,
resulting in an unintended blanket approach to advising patients. Human inter-
vention is thus required for interpreting outcomes generated by AI to ensure
that such biases are removed, and that any obvious divergence from norms
and contexts are remedied to reflect decisions that are better aligned with the
values, norms, and relationships of the patient.
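The human-intervention safeguard described here can be sketched as a simple rule that flags an AI recommendation for clinician review whenever it diverges from the patient's recorded values. The preference names and threshold are hypothetical; a real system would involve far richer clinical context:

```python
def needs_human_review(recommendation, recorded_preferences, threshold=1):
    """Flag an AI recommendation for clinician review when it conflicts with
    at least `threshold` of the patient's recorded preferences."""
    conflicts = [
        pref for pref, wanted in recorded_preferences.items()
        if pref in recommendation and recommendation[pref] != wanted
    ]
    return len(conflicts) >= threshold, conflicts

# Illustrative only: an AI suggestion conflicting with a recorded refusal.
recommendation = {"resuscitation": True, "home_care": True}
preferences = {"resuscitation": False, "home_care": True}
flag, conflicts = needs_human_review(recommendation, preferences)
```

The point of the sketch is the division of labour: the machine surfaces divergence, while the decision to override or confirm stays with a human.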
While AI offers predictions of what people who suffer from similar health
conditions would likely choose, the decisions may not necessarily reflect what
a particular patient would want, as AI-assisted decision-making lacks nuance.
AI has the advantage of injecting reason and analytical skill into the decision-
making process, but it is less able to predict the subjective needs and desires
of the individual. Because the outcomes of advance decision-making affect
overall health and may entail risks of death, its application should weigh what
is at stake, as the consequences, and their ramifications, differ across areas of
clinical decision-making. A distinguishing factor between AI and human intelligence
is the amount of discretion involved in healthcare decision-making.
While AI is renowned for its precision, breadth, and depth, it lacks contextual
nuance, value judgement, intuition, and clinical discretion, which are essential
to healthcare decision-making. Further, AI is unable to replicate the experi-
ence accumulated over the years that is comparable to a doctor’s training.
It is unsurprising that its suitability for healthcare decision-making has been
questioned.25 AI’s utility is more recognisable in an environment that operates
within clear and definite answers; consequently, its capabilities may be more
limited to resolving non-dichotomous problems, which are often found in
the real world.26 Healthcare decision-making often entails complex balances
and sensitive, competing social considerations. Macrae accurately highlighted
that AI integration into healthcare created new risks and augmented existing
concerns, leading to magnified biases arising from various assumptions that are
‘insensitive to local care processes’.27
The issue of reliability is closely connected to trust. Trust in an AI-integrated
healthcare system is a major concern confronting healthcare providers.28 It is
25 H. Surden, 'Artificial Intelligence and Law: An Overview' (2019) 35 Ga. St. U. L. Rev., 1305, 1323.
26 ibid., 1331: an example is human involvement in performing legal tasks, with the more repetitive and
mechanical aspects delegated to AI.
27 Macrae (n 19), 495.
28 M. Ryan, 'In AI We Trust: Ethics, Artificial Intelligence, and Reliability' (2020) Science and Engineering
Ethics <https://doi.org/10.1007/s11948-020-00228-y> accessed 27 May 2021; L. Shinners, C. Aggar,
S.G.S. Smith, 'Exploring healthcare professionals' understanding and experiences of artificial intel-
ligence technology use in the delivery of healthcare: An integrative review' (2019) J. Health Inform., 1.
particularly significant in healthcare settings underpinning the doctor–patient
relationship in facilitating medical treatment, recovery, healing, and health
promotion, which are integral to health policies and ethics.29 AI is perceived
as untrustworthy due to its lack of emotive capability and its inability to be held
responsible for its actions, which normative accounts of trust usually require.30
Such concern was reflected in a study of healthcare professionals' understanding
and experiences of AI in healthcare delivery.31
A similar study reported the importance of trust as the basis for adopting AI
in healthcare.32 Avoiding patient harm is one of the key aims of healthcare
treatment; as such, where trust is lacking in AI-integrated healthcare decision-
making, measures are essential to safeguard patients against risks of harm.33 An
example of safeguarding patients against such risk arising from an AI-based
system is sepsis treatment, where the authors of one study advocated a
dynamic rather than static standard of assurance that reflects the nature of AI,
thus underpinning each clinical decision with the obligation to avoid harm to
patients, ensure good patient care, and secure accountability to patients.34
It is clear that reliability and trust are tightly woven issues in AI-integrated
healthcare systems. Both notions contribute to safety concerns when assessing
the outcomes predicted by machine learning and whether AI has served its
purpose in facilitating clinical decision-making.35 Concerns about the quality
and safety of such decision-making, arising from the likelihood of erroneous
or biased outcomes, should be mediated by human confirmation or verifica-
tion to eliminate any insensitivity to the actual preferences and circumstances
of that particular case. This layer of safeguard permits a more intuitive human
approach to the decision-making process. As highlighted above, socio-tech-
nical concerns necessitate consideration of human, social, and organisational
networks where AI is deployed.36 Although one may suggest that healthcare
29 R.C. Feldman, E.Aldana and K. Stein,‘Artificial Intelligence in the Health Care Space: How We Can
Trust What We Cannot Know’ (2019) 30 Stan. L.& Pol’y Rev., 399, 404.
30 Reddy (n 6).
31 Shinners (n 28).
32 W. Fan, J. Liu, S. Zhu et al., ‘Investigating the Impacting Factors for the Healthcare Professionals to
Adopt Artificial Intelligence-Based Medical Diagnosis Support System (AIMDSS)’ (2018) Ann. Oper.
Res. doi:10.1007/s10479-018-2818-y.
33 I. Habli, T. Lawton and Z. Porter, 'Artificial intelligence in health care: accountability and safety'
(2020) 98 Bull. World Health Organ., 251.
34 ibid., 253.
35 Macrae (n 19); R. Challen, J. Denny, M. Pitt, et al., 'Artificial intelligence, bias and clinical safety' (2019)
28 BMJ Qual. Saf., 231.
36 Macrae (n 19), 497.
decision-making is comparable to AI in terms of its intrinsically unstable nature
due to changing preferences, medical advancements, and inherent risks, the
risks in an AI-integrated healthcare system are augmented where some aspects
are beyond the control of clinicians. Consequently, liabilities arising from its
integration into healthcare applications warrant the development of safeguards
and preventive measures to avoid patient harm.37 Robust, continuous testing
and the open publication of safety reports may help to engender trust in its use,
as may public consultations that gauge the acceptability of such risks and
identify ways to mitigate them, in turn shaping regulatory approaches.
Intelligibility is another important feature influencing the reliability and
safety of and trust in AI. Intelligibility is often associated with the ‘black box’
nature of AI in relation to how it reached its decision and outcome. This
apprehension has appeared in AI-automated judicial decision-making, with
fears of potentially discriminatory sentencing decisions that depart from notions
of justice and fairness.38 In dispelling these concerns, human intervention is
essential to rectify any injustice and remove mechanistic, standardised decision-
making that amplifies systemic risks in the administration of justice.39 A similar
approach is essential in addressing such concerns in healthcare. In a typical
healthcare decision-making encounter, patients and doctors engage in conver-
sations to understand the nature of the illness and clarify concerns regarding
treatment options. This opportunity is less likely to arise if the decision
is AI-generated, as there would be difficulty in understanding the algorithm
used for the decision-making. This is especially crucial as healthcare decisions
impact people’s lives, with potential risks manifesting in the future. Pathways
to engender trust in healthcare in alleviating the ‘black box’ problem in AI
include ensuring information integrity, healthcare provider competencies, and
patient interest to foster assurance in AI-assisted decision-making.40 Another
option for unravelling the intelligibility of AI is via the right to an explanation
of the decision-making process in the General Data Protection Regulation
(GDPR).41 Although this right is predominantly focused on automated deci-
sions, it can be applied to explain the algorithms of AI-assisted healthcare deci-
sion-making, such as risks, healthcare data, and information affecting patients.42
37 ibid.
38 M.Guihot, A.F. Matthew and N.P. Suzor,‘Nudging Robots: Innovative Solutions to Regulate Arti-
ficial Intelligence’ (2017) 20 Vand. J. Ent.& Tech. L.,385, 410.
39 ibid., 416.
40 Feldman (n 29), 413.
41 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the
protection of natural persons with regard to the processing of personal data and on the free move-
ment of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) OJ L
119, 04.05.2016; cor. OJ L 127, 23.5.2018, hereinafter General Data Protection Regulation, Recital
71, Arts 9(1), 4(15), and 22.
42 See for example T. Hoeren and M. Niehoff, 'Artificial Intelligence in Medical Diagnoses and the
Right to Explanation' (2018) 4 Eur. Data Prot. L. Rev., 308, 317, 318.
The reliability, trust and intelligibility of AI have raised several important
concerns in the decision-making process. This aspect is only part of the picture.
The second major concern stems from the ethical dilemma of autonomy in
decision-making and its implications for doctor–patient relationships.
43 T.Beauchamp and J. Childress, Principles of Biomedical Ethics (5th edn, OUP, 2001), 57.
44 Nuffield Council of Bioethics (n 3).
45 J. Powell,‘Trust Me, I’m a Chatbot: How Artificial Intelligence in Health Care Fails the Turing Test’
(2019) 21(10) J. Med Internet Res., e16222 doi: 10.2196/16222;T.Zapusek, ‘Artificial Intelligence in
Medicine and Confidentiality of Data’ (2017) 11 Asia Pacific J. Health L.& Ethics, 105.
particularly after long hours of patient consultations or surgical procedures.46
Dalton-Brown, in exploring this issue, dismissed the notion of AI entirely
replacing doctors, on the basis that such a move would dehumanise healthcare,
as healing and caring remain the central focus of healthcare practice.47 Doctors
are more perceptive in their analysis of therapeutic options in the doctor–
patient relationship compared to AI, which contributes towards cultivating
patient trust and enhancing the quality of the decision-making process through
expressions of care and empathy.48 Common humanity is implicit in the doc-
tor–patient relationship, which supports patient autonomy. The relationship
embodies assurance and trust, which is beyond diagnoses and prognoses, ren-
dering doubts about whether AI could emulate such empathy. It would not be
hard to draw comparisons between the formulaic and technical limitations of
AI and the human touch elicited by human language, expressions and gestures.
In formulating healthcare decisions to be implemented in the future, attentiveness
to patient values and the ability to weigh conflicting priorities when
reaching treatment decisions require medical acuity rather than intelligence
alone. Ethical and moral judgements are prevalent in the decision-making
process, nurtured by mutual confidence, implied social cues, and compassion.
These elements are hardly said to be possessed by AI.49 Additionally, while AI
applications could be trained to enhance accuracy or sensitivity, it is uncertain
if they could be trained to mirror human articulation. Thus far, AI application
in healthcare is facilitative in nature, helping to alleviate the pressures
of administrative tasks and supporting more advanced diagnostic processes.
Human interaction remains the central focus of the doctor–patient relationship.
The above analyses have revealed the specific areas of concern implicating
advance healthcare decision-making, highlighting the importance of context
and human intervention in the decision-making process. Multiple consid-
erations affect healthcare outcomes, moving beyond the diagnostics, such as
personal values and familial relationships. Clinical discretion, acumen, and
experience of healthcare professionals for weighing competing considerations
when advising patients are important elements that are not captured sufficiently
by AI-integrated applications. Consequently, AI-integrated interventions are
unlikely to usurp the doctor–patient relationship, but they are valuable for
supporting the decision-making process and for approximating human
decision-making. The next section explores regulatory options for safeguarding
healthcare decision-making where AI is integrated into the system.
65 Montgomery v Lanarkshire Health Board (Scotland) [2015] UKSC 11 where it was ruled that patients
are no longer passive recipients of healthcare.
of the AI algorithm. In this sense, doctors will be confronted with the chal-
lenge of learning about how AI affects their provision of advice to patients
by explaining the AI-assisted decision-making process. Risks arising from this
stage are important to address, as they potentially affect the liability of the
expected standard of care. This entails doctors first understanding the
mechanisms and then translating them for the benefit of the patient, so that
the patient can grasp the information and ultimately make a choice. In an advance
decision-making context, this means safeguarding the patient's choices in spite of the risks that AI
presents but taking steps to reduce these risks. Lawmakers would need to take
into account this aspect when accommodating the additional risks involved in
creating or modifying regulations.
The evolving nature of AI has presented regulatory difficulties and opportunities.
Whichever regulatory approach a country selects, whether establishing
specific laws for AI or bringing AI-related issues under existing
branches of law, governance remains a real challenge.66 While this challenge
persists, it is equally valuable to consider establishing ethical codes and profes-
sional standards to assist healthcare providers in dealing with immediate issues
that arise in the healthcare decision-making process,67 rather than automatically
resorting to drafting legislation.68 An example of a proposal for establishing
ethical codes and an ethics committee is in the area of regulating care robots
(AI robots that provide care for the elderly and assisted living).69 A code of
ethics and professional standards can outline a governance framework
encompassing healthcare providers, manufacturers, and government agencies,
helping to maintain trust in the healthcare system and to mitigate risks to
patient autonomy, privacy, and confidentiality, ultimately safeguarding
patients from harm.70 One beneficial outcome from instituting these codes and
standards is the ability to offer patients assurance and to promote trust that
their healthcare decisions are being made in a responsible manner, and that
doctors are guided by established norms when carrying out their professional
obligations. Both values are critical in preserving therapeutic doctor–patient
relationships.
Similarly, continuous training for healthcare providers and educating
patients about the applications that affect their healthcare decisions are vital if
AI-integrated interventions become more prevalent in the healthcare system.
Implementing appropriate policies and ensuring adequate training for the safe
and effective use of AI are ways to minimise risks arising from AI applications in
healthcare.71 An example of reducing medical negligence is to identify appro-
Conclusions
This chapter has outlined the various potentials of AI-integrated systems in
healthcare decision-making, focusing on advance decision-making. The
healthcare sector has experienced advancements, notably in diagnostics and
illness prediction, through increasingly sophisticated neural network
technology and machine learning. The natural language processing and deep
learning features of AI are valuable for informing healthcare professionals and
patients when making informed decisions. Advance decision-making in par-
ticular would benefit from AI technology in facilitating different types of treat-
ment and future healthcare plans. We have also briefly considered how AI
would be beneficial in times of pandemic for assisting healthcare providers
who are under enormous pressure when delivering healthcare services to the
population, particularly in making difficult clinical decisions in continuing or
withdrawing treatments. The evolving, fluid, and opaque nature of AI renders
it far from simple, despite the seemingly boundless possibilities available
to be harnessed for the benefit of society. Concerns are raised regarding the
risks such applications entail, questions of patient autonomy, conflict regard-
ing decision-making, the level of human intervention, and the impact upon
existing doctor–patient relationships. It is, however, recognised that AI cannot
completely take over the tasks that fall within the purview of clinicians.
The continuous use and integration of AI in healthcare provision will con-
tinue to generate questions that require unique responses. Lawmakers have
been attempting to respond to the issues that arise, with a menu of possible
regulatory options to govern these challenges and mediate the risks arising from
its application in healthcare. The more favourable view adopted is to regulate
its use based on the level of risk presented by AI, and according to its context.
These considerations reveal important concerns that legislation can help to
address to minimise the impact of AI on everyday life, particularly where it
is increasingly used in the healthcare sector. An appropriate understanding of
AI-related challenges and legal and ethical issues for advance decision-making
can guide approaches to better appreciate its use, assist doctors in implement-
ing people’s healthcare decisions, and for lawmakers to regulate this developing
technology in healthcare provision.
72 ibid., 121.
73 ibid.
6 Artificial intelligence in
the legal profession
Damian M. Bielicki
Legal research
Fundamentally, legal research is about finding what the law is and the relevant
sources of that law.1 Without proper research, lawyers simply would not be
able to adequately advise their clients. Particularly in the common law system,
finding the right source of law is sometimes challenging due to the doctrine
of judicial precedent, according to which many of the primary legal principles
have been made and developed by judges from case-to-case. Consequently,
lawyers must be able to find the right precedent and, unfortunately, the system
of law reporting in many places around the world has not always been effec-
tive.2 Among the best-known examples of AI software are those provided
1 N. Waisberg, A. Hudek, AI for Lawyers: How Artificial Intelligence is Adding Value, Amplifying Expertise, and
Transforming Careers (Wiley 2021), 107.
2 J. Embley, P. Goodchild, C. Shephard, S. Slorach, Legal Systems & Skills (4th ed., OUP 2020) 145ff; S.
Wilson, H. Rutherford, T. Storey, N. Wortley, B. Kotecha, English Legal System (4th ed., OUP 2020),
165ff.
DOI: 10.4324/9781003246503-7
by Westlaw and LexisNexis, but there are many other tools on the market.3
Most of these tools provide intelligent document analysis, where users can
upload a document and the AI will automatically suggest authorities that are
relevant to the document but not cited in it. They will also reveal how courts
historically treated the relevant precedents. Moreover, they deliver data-driven
insight, or litigation analytics, providing information on judges, courts, dam-
ages, attorneys, law firms, and case types in order to reveal certain patterns and
help build a case strategy.4 These AI-driven tools therefore offer far more
than search engines for locating specific laws or cases.
Contract analytics
Contract law is one of the most fundamental spheres of law that applies to
every industry and business relationship. A legally enforceable agreement gives
rise to obligations and certain protections for the parties involved.5 Drafting
and reviewing contracts can be very time consuming, particularly if they are
lengthy, containing dozens or even hundreds of pages. AI-enhanced contract
analysis and review can speed up the process significantly, and it can help
reduce the impact of human error.6
There are dozens of companies offering contract analytics.7 Although they
use a variety of different codes, from the users’ point of view the whole process
is almost the same. First, a contract must be uploaded to the software. Second,
either the user has to indicate what needs to be found in the contract or, in the
case of more advanced platforms, AI automatically generates a list of potential
aspects to be reviewed.8 Some platforms go further by offering contract man-
agement databases, and the possibility to work across different contracts at the
same time. To a certain extent, contract analytics programmes still need human
intervention; their main role is to work in conjunction with their users to
make the overall process faster, more accurate, and more cost-effective.
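As a rough illustration of the review workflow just described, a minimal sketch might check an uploaded contract against a checklist of expected clauses and flag the gaps for a human reviewer. The checklist and clause names are invented; real platforms use trained models rather than fixed headings:

```python
# Hypothetical checklist of clauses a reviewer might expect to find.
EXPECTED_CLAUSES = ["governing law", "termination", "confidentiality", "liability"]

def review_contract(text, expected=EXPECTED_CLAUSES):
    """Report which expected clauses appear in the contract text and which
    are missing, leaving the final judgement to a human reviewer."""
    lowered = text.lower()
    found = [c for c in expected if c in lowered]
    missing = [c for c in expected if c not in lowered]
    return {"found": found, "missing": missing}

contract = """
12. Termination. Either party may terminate on 30 days' notice.
13. Confidentiality. Each party shall keep the other's information secret.
"""
report = review_contract(contract)
```

The missing-clause list is exactly the kind of prompt that speeds up human review without replacing it.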
Predictive analytics
It is not uncommon for litigants to ask their lawyers about the realistic out-
come of their case. The answer often depends on a variety of different factors,
9 Examples of platforms offering such services include Bloomberg Law, Lex Machina, Premonition,
and Solomonic.
10 See official website of the CRT: < https://civilresolutionbc.ca/faq/> accessed 27 May 2021.
11 S. Salter, “What is the Solution Explorer?” (BarTalk, April 2018) <https://www.cbabc.org/Bar-
Talk/Articles/2018/April/Features/What-is-the-Solution-Explorer> accessed 27 May 2021;
J. Zeleznikow, “Using Artificial Intelligence to provide Intelligent Dispute Resolution Support”
[2021] Springer: Group Decision and Negotiation (April 2021), 14–15; A.D. Reiling, “Courts and
Artificial Intelligence” [2020] 11(2) International Journal for Court Administration 8, 4
12 See official Courtal website: <https://courtal.com/portfolio/estonia/> accessed 27 May 2021.
13 ibid.
support of bailiff activities.14 Moreover, as of 2021, the Estonian Ministry of
Justice is piloting an “AI judge” project to adjudicate small claims disputes of
less than €7,000. In concept, AI would issue a decision that could be appealed
to a human judge. It is hoped that this project can clear a backlog of cases and
make the whole judicial process more timely and cost-effective.15
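The routing logic described for the Estonian pilot can be sketched conceptually as follows. The €7,000 threshold is the figure reported above, while the function and field names are invented for illustration:

```python
from dataclasses import dataclass

AI_TRACK_LIMIT_EUR = 7_000  # threshold reported for the Estonian pilot

@dataclass
class Claim:
    amount_eur: float
    contested_on_appeal: bool = False

def route_claim(claim):
    """Route a small claim to the automated track if under the threshold;
    any AI decision that is appealed goes to a human judge."""
    if claim.amount_eur >= AI_TRACK_LIMIT_EUR:
        return "human_judge"
    if claim.contested_on_appeal:
        return "human_judge"  # appeal against the AI decision
    return "ai_adjudication"
```

The design point is that the automated track is bounded both by value and by a guaranteed human appeal route.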
One more recent project worth mentioning is the so-called SUPACE –
Supreme Court Portal for Assistance in Courts Efficiency, which is an AI-driven
assistive tool implemented by the Supreme Court of India. It is not designed
to make decisions but to analyse the data received in the filing of cases to sup-
port the work of the judges. If successful at the Supreme Court level, it could
be implemented in the lower courts to help to resolve the problem of delays
and case management. The project was officially launched in April 2021, and,
therefore, at the time of writing, there is still little information available about
this system.16 However, it has been revealed that the system includes a chatbot
that can give an overview of the case, information about the relevant law, etc.
The system is subject to training by human users.17
19 ibid.
20 These states are Arizona, California, Florida, Illinois, Kentucky, Louisiana, Missouri, Montana, New
Jersey, New Mexico, North Carolina, Ohio, Pennsylvania, South Dakota, Tennessee, Texas, Utah,
Washington, and Wisconsin. See official website of the LJAF: <https://www.arnoldventures.org/
work/release-decision-making/> accessed 27 May 2021.
21 See official website of the United States Courts: <https://www.uscourts.gov/services-forms/proba-
tion-and-pretrial-services/supervision/pretrial-risk-assessment> accessed 27 May 2021.
22 T.H. Cohen, C.T. Lowenkamp, W.E. Hicks, "Revalidating the Federal Pretrial Risk Assessment
Instrument (PTRA): A Research Summary" [2018] 82 Federal Probation, 2, 24; S.L. Desmarais, E.M.
Lowder, Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors and Defense Attorneys
(Safety+Justice Challenge, February 2019), 4.
23 B. Buskey,A.Woods,“Making Sense of Pretrial Risk Assessment” (National Association of Criminal
Defense Lawyers, June 2018) <https://www.nacdl.org/Article/June2018-MakingSenseofPretrialRi
skAsses> accessed 27 May 2021.
24 ibid.
algorithm rates the person from one (low risk) to ten (high risk), with the score
based on the answers to 137 questions, either provided directly by the defendant
or pulled from criminal records, if any.25 Although the system is in use in
only a few American jurisdictions,26 it was criticised for the lack of transparency
in the algorithm.27 This was acknowledged by the Wisconsin Supreme Court
in State v Loomis,28 where the Court noted that the risk scores failed to
explain how exactly data was used to generate the results.29 Moreover, it was
stated that “this court’s lack of understanding of COMPAS was a significant
problem in the instant case”.30
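In outline, a COMPAS-style tool maps weighted questionnaire answers onto a one-to-ten scale. The sketch below is purely illustrative: three invented questions stand in for the 137, and the weights are made up, which is precisely the opacity problem criticised in Loomis, since the real weighting is not disclosed:

```python
def risk_score(answers, weights):
    """Map weighted questionnaire answers onto a 1 (low) to 10 (high) scale.
    The weights are the opaque part: the real tool does not disclose how
    answers are combined into the score."""
    raw = sum(weights.get(q, 0.0) * v for q, v in answers.items())
    max_raw = sum(w for w in weights.values() if w > 0) or 1.0
    normalised = max(0.0, min(1.0, raw / max_raw))
    return 1 + round(normalised * 9)

# Illustrative only: made-up questions and weights, not COMPAS's own.
weights = {"prior_offences": 0.6, "age_under_25": 0.3, "stable_employment": -0.4}
answers = {"prior_offences": 1, "age_under_25": 1, "stable_employment": 0}
score = risk_score(answers, weights)
```

Even this toy version shows why transparency matters: the same answers yield a different score under different hidden weights, and a defendant cannot contest what they cannot see.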
HART is a British experiment conducted by Durham Constabulary and
Cambridge University. It aims to reduce reoffending by identifying and
helping offenders who are likely to commit an offence within the next two
years.31 The AI learnt from decisions made between 2008 and 2012 and the
algorithm is based on around 30 factors, of which some are not related to the
crime committed.32 A major criticism of this system concerned the fact that
the system categorised individuals into various groups according to their ethnic
origin, income, education levels, and other factors.33
Regulations
Like in other industries, there is an ongoing debate on the impact of AI on
certain key features, such as data security, fundamental rights, and many more.
25 R. Koulu, L. Kontiainen, How will AI shape the future of Law? (Legal Tech Lab 2019), 79–80.
26 For instance, New York,Wisconsin and California.
27 M. Legg, F. Bell, Artificial Intelligence and the Legal Profession (Hart 2020), 226–228
28 881 N.W.2d 749 (Wis. 2016).
29 ibid. See also: H.-W. Liu, Ch-F. Lin,Y-J. Chen, “Beyond State v Loomis: artificial intelligence, gov-
ernment algorithmization and accountability” [2019] 27 International Journal of Law and Information
Technology, 127.
30 State v Loomis, at 774. See also: A. Deeks, “The Judicial Demand for Explainable Artificial Intel-
ligence” [2019] 119 Columbia Law Review, 7, 1844.
31 University of Cambridge, “Helping Police Make Custody Decisions Using Artificial Intelligence”
(UoC Research, 26 February 2018) <https://www.cam.ac.uk/research/features/helping-police
-make-custody-decisions-using-artificial-intelligence> accessed 27 May 2021; M. Burgess, “UK
police are Using AI to Inform Custodial Decisions – But it Could Be Discriminating Against the
Poor” (Wired, 1 March 2018) <https://www.wired.co.uk/article/police-ai-uk-durham-hart-check-
point-algorithm-edit> accessed 27 May 2021.
32 For instance, postal code, gender etc. See more:Appendix 1 to the European Ethical Charter on the
Use of Artificial Intelligence in Judicial Systems and their Environment, titled “In-depth study on
the use of AI in judicial systems, notably AI applications processing judicial decisions and data”, avail-
able at: <https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c> at
51
33 B. Min, G. Ferris, Regulating Artificial Intelligence for Use in Criminal Justice Systems in the EU:
Policy Paper (Fair Trials Publication), available at: <https://www.fairtrials.org/sites/default/files
/Regulating%20Artificial%20Intelligence%20for%20Use%20in%20Criminal%20Justice%20Sys-
tems%20-%20Fair%20Trials.pdf> at 17-18, accessed 27 May 2021.
Artificial intelligence in the legal profession 89
There are several instruments providing, either directly or indirectly, some
regulations concerning AI. Most of them are discussed in the other chap-
ters of this volume, including the EU’s General Data Protection Regulation
(GDPR),34 the White Paper on AI,35 the US AI-related bills,36 or Japan’s Act
on Protection of Personal Information.37 Their main focus is on achieving vari-
ous social policies, including data and privacy protection, prevention of bias or
discrimination, accountability, and many more.38 However, all of them address
AI issues in general, without a particular focus on the legal profession.
One instrument that is dedicated to regulating AI for lawyers is the European
Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and
their environment,39 adopted in December 2018 by the European Commission
for the Efficiency of Justice (CEPEJ).40 The Charter includes five core princi-
ples, as follows: (1) Principle of respect for fundamental rights; (2) Principle of
non-discrimination; (3) Principle of quality and security; (4) Principle of trans-
parency, impartiality and fairness; and (5) Principle “under user control”. The
first principle requires that any use of AI must be in compliance with the rights
guaranteed by the European Convention on Human Rights (ECHR)41 and the
Convention on the Protection of Personal Data.42 At the heart of this principle
is the right to a fair trial. The second principle forbids the use of data that may
lead to discrimination of any kind against individuals or groups. Moreover,
such data should not lead to deterministic analyses or uses.43 The third principle
requires that machine learning models be designed by multidisciplinary
34 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of
personal data and on the free movement of such data, and repealing Directive 95/46/EC (General
Data Protection Regulation) [2016] OJ L119. See also: J. Buyers, Artificial Intelligence: The Practical
Legal Issues (Law Brief Publishing 2018), 41–51.
35 White Paper on Artificial Intelligence: a European approach to excellence and trust, COM (2020)
65 Final (Brussels, 19 February 2020).
36 E.g. the Algorithmic Accountability Bill or the Facial Recognition and Biometric Technology Mor-
atorium Bill.
37 Act No. 57 of 2003, available in English at: <https://www.cas.go.jp/jp/seisaku/hourei/data/APPI
.pdf> accessed 27 May 2021.
38 Legg, Bell (n 27), 312.
39 CEPEJ, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and
their Environment, adopted at the 31st plenary meeting of the CEPEJ, Strasbourg, 3–4 December
2018, available at: <https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018
/16808f699c> accessed 27 May 2021.
40 CEPEJ is the body established by the Council of Europe to improve the quality and efficiency of
European judicial systems. See official website of the organisation: <https://www.coe.int/en/web/
cepej> accessed 27 May 2021.
41 Convention for the Protection of Human Rights and Fundamental Freedoms (European Conven-
tion on Human Rights, as amended), opened for signature in Rome on 4 November 1950, came
into force on 3 September 1953.
42 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data,
ETS No. 108 as amended by the CETS amending protocol No. 223.
43 CEPEJ (n 39), 9.
90 Regulating artificial intelligence in industry
project teams, which would draw from the expertise of legal practitioners as
well as academics from a wide range of social sciences. Any input to the
system should also come from certified sources.44 The fourth principle calls
for a balance between the intellectual property of companies providing the
AI-driven systems and the need for transparency, impartiality and fairness.45
Finally, the fifth principle calls for the autonomy of the users, giving them the
possibility to review the data and to control their choices.46
These five principles correspond with similar principles that flow from
standards of professional conduct and professional codes of ethics. These regu-
lations impact the way that lawyers can, or should, use AI.47 There are certain
benefits as well as areas of concern arising from the increased use of AI systems
in the legal sector, particularly by solicitors, barristers, and judges. These areas
include, but are not limited to, the rule of law, fundamental rights, transpar-
ency, and explainability.
AI actors should respect the rule of law, human rights and democratic val-
ues, throughout the AI system lifecycle. These include freedom, dignity and
autonomy, privacy and data protection, non-discrimination and equality,
diversity, fairness, social justice, and internationally recognised labour rights.50
44 ibid., 10
45 ibid., 11.
46 ibid., 12.
47 Legg, Bell (n 27), 64–69.
48 A. Dennett, Public Law Directions (OUP 2019)143–145; N. Parpworth, Constitutional and Administra-
tive Law (11th ed. OUP 2020), 36–51.
49 For instance: Solicitors Regulation Authority (SRA) for England and Wales mentions it as the core
principle for solicitors. See: SRA Principles at <https://www.sra.org.uk/solicitors/standards-regula-
tions/principles/> accessed 27 May 2021. Similarly, Rule A1 of the UK Bar Standards Handbook
(BSB) highlights that the society is based on a rule of law; see version 4.6 adopted on 31 December
2020, available at: <https://www.barstandardsboard.org.uk/the-bsb-handbook.html> accessed 27
May 2021. Similar provisions can be found in regulations of barristers and judges in many other
jurisdictions.
50 S. 1.2.(a) of the OECD Council Recommendation on Artificial Intelligence, adopted on 22 May
2019, C/MIN(2019)3/FINAL.
The OECD further recommends that to this end AI actors should implement
appropriate mechanisms and safeguards, including capacity for human
determination.51 In June 2019, the G20 adopted the so-called human-centred AI
Principles, with the exact same provision drawn from the OECD Principles.52
One of the key aspects of the rule of law is access to justice. AI-driven
projects such as the Canadian CRT, Estonian Courtal, and a variety of
cloud-based technologies help with the realisation of this right. The CRT proved
particularly useful during the COVID-19 outbreak when it remained open and
fully operational 24 hours a day, 7 days a week. At the same time, millions of
people worldwide did not have proper access to justice, particularly in places
where regional and national lockdowns were imposed. Access to justice was an
issue in many places even before the pandemic, particularly due to the lack of
legal aid.53 Even though during the pandemic many countries offered online
hearings using audio-visual means of communication, the backlog of cases in
some reached record levels. A good example was the United Kingdom, where,
according to the House of Lords, by December 2020 the backlog had reached
crisis level, with over 8,000 people being held in custody awaiting trial.54 It
therefore seems appropriate to suggest that there should be ongoing investment
in, and development of, AI-driven technologies to support those working in
the legal sector.
Fundamental rights
Predictive analytics tools are becoming increasingly popular because they
deliver useful intelligence, including information about the relevant law and
historical records of damages awarded, as well as information about the individuals
involved. This leads to the possibility of profiling judges, lawyers, jurisdictions,
and other things.55 On the one hand, it may undermine the proper functioning of
justice; on the other, it makes public activities more transparent by
allowing citizens to know and evaluate their judges, lawyers, and so on. This
practice is already known in the United States,56 but has been banned in France
51 S. 1.2.(b), ibid.
52 G20 Ministerial Statement on Trade and Digital Economy,Annex I G20 AI Principles, S. 1.2.
53 Koulu, Kontiainen (n 25), 66–67.
54 House of Lords, Covid-19 and the Courts, 22nd Report of Session 2019-2020, HL Paper 257 of 30
March 2021, 4–5.
55 “Profiling” is defined in art. 4(4) of the GDPR as: “any form of automated processing of personal
data consisting of the use of personal data to evaluate certain personal aspects relating to a natural
person, in particular to analyse or predict aspects concerning that natural person’s performance at
work, economic situation, health, personal preferences, interests, reliability, behaviour, location or
movements”. See also: Buyers (n 34), 45.
56 D.L. Chen, “Judicial analytics and the great transformation of American Law” [2019] 27 Artificial
Intelligence and Law, 15–42. See also Appendix 1 to the European Ethical Charter (n 32), 27.
with respect to judges and court clerks.57 These two contradictory examples
show that attitudes to this technology vary considerably.
Another common trait often associated with AI is the potential for bias
against a person or a group of people. Based on the examples of technologies
mentioned earlier, it could, for instance, be a bias against a lawyer or a judge,
but also against a particular group, e.g. based on age, gender, or other char-
acteristics. The bias comes from the data input into the system, from which
the AI is learning.58 Removing bias from machine learning is an area of ongo-
ing research but, in principle, it is possible to remove and modify input traits
that can cause bias.59 The problem of bias is inextricably linked to respect for
fundamental rights. A common concern across different industries is the right
to privacy and the prohibition of discrimination. However, in the context of
the legal profession, special attention must be paid to the right to a fair trial,
particularly in light of the “AI judge” project piloted in Estonia or the system
implemented in the Supreme Court of India. For instance, a chatbot that
summarises a case should not generate discriminatory outcomes that place
either party at a disadvantage, directly or indirectly. Moreover,
the tools should be used with due respect to judges’ independence in their
decision-making. In the case of AI giving the actual decision in a case, it should
be transparent and explainable, and should be subject to scrutiny by all parties
involved, as well as the general public.60
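The idea of removing or modifying input traits can be illustrated with a minimal sketch, assuming a hypothetical set of records and a hand-picked list of protected attributes; none of the names below come from the systems discussed in this chapter.

```python
# A minimal, illustrative sketch: one basic mitigation is to exclude
# protected attributes, and known proxies for them, from the features a
# model may learn from. The attribute names below are invented.

PROTECTED = {"gender", "age", "postal_code"}  # hypothetical proxy list

def strip_protected(record: dict) -> dict:
    """Return a copy of a training record without protected attributes."""
    return {k: v for k, v in record.items() if k not in PROTECTED}

applicant = {
    "gender": "F",
    "age": 34,
    "postal_code": "SW15",
    "prior_offences": 0,
    "case_type": "civil",
}

features = strip_protected(applicant)
print(sorted(features))  # ['case_type', 'prior_offences']
```

Note that dropping explicit attributes does not remove indirect proxies: a postal code, for example, can still correlate with protected characteristics, which is why debiasing remains an area of ongoing research.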
57 Article 33 of the French Justice Reform Act states that “no personally identifiable data concerning
judges or court clerks may be subject to any reuse with the purpose or result of evaluating, analysing
or predicting their actual or supposed professional practices”. Breach may lead to five years’ imprisonment.
See more: J.Tashea,“France bans publishing of judicial analytics and prompts criminal penalty” (ABA
Journal, June 2019) <https://www.abajournal.com/news/article/france-bans-and-creates-criminal
-penalty-for-judicial-analytics> accessed 27 May 2021.
58 LexisNexis PSL TMT Team (ed.), An Introduction to Technology Law (LexisNexis 2018), 555.
59 Waisberg, Hudek (n 1), 88–89.
60 Min, Ferris (n 33), 2.
61 LexisNexis (n 58), 553–555.
ity” or “explainable AI”, which goes a step further than transparency.62 The
matter of general regulations concerning transparency and explainability has
been covered in previous chapters of this volume, particularly in the context
of the GDPR and the EU White Paper. Beyond this, however, there are
some developments by the IEEE63 Standards Association, which has
initiated a series of standards for AI and autonomous system designers. These
are being developed under the name “IEEE P7000™ series”, and include
13 separate standards for the “future of ethically aligned autonomous and intel-
ligent systems”.64 Particularly relevant to the legal profession are the P7001
Standards on Transparency for Autonomous Systems.65 Their aim is to prepare
standards for developing autonomous technologies that “can assess their own
actions and help users understand why a technology makes certain decisions in
different situations”.66 The Working Group has defined five groups who will
benefit from the standards, among which are lawyers and expert witnesses.67
The focus is on measurable, testable levels of transparency, so that autonomous
systems can be objectively assessed. It is hoped that the project will offer ways
to provide transparency and accountability for AI systems in the legal and other
professions.68 As of May 2021, the standards are still in development.
Confidentiality
Lawyers have a duty to keep the affairs of their clients confidential. Moreover,
special protection is given to all communications between lawyers and clients,
as well as documents prepared for litigation, and they must not be disclosed to
third parties. The latter is commonly known as “legal professional privilege”.69
Lawyers constantly deal with sensitive information, including personally
identifiable information, data relating to litigation, e-mail communication, and
much more. Therefore, they have to be particularly vigilant when embrac-
ing new technologies. Many of the AI-enhanced platforms are cloud-based,
62 A. Deeks (n 30), 1833; M. Brkan, G. Bonnet,“Legal and Technical Feasibility of the GDPR’s Quest
for Explanation of Algorithmic Decisions: of Black Boxes,White Boxes and Fata Morganas” (2020)
11(1) EJRR, 18–50.
63 Institute of Electrical and Electronics Engineers.
64 Drafts of some of the standards are available on the IEEE Ethics in Action website: <https://ethicsinaction.ieee.org/p7000/> accessed 28 May 2021.
65 A draft of these standards can be downloaded from the official website: <https://2020.standict.eu/standards-watch/ieee-p7001-transparency-autonomous-systems> accessed 20 May 2021.
66 IEEE Ethics in Action (n 64).
67 The other groups include users, safety certifiers or agency, accident investigators, and the wider
public.
68 European Parliamentary Research Service, The ethics of artificial intelligence: Issues and initiatives (Euro-
pean Parliament, March 2020), 67–70.
69 J. Herring, Legal Ethics (2nd ed., OUP 2017), 150–153.
meaning that data are stored on a network of remote servers hosted on the
internet.70 This proved useful in situations like the COVID-19 outbreak,
where accessing on-premises IT infrastructure could be challenging. However,
embracing AI and high connectivity may bring the risk of cyber threats and
data breaches.71 Although some AI platforms pride themselves on offering
bank-grade security and powerful encryption for both their in-cloud and on-
premises services, even that does not guarantee 100% security.72 Therefore, it is
important to investigate whether using cloud technology impacts confidential-
ity and legal professional privilege.
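One practical safeguard, sketched below on invented data, is to pseudonymise client identifiers before any document leaves the firm for a cloud platform. The function and its salting scheme are illustrative assumptions, not a feature of any platform mentioned in this chapter.

```python
# Illustrative sketch: replace client names with salted hash tokens before
# a document is uploaded, so the cloud provider never sees the real names.
import hashlib
import re

def pseudonymise(text: str, client_names: list, salt: str) -> str:
    """Replace each client name with a stable, salted, truncated hash token."""
    for name in client_names:
        token = hashlib.sha256((salt + name).encode()).hexdigest()[:8]
        text = re.sub(re.escape(name), f"[CLIENT-{token}]", text)
    return text

doc = "Witness statement of Jane Doe regarding the contract dispute."
safe = pseudonymise(doc, ["Jane Doe"], salt="firm-secret-salt")
print(safe)  # the name is replaced by a [CLIENT-xxxxxxxx] token
```

Salted hashing of this kind is pseudonymisation rather than anonymisation, so data protection rules such as the GDPR continue to apply to the processed data.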
In most jurisdictions, lawyers may generally use cloud-based AI technologies,
although the relevant provisions vary in their details. For instance, in the
United States, the matter is dealt with by provisions established by state bar
associations, and most of them require that the cloud system is secure and its
provider is a reputable organisation. However, many of them highlight that
a lawyer’s obligation to protect client data does not end once a reputable provider
is selected; security measures include continuous reasonable care and periodic
reviews.73 In Europe, the Council of Bars and Law Societies of Europe
70 More on that see: R. Leenes, S. De Conca,“Artificial Intelligence and Privacy – AI enters the house
through the Cloud” in W. Barfield, U. Pagallo (eds), Research Handbook on the Law of Artificial Intel-
ligence (Edward Elgar 2018), 280–305.
71 M. Lanterman, “AI and its impact on law firm cybersecurity” (Minnesota State Bar Association)
<https://www.mnbar.org/resources/publications/bench-bar/columns/2019/10/02/ai-and-its
-impact-on-law-firm-cybersecurity> accessed 27 May 2021.
72 ibid.
73 See for instance: Alabama State Bar, OGC Formal Opinion 2010-2: “Retention, Storage, Ownership,
Production and Destruction of Client Files”, available at: <https://www.alabar.org/office-of-gen-
eral-counsel/formal-opinions/2010-02/>; Alaska Bar Association, Ethics Opinion 2014-3: “Cloud
Computing & the Practice of Law”, available at: <https://alaskabar.org/wp-content/uploads/2014
-3.pdf>; Connecticut Bar Association – Professional Ethics Committee, Informal Opinion 2013-7
on Cloud Computing, available at: <https://www.ctbar.org/docs/default-source/publications/
ethics-opinions-informal-opinions/2013-opinions/informal-opinion-2013-07>; Illinois State Bar
Association Professional Conduct Advisory Opinion no. 16-06: “Client Files; Confidentiality; Law
Firms”, available at: <https://www.isba.org/sites/default/files/ethicsopinions/16-06.pdf>; Ken-
tucky Bar Association, Formal Ethics Opinion KBA E-437 of 21 March 2014, available at: <https://
cdn.ymaws.com/www.kybar.org/resource/resmgr/Ethics_Opinions_(Part_2)_/kba_e-437.pdf>;
Louisiana State Bar Association, Public Opinion 19-RPCC-021: “Lawyer’s Use of Technology”,
available at: <https://www.lsba.org/documents/Ethics/EthicsOpinionLawyersUseTech02062019
.pdf>; Professional Ethics Commission of the Maine Board of Overseers of the Bar, Opinion #207:
“The Ethics of Cloud Computing and Storage”, available at: <https://www.mebaroverseers.org/
attorney_services/opinion.html?id=478397>; Massachusetts Bar Association, Ethics Opinion 12-03,
available at: <https://www.massbar.org/publications/ethics-opinions/ethics-opinion-article/ethics
-opinions-2012-opinion-12-03>; New York State Bar Association Committee on Professional Eth-
ics, Ethics Opinion 842, available at: <https://nysba.org/ethics-opinion-842/>; Oregon State Bar
Opinion no. 2011-188 (revised 2015): “Information Relating to the Representation of a Client:
Third-Party Electronic Storage of Client Materials”, available at: <http://www.osbar.org/_docs/
ethics/2011-188.pdf>; Pennsylvania Bar Association Committee on Legal Ethics and Professional
Responsibility, Formal Opinion 2011-200, available at: <http://www.slaw.ca/wp-content/uploads
/2011/11/2011-200-Cloud-Computing.pdf>; Virginia State Bar Standing Committee on Legal
(CCBE) created a set of guidelines that require lawyers to: (a) take into account
data protection laws and professional secrecy principles; (b) undertake a pre-
liminary investigation of the services, particularly in terms of experience, repu-
tation, technical evaluation, access offline, security etc.; (c) evaluate in-house
IT infrastructure as opposed to online; (d) consider contractual precautions
and transparency; and (e) carry out an impact assessment.74 These Guidelines
pre-date implementation of the GDPR, and so they now must be read in
conjunction with the GDPR provisions. In particular, special attention should
be paid when using AI platforms that involve data storage and processing on
servers in another country. Personal data can only be transferred to third coun-
tries in compliance with the conditions for cross-border data transfers set out
in Chapter V of the GDPR.75 One more example of similar regulations comes
from England and Wales, where the matters of confidentiality and privilege
are regulated by the Solicitors Regulation Authority (SRA) Standards and
Regulations76 and the Bar Standards Board (BSB) Handbook.77 They require
identification, monitoring, and management of all risks.78 Moreover, any
relevant information outsourced to, and held by, third parties must be made
available to the SRA/BSB or their agents for inspection.79
Conclusions
The impact of technology on the legal profession was highlighted by the
American Bar Association in its comments to the Model Rules of Professional
Conduct as follows: “a lawyer should keep abreast of changes in the law and
its practice, including the benefits and risks associated with relevant technology
(…)”.80 The different examples of AI application in legal services show that AI
can help lawyers amplify their expertise and their work. AI can be particularly
helpful in times like the COVID-19 outbreak, and generally when certain
work can or has to be done remotely. Moreover, the pandemic forced courts
Ethics, Legal Ethics Opinion 8972: “Virtual Law Office and Use of Executive Suites”, available at:
<https://www.vsb.org/docs/1872-final.pdf>.
74 CCBE Guidelines on the Use of Cloud Computing Services by Lawyers, adopted on 07 September
2012, available at: <https://www.ccbe.eu/fileadmin/speciality_distribution/public/documents/IT
_LAW/ITL_Position_papers/EN_ITL_20120907_CCBE_guidelines_on_the_use_of_cloud_com-
puting_services_by_lawyers.pdf> accessed 27 May 2021.
75 See art. 44–50 GDPR.
76 SRA Standards and Regulations available at: <https://www.sra.org.uk/solicitors/standards-regula-
tions/principles/> accessed 27 May 2021.
77 BSB (n 49).
78 SRA s. 2.5. of the Code of Conduct for Firms and Rules C8–C9 of the BSB Handbook respectively.
79 See S. 5.2. of the SRA Code of Conduct for Firms and Rule rC86 of the BSB Handbook respec-
tively.
80 ABA, Model Rules of Professional Conduct, Rule 1.1. Competence – Comment, available at:
<https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of
_professional_conduct/rule_1_1_competence/comment_on_rule_1_1/> accessed 27 May 2021.
to embrace information and communications technology more closely. In this
context, the Canadian CRT presents an important case study for future, more
AI-driven courts and tribunals. However, the role of certain tools, particu-
larly the ones that may influence pre-trial decision-making, is heavily debated.
Although an ongoing process of investment and development of AI-driven
technologies should be encouraged, a greater collaboration is needed between
engineers, researchers, scholars, and the legal sector. Hopefully, with greater
collaboration comes greater transparency and explainability. People within the
legal profession need to have a better understanding of the tools available, their
strengths and weaknesses, and in particular their “predictive” features. The
examples presented in this chapter show that these are of particular importance
with respect to the tools used for the administration of justice. Interestingly,
transparency and explainability also raise the question of how “autonomous”
AI systems truly are, and to what extent they should be. It appears that there
needs to be a balancing interplay in which AI systems are required to report
to human operators with an explanation of their operation and decisions.
AI tools vary considerably in their degree of development, and their strengths
and weaknesses have been discussed above. Embracing AI and high connectivity
may seem inevitable, but it also brings another risk, namely data breaches.
Lawyers have always been expected to meet the highest standards, and this
principle extends to the protection of client data. Therefore, with the invest-
ment in AI technologies, appropriate investment in cyber security safeguards
must follow. This includes not only the technical means of protecting data
storage and connectivity, but also the implementation of relevant policies
and the taking out of cyber insurance. Even though all these measures may
not prevent a cyber breach/attack (arguably nothing can), they are becoming
the industry standards.81 Since lawyers often deal with sensitive personal and
confidential data, they must remain particularly vigilant and play an active role
in the continuous monitoring of the AI platforms at their disposal. This is critical
for the protection of fundamental rights, and to strengthen the core principle
of the rule of law.
81 For example, the SRA requires that solicitors in England and Wales, and registered European lawyers,
take out and maintain indemnity insurance, and that the insurance provides adequate and appropriate
cover in respect of the services provided. There is also interesting cyber insurance guidance
created by the UK National Cyber Security Centre (NCSC); see: NCSC, “Cyber Insurance Guidance”
(NCSC, 6 August 2020) <https://www.ncsc.gov.uk/guidance/cyber-insurance-guidance>
accessed 27 May 2021.
PART II
Vertical AI applications
7 Artificial intelligence
An earthquake in the copyright
protection of digital music
Luo Li
1 J. Montagu,‘How Music and Instruments Began:A Brief Overview of the Origin and Entire Devel-
opment of Music, from Its Earliest Stages’ (Frontiers in Sociology, 20 June 2017) <https://doi.org/10
.3389/fsoc.2017.00008> accessed 27 May 2021.
2 MN2S,‘The History of Music Distribution’ (MN2S, 4 September 2020) <https://mn2s.com/news/
label-services/the-history-of-music-distribution/> accessed 27 May 2021.
3 ibid.
4 The WIPO Conferences on the Global Digital Content Market, organised by the World Intellectual
Property Organization since April 2016, have discussed copyright issues surrounding human music
creators and their publishers, producers, and distribution platforms in the digital age.
DOI: 10.4324/9781003246503-9
Cope at the University of California in Santa Cruz started experimenting with
algorithmic composition in 1981. Professor Eduardo Miranda, one of the lead-
ing researchers in the AI/music field, shows a more positive attitude toward
AI-produced music. He calls AI the ‘means to harness humanity rather than
annihilate it … consider computer-generated music as seeds, or raw materials,
for … compositions … not so interested in understanding creativity with AI
rather … interested in AI to harness … creativity’.5 However, this does not
diminish the debate on humans vs machines in music creation but makes it
more pertinent, and AI-related experiments in making music have been boom-
ing since 2015. For example, Melodrive allows game development companies
to create custom soundtracks via its AI system, and this significantly reduces
music costs by up to 90%; Jukedeck was a UK based startup using AI to auto-
matically produce music.6 Users only needed to choose a mood, style, tempo,
and length then the AI system would produce a soundtrack matching the ele-
ments set by its users (users could get five songs a month for free).7 Many IT
giants have also experimented with AI music projects: Google-owned startup
DeepMind has a project called WaveNet whose goal is to create ‘a deep gen-
erative model of raw audio waveforms … able to generate speech which mim-
ics any human voice’;8 Twitter’s LnH project encourages the user to tweet the
LnH bot a song title and the bot then tweets back a short instrumental track
composed by the software; Google’s project Magenta in 2016 aimed to cre-
ate compelling music by using a machine learning system; in China, IT giant
Baidu developed an AI composer using image-recognition software to turn an
image into a song;9 哼趣10 is a piece of Chinese software that can produce a
piece of music based on the user humming a short melody, and users can also
edit the produced music by selecting different musical instruments, styles, and
lengths.
All of the above send a clear message: AI systems are becoming more popu-
lar for music creation and unavoidably competing with human musicians to
some extent. It is predicted that in the future AI will play significant roles along
three lines in the music industry according to the development of AI music
5 L.Trandafir, ‘On Creativity, Music and Artificial Intelligence: Meet Eduardo R Miranda’ (Landr, 18
August 2016) <https://blog.landr.com/meet-eduardo-miranda/> accessed 27 May 2021.
6 S. Dredge, Music’s Smart Future – How will AI Impact the Music Industry? (BPI 2016), 6. This report was
made by Music Ally’s Editor-in-Chief Stuart Dredge for the British Phonographic Industry.
7 ibid. Some news reports show that Jukedeck has been acquired by TikTok to fix the issue of the
higher royalties requested by major labels when TikTok’s music licensing expires. D. Sanchez, ‘As
TikTok’s Music Licensing Reportedly Expires, Owner ByteDance Purchases AI Music Creation
Startup JukeDeck’ (Digital Music News, 23 July 2019) <https://www.digitalmusicnews.com/2019
/07/23/tiktok-bytedance-acquires-jukedeck/> accessed 27 May 2021.
8 A. van den Oord and Sander Dieleman,‘WaveNet:A Generative Model for Raw Audio’ (DeepMind,
8 September 2016) <https://deepmind.com/blog/article/wavenet-generative-model-raw-audio>
accessed 27 May 2021.
9 Dredge (n 6), 7–8.
10 哼趣 is pronounced as heng qu in Chinese pinyin system. It means enjoy humming in English.
Copyright protection of digital music 101
applications: self-entertainment, environmental simulation, and assisted and
independent music creation.
First of all, AI applications, including Amper Score,11 Jukedeck, and 哼趣,
make music creation an easy one-minute task for public users, who use and
share AI-produced music on social media for fun. Public users’ intervention
in music creation is minimised, as the produced music relies heavily on the
AI application’s self-operation and algorithm design. However, it is hard to
say whether AI should be deemed the creator of the resulting music, because
the creative capability of an AI system seems doubtful. Second, without
paying higher copyright licence fees to human musicians, low-cost AI-produced
background music, or AI-simulated music in the surrounding environment,
may occupy a large part of the leisure industry, including restaurants, pubs,
and other public places. Game development companies or other longer-term
projects with tight budgets may also use AI music to reduce costs. Melodrive
can ‘compose an infinite stream of original, emotionally variable music in real-
time – the idea being that it adapts to what’s happening within the game at
a particular point in time’.12 This case raises the question of whether such
background music, produced by an AI system, should be deemed a protected
copyrightable work, and therefore whether the AI system should be
treated as its author. Finally, AI could also assist human musicians in music
creation, or even be engaged as a key collaborator in the creative
process. Australia-based startup Popgun announced that their AI product Splash
Pro was a tool for ‘empowering millions of musicians to discover, be inspired
and express their creativity’.13 Collaboration rather than the replacement of
human input is the purpose here. This, however, would raise the question of
who is the creator of such work. It could be considered to be a hybrid work
made by both humans and AI systems. Furthermore, it is also possible that
AI systems could one day create music independently with minimal human
intervention. Therefore, how to deal with music independently created by AI
is still up for debate.
However, the artificial nature of AI systems and their so-called ‘creation
processes’ are beyond traditional views and interpretations of what defines a
copyrightable work and what relates to creativity in a copyright context; this
affects the fundamentals of the existing copyright law system and could result
in a total re-evaluation of what should be protected by copyright law.
AI-supported
The AI systems based on ‘handcrafted knowledge’ have no learning capabilities,
poor handling of uncertainty, and can only deal with narrowly defined
problems.15 Such AI systems should be treated as machines following rules
defined by humans and supporting humans to complete a task. Therefore, such
AI-produced music should be called AI-supported music, embracing the full
intervention of humans and under the substantial control of humans, because
the AI systems only execute human-set rules to produce music.16 This is similar
to humans setting rules in a software programme where figures are input into
the software, producing a result based on the pre-set rules. The difference is
that an AI system can deal with a larger amount and more complicated rules.
Obviously, there is no creativity on the side of the AI system and, therefore,
from a copyright perspective, there is no doubt of the human authorship of the
resulting music produced by the AI systems.
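The rule-following character of such 'handcrafted knowledge' systems can be illustrated with a toy sketch. This is a purely hypothetical example (the scale and the melody rule are invented for illustration): every rule is fixed in advance by a human, so the program contributes nothing of its own and identical input always yields an identical melody.

```python
# Illustrative sketch of a 'handcrafted knowledge' composer. Every rule
# below is fixed by a human; the program merely executes them, so the
# same input always yields the same melody (the hallmark of AI-supported
# music). The scale and the rule itself are invented for illustration.

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]  # human-chosen scale

def supported_melody(seed_degrees, length):
    """Extend a melody by a human-set rule: each new note is the scale
    step midway between the two previous notes (integer division)."""
    degrees = list(seed_degrees)
    while len(degrees) < length:
        nxt = (degrees[-1] + degrees[-2]) // 2  # pre-set rule, no learning
        degrees.append(nxt % len(C_MAJOR))
    return [C_MAJOR[d] for d in degrees]

melody = supported_melody([0, 4], 6)  # deterministic: identical every run
print(melody)  # → ['C', 'G', 'E', 'F', 'E', 'E']
```

Because no learning takes place, the human who wrote the rule is unambiguously the author of whatever the rule produces.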
AI-assisted
In psychology, novelty and appropriateness are two important components
for identifying creativity – that is to say, the product or process resulting from
a creative activity is new and contains value. Nevertheless, this definition is
ambiguous when it comes to assessing the degrees of 'new' and 'value'. One
view doubts that a machine's activities can qualify as creative at all, holding
that creativity 'has often been looked at in a spiritual way, being seen as the
only uniquely ‘human’ characteristic, one that defines an area of experience
… creative thinking is a bastion of human dignity in an age where machines,
especially computers, seem to be taking over routine skilled activities and
everyday thinking'.17
14 J. Launchbury, 'DARPA Perspective on AI' Defense Advanced Research Projects Agency <https://www.darpa.mil/about-us/darpa-perspective-on-ai> accessed 27 May 2021.
15 ibid.
16 L. Li, Intervention Report for the WIPO Conversation on Intellectual Property and Artificial Intelligence (Third Session) (2020) WIPO <https://www.wipo.int/export/sites/www/about-ip/en/artificial_intelligence/conversation_ip_ai/pdf/ind_li.pdf> accessed 27 May 2021.
Copyright protection of digital music 103
In fact, AI systems have not only been taking over routine skilled activities;
the second wave of AI research is evidence of this. Such AI systems are based
on what Launchbury calls 'statistical learning'. Thanks to the advanced
development of big data and computational power, as well as better algorithms
(the three pillars of the success of AI systems), present AI systems can define
rules by themselves by clustering and classifying massive datasets, and then use
those rules, to some extent, to predict and make decisions by themselves. This
is why humans think they can have 'conversations' with Apple's Siri, and how
AI applications can 'create' music. In a human science context, the term
'creativity' seems mysterious, as it is often explained with vague notions such as
'inspiration' and 'intuition',18 but these notions indicate that creativity requires
a source of imagination. Humans can imagine things they have never seen; the
'imagination' of most AI composers, however, is either constrained or
minimised when compared with the unrestricted freedom and high flexibility
of human imagination.
The reality is that AI systems are still based on statistical learning, which
means they cannot learn or imaginatively predict things with features beyond
the data supplied by humans: AI has only a limited decision-making capability
and a constrained prediction capability, rather than the full freedom of such
capabilities that humans enjoy. Moreover, such AI systems do not understand
or explain the data; they merely analyse its existing features. As MuseNet's
website states, 'MuseNet was not explicitly programmed with an understanding
of music, but instead discovered patterns of harmony, rhythm, and style by
learning to predict the next token in hundreds of thousands of MIDI files'.19
Furthermore, the limited decision-making capability of AI systems depends on
how human engineers set up the algorithm. When engineers set abstract
parameters in the algorithm, AI composers can produce different musical
compositions even when users input the same conditions; had the engineers set
concrete parameters, the same input would always produce the same result.
Material human intervention appears at every stage of the production process
(from the training data selected by humans to the algorithm settings
determining the degree of AI decision-making). In this
case, human contribution is central and AI systems merely have an assistant
status.
17 A. Cropley, 'Definitions of Creativity' in Mark Runco and Steven Pritzker (eds), Encyclopaedia of Creativity (3rd ed., Elsevier 2020), 319–320.
18 R. López de Mántaras, 'Artificial Intelligence and the Arts: Toward Computational Creativity' OpenMind BBVA <https://www.bbvaopenmind.com/en/articles/artificial-intelligence-and-the-arts-toward-computational-creativity/> accessed 27 May 2021.
19 OpenAI, MuseNet <https://openai.com/blog/musenet/> accessed 27 May 2021.
Therefore, it is appropriate that the outputs produced by such AI systems
are called AI-assisted outputs. However, considering that AI music applications
(second-wave AI systems) can produce music that is not predictable by
humans, even with their limited decision-making abilities, those unpredictable
elements can be seen as evidence of the material contribution of AI
applications. It is therefore worth considering whether this could amount to a
certain degree of creativity, but this largely depends on how existing copyright
law interprets the term 'creativity' and whether such an interpretation could
extend to this limited decision-making situation.
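The contrast between abstract and concrete parameter settings described above can be sketched in miniature. This is a generic illustration, not any real product: the transition table, note names, and probabilities are invented for the example. With a sampling step left open (an 'abstract' parameter), identical user input can yield different melodies; a hard-coded 'always take the most probable note' rule makes the output fully deterministic.

```python
# Illustrative sketch: how 'abstract' parameters make an AI composer
# non-deterministic, while a 'concrete' rule fixes the output.
# The transition probabilities are invented, standing in for statistics
# that a real system would learn from massive training datasets.
import random

NEXT_NOTE_PROBS = {
    "C": {"E": 0.5, "G": 0.3, "A": 0.2},
    "E": {"G": 0.6, "C": 0.4},
    "G": {"C": 0.7, "A": 0.3},
    "A": {"C": 1.0},
}

def next_note(current, sample=True, rng=random):
    probs = NEXT_NOTE_PROBS[current]
    if sample:  # abstract setting: sampling leaves room for variation
        notes, weights = zip(*probs.items())
        return rng.choices(notes, weights=weights)[0]
    # concrete rule: always the most probable note, so the same input
    # produces the same melody on every run
    return max(probs, key=probs.get)

def melody(start, length, sample=True, rng=random):
    out = [start]
    for _ in range(length - 1):
        out.append(next_note(out[-1], sample, rng))
    return out

print(melody("C", 5, sample=False))  # deterministic: ['C', 'E', 'G', 'C', 'E']
print(melody("C", 5, sample=True))   # stochastic: may differ between runs
```

The unpredictable element that the text treats as evidence of the AI's material contribution corresponds here to the sampling step, which no human fixed in advance.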
AI-generated
In its report on the Conversation on Intellectual Property and Artificial
Intelligence, the World Intellectual Property Organisation (WIPO) defines
‘AI-generated’ as ‘the generation of an output by AI without human
intervention’.20 This is similar to Launchbury’s third wave of AI systems based
on ‘contextual adaptation’.21 Such AI systems learn more from the understand-
ing of real-world phenomena than from data, with the abilities of explanation,
reasoning, and abstracting perceived information, to develop new meanings by
themselves.22 ‘Developing new meanings’ insinuates creation because it sug-
gests that AI systems are ‘thinking’: they try to integrate perceived information,
then AI system would explain and reasoning these information. More impor-
tantly, they abstract and explore novel things beyond such information. The
capabilities of abstracting and exploring novel things from existing information,
from author’s view, eventually towards creative activity. There is no doubt
that creativity exists in their resulting outputs. In the author’s view, the outputs
produced by such AI systems should be called AI-generated outputs instead of
AI-created outputs to distinguish them from human creations. This is because
the nature of such AI systems and their produced outputs is beyond the scope
of the existing copyright system designed specifically for human creations.
20 Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence (2020) WIPO/IP/AI/2/
GE/20/1 REV. <https://www.wipo.int/edocs/mdocs/mdocs/en/wipo_ip_ai_2_ge_20/wipo_ip
_ai_2_ge_20_1_rev.pdf> accessed 27 May 2021.
21 Launchbury (n 14).
22 ibid.
authors’. Therefore, whether or not the term ‘author’ covers non-human
23
23 Berne Convention for the Protection of Literary and Artistic Works,WIPO <https://www.wipo.int
/treaties/en/ip/berne/> accessed 27 May 2021.
24 S. Ricketson,‘People or Machines:The Bern Convention and the Changing Concept of Authorship’
(1991-1992) 16 Columbia VLA Journal of Law & Arts. See also J. Ginsburg,‘People Not Machines:
Authorship and What It Means in the Berne Convention’ (2018), 49 International Review of Intellectual
Property and Competition Law, 131–135.
25 ibid.
26 The Compendium of US Copyright Office Practices: Chapter 300, para 306. <https://www.copy-
right.gov/comp3/chap300/ch300-copyrightable-authorship.pdf> accessed 27 May 2021.
27 S, Gibbs, ‘Monkey Business: Macaque Selfie Can’t be Copyrighted, Say US and UK’ (The Guard-
ian 22 August 2014) <https://www.theguardian.com/technology/2014/aug/22/monkey-business
-macaque-selfie-cant-be-copyrighted-say-us-and-uk> accessed 27 May 2021.
28 888 F3d 418 (9th Cir 2018).
29 ibid., 425.
30 ibid., 420.
and necessarily exclude animals that do not marry and do not have heirs enti-
tled to property by law’.31
China’s copyright law also clarifies that ‘works of Chinese citizens, legal
entities or other organisations, whether published or not, shall enjoy copyright
in accordance with this Law'.32 Current technologies cannot create AI systems
intelligent enough to enter into a contract or to claim the legal capacity to sue
for infringement of their own rights; an AI system therefore cannot be
recognised as a legal entity or other organisation. In practice, the Beijing
Internet Court gave a clear interpretation of human authorship in its April
2019 judgement in Feilin v Baidu.33 One of the disputes in this case was
whether a report on the judicial analysis of the film and entertainment industry,
automatically generated by software named Wolters Kluwer China Law &
Reference, constituted a work under Chinese copyright law. Although the
Court admitted that the report's content satisfied the formal requirements of a
literary work, and that its selection, judgement and analysis of relevant data
reflected originality to some extent, it held that originality is not a sufficient
condition for a work to qualify: under Chinese copyright law, creation by a
person is a necessary condition. Furthermore, even though in Shenzhen Tencent
v Yingxun Tech, in December 2019, the Shenzhen Nanshan District Court gave
judgement in favour of an AI-produced literary work being protected under
copyright law, it refused to accept that software-generated works as such were
protected by copyright, on the rationale that software is not an author.34 The
courts in both cases tried to locate human intervention in the creative
activities.
The Copyright, Designs and Patents Act 1988 (CDPA) in the United
Kingdom states ‘author, in relation to a work, means the person who create[s]
it’.35 Obviously, this definition excludes any space for AI being an author.
Interestingly, the CDPA provides a special section for computer-generated
works, which is perhaps closest to the situation of AI-assisted works, saying ‘[i]n
the case of a literary, dramatic, musical or artistic work which is computer-gen-
erated, the author shall be taken to be the person by whom the arrangements
necessary for the creation of the work are undertaken’.36 In other words, even
if a piece of music is made by an AI machine, its author is not the machine
31 ibid., 426.
32 Copyright Law of the People’s Republic of China 2010 (amended), art 2.WIPO <https://wipolex
.wipo.int/en/text/466268> accessed 27 May 2021.
33 Feilin v Baidu (2018) Beijing Internet Court <https://www.bjinternetcourt.gov.cn/cac/zw
/1556272978673.html> accessed 27 May 2021.
34 Shenzhen Tencent v Yingxun Tech (2019) Shenzhen Nanshan District People’s Court,Yue Min Chu
No.14010. zhongguo caipan wenshuwang (China Judgements Online) <https://wenshu.court
.gov.cn/website/wenshu/181107ANFZ0BXSK4/index.html?docId=30ba2cab36054d80a864ab8
000a6618a> accessed 27 May 2021.
35 Copyright, Designs and Patents Act 1988, s. 9(1).
36 ibid., s. 9(3).
itself but the person or persons providing the database and creating an algo-
rithm to instruct the machine on how to make the music. These are computer
programmers or software engineers rather than AI systems. However, there
are no equivalent rules for sound recordings, and the right to be identified as
author or director, along with other moral rights, does not apply to
computer-generated works.37 The CDPA does not give equivalent treatment to the human
authors of computer-generated work since they are perhaps seen as providing
less of an intellectual contribution to such works, compared with normal works
(e.g. literary works) made fully by humans. In the European Union, both the
Software Directive and the Database Directive allow the EU Member States to
define authorship of a computer programme but make it clear that the ‘author
of a database/computer program shall be the natural person or group of natural
persons who created the base/program …’.38
Therefore, it can be argued that it is difficult to pursue the possibility of
machine authorship under the existing copyright framework, at either the
national or the international level.
Originality
The criteria for copyright protection also indicate whether a legislative system
implies that a human author must contribute to a copyrightable work.
Copyright law protects the expression of an idea rather than the idea itself, and
originality is the key factor for obtaining copyright. The UK copyright system
has a low originality requirement: a work is original as long as it embodies
sufficient skill, labour and judgement on the author's part. Although there are
possible imminent changes to the originality requirement in the UK after the
case of Infopaq International A/S v Danske Dagblades Forening,39 there is no doubt
that the words 'skill' and 'labour' can only be attributed or linked to humans.40
Moreover, both the Software Directive and the Database Directive in the EU
declare that originality means an author's own intellectual creation.41 The
Infopaq case further highlights this link between originality and the author's
own intellectual creation, and its interpretation of originality is understood as
a reflection of the author's own personality.42 'Personality refers to individual
An alternative solution?
While national and international copyright systems give little space to inter-
pret non-human authorship and therefore originality, it is still worth exploring
alternative solutions to deal with AI-produced works in a copyright context.
This is valuable not only for stimulating AI-related creations and for
investment purposes but also for coping with uncertainties and foreseeable
copyright issues.
Computer-generated works
It is necessary to discuss alternative solutions that could be applied under the
current copyright system. UK copyright law identifies that the author of a
computer-generated work is ‘the person by whom the arrangements neces-
sary for the creation of the work are undertaken’.45 This seems to be a pos-
sible avenue for AI-produced music to pursue copyright protection. However,
there is a view that doubts the precise meaning of the word ‘arrangements’ as it
suggests plans and preparations to make things happen (here it means planning
and preparing to create music), and the persons making such arrangements
vary: they could be users, programme designers, or software investors. In Nova
Productions Ltd v Mazooma Games Ltd & Ors, the Court held that the game's designer was the
person by whom arrangements were undertaken, rather than the users who
played the game and generated unique images during play.47 However, this is
not so straightforward for music creation. When professional musicians adopt
AI music software during the creative process, they prefer to see the AI
software as a useful tool rather than as something dominating their creation; in
fact, however, musicians may be unconsciously inspired and influenced by the
AI's autonomous generation. The whole creative process is one of trial and
error guided by the musician's adjustment, judgement, and selection of the
sources provided by the AI software, and it embraces an interactive,
collaborative element: musicians are inspired by AI-produced trial pieces
generated from their own adjustments, judgements, and selections, while the
AI software continually produces music corresponding to the added elements
and selections, gradually matching the output to the musician's creative ideas.
Musicians may modify, adapt, and integrate these trial pieces with their own
ideas until a final work is created. From the author's view, a piece of music
produced in this way should also be treated as AI-assisted music, as the AI
plays an important assistant role in helping the human musician complete a
final work. From this perspective, it seems strange to say that the AI music
software designer or engineer is a co-creator; more importantly, such
AI-produced music creates an unclear boundary between human-authored
music and computer-generated music, since it is difficult to identify which part
belongs to the human musician's intellectual creation and which part belongs
to the AI system's efforts. From the author's perspective, it is more like a
collaborative work with co-author status, especially as we can foresee future AI
software capable of decision-making or of largely autonomous accompaniment
while engaging with the human musician's creative process. From this point of
view, the participation of AI in creation goes beyond the scope of a mere tool
in traditional computer-generated work, and the category of computer-generated
works would therefore not be well enough defined for such types of
AI-produced music (both AI-assisted and AI-generated). Nevertheless, the
copyright system is clear that only human authors qualify for copyright.
48 P. Devarapalli,‘Machine Learning to Machine Owning: Redefining the Copyright Use Only Own-
ership from the Perspective of Australian, US, UK and EU law’ (2018) 40 European Intellectual Prop-
erty Review, 11, 722–788.
49 A. Bridy,‘Coding Creativity: Copyright and the Artificially Intelligent Author’ (2012) Stanford Tech-
nology Law Review, 5, 1–28.
50 Shenzhen Tencent v Yingxun Tech (2019) (n 34).
51 ibid.
52 ibid.
53 ibid.
54 ibid.
tool and preferred to trace the human intervention as much as possible. The
Court treated the article as an integrated intellectual creation made through
both the employees' judgements and selections and the operation of the
software, and held that the article was a literary work protected under Chinese
copyright law.
Furthermore, according to art. 11 of the copyright law in China, ‘[w]here
a work is created according to the intention and under the supervision and
responsibility of a legal entity or other organisation, such legal entity or organi-
sation shall be deemed to be the author of the work'.55 The disputed article
was an integrated intellectual creation made by multiple teams sharing different
tasks, reflecting Tencent's need to publish a financial article.56 The employee
group creating the article comprised multiple teams supervised by Tencent,
and the article was published on Tencent's website with a note at the end
stating, 'this article was automatically written by Tencent's robot Dreamwriter',
which points to Tencent (the claimant) as the author. The Court stated that
this indicated Tencent would bear all relevant liabilities arising from the
disputed article.57 The Court therefore held that the disputed article was the
work of a legal entity and that Tencent, the claimant, was deemed to be the
author and thus entitled to copyright.
In this case, the Court emphasised the direct connection between the
form of expression in the disputed article and the employee group’s intel-
lectual activities in arranging and selecting the input, setting the conditions,
template and language style. To some extent, the Court was more in favour
of users (the employee group) and played down the software’s autonomous
role. Considering the programme designers and programme users were all
from Tencent, there was no dispute over Tencent’s authorship. However,
if the users and programme designers were independent and separate parties,
this judgement could be seen as biased towards the programme users, which
conflicts with the preference for game designers in the Nova case. Unfortunately,
the Court in the Shenzhen Tencent case did not make any direct connection
between the originality of the article and the software designers. The issue
therefore remains unclear. Jukedeck's model seems to answer this question to
some extent: it allows users to generate five free songs a month before they
start paying per track, but users need to pay more to obtain the copyright
regardless. In other words, the programme designers/owners own the
copyright, whereas the users who generate the songs enjoy only limited
personal use.
The copyright system is primarily designed to regulate human creative
activities. From the author's view, it would not be suitable to engage
excessively in searching for human intellectual labour in technology-produced
goods, as ignoring technology's contribution may hinder technological
innovation and the related dissemination of cultural productions. After all, it can predict
AI and risk preparedness in the aviation industry

Introduction
With the foundations laid by pioneers and experimenters,1 aviation has always
been at the forefront of progress. Being safety-oriented,2 the aviation com-
munity concentrates most of its efforts on enhancing the protection of life
and health of those involved in aerial operations, and to that end it is eager to
explore the potential of emerging technologies.3
The ‘lesson-learned’ approach is the basis for most of the safety improve-
ments in this sector.4 While technology and human knowledge have tremen-
dously advanced over the years, the industry has never been more vulnerable
than it is today.5 Its financial specifics (e.g. perishable inventory, high fixed
costs, excess capacity, reliance on external factors and cycles influencing
1 H. Matthews, Pioneer Aviators of the World: A Biographical Dictionary of the First Pilots of 100 Countries
(McFarland 2003), 179–188;V.P. Relly et al., ‘History of Aviation – A Short Review’ (2017) 1(1) J.
Aircr. Spacecr.Technol., 30.
2 Article 44(a), (d), (h) ICAO Convention on International Civil Aviation (Chicago Convention), 7
December 1944, (1994) 15 UNTS 295. See also F. Pellegrino, The Just Culture Principles in Aviation
Law:Towards a Safety-Oriented Approach (Springer 2019), 1–44; R.Abeyratne, Strategic Issues in Air Trans-
port: Legal, Economic and Technical Aspects (Springer 2012), 19–164; R.J. Andreotti,‘Promoting General
Aviation Safety:A Revision of Pilot Negligence Law’ (1992) 58 J.Air. L. & Com., 1089.
3 R.John Lofaro, K.M. Smith,‘The Aviation Operational Environment: Integrating a Decision-Making
Paradigm, Flight Simulator Training and an Automated Cockpit Display for Aviation Safety’ in E.
Abu-Taieh, A. El Sheikh, M. Jafari (eds), Technology Engineering and Management in Aviation: Advance-
ments and Discoveries (IGI Global 2012), 241–282, R. Arnaldo Valdés et al.,‘Aviation 4.0: More Safety
Through Automation and Digitization’ in M. Gemal Kushan (ed) Aircraft Technology (IntechOpen
2018), 25–42.
4 See Ch-T. Lu et al., ‘Another Approach to Enhance Airline Safety: Using Management Safety Tools’
(2006) 11(2) JAirTransp 113; P. Gomes,‘New Strategies to Improve Bulk Power System Security: Les-
sons Learned from Large Blackouts’ in IEEE Power Engineering Society General Meeting (IEEE 2004),
vol. 2, 1703–1708; M.M. Sokołowski, Regulation in the European Electricity Sector (Routledge 2016),
60–63, M.M. Sokołowski, European Law on Combined Heat and Power (Routledge 2020), 31–33.
5 M. Linz, ‘Scenarios for the Aviation Industry: A Delphi-based Analysis for 2025’ (2012) JAirTransp-
Management 22, 28-35.
DOI: 10.4324/9781003246503-10
AI and risk preparedness in the aviation industry 115
demand)6 coupled with the increasing value and complexity of modern
aircraft, the digital transformation, the growing interdependence of aviation
actors, and the global pandemic, all add to the severity of the challenges to come.
Therefore, aside from learning from dramatic or disruptive events that have
already taken place, aviation professionals are seeking to increase their capac-
ity to take corrective action in advance. Compliance with industry guidelines
and standards is of primary importance in this regard, as is appropriate risk
management in a broader sense. It should be noted, however, that the degree
of advancement of aviation equipment, combined with the volume of aerial
operations taking place in the contemporary world, has seriously impaired the
industry's ability to recognise and avert danger in time.
In the above context, the use of AI provides a number of opportunities and
strengthens human efforts in the air transport domain.7 This chapter addresses
the recent applications of AI in aviation. It examines the potential of AI for the
necessary improvements in the area of risk management and calls for a world-
wide regulatory initiative to ensure safe implementation and further develop-
ment of AI in the aviation sector. The chapter also considers the impact of
COVID-19 on the industry.
21 ibid., 20.
22 ibid., 20.
23 European Union Aviation Safety Agency, ‘Artificial Intelligence Roadmap: A Human-Centric
Approach to AI in Aviation’ (February 2020) <www.easa.europa.eu/newsroom-and-events/news/
easa-artificial-intelligence-roadmap-10-published> accessed 27 May 2021.
24 ibid, 2.
25 European Aviation Artificial Intelligence High Level Group, ‘The FLY AI Report. Demystifying
and Accelerating AI in Aviation/ATM’ (March 2020) 10 <www.eurocontrol.int/publication/fly-ai
-report> accessed 27 May 2021.
26 ibid.
27 Airports Council International (ACI) and IATA, ‘The NEXTT Vision in a Post- Covid-19 World’
(October 2020) 1 <https://nextt.iata.org/dist/i18n/zh_CN/pdf/nextt-vision-post-covid-19-world
.pdf> accessed 27 May 2021.
for an extended time.28 With the realisation of this comes the understanding
that neither governments nor industries can continue to pursue a strategy of
complete risk avoidance indefinitely.29 The risk of COVID-19 needs to be
managed effectively, aircraft need to fly, and borders need to be open. This,
in turn, demands prompt action and regulatory initiative to enable practical
access to and application of available risk mitigation solutions.30 As sound scien-
tific evidence becomes available, such COVID-19-induced risk management
mechanisms should be subjected to continuous reassessment and replacement
by methods that prove to be more effective.31
In this light, the need to recover, restore confidence and satisfy public
concerns with regard to the protection of passengers' and aviation professionals'
health will likely drive long-term progress in the field of AI as a risk management
and safety improvement tool in aviation.32 For example, AI can support
real-time infection control, assist in passenger screening, and improve the
tracing of virus spread as well as the identification of high-risk cases.33
Additionally, thanks to AI applications such as predictive maintenance, AI can
support the aviation industry when it comes to operational safety. In particular,
it is of the utmost importance that commercial aviation continues to prove that
air travel can operate safely by complying with rules and processes for the
highest levels of safety.34 At the same time, significant changes in the business,
involving health safety, environmental impact and the application of new
technologies, need to be accepted and understood to ensure that the aviation
industry returns stronger and is able to cope with the new challenges that will
inevitably arise.35
COVID-19 has brought almost everything to a complete halt, including the
simple business of flying,36 but early indications suggest that the aviation indus-
try that will emerge from the COVID-19 crisis will be even more eager to
28 N. Phillips, ‘The coronavirus is here to stay — here’s what that means’ (16 February 2021) <www
.nature.com/articles/d41586-021-00396-2> accessed 27 May 2021.
29 IATA ‘Travel and managing the risks of Covid-19’ (12 April 2021) <https://airlines.iata.org/analysis
/travel-and-managing-the-risks-of-covid-19> accessed 27 May 2021.
30 M.M. Sokołowski, ‘Regulation in the Covid-19 Pandemic and Post-Pandemic Times: Day-Watch-
man Tackling the Novel Coronavirus’ (2020), Transform. Gov. People Process. Policy, DOI 10.1108/
TG-07-2020-0142.
31 ACI & IATA (n 33).
32 J.Ye ‘The role of health technology and informatics in a global public health emergency: practices
and implications from the Covid-19 pandemic’ (2020) 8(7) JMIR Med. Inform., e19866.
33 R. Vaishya et al., ‘Artificial Intelligence (AI) Applications for Covid-19 Pandemic’ (2020), 14 (4)
337–339.
34 '[T]here is a common understanding that this is no time for cutting corners on safety – the reputation of the industry in this respect remains as vulnerable as ever. The public will not accept a lapse in safety standards because of the pandemic', P. Ky, 'Life Beyond Covid-19 – How Will Aviation Need to Change?' (16 November 2020) <https://www.eurocontrol.int/article/life-beyond-covid-19-how-will-aviation-need-change> accessed 27 May 2021.
35 ibid.
36 P. Suau-Sanchez et al., ‘An Early Assessment of the Impact of Covid-19 on Air Transport: Just
Another Crisis or the End of Aviation as We Know It?’ (2020) 86 JTranspGeogr 102749.
apply AI-driven solutions.37 This time, though, the focus has changed: most
efforts are expected to shift in emphasis from enhancing customer experience
or managing capacity issues to improving health safety and limiting human-to-human
interaction through various means of contactless travel.38
37 See ‘Air Canada CleanCare+:TouchFree Bag Tagging coming to more Canadian airports’ (Air Can-
ada, July 2020), <www.aircanada.com/content/aircanada/ca/en/aco/home/about/media/media
-features/touch-free-bag.html> accessed 27 May 2021, S. Singh, ‘How COVID is Revolutionising
Touchless Airport Technology’ (Simple Flying, 17 July 2020) <https://simpleflying.com/touchless
-airport-technology-covid/> accessed 27 May 2021,‘JAL tests contactless check-in kiosks at Tokyo
Haneda Airport’ (Airport Technology, 25 November 2020) <www.airport-technology.com/news/jal
-contactless-check-in-kiosks> accessed 27 May 2021.
38 ACI and IATA (n 33), 2.
39 A. Olofsson, S. Öhman,‘Views of Risk in Sweden: Global Fatalism and Local Control – An Empiri-
cal Investigation of Ulrich Beck's Theory of New Risks’ (2007) 10(2) J. Risk. Res., 177.
40 Deloitte, ‘AI and Risk Management: Innovating with Risk Confidence’ (2018), 7 <https://www2
.deloitte.com/content/dam/Deloitte/uk/Documents/financial-services/deloitte-uk-ai-and-risk
-management.pdf> accessed 27 May 2021.
41 H. Caplan,‘Passenger Health – Who’s in Charge?’ (2001) 26 Ai r& Space L., 203.
42 G. Leloudas, L. Haeck, ‘Legal Aspects of Aviation Risks Management’ (2003), 28 Annals. Air Space
L., 149.
43 M.S. Hamid, K. Kaiser, ‘Learning Intelligent Behavior’ in Grigoris Antoniou, John Slaney (eds)
Advanced Topics in Artificial Intelligence: 11th Australian Joint Conference on Artificial Intelligence, AI-98
Brisbane,Australia, July 13-17, 1998, Selected Papers (Springer 1998), 143–154.
are capable of recognising.44 AI absorbs massive quantities of data, both
historical and real-time, and uses it to capture such discrepancies. This feature
is of prime importance, as in most cases emerging threats arise along with
pattern irregularities.45 Moreover, this makes AI particularly suitable for the
aviation industry, which is highly reliant on digital data streams between air
and ground systems.46
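The idea of flagging pattern irregularities in a data stream can be sketched with a simple rolling statistic. This is a generic illustration only, not any system actually used in aviation; the window size and threshold are arbitrary assumptions, and real hazard-detection systems rely on far richer models.

```python
# Minimal sketch of pattern-irregularity detection on a data stream:
# flag a reading when it deviates sharply from the recent readings.
# Generic illustration; window size and threshold are arbitrary choices.
from collections import deque
from statistics import mean, stdev

def find_anomalies(stream, window=5, threshold=3.0):
    """Return indices of readings more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                anomalies.append(i)
        recent.append(value)
    return anomalies

# A steady sensor feed with one abrupt discrepancy at index 7.
readings = [10.0, 10.2, 9.9, 10.1, 10.0, 10.2, 9.8, 25.0, 10.1, 10.0]
print(find_anomalies(readings))  # → [7]
```

The same principle, scaled up to many correlated data streams, underlies the hazard-detection capability described above.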
The steady growth in the amount of information exchanged makes the
aviation sector promisingly rich in data.47 However, when it comes to data
processing, the tools currently in use are unable to manage the volume of
information gathered from aircraft systems, surveillance, airlines, airports and
other industry stakeholders.48 This is partly because about 90 per cent of the
data produced is unstructured,49 which means that it cannot be arranged and
analysed in the conventional way, in columns and rows.50 In this context, the
advantage of AI lies in its ability to handle unstructured data.51 Today,
cognitive capabilities, including data mining, machine learning, and natural
language processing, can take over from older methods of analysis and draw
conclusions from unstructured data, resulting in better and faster detection of
known and unknown hazards.52 In the risk management context, the major
advantages of AI can be divided into the following areas:
44 See G. Tayfur, Soft Computing in Water Resources Engineering: Artificial Neural Networks, Fuzzy Logic and Genetic Algorithms (WIT Press 2012), 4.
45 IATA (n 8), 18.
46 European Aviation (n 31).
47 ICAO,‘Artificial Intelligence and Digitalization in Aviation’, ICAO Assembly – 40th Session, 1 August
2019, 2 <www.icao.int/Meetings/a40/Documents/WP/wp_268_en.pdf> accessed 27 May 2021.
48 Deloitte,‘Why Artificial Intelligence is a Game Changer for Risk Management’ (2016), 1 <https://
www2.deloitte.com/content/dam/Deloitte/us/Documents/audit/us-ai-risk-powers-performance
.pdf> accessed 27 May 2021.
49 ibid.
50 ibid.
51 F. Sassite et al., ‘A Smart Data Approach for Automatic Data Analysis’ in Vikrant Bhateja, Suresh
Chandra Satapathy, Hassan Satori (eds) Embedded Systems and Artificial Intelligence. Proceedings of ESAI
2019, Fez, Morocco (Springer 2020), 691.
52 Deloitte (n 48).
53 FERMA (n 11), 14.
AI and risk preparedness in the aviation industry 121
Specifically, in the aviation business, AI can support risk management processes by enabling real-time aircraft safety monitoring54 and passenger health monitoring,55 improving asset protection,56 enhancing dynamic resource allocation (e.g. minimising airport queues by means of predictive analytics and passenger notification applications),57 etc.
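The queue-minimisation idea can be sketched with a deliberately simple deterministic service model. All figures below (service time per passenger, target wait) are hypothetical; a production system would feed live sensor and booking data into a trained predictive model.

```python
# Minimal sketch of queue-wait prediction for dynamic resource allocation.
# Service rates and targets are hypothetical illustrations only.

def predicted_wait_minutes(passengers_in_queue: int,
                           open_lanes: int,
                           seconds_per_passenger: float = 12.0) -> float:
    """Estimate the wait using a simple deterministic service model."""
    if open_lanes <= 0:
        raise ValueError("at least one lane must be open")
    return passengers_in_queue * seconds_per_passenger / open_lanes / 60.0

def lanes_needed(passengers_in_queue: int,
                 target_wait_minutes: float = 10.0,
                 seconds_per_passenger: float = 12.0) -> int:
    """Smallest number of lanes keeping the predicted wait under target."""
    lanes = 1
    while predicted_wait_minutes(passengers_in_queue, lanes,
                                 seconds_per_passenger) > target_wait_minutes:
        lanes += 1
    return lanes

print(predicted_wait_minutes(150, 3))  # 10.0 (minutes)
print(lanes_needed(300))               # 6 lanes for a 10-minute target
```

The notification side would simply push the predicted wait to passengers; the allocation side would staff `lanes_needed` lanes ahead of a forecast arrival peak.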
The potential of AI stirs the imagination of the air transport industry, fuelling visions of a future in which aviation actors commonly adopt AI for a variety of purposes, including applications in the cockpit environment.58 Although ambitious objectives are welcome when pursuing an AI transition, it is essential to start with a proof of concept.59 The usefulness and commercial viability of such an objective for a particular entity, such as an airline, airport, or aviation organisation, can be proven by means of testing or a pilot project.60 Launching a single, small-scale initiative to test the use of AI within a specific organisational structure can expose, in a safe and controlled manner, challenges that may arise during the transformation.61 Applying this strategy can prevent considerable difficulty later.
When considering the use of AI for risk management purposes, it should be remembered that current AI algorithms are tasked with performing specified operations of restricted scope, such as visual or speech recognition and text analysis.62 While AI is indeed capable of outperforming humans at such particular tasks, all-purpose, infinitely capable AI remains largely conceptual.63 For this reason, a hybrid ecosystem combining computational methods and human intelligence will likely produce the best outcome.64 Today, making a risk-based decision requires two different but inextricably entwined elements: objective data and a subjective feeling of what is to be achieved or lost by the decision. Objective calculation and subjective perception are both vitally important; neither is quite enough by itself.65 Unlike machines,
54 W. Bellamy III, ‘Delta Develops Artificial Intelligence Tool to Address Weather Disruption, Improve
Flight Operations’ <www.aviationtoday.com/2020/01/08/delta-develops-ai-tool-address-weather
-disruption-improve-flight-operations> accessed 27 May 2021.
55 For instance, the ViatorAero system is equipped with the ability to monitor passengers' temperature or tiredness; see Aviation Business News, 'How Artificial Intelligence is Now Supporting the Aviation Industry' (Aviation Business News, April 2018) <www.aviationbusinessnews.com/low-cost/artificial-intelligence-aviation-industry> accessed 27 May 2021.
56 IATA (n 8), 16.
57 IATA (n 8), 13.
58 R.Abeyratne, Megatrends and Air Transport (Springer 2017), 173–200.
59 FERMA (n 11) 21.
60 ibid.
61 ibid.
62 ibid.
63 ibid, 14.
64 European Aviation (n 31), 25.
65 P.L. Bernstein, Against the Gods:The Remarkable Story of Risk (Wiley 1998), 119.
human choices are mostly the product of the automatic and repetitive use of their expertise and acquired skills, rather than of contemplation, evaluation, and thoughtful assessment. In unprecedented, time-sensitive conditions, individuals often need to make decisions without much deliberation.66 In this respect, 'human–machine teaming'67 seems the best way forward. Through the use of AI, present constraints in risk management processes can be overcome, and decisions now based on gut feeling can become data-based and systematic.68 Being available 24/7 and fully up to date, AI can enable a clearer understanding of risks, provide timely information and support to designated aviation staff, and thereby greatly increase the sector's chances of taking corrective and preventive steps in time.
In the coming years, AI is expected to support and enhance the performance of various aviation actors69 by assisting in areas where its abilities greatly exceed those of human operators, such as the detection and identification of new risks, fleet and staff management, and air traffic management.70 Although its capabilities are still limited, and many regulatory and ethical challenges remain, the aviation industry should be encouraged to lay the foundations for the application of the technology early, taking one small step at a time.
66 U. Pagallo,‘Algo-Rhytms and the Beat of the Legal Drum’ (2018) 31 Philos.Technol., 507, 521.
67 T. Kistan, ‘Innovation in ATFM:The Rise of Artificial Intelligence’ (presentation at ICAO Air Traf-
fic Flow Management Global Symposium, Singapore, 20–22 November 2017) <https://www.icao
.int/Meetings/ATFM2017/Documents/3-Trevor%20Kistan%20%20ICAO%20ATFM%20Global
%20Symposium%20-%20AI%20in%20ATFM,%20Thales,%20T.%20Kistan%20-%20PRESENTA-
TION.pdf> accessed 27 May 2021.
68 FERMA (n 11), 14.
69 European Aviation (n 31), 25.
70 ibid.
71 R. Abeyratne, ‘Key Legal Issues in ICAO: A Commentary and Review’ (2019) 44 Air & Space L.,
53, 66.
operation.72 These include matters such as passenger rights and air carrier liability, as well as health safety standards and protective measures to be followed when flying to a designated country. For a business that is global in nature, such a patchwork of regulations is highly problematic. Similarly, the impact of AI extends beyond any particular entity and even beyond the borders of any single sector.73 Hence, the use of AI solutions can have a disruptive effect on those involved in aviation activities, such as airlines, manufacturers, airports, air navigation service providers, and maintenance service providers; most certainly, it will also significantly affect wider circles. Consequently, there is a strong need to foster global collaboration and to consolidate isolated regional or sectoral industry efforts.74 The complexity of the subject matter requires a wide range of actors from diverse backgrounds, with broad expertise, to collaborate on relevant policies and to address public and ethical matters relating to AI.75
Preferably, such actions should be taken under the auspices of the
International Civil Aviation Organization (ICAO), a specialised agency of
the United Nations, established by the Chicago Convention76 of 1944 and
entrusted with tasks of developing ‘principles and techniques of international
air navigation’77 and promoting ‘planning and development of international air
transport’.78
Given the wide variety of AI solutions and potential applications, various observations could be made about the most pressing legal issues to be addressed in the wake of AI implementation in the aviation sector. However, two particular problems, accelerated by the COVID-19 crisis, call for the immediate attention of the international civil aviation community: the safety of the AI applied, and data governance in an aviation sector equipped with AI solutions.
From the very beginning of commercial air transportation, safety has been a major concern for the aviation community. Not surprisingly, then, ICAO has, from its inception, focused on safety and played a vital role in establishing rules for the safe operation of aircraft. The industry has evolved over the years, and the construction of aircraft has improved together with the voluminous body of hard and soft air law addressing aviation safety and, subsequently, aviation security. As a result, today no other mode of transport can be associated with such a high degree of safety and with such
72 Regarding conflicting public health measures, see A. Grout, N. Howard, R. Coker, E.M. Speakman, 'Guidelines, Law, and Governance: Disconnects in the Global Control of Airline-Associated Infectious Diseases' (2017) 17(4) Lancet Infect. Dis., e118, 119.
73 European Aviation (n 31), 12.
74 See for instance European initiatives like European Aviation (n 31) and EASA (n 29).
75 Deloitte (n 46), 23.
76 Chicago Convention,Article 43.
77 ibid,Article 44.
78 ibid.
rigorous boarding procedures as a commercial airliner. AI implementation will undoubtedly alter the landscape of safety and security risks. In light of the above, the use of AI, being at once a risk management solution and an emerging safety and security risk,79 is of great interest to ICAO.
It is worth noting that the issue of AI has been raised at the ICAO forum on a few occasions. For example, during the Thirteenth Air Navigation Conference, Singapore presented a paper titled 'Potential of Artificial Intelligence in Air Traffic Management'.80 The paper highlighted AI's capabilities in the decision-making domain and its potential to ensure that aviation is not hindered by human limitations.81 Moreover, during the 40th Assembly Session, a working paper on AI was presented by the International Coordinating Council of Aerospace Industries (ICCAIA) and the Civil Air Navigation Services Organisation (CANSO), calling for amendments to the Standards and Recommended Practices (SARPs) and for action, particularly in areas such as certification, operations, qualification, data sharing, and training.82 Nevertheless, despite ICAO's involvement in numerous initiatives in the field of AI,83 including participation in the EUROCAE Working Group 114 (WG-114) on Artificial Intelligence,84 there is no clear indication of any attempt to review or improve the relevant SARPs. Apart from actions aimed at exploring and promoting the potential of AI, ICAO should strive to ensure that an adequate framework for the safe use of AI in commercial aviation is in place.
79 M. Brundage et al.,‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Miti-
gation’ (2018), 3–7.
80 ICAO, ‘Potential of Artificial Intelligence (AI) in Air Traffic Management (ATM)’ Thirteenth Air
Navigation Conference ICAO, Montréal, 9–19 October 2018 <www.icao.int/Meetings/anconf13/
Documents/WP/wp_232_en.pdf> accessed 27 May 2021.
81 ibid., 3.
82 ICAO (n 47).
83 As listed:‘Partnering in the UN AI for Good Annual Global summit, presenting work session on AI
in aviation as well as participate in mobility sessions; Hosting internships, developing in-house deep
learning AI models showcasing natural language processing techniques for aeronautical information
management as well as document summarization; Supporting local AI company networks, start-ups
and incubators such as Thales AI@Centech and Concordia District 3 by providing ideas and coach-
ing, Collaborating with McGill on the introduction of AI in aviation inside the McGill data science
and machine learning program. Broadening the horizon of AI in aviation through workshops organ-
ized in collaboration with CRIAQ, Exploring the creation of an AI in Aviation Focus Group under
the auspices of ITU to address issues related to compliance and certification, Collaboration with
XPrize Foundation by providing AI challenges and participating in the Global Initiative on AI and
Data Commons, Collaboration with ITU on their United 4 Smart Sustainable Cities (U4SSC) ini-
tiative including the use of AI for urban mobility; Participation in the EUROCAE Working Group
114 (WG-114) on Artificial Intelligence’, ICAO,‘Artificial Intelligence (AI)’ <https://www.icao.int
/safety/Pages/Artificial-Intelligence-(AI).aspx> accessed 27 May 2021.
84 EUROCAE,‘New working group:WG-114 / Artificial Intelligence’ (19 June 2019) <www.eurocae
.net/news/posts/2019/june/new-working-group-wg-114-artificial-intelligence> accessed 27 May
2021.
Further to the safety concerns indicated above, another issue that calls for global action is data governance in the aviation sector. It goes without saying that AI requires information of sufficient quality and quantity to fulfil its function85 and to avoid the GIGO ('garbage in, garbage out') effect.86 But data governance is much more than managing data quality and quantity; it is also about ensuring that acquired data are reliable and available when needed,87 that proper infrastructure is provided and secure (including against cyber risks),88 that data-sharing frameworks are in place,89 and that issues such as potential data breaches are addressed and managed.90 As the challenges associated with processing data are not new to international aviation,91 it is to be hoped that they will also be addressed in the AI context.
As the aviation sector can benefit only from a unified, coordinated approach, cooperation between ICAO, states, aviation stakeholders, and industry representatives to define standards for AI, especially in the areas discussed above (i.e. safety and data governance), is necessary.
Conclusions
In the face of the intense financial distress caused by COVID-19, the implementation of AI may not appear to be a basic need for the industry. Beyond doubt, commercial aviation needs to learn from what has gone wrong; this is the right and proven path in the aviation sector. However, relying solely on past successful strategies may prove not only ineffective in the present but also hazardous to the future of commercial aviation. The aviation industry needs to widen its perspective beyond the challenges of conducting business in today's pandemic environment and beyond solving COVID-19-induced problems, not only to let the industry grow, but also to limit the risk of major crises recurring in the future. As Professor Brian Havel, Director of the Institute of Air and Space Law at McGill University, put it: '[i]nternational aviation is not a business, where problems disappear if we simply leave them alone, in which matters can be left to wait forever'.92 COVID-19 has put the air transportation industry under stress and called for greater flexibility and
85 FERMA (n 11), 5.
86 Bernstein (n 65), 177.
87 FERMA (n 11), 11.
88 ibid.
89 European Aviation (n 31), 16.
90 FERMA (n 11), 12.
91 P. Mendes de Leon, ‘The Fight Against Terrorism Through Aviation: Data Protection Versus Data
Production’ (2006) 31 Air & Space L., 320.
92 B. Havel, ‘Keynote Speech’Worldwide Airports Lawyers Association XI Conference, Bogota, 9–11
October, 2019 <https://www.abiaxair.com/wala2019/post_conference.php> accessed 27 May
2021.
agility to deal with the challenges of prolonged uncertainty.93 The aviation community is therefore seeking to implement various improvements using the technologies at hand. AI, as one of the most promising advancements when it comes to risk identification, calls for special attention. AI is a disruptive technology and a risk in itself. In the wake of its accelerating implementation in the aviation sector, issues such as safety and data governance need to be addressed without delay. Today, addressing the issue of AI in the aviation industry is a matter of urgency from both a regulatory and a business perspective. An appropriate framework is required to establish trust and overcome negative perceptions of AI, to maintain and further improve levels of safety in civil aviation, to reduce liability- and insurance-related challenges, and to provide businesses with an acceptable level of certainty regarding the use of AI. A coordinated, global response involving a wide range of AI experts and industry actors is therefore necessary to ensure that these objectives are met. The European efforts in the field of AI, including EASA and EUROCONTROL initiatives, can serve as inspiring examples and a solid foundation for an international discussion to be held under the lead of ICAO.94
1 C.A. Duran, F. Palominos, F.M. Cordova, “Applying Multi-criteria Analysis in a Port System” [2017]
122 Procedia Computer Science, 478–485; M.K. Othman, N.S. Rahman, A. Ismail, H.A. Saharuddin,
“Factors Contributing to the Imbalance of Cargo Flows in Malaysia Large-scale Minor Ports Using
a Fuzzy Analytical Hierarchy Process (FAHP) Approach” [2019] 35 The Asian Journal of Shipping and
Logistics, 1, 13-23.
2 See e.g.Y.Yang, M. Zhong, H.Yao, F.Yu, X. Fu, O. Postolache, “Internet of Things for Smart Ports:
Technologies and Challenges” [2018] 21 IEEE Instrumentation and Measurement Magazine, 1, 34–43;A.
Botti,A. Monda, M. Pellicano, C.Torre,“The Conceptualization of the Port Supply Chain as a Smart
Port Service System:The Case of the Port of Salerno” [2017] 5 Systems, 2, 35.
3 C.A. Duran, F.M. Cordova, F. Yanine, E. Carrillo, "Fuzzy Knowledge to Detect Imprecisions in Strategic Decision Making in a Smart Port" [2020] 9 International Journal of Advanced Trends in Computer Science and Engineering, 3, 377–380.
4 “Internet of Things (IoT), as defined by the IEEE, is a network of items including sensors and embed-
ded systems which are connected to the Internet and enable physical objects to gather and exchange
data”; G. Jayavardhana, B. Rajkumar, S. Marusica, M. Palaniswami,“Internet of Things (IoT):A Vision,
Architectural Elements, and Future Directions” [2013] 29 Elsevier Future Generation Computer Systems,
7, 1645–1660.
5 Yang et al. (n 2).
6 ibid. See also X. Li,W. Xu,“A Reliable Fusion Positioning Strategy for Land Vehicles in GPS-denied
Environments based on Low-cost Sensors” [2017] 64 IEEE Trans. Ind. Electron., 4, 3205–3215; M.R.
Kaloop, M.A. Sayed, D. Kim, E. Kim,“Movement Identification Model of Port Container Crane based
on Structural Health Monitoring System” [2014] 50 Structural Engineering and Mechanics, 1, 105–119.
DOI: 10.4324/9781003246503-11
projects in the ports of Rotterdam, Hamburg, Le Havre, Shanghai, Vigo, and
Singapore relating to smart ports and the IoT.7 These include sensing tech-
nologies, automated quayside cranes, automated guided vehicles for container
handling and yard cranes.8 A network of smart sensors and actuators, wireless
devices and data centres make up the key infrastructure of smart ports, which
allow the port authorities to provide essential services in a faster and more efficient manner.9 Moreover, according to some estimates, an automated container terminal can use at least 25% less energy and emit 15% less carbon than a traditional terminal.10 Thus port systems are undergoing a profound transformation, implementing technologies associated with distributed
smart sensors and actuators, data communication and internet connectivity for
remote and automatic operation, control optimization based on AI, as well as
big data analysis.11 Such smart ports employ AI-related information systems
that “manage, monitor, and store massive amounts of data (e.g. maritime traffic
and logistic data) and provide large-scale computerised and paperless services
in smart ports”.12 The diversity of the gathered data and information then
enables “smart port applications to adapt to the dynamic requirements of a
7 See for instance:A. Belfkih, C. Duvallet, B. Sadeg,“The Internet of Things for Smart Ports:Applica-
tion to the Port of Le Havre” [2017] Proc. Int. Conf. Intell. Platform Smart Port (IPaSPort), 15–16;
K.H. Kim, B.H. Hong, “Maritime Logistics and Applications of Information Technologies” [2010]
Proc. 40th Int. Conf. Comput. Industrial Eng., 1–6; Port of Rotterdam Authority, “The Smart Port
Doesn't Stop at the City Limits” (8 April 2019) https://www.portofrotterdam.com/en/news-and
-press-releases/the-smart-port-doesnt-stop-at-the-city-limits accessed 27 May 2021; Hamburg Port
Authority, “Smart-port: The Intelligent Port”, available at <https://www.hamburg-port-authority
.de/en/hpa-360/smartport/> accessed 27 May 2021; C.L Botana,“Environmental Actions. Port of
Vigo: Green Port” [2015] Proc.Atlantic Stakeholders Platform Conf. (ASPC), 1–4.
8 ibid.
9 The major drivers in smart ports are productivity and efficiency gains. Yau et al. suggest that the
latest generation of ports are customer- and community-centric smart ports that are distinguished by
five main features: a) smart port services (such as vessel and container supply chain management);
b) technologies such as data centre, networking and communication, and automation; c) use of sus-
tainable technology to increase energy efficiency and achieve reduction in greenhouse emission; d)
cluster management such as a shipping cluster that consists of geographically proximate companies
and stakeholders with their main activity being shipping; and e) development of hub infrastructures
to foster collaboration among different ports and supply chain stakeholders. See: K-L.A. Yau, S.
Peng, J. Qadir,Y-Ch. Low, M.H. Ling,“Towards Smart Port Infrastructures: Enhancing Port Activities
using Information and Communication Technology” [2020] 23 IEEE Instrumentation & Measurement
Magazine, 8, 83387–83404.
10 In such smart ports the whole terminal's modules communicate with a central control unit of the
terminal control room. H.M. Le,A.Yassine, M. Riadh,“Scheduling of lifting Vehicle and Quay Crane
in Automated Port Container Terminals” [2012] 6 Intern. J. Intelligent Inform. Database Syst., 516–531.
11 Yang et al. (n 2). See also N. Bahnes, B. Kechar, H. Hafid, "Cooperation between Intelligent
Autonomous Vehicles to enhance Container Terminal Operations” [2016] 3 J. Innovation in Digi-
tal Ecosystems, 1, 22–29; R.H. Murofushi, J.Tavares, “Towards Fourth Industrial Revolution Impact:
Smart Product based on RFID Technology” [2017] 20 IEEE Instrum. Meas. Mag., 2, 51–56.
12 L. Heilig, S.Voÿ,“Information Systems in Seaports: A Categorization and Overview” [2016] 18 Inf.
Technol. Manage., 3, 179–201.
AI, smart seaports, and supply chain management 129
complex system handling diverse aspects and comprising different technologies
(e.g. wireless communications and embedded systems for sensing operation)”.13
The unforeseen emergence of COVID-19 has imposed pressing challenges on the maritime sector and created opportunities for the employment of AI. COVID-19-related quarantines, port closures, and security checks at ports, with the resulting extra waiting times for berthing operations, inland seaport transhipment operations, supply chain management, and other aspects, have affected the overall volume of freight and ocean shipping. Greater use of smart ports and smart ships could mitigate such adverse effects, and AI-related digitisation could be the legacy of the pandemic.
Analytically speaking, the entire smart port ecosystem consists of five main areas of application: (a) smart vessel management; (b) smart container management; (c) smart port management; (d) smart energy management; and (e) smart resource management.14 Smart AI-governed ports have the potential to reduce human error, enable active communication with the social environment, make operations faster, safer, and more efficient, and cut carbon emissions for environmental and energy sustainability.15 Moreover, smart ports offer adaptive solutions in response to a world characterized by uncertainty.16 AI decision-support systems based on predictive models of behaviour reduce transaction costs, enable efficient risk management, improve worker safety, and allow for a highly efficient allocation of resources, greater competitiveness, and sustainable development.17 Finally, AI can contribute to "smart ships", i.e. systems with completely autonomous navigation, sailing, control, and guidance, sometimes referred to as "intelligent ships".18
13 Yau et al. (n 9). See also Ch. Shuo, J. Wang, J. Zhao, "The Analysis of the Necessity of Constructing
the Huizhou Smart Port and overall Framework” [2017] Proc. Int.Conf. Intell.Transp., Big data Smart
City, 159–162.
14 Yau et al. (n 9), 83391.
15 G.A. Rodrigo, N. Gonzalez-Cancelas, B. Molina Sarrano, A. Camarero Orive, “Preparation of a
Smart Port Indicator and Calculation of a Ranking for the Spanish Port system” [2020] 4 Logistics, 9;
G. Buiza, S. Cepolina, A. Dobrijevic, M. del Mar Cerban, O. Djordjevic, and C. Gonzalez,“Current
Situation of the Mediterranean Container Ports regarding the Operational, Energy and Environ-
ment areas” [2015] Proc. Int. Conf. Ind. Eng. Syst. Manag., 530–536; J.Twidell,T.Weir, Renewable Energy
Sources (3rd ed., Routledge, 2015).
16 A. Bujak, “The Development of Telematics in the Context of the Concepts of Industry 4.0 and
Logistics 4-0” [2018] Proceedings of the e-Business and Telecommunication Networks (Springer), 509–524.
17 Ports of Singapore, Rotterdam and Hamburg, are prime examples of smart ports using AI tools to
improve their business operations.
18 See for instance M. Schiaretti, L. Chen, R.R. Negenborn, “Survey on Autonomous Surface Vessels:
Part I – A New Detailed Definition of Autonomy Levels”, in Tolga Bektaş, Stefano Coniglio,Anto-
nio Martinez-Sykora and Stefan Voß (eds), Computational Logistics (Springer, 2017), 219–233; L.Van
Cappelle, L. Chen, R.R. Negenborn, “Survey on ASV Technology Developments and Readiness
levels for Autonomous Shipping” [2017] Proceedings of the 9th International Conference on Computa-
tional Logistics (ICCL 2018), 65–79.
130 Regulating artificial intelligence in industry
Integration and collaboration
A seaport with an AI ecosystem is built around a "cyber-physical system", i.e. physical infrastructure with cyber facilities that affect the efficiency of the maritime industry on various levels, including ships, ports, and associated supply chains.19 In supply chain management, AI is used to spot patterns in the logistics chain and to offer detailed predictions of when vessels, lorries, and containers will reach terminals, thereby allowing for better planning.20 Moreover, AI may be used to predict future equipment needs, long-term yard utilisation, container damage, numbers of gate visits, and many other aspects.21
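The arrival-prediction idea can be sketched very simply. The transit durations below are hypothetical; production systems would use live AIS position feeds and learned models rather than a plain historical average.

```python
# Minimal sketch of a vessel ETA predictor for terminal planning.
# Historical transit times are hypothetical illustrations only.
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical past transit durations (hours) on the same route leg.
past_transits_h = [52.5, 49.8, 55.1, 51.2, 50.4]

def predict_eta(departure: datetime, history_h: list) -> datetime:
    """Predict arrival as departure time plus the mean historical transit."""
    return departure + timedelta(hours=mean(history_h))

departure = datetime(2021, 5, 1, 6, 0)
eta = predict_eta(departure, past_transits_h)
print(eta.isoformat())  # 2021-05-03T09:48:00
```

A terminal planner can then schedule berths, cranes, and gate slots against the predicted window instead of the nominal timetable.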
Seaports in Rotterdam, Hamburg, Le Havre, Shanghai, Vigo, and Singapore have, for example, established so-called "supply chain integration". This term refers to a "data sharing, integrity and transparent partnership with information technology, system integration and collaboration as the guiding ideology, and centred on the port enterprise at the core".22 Another related process is the so-called "supply chain structure of the enterprise", which is formed by "related upstream and downstream enterprises to provide a smooth flow of information, logistics and capital throughout the supply chain from the middle source to the remittance".23 For example, smart vessel supply management supervises vessels, including their choice of routes and ports, based on location and traffic within the ports, so as to improve their arrival punctuality. Smart container management, in turn, helps with the acquisition, tracking, transport, storage, and repositioning of containers,24 as well as with their trans-shipment (transfer from one vessel to another).25 Moreover, smart energy management reduces energy consumption.26 Such an AI-operated system of
19 S.Vijay,“The Indian Ocean and Smart Ports” [2019] 14 Indian Foreign Affairs Journal, 3, 207–221.
20 A. Chiappetta,“Toward Cyber Ports:A Geopolitical and Global Challenge” [2017] 12 FormaMente, 1.
21 J.,Yang,Y. Li, X. Ding, J. Lin,“Research on Automatic Wharf Unmanned Gate System Based on Arti-
ficial Intelligence” [2019] Proceedings of the 2019 7th International Conference on Information Technology:
IoT and Smart City, 521– 524.
22 Ch-S. Lu,“Port Supply Chain Coordination Evaluation Research” [2009] 41 China Market, 1, 52–54.
See also J.Tongzon,Y-T. Chang, S-Y. Lee,“How Supply Chain Oriented is the Port Sector” [2009]
122 Int J Prod Econ., 1, 21–34;Y. Bo,Y. Meifang, “Construction of the Knowledge Service Model
of a Port Supply Chain Enterprise in a Big Data Environment” [2020] 33 Neural Computing and
Applications, 5, 1699–1710.
23 ibid.
24 C. Liu, H. Jula, K. Vukadinovic, P.A. Ioannou, "Comparing Different Technologies for Containers Movement in Marine Container Terminals" [2000] Proc. ITSC. IEEE Intell. Transp. Syst., 488–493. See also Yau et al. (n 9), 83391.
25 Moreover, smart port management optimizes port services, such as commodity inspection, customs
clearance, transportation planning, procedures and applications (e.g., trans-shipment, trade license, as
well as import and export permits), customer service, market information exchange, and insurance
provisioning;Yau et al. (n 11), 83392.
26 For example, Yau et al. (n 9) note that the Valencia and Hamburg ports are equipped with motion-sensitive lights that illuminate when vehicles pass by; such a lighting system has been shown to reduce energy consumption by up to 80%. See also M. Jovic, N. Kavran, S. Aksentijevic, E.
management can provide effective scheduling, resource allocation, and optimisation in terms of time and cost.27
The AI-enabled ecosystem increases collaboration between different parties, such as port authorities, cargo owners, and third-party logistics providers, as it aligns their individual digital roadmaps. This in turn creates mutually beneficial opportunities to improve efficiency and cut waste (by sharing data and providing a standard interface for AI-enabled insights, predictions, and constraints). Moreover, AI improves the accuracy and reliability of vessel arrival and departure forecasts by up to 80%, which enormously improves supply chain management. AI also analyses and monitors cargo and marine vehicles in real time, automating challenging manual analysis, speeding up cargo logistics planning, and improving the detection of potentially exceptional situations.28
Tijan,“The Transition of Croatian Seaports into Smart Ports” [2019] Proc. 42nd Int. Conv. Inf. Com-
mun.Technol., Electron. Microelectron., 1386–1390.
27 Yau et al. (n 9), 83392. See also G.C. Ceyhun, "Recent Developments of Artificial Intelligence in Business Logistics: A Maritime Industry Case", in Hacioglu Umit (ed.), Digital Business Strategies in
Blockchain Ecosystems Transformational Design and Future of Global Business (Springer 2020), 343–355;
Ch.-A. Gizelis,T. Mavroeidakos, A. Marinakis, A. Litke,V. Moulos,“Towards a Smart Port:The Role
of the Telecom Industry” in Ilias Maglogiannis, Lazaros Iliadis and Elias Pimenidis (eds.), Artificial
Intelligence Applications and Innovations (Springer 2020) 128-144.
28 See e.g.A. Loukili, S.L. Elhaq,“A Model Integrating a Smart Approach to Support the National Port
Strategy for a Horizon of 2030” [2018] Proc. Int. Colloq. Logistics Supply Chain Manage. (LOGISTI-
QUA), 81–86; K. Douaioui, M. Fri, Ch. Mabrouki, E.A. Semma, “Smart Port: Design and Perspec-
tives” [2018] Proc. 4th Int. Conf. Logistics Oper. Manage., 1–6.
29 Chiappetta (n 22), 95–104. Chiappetta also suggests that software systems that support critical infra-
structure operations are becoming more and more attractive targets for cyber-attacks by cyber-
criminals interested in wreaking havoc in cyber environments. See: A. Chiappetta, “Hybrid Ports: The
Role of IoT and Cyber Security in the next Decade” [2017] 2 Journal of Sustainable Development of
Transport and Logistics, 2, 47–56.
30 ENISA,“Cyber security aspects in the maritime sector” (ENISA, 9 December 2011) <https://www
.enisa.europa.eu/publications/cyber-security-aspects-in-the-maritime-sector-1/> accessed 27 May
2021.
31 Yau et al. (n 11) 83401; M. Atif, S. Latif, R. Ahmad, A.K. Kiani, J. Qadir, A. Baig, H. Ishibuchi,
W. Abbas, “Soft Computing Techniques for Dependable Cyber-physical Systems” [2019] 7 IEEE
the vessels’ automatic identification system signal to misreport their location,
accessing electronic chart display and information systems software to modify
maps all represent real concerns and may have disastrous consequences.32 These
risks can impact on potential liability for damages caused. The industry-specific
regulations related to this subject include provisions relating to safety and secu-
rity standards, as found in the International Ship and Port Facility Security
(ISPS) Code33 and in the EU Regulation (EC) No 725/2004 on enhancing
ship and port facility security.34 However, these regulations do not treat
cyber-attacks as possible threats or unlawful acts, and they are inadequate to
address potential AI-related risks to public security.
In terms of potential liability for damages resulting from the use of emerg-
ing digital technologies, the laws of EU Member States do not contain liability
rules specifically applicable to such damages.35 The matter of liability
is largely uncoordinated at the EU level, with the exception of product liability law under
Directive 85/374/EEC,36 some aspects of liability for infringing data protection
law,37 and liability for infringing competition law.38 For most modern techno-
European Parliament and of the Council of 11 July 2007 on the law applicable to non-contractual
obligations (Rome II), OJ L 199, 31 July 2007, 40).
39 A study group suggests that “while existing rules on liability offer solutions with regard to the risks
created by emerging digital technologies, the outcomes may not always seem appropriate, given the
failure to achieve: (a) a fair and efficient allocation of loss, in particular because it could not be attrib-
uted to those: 1) whose objectionable behaviour caused the damage; or 2) who benefitted from the
activity that caused the damage; or 3) who were in control of the risk that materialised; or 4) who
were cheapest cost avoiders or cheapest takers of insurance; (b) a coherent and appropriate response
of the legal system to threats to the interests of individuals, in particular because victims of harm
caused by the operation of emerging digital technologies receive less or no compensation compared
to victims in a functionally equivalent situation involving human conduct and conventional technol-
ogy; (c) effective access to justice, in particular because litigation for victims becomes unduly burden-
some or expensive”. See: Expert Group on Liability and New Technologies (n 38).
40 See for instance M. Martin-Casals, “Causation and Scope of Liability in the Internet of Things” in
Sebastian Lohsee, Reiner Schulze and Dirk Staudenmayer (eds.), Liability for Artificial Intelligence and
the Internet of Things (Hart 2019), 201–233; B.A. Koch, H. Koziol, Unification of Tort Law: Strict Liability
(Kluwer International Publishing 2002) 70–89; P. Giliker, Vicarious Liability in Tort: A Comparative
Perspective (CUP 2010), 228–251.
41 See for instance J. Bell, S. Boyron, S. Whittaker, Principles of French Law (2nd ed., OUP 2008) 360–417;
N. Foster, S. Sule, German Legal System and Laws (4th ed., OUP 2011), 461–475; B.S. Markesinis, H.
Unberath, The German Law of Torts: A Comparative Treatise (Hart 2002).
42 For example, § 7 of the German Road Traffic Act (Straßenverkehrsgesetz) provides for strict liability of
the keeper of the vehicle. This rule was deliberately left unchanged when the Road Traffic Act was
adapted to the emergence of automated vehicles. Similarly, French Decree n° 2018-211 of 28 March
2018 on experimentation with automated vehicles on public roads relies on the Loi Badinter of 5 July
1985 (n° 85-677). See also Gerhard Wagner, “Robot Liability”, in Lohsee, Schulze, Staudenmayer (n
42), 27–62.
43 If the vehicle is uninsured, it is the owner of the vehicle who is liable instead; ibid.
mated vehicles designed or adapted to be capable of driving themselves by the
Department for Transport. However, the AEVA 2018 leaves some substantive
issues open, such as the way in which the insurer can reclaim from manufac-
turers and whether there are any defences available, as well as in relation to
definitions.44 Furthermore, some suggest that apart from this legislation, the
harmful effects of the operation of emerging digital technologies in England
can be compensated under existing (“traditional”) laws on damages in contract
and in tort.45
44 This legislation was introduced in light of issues such as proactive law and regulatory disconnect,
both of which are concerns held in relation to technology law generally; M. Channon, “Automated and Electric
Vehicles Act 2018: An Evaluation in light of Proactive Law and Regulatory Disconnect” [2019] 10
European Journal of Law and Technology, 2, 1–36.
45 This applies to all fields of application of AI and other emerging digital technologies; ibid. See also C.
Amato, “Product Liability and Product Security: Present and Future”, in Lohsee, Schulze, Stauden-
mayer (n 42), 77–99.
46 See O.J. Erdelyi, J. Goldsmith, “Regulating Artificial Intelligence: Proposal for a Global Solution”
[2018] AIES 2018, 95–101; V. Wadhwa, “Laws and Ethics Can’t Keep Pace with Technology” [2014]
Massachusetts Institute of Technology: Technology Review, 15.
47 ENISA supra note 35, 14.
48 ibid.
49 According to ENISA, governments should also establish, identify, or assign the competent national
authority to deal with cyber security aspects as applicable to the maritime sector. In most
countries, these competencies are not clearly established. This identified national authority should
(as applicable) be the central contact point for national cyber security initiatives within the maritime
sector; ibid.
50 Article 6 of the Directive also provides that a product is defective when it does not provide the
safety which a person is entitled to expect, taking all circumstances into account, including the use
of causation and leaves most issues to national laws.51 This may lead to divergent
results in the regulation of liability concerning AI technologies.52
The Directive does include some defences, which can be found in art. 7. For
instance, the producer shall not be liable if it can be proved that “having regard
to the circumstances, it is probable that the defect which caused the damage did
not exist at the time when the product was put into circulation by him or that
this defect came into being afterwards”. Moreover, the producer will not be
liable if “the state of scientific and technical knowledge at the time when he put
the product into circulation was not such as to enable the existence of the defect
to be discovered”.53 These provisions are meaningful, but they do not entirely
reflect the technological problems mentioned in this chapter. In particular, the
current product liability regime operates on the assumption that the product
does not continue to change in an unpredictable, unforeseeable manner once it
has left the production line.54 As Infantino and Zervogianni suggest, one of the
most important factors for establishing liability in many European legal systems
is foreseeability, which is based on the idea that “the defendant should be held
liable only for the damage that a reasonable person of ordinary prudence put in
his or her position would have foreseen as a likely result of his conduct”.55 This
is not necessarily the case with AI technologies, since AI may generate solu-
tions that humans might not have considered.56 According to Martin-Casals,
“foreseeability presents a vexing challenge to any legal system wishing to solve
the problem of affording redress to victims of AI caused harm”.57 He then sug-
gests that perhaps a better approach would be to rely upon theories based on
the “scope of the risk created by the defendant’s activity”.58 This “harm within
the risk” theory is actually not alien to European practice, since it can be found
to which it could reasonably be expected that the product would be put; Article 6 Council Directive
85/374/EEC.
51 They vary significantly from country to country; Martin-Casals (n 42) 227.
52 “This disparity may be increased by the acceptance of differing procedural devices alleviating the
burden of proof of causation”; ibid. See also Herbert Zech, “Liability for Autonomous Systems: Tack-
ling Specific Risks of Modern IT”, in Lohsee, Schulze, Staudenmayer (n 42), 187–200.
53 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and
administrative provisions of the Member States concerning liability for defective products.
54 Martin-Casals (n 42), 221.
55 M. Infantino, E. Zervogianni, “The European Ways to Causation” in Marta Infantino and Eleni
Zervogianni (eds.), Causation in European Tort Law (CUP 2017), 84–128.
56 M.U. Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and
Strategies” [2016] 29 Harvard Journal of Law & Technology, 353, 363.
57 Martin-Casals (n 42), 223.
58 For example, the American Restatement (Third) builds the scope of liability factors around a risk test
and considers that an actor’s liability is limited to those harms that result from the risks that made the
actor’s conduct tortious; ibid, 223. Yet one has to note that the Restatement (Third) also provides
that “pre-existing physical or mental condition or other characteristics of the person shall be taken into
account when a harm is of greater magnitude or different type than might be reasonably expected”;
§ 31 Restatement (Third) of the Law of Torts: preexisting conditions and unforeseeable harm.
as the basis of some of the tests already applied in Europe.59 Moreover, Kötz
and Wagner suggest that the protective purpose of such a rule (which refers to
those damages that are the continuing effect of the risk that made the tortfeasor
liable), together with the general risks of life (which are attributable to the victim),
enables a better solution without requiring the speculation involved in the fore-
seeability test.60 AI is a source of risk that exceeds other risks that have their
source in human behaviour.61 Thus, even when harm is caused by
the unforeseeable behaviour of AI, its programmers should also be held liable.62
Law and economics literature offers some interesting responses to potential
regulatory intervention in the area of AI, which can also be applied to the mar-
itime industry. For example, Shavell suggests that if there is a party (a principal)
who has some control over the behaviour of another party (an agent/AI), then
the principal can be held vicariously liable for the losses caused by the agent.63
Hence, vicarious liability and a specific principal–agent relationship could be
introduced. The principal could be held vicariously liable for the damages
and/or losses caused by the agent (autonomous AI). This extension of liability
could lead indirectly to the reduction of risk and higher industry standards.
In particular, this could involve some of the following measures: a) compul-
sory insurance coverage (e.g. by the principal); b) regulations concerning safety
standards;64 c) subsidies for taking precautions and fines for causing harm; and
d) establishment of a worldwide publicly-privately-financed insurance fund.65
Additionally, some suggest introducing strict liability for AI manufacturers,
which could be supplemented with a requirement that an unexcused violation
of the statutory safety standards constitutes negligence per se. At the same time,
compliance with these standards should not preclude tort liability.66
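Shavell’s point about the allocation of liability between agents and principals can be sketched in simplified form. The notation below (care x, accident probability p(x), harm h, agent’s assets a) is an illustrative rendering of the judgment-proof model cited in this section, not the chapter’s own formulation:

```latex
% Socially optimal care minimises the sum of care costs and expected harm:
\[
  x^{*} \;=\; \arg\min_{x}\;\bigl[\,x + p(x)\,h\,\bigr]
  \qquad \text{(socially optimal care, with } p'(x) < 0\text{)}
\]
% A judgment-proof agent whose assets satisfy a < h internalises
% only min(a, h) = a of the harm, and therefore takes too little care:
\[
  \hat{x} \;=\; \arg\min_{x}\;\bigl[\,x + p(x)\,a\,\bigr]
  \;<\; x^{*} \qquad \text{since } a < h .
\]
% Holding a solvent principal vicariously liable re-exposes someone
% to the full expected loss p(x)h, restoring the incentive to induce x*.
```

On this reading, vicarious liability works because the principal, unlike the undercapitalised agent (here, the AI), cannot escape the full expected loss and so has reason to enforce the optimal level of precaution.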
59 Infantino and Zervogianni emphasize that the idea often substantiates some other tests, such as the
adequacy test in Austria, Bulgaria, Poland and Portugal, and the foreseeability test in England. See
Infantino and Zervogianni (n 57), 106.
60 H. Kötz, G. Wagner, Deliktsrecht (Franz Vahlen 2016), 84–92.
61 Martin-Casals (n 42), 224.
62 “In this case the harm occurs as a realization of a risk which is inherent to AI in accordance with
the dictate of the initial programming which enables the AI system to alter its behavior according to
subsequent experiences”; ibid, 224. See also Scherer (n 58), 366–369.
63 S. Shavell, “The Judgment Proof Problem” [1986] 6 International Review of Law and Economics, 45–58.
64 ibid. Shavell points out that such direct regulation of safety standards will help to shape principals’
and manufacturers’ incentives to reduce risk ex ante as a precondition for engaging in an activity.
65 See for instance G. Calabresi, A.D. Melamed, “Property Rules, Liability Rules, and Inalienability:
One View of the Cathedral” [1972] 85 Harvard Law Review, 1089–1128; A.M. Polinsky, “Strict Liabil-
ity vs. Negligence in a Market Setting” [1980] 70 American Economic Review, 363–367; M.L. Weitz-
man, “Prices vs. Quantities” [1974] 41 Review of Economic Studies, 477–491; D. Wittman, “Prior
Regulation versus Post Liability: The Choice between Input and Output Monitoring” [1977] 6
Journal of Legal Studies, 193–212.
66 P. Schmitz, “On the Joint Use of Liability and Safety Regulation” [2000] 20 International Review of
Law and Economics, 3, 371–382.
Conclusions
The task of handling an ever-increasing amount of cargo in a safe, efficient and
environmentally friendly way is one of the biggest challenges facing seaports
and the wider maritime industry. Smart AI-governed ports have the potential
to cut human error, enable active communication with the social environ-
ment, make operations faster, safer, and more efficient, and cut carbon emis-
sions for environmental and energy sustainability. Such smart seaports strive to
provide seamless supply chain management, integrating both the supply and
demand sides to optimize the allocation of relevant resources, services, and
supervision, as well as autonomous loading and unloading.
However, the widespread employment of AI may lead to a variety of
new potential risks concerning maritime and supply chain security. As ships
and seaports become ever “smarter” and need fewer and fewer
people to intervene in their activities, the question for lawmakers is how to
prevent, deter, and mitigate potential AI-generated public risks. For example,
one of the most pressing issues is how best to implement cyber security and
how to keep digital and automated ports safe. AI-operated smart
ports and ships are potential targets of attack, which can result in safety and
security risks. Consequently, erroneous data might result in uncontemplated
losses, public safety hazards, and liability issues that call for substantive regula-
tory treatment.
Generally speaking, current legal mechanisms adequately address respon-
sibility for non-autonomous, human-supervised AI. However, if AI evolves
and becomes unsupervised, then regulatory intervention is needed. Both the
ISPS Code and the International Maritime Organization’s Guide to Maritime
Security should be amended to include AI-specific risks. Similarly,
current Regulation (EC) No 725/2004 on enhancing ship and port facility
security could be amended to include AI-related risks. Among the proposed
amendments mentioned in this chapter are the inclusion of compulsory insur-
ance coverage, the adoption of safety standards, and the establishment
of a worldwide publicly-privately-financed insurance fund.
10 Artificial intelligence and
climate-energy policies
of the EU and Japan1
Maciej M. Sokołowski
Although originating in the 1970s, the above quotation has not lost its sig-
nificance in the 21st century. However, due to the tremendous progress that
has been made over the years towards the advancement of AI, the Internet of
Things (IoT), robotics, and deep learning, the former ground-breaking inno-
vations have become everyday items. The AI revolution affects almost every
part of our lives, as AI finds its uses in various branches and industries.
This process has not bypassed the energy sector. On the contrary, AI has had a
lasting effect on the energy market, gradually impacting the climate and energy
policy spheres. For instance, AI could support the modernisation of the elec-
tric grid, boost its stability, and prevent blackouts. This could be achieved by
controlling supply and demand at both local and national levels, helping energy
consumers to save energy and money by enabling the monitoring of their own
usage in real-time and adjusting the tariff accordingly, as well as facilitating the
interaction with the grid.3 Thus, AI may enhance energy efficiency and tackle
1 This chapter was prepared during a research stay in Japan under the Mobility Plus funding received
from the Ministry of Science and Higher Education (currently Ministry of Science and Education)
of Poland.
2 Commission, ‘The common policy in the field of science and technology’ (Communication) COM
(1977) 283 final.
3 STOA,‘The ethics of artificial intelligence: Issues and initiatives’ (Study) PE (2020) 634.452, 11.
DOI: 10.4324/9781003246503-12
energy poverty,4 strengthen the production of energy from renewable sources5
(also by individuals),6 and facilitate the reduction of emissions.7
In this light, this chapter looks at AI through the prism of energy policies
and climate actions of the EU and Japan. These two examples were chosen for
analysis because both the EU and Japan have extensive experience in research
and development, which, combined with their energy and climate
ambitions, including achieving net-zero greenhouse gas emissions by 2050,
can be used to highlight the needs of the energy transition in major world
economies. Against this background, the study covers the public approach to
these matters along with related regulatory aspects. It juxtaposes the primary
documents, both past and current, relevant to the analysis of AI in the climate
and energy fields of these leading economies.8 In this context, the EU plans
to open its new multiannual financial framework (2021–2027) for investments
into unsupervised machine learning, energy, and data efficiency, offering sup-
port to more energy-efficient technologies and infrastructure ‘making the AI
value chain greener’.9 In Japan, high-quality data has been used for many years
to increase productivity at monozukuri10 production sites.11 In the 21st century,
AI, IoT, big data, and robotics are called the ‘most important key[s] to lead-
ing future revolution in productivity’ in Japan, as well as the ‘fourth industrial
20 K. Kulovesi, ‘Climate Change in EU External Relations: Please Follow My Example (or I Might
Force You To)’ in E. Morgera (ed.) The External Environmental Policy of the European Union: EU and
International Law Perspectives (CUP 2012), 118–119.
21 Commission, ‘The Greenhouse Effect and the Community’ (Communication) COM (1988) 656
final.
22 ibid., 42–50.
23 L. Massai, European Climate and Clean Energy law and Policy (Earthscan 2012) 50.
24 Council Decision (EEC) 93/389 of 24 June 1993 for a monitoring mechanism of Community CO2
and other greenhouse gas emissions [1993] OJ L167/31.
25 Adopted May 9, 1992, entered into force March 21, 1994, 31 ILM 849.
26 The Protocol to the United Nations Framework Convention on Climate Change adopted on 1
December 1997, entered into force on 16 February 2005, 1155 UNTS 331.
27 In July 1997, the US Senate passed the Byrd-Hagel Resolution opposing the Kyoto Protocol, and
in 2001 the Protocol was rejected by the George W. Bush administration, see J. Hovi, D.F. Sprinz, G.
Bang,‘Why the United States Did Not Become a Party to the Kyoto Protocol: German, Norwegian,
and US Perspectives’ (2012) 18(1) Eur. J. Int. Relat., 129.
28 Massai (n 22), 54.
29 In detail: emissions were to be reduced by at least 20% in comparison to 1990 (or even 30%,
if an international agreement on climate had been adopted), the share of renewable
capacity had to grow by 20%, and energy efficiency was to improve by 20%; see M.M.
Sokołowski, European Law on Combined Heat and Power (Routledge 2020), 99.
ing new directives on renewable energy and energy efficiency, which set targets
of at least a 32% share of renewable energy sources in the EU’s gross
final consumption30 and a reduction of the EU’s energy consumption by at least 32.5%
through improvements in energy efficiency by 2030.31
Then, in December 2015, the first multilateral agreement on climate change
covering almost all global emissions – the Paris Agreement – was signed.32 Under
this international regime, the EU’s nationally determined contribution (NDC)
was established under the wider 2030 climate and energy framework (the pre-
viously mentioned 40%, 32%, 32.5%) for emissions, renewables, and energy
efficiency, respectively. At the end of 2019, driven by climate strikes and calls
for declaring a climate emergency, the European Commission announced an
initial roadmap of key policies and measures needed to achieve the European
Green Deal.33 The Deal, set as a growth strategy, brings the ambitious EU goal of ‘a
fair and prosperous society, with a modern, resource-efficient and competitive
economy where there are no net emissions of greenhouse gases in 2050 and
where economic growth is decoupled from resource use’.34 Furthermore, in
March 2020, a proposal for a regulation on the European Climate Law, aimed
at creating a basis for achieving EU climate neutrality, was presented.35
Finally, the framework of the European Green Deal is a result of a multi-
layer approach focused on rethinking policies for clean energy supply in differ-
ent fields. The Deal covers economy, industry, production and consumption,
large-scale infrastructure, transport, food and agriculture, construction, taxa-
tion, and social benefits.36 In all these areas, digital technologies are ‘a crucial
30 According to Eurostat, the share of energy from renewable sources in gross final energy consump-
tion in the European Union (EU) reached 18.0% in 2018, up from 17.5% in 2017 and more than
doubling the share in 2004 (8.5%), the first year for which data are available, Eurostat, ‘Share of
Renewable Energy in the EU up to 18.0%’ (23 January 2020) <https://ec.europa.eu/info/news/
share-renewable-energy-eu-180-2020-jan-23_en> accessed 27 May 2021.
31 With the withdrawal of the United Kingdom, the EU-27 energy consumption for 2020 and 2030
after adjustment accounts for primary energy consumption of no more than 1 312 million tonnes
of oil equivalent (Mtoe) in 2020 and 1 128 Mtoe in 2030 and a final energy consumption of no
more than 959 Mtoe in 2020 and 846 Mtoe in 2030; in 2019, primary energy consumption in the
EU-27 reached 1 352 Mtoe (3.0% above the efficiency target for 2020 and 19.9% away from the
2030 target) while final energy consumption reached 984 Mtoe (2.6% above the efficiency target for
2020 and 16.3% away from the 2030 target) - when compared with 2018, primary energy consump-
tion decreased by 2% at EU level and final energy consumption by 1%, Eurostat,‘Primary and Final
Energy Consumption Slowly Decreasing’ (28 January 2021) <https://ec.europa.eu/eurostat/web/
products-eurostat-news/-/ddn-20210128-1?redirect=%2Feurostat%2Fweb%2Fenergy%2Fpublica-
tions> accessed 27 May 2021.
32 Paris Agreement Under the United Nations Framework Convention on Climate Change adopted
on 12 December 2015, entered into force 4 November 2016.
33 Commission,‘The European Green Deal’ (Communication) COM (2019) 640 final.
34 ibid., 2.
35 Commission,‘European Climate Law’ (Proposal) COM (2020) 80 final.
36 ibid., 4.
enabler
for attaining the sustainability goals of the Green Deal’.37 Here, AI is listed as
one of those digital technologies (along with 5G, cloud and edge computing,
the IoT, etc.), which will be explored due to their potential for accelerating
and maximising the effects of policies to combat climate change and protect
the environment.38 Apart from technologies, an emphasis is put on accessible
and interoperable data, which, combined with digital infrastructure based on
supercomputers, cloud computing, and networks as well as artificial intelli-
gence solutions, provide a framework for evidence-based decisions and expand
the opportunity to recognise, analyse, understand, and challenge environmen-
tal issues.39
Nevertheless, just as emissions, renewables, and energy efficiency
were addressed by the EU well before the 21st century, the
issue of energy-related AI has been a topic of European policies for several dec-
ades. AI applications in energy already have a presence in Europe. At the turn
of the 1970s and 1980s, the Commission’s science and knowledge institution
– the Joint Research Centre (JRC) – initiated studies on expert systems and
man–machine communication, run as part of the nuclear safety research pro-
grammes used for monitoring fissile materials management.40 As anticipated,
this could have led to further improvements in robotics, in which early-1980s
Europe was ‘fairly well-placed as regards the micro-mechanics aspect,
but somewhat behind in terms of “intelligent” robots’.41 Studies have shown
that AI could be applied in the management of the nuclear industry, covering
decision making, diagnostic systems, and modelling of operator’s behaviour.42
The last two elements were further addressed in a light-water reactor (LWR)
safety programme providing research on human factors and man–machine
interaction.43
In the 1990s, the research priorities under the European framework followed
the possibilities of improving sustainability by applying innovative and clean
technologies to production systems.44 This concerned the ‘development of clean
37 ibid,. 9.
38 ibid.
39 ibid., 18.
40 Commission, ‘The future activities of the Joint Research Centre’ (Communication) COM (1983)
107 final, 22.
41 ‘This research work should make it possible to improve basic knowledge of operator behaviour from
the standpoint of mechanisms for information acquisition by taking into account factors, circum-
stances and the environment which influence them … Such theoretical knowledge will then be
applied to the definition of the models that can be used for probabilistic risk assessment and for the
improvement and rationalization of procedures; it will also make it possible to direct development
along the desired lines and to validate new sophisticated diagnostic aid systems’, ibid.
42 Proposal for a Council Decision adopting a research programme on reactor safety (1984–1987)
[1983] OJ C250/6.
43 ibid.
44 Commission, ‘The S and T content of the specific programmes implementing the 4th Framework
programme for community research and technological development (1994–1998) and the Frame-
production’ by, e.g. using AI to increase productivity, improve energy efficiency,
or reduce waste.45 Examples of the implemented actions show that these results
have been achieved.46 In the 2000s, in addition to research, various European
institutions gradually began to notice the benefits of implementing AI in energy-
related areas. This applies, inter alia, to situations where robots, advanced auto-
mation, and AI could improve workplace safety and cost-effectiveness.47
In the 2010s, this move accelerated, and different incentives appeared. They
included, e.g. EU plans to create digital industrial platforms, i.e. multi-sided
market gateways of interactions between several groups of economic actors.
They had several aims, including addressing the challenges of digital technolo-
gies such as the IoT, big data, autonomous cloud systems, AI, and 3D printing, as well
as investments in pilot and lighthouse initiatives, such as smart cities and smart
living environments.48 The adjective – ‘smart’ – has, over the years, become a
crucial element of modern energy systems. Apart from smart cities and smart
homes, one may list such notions as smart manufacturing, smart mobility, smart
farming, or smart energy, with smart meters and smart grids.49 The latter has
become the current paradigm of development of the energy sector, where
real-time smart grid data helps with the efficient operation of the system and
facilitates electricity services.50 Nevertheless, within all of these ‘smart fields’,
‘energy’ plays an important role. This refers to optimising energy
usage through various methods and technologies that lower it, adapting energy
consumption to dynamic tariffs, improving the management of self-
consumption, storage, and feed-in to the grids, or changing behaviour and
consumption patterns.51
work programme for community research and training for the European Atomic Energy Commu-
nity (1994–1998)’ (Working document) COM (1993) 459 final.
45 Council Decision 94/571/EC adopting a specific programme for research and technological devel-
opment, including demonstration, in the field of industrial and materials technologies (1994–1998)
[1994] L222/23.
46 For instance, AI used in the production of timber and pulp (to predict the quality of the end product
from raw material data and the progress of the production process) in one project resulted in a drop
in the electricity, water, and starch consumption of industrial users, as well as in the amount of waste
produced. See Commission, ‘Research and technological development activities of the European
Union’ (Annual report) COM (1998) 439 final, 32.
47 Opinion of the European Economic and Social Committee on ‘The Perspectives of European Coal
and Steel Research’ [2005] OJ C294/7.
48 The Commission underlines the convergence of these technologies (IoT, big data and cloud, robotics
and AI) as the driver of the digital change which responds ‘to major aspirations of today’s customers’,
which, apart from personalization or higher safety and comfort, also have an energy dimension (like
advanced sensors and big data used in industrial processes which can improve energy and resource
efficiency), see Commission, ‘Digitising European Industry: Reaping the full benefits of a Digital
Single Market’ (Communication) COM (2016) 180 final, 4, 10.
49 see Commission, ‘Advancing the Internet of Things in Europe’ (Working document) SWD (2016)
110 final, 26.
50 Commission (n 14), 25.
51 see Commission (n 45), 31–38.
Although this technological shift, of which AI is a part, can help the energy
transition and make a step (or even a jump) towards a zero-emission economy,
there are some concerns about the ecological aspects of this move. The grow-
ing use of AI results in the increased use of natural resources, as well as elevated
demand for energy and problems with waste disposal.52 While AI, cloud com-
puting, or IoT can accelerate and optimise the effect of policies on climate
change and environmental protection, their development and operation must
be climate-neutral and environmentally friendly.53 Therefore, ‘[a]t the same
time, Europe needs a digital sector that puts sustainability at its heart’.54
52 STOA (n 2), 3.
53 Commission (n 25), 9.
54 ibid., 9.
55 K.Takahashi,‘Sunshine Project in Japan – Solar Photovoltaic Program’ (1989) 26(1–2) Sol. Cells, 87.
56 Y. Hamakawa,‘Present Status of Solar Photovoltaic R&D Projects in Japan’ (1979) 86 Surf. Sci., 444.
57 In 1983 this production accounted for around 5,000 kW, while in 1986 it reached 12,500 kW,
being used for the needs of consumer appliances including calculators, watches, radios, and toys, see
Takahashi (n 52), 96.
58 S. Chowdhury, U. Sumita, A. Islam, I. Bedja, ‘Importance of Policy for Energy System Transforma-
tion: Diffusion of PV Technology in Japan and Germany’ (2014) 68 Energy Policy, 285, 288.
59 M. Tatsuta, ‘New Sunshine Project and New Trend of PV R&D Program in Japan’ (1996) 8(1–4)
Renew. Energy, 40.
60 Y. Fukasaku,‘Energy and Environment Policy Integration:The Case of Energy Conservation Policies
and Technologies in Japan’ (1995) 23(12) Energy Policy, 1063, 1067.
146 Regulating artificial intelligence in industry
of 1993.61 This was enabled by reorganisation, which included combining the
‘Sunshine Project’ with the ‘Moonlight Project’ and the ‘Global Environmental
Technology Program’ in 1993.62 With the objective of developing innovative
technology for the needs of sustainable growth, the new programme included
the development of low-cost photovoltaic (PV) technologies.63 Along with incentive programmes such as the 'Residential PV System Dissemination Program' and its predecessor, the 'Residential PV System Monitoring Program', Japan was able to build up a self-supporting market for PV.64
As a result of these incentives, during the 1980s and 1990s, many demon-
stration projects and basic research and development work were supported in
Japan, creating the necessary demand for solar cells and continued improve-
ment of conversion efficiency and PV economics.65 Thanks to these projects,
Japan has been a long-time world leader in solar energy;66 however, with the
reduction and then discontinuation of solar subsidies in 2005, the PV market
in Japan has stagnated.67 To change this, in 2009 Japan announced a national goal of increasing PV capacity by a factor of 20 by 2020 (relative to 2005 levels), and by a factor of 40 by 2040.68 In the early 2010s
this was enhanced by a new feed-in tariff for electricity production in solar
installations.69 A further discussion on the energy policy in Japan was driven by
the 2011 Fukushima nuclear accident.70
In June 2019, Japan approved its ‘Long-term Strategy under the Paris
Agreement’ (Strategy 2019).71 It brought a vision of ‘decarbonised society’
potentially to be achieved by Japan in the second half of the 21st century (or
earlier) by reducing greenhouse gases emissions by 80%.72 With seven key
73 The key areas for decarbonisation in Japan include: hydrogen; carbon dioxide capture and storage
(CCS); carbon dioxide capture and utilization (CCU); renewable energy; storage batteries; nuclear
energy; and, finally, other issues such as challenges and collaboration (both internal and external),
see ibid., 16.
74 ibid., 15–16.
75 ibid., 17.
76 F. Shimpo, ‘The Principal Japanese AI and Robot Law. Strategy and Research toward Establishing
Basic Principles’ (2018) 3 J. Law. Info. Syst., 44, 49.
77 ‘Long-Term Strategy’ (n 68), 18.
78 ibid., 54–56.
79 M.M. Sokołowski, ‘European Law on the Energy Communities: A Long Way to a Direct Legal
Framework’ (2018) 27(2) EurEnergyEnvironLawRev 60; M.M. Sokołowski, ‘Renewable Energy
Communities in the Law of the EU, Australia, and New Zealand’ (2019) 28(2) Eur. Energy. Environ.
Law Rev., 34.
80 'Long-Term Strategy' (n 68), 54–56.
81 A. Visvizi, M.D. Lytras, G. Mudri, 'Smart Villages: Relevance, Approaches, Policymaking Implications' in Anna Visvizi, Miltiadis D. Lytras, György Mudri (eds) Smart Villages in the EU and Beyond (Emerald 2019), 2.
82 ‘Long-Term Strategy’ (n 68), 51, 55, 58.
reduce nitrous oxide emissions by controlling the amount of fertilisers applied
as well as using AI to monitor greenhouse gas emissions.83 Moreover, Japan
will promote a broader usage of new energy-efficient products employing AI,
IoT, and big data, the market for which will be created before 2040.84
Nevertheless, Strategy 2019 is not the first reference to AI as a potential
solution for Japan and its energy sector. At the beginning of the 1980s, Japan
launched the Fifth Generation Computer System project, a bold plan to rev-
olutionise computing hardware and software. The project, besides its major
technical goals, introduced various social aims, including the improvement
of energy efficiency.85 Also, in the 1980s, applications of AI in the operation
and maintenance of nuclear power plants were promoted under the long-term
national programme for the development and utilisation of nuclear energy in
Japan.86 Moreover, in the early 1980s, the Japanese electricity sector embarked
on a large effort to employ AI tools for improved system operation, planning,
diagnosis,87 and control.88 The need for these solutions was driven by several factors, including shortages of qualified staff due to the retirement of post-World War II era workers; the desire to run the system with lower margins and increased efficiency; and the need to speed up fault clearing and service restoration, enhance customer relations, and provide utility engineers with more efficient design and planning tools.89 Additionally, at that time, various research platforms were
used to promote AI applications in Japan. The Society of Electrical Cooperative
83 ibid., 58.
84 ibid., 52.
85 C. Shunryu Garvey, '"AI for Social Good" and the First AI Arms Race: Lessons from Japan's Fifth
Generation Computer Systems (FGCS) Project’ (Proceedings of the 34th Annual Conference of
the Japanese Society for Artificial Intelligence, Kumamoto, June 2020) 1–2; R.P. van de Riet, ‘An
Overview and Appraisal of the Fifth Generation Computer System Project’ (1993) 9(2) Future Gener.
Comput. Syst., 83.
86 This included the creation of an autonomous AI-supported system which was used, inter alia, for the
diagnosis and corrective action of plant anomalies as well as the development of an inspection and
maintenance robot capable of making sophisticated decisions and operating in a highly radioactive
environment. See: M. Itoh, I. Tai, K. Monta, K. Sekimizu, ‘Artificial Intelligence Applications for
Operation and Maintenance: Japanese Industry Cases’, in M.C. Majumdar, D. Majumdar, J. I. Sackett
(eds) Artificial Intelligence and Other Innovative Computer Applications in the Nuclear Industry (Springer
1988), 41.
87 S. Rahman, ‘Artificial Intelligence in Electric Power Systems: A Survey of the Japanese Industry’
(1993) 8(3) IEEE Trans. Power Syst., 1211, 1212.
88 For instance, the hybrid systems using fuzzy control techniques built in Fukuchiyama and Kongosan
tunnels in 1988 have proved to be successful in precisely regulating ventilation and providing energy
savings of up to 30%. See S.J. Biondo, ‘CI Controls for Energy and Environment’ (Computational
Intelligence and Its Impact on Future High-Performance Engineering Systems: Proceedings of a
Workshop Sponsored by the National Aeronautics and Space Administration and the University of
Virginia, Hampton 27–28 June 1995), 46.
89 Rahman (n 86).
Research, R&D Group for AI in Power Utilities, and Institute of Electrical
Engineers in Japan (IEEJ) could be found among them.90
However, in the late 1980s, Japan started losing its advantages. Although
emerging technologies had become more dynamic and cross-disciplinary, the
Japanese administration had not been well suited to play a constructive role.91
The Fifth Generation Computer System project ended in 1992 without a clear
sign of success.92 Bureaucratic rivalry, poor coordination, and strong interven-
tionism undermined the advantageous position of Japanese companies in the
race for leadership in an information technology-driven business.93 However,
various types of research and work in the field of AI in the energy sector were
still carried out. Such examples include the experimental knowledge-based
system, Japanese Energy Supply Security Expert (JESSE),94 which was a model
of decision making in the Japanese energy policy.95 Outside these systems,
certain activities were conducted in developing fuzzy logic, neural networks,
and machine reasoning algorithms for commercial applications, also as part of
the discussion on the prospects of Japan’s economic performance in the post-
bubble era.96
Nevertheless, despite vast experience in technologies, electronics, and
robotics, some problems were encountered in the growth of the AI industry
in Japan.97 As highlighted in the ‘Artificial Intelligence Technology Strategy’
of 2017,
90 For example, in 1987, IEEJ launched a research committee on AI applications in the electricity sec-
tor with the intention of summarising the data on expert systems, see ibid.
91 According to Lehmann, the Japanese ‘ministries have “succeeded” in protecting domestic players
from foreign competition but have failed to promote global players’ by being wrongly painted ‘with
the same industrial-policy brush’. See: J-P. Lehmann,‘Japan and Pacific Asia: From Crisis to Drama’,
in Gerald Segal, David S. G. Goodman (eds) Towards Recovery in Pacific Asia (Routledge 2000) 98;
C.J. McMillan,‘Going Global: Japanese Science-Based Strategies in the 1990s’ (1991) 12(2) Manage
Decis. Econ., 171.
92 Y. Mikanagi, Japan's Trade Policy: Action or Reaction? (Routledge 1996), 138n11.
93 Lehmann (n 90).
94 D.A. Sylvan, A. Goel, B. Chandrasekaran, ‘Analyzing Political Decision Making from an Informa-
tion-Processing Perspective: JESSE’ (1991) 34(1) Am. J. Pol. Sci., 74.
95 Aimed at proposing policies based on a collection of stored plans, JESSE, by asking users about the
current energy situation in Japan, integrated multiple decision-makers and the cognitive, interpre-
tive processes forming a part of those decisions to create (from the decision-makers’ responses) a
knowledge base describing Japanese energy security. However, one should notice the inflexibility of
its integrated classification schemes, since the categories reflected the worldview of the coder, which
may not align with the worldviews of actual decision-makers. See: G. Duffy, S.A. Tucker, ‘Political
Science:Artificial Intelligence Applications’ (1995) 13(1) Soc. Sci. Comput. Rev., 1, 6.
96 J. Lee,‘Overview and Perspectives on Japanese Manufacturing Strategies and Production Practices in
Machinery Industry’ (1997) 37(10) Int. J. Mach.Tools. Manufact., 1449, 1450.
97 This concerns, inter alia, deep learning. Japan is significantly behind the US in this regard but is trying to catch up, see Jolley (n 13).
[w]hen looking at … papers related to AI … the number of Japanese
papers falls below the number of papers in the US and China, and it is clear
that there is insufficient investment in research and development by both
the public and private sectors.98
104 STOA, ‘Legal and ethical reflections concerning robotics’ (Policy Briefing) PE (2016)
563.501 <https://www.europarl.europa.eu/RegData/etudes/STUD/2016/563501/EPRS_
STU(2016)563501(ANN)_EN.pdf> accessed 27 May 2021.
105 ibid., 6–7.
106 Directive 2012/27/EU of the European Parliament and of the Council of 25 October 2012 on
energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and repealing Directives
2004/8/EC and 2006/32/EC [2012] OJ L315/1 (Energy Efficiency Directive).
107 Directive 2010/30/EU of the European Parliament and of the Council of 19 May 2010 on the
indication by labelling and standard product information of the consumption of energy and other
resources by energy-related products [2010] OJ L153/1 (Energy Labelling Directive).
108 B. Morgan, K. Yeung, An Introduction to Law and Regulation: Text and Materials (CUP 2007); Barry Barton, 'The Theoretical Context of Regulation', in Barry Barton, Alastair Lucas, Lila Barrera-Hernández, Anita Rønne (eds) Regulating Energy and Natural Resources (OUP 2006) 11; S.P. Croley, Regulation and Public Interests: The Possibility of Good Regulatory Government (PUP 2008), 81–101.
109 For instance ‘regulation’ means both a concrete legal act – like an EU Regulation or a regulation
adopted by a relevant authority – and a wide action conducted by a government and/or its entities
to steer, adjust, or influence a process or an issue, etc., see M.M. Sokołowski,‘Regulatory Dilemma:
Between Deregulation and Overregulation’ in J. Jagielski, D. Kijowski, M. Grzywacz (eds) Prawo
administracyjne wobec współczesnych wyzwań. Księga jubileuszowa dedykowana profesorowi Markowi Wier-
zbowskiemu [Administrative Law Facing Contemporary Challenges: Jubilee Anniversary Publica-
tion Dedicated to Professor Marek Wierzbowski] (CH Beck 2018) 592.
110 For instance Anglo-American vs Continental European.
111 Barton (n 107), 12.
112 Sokołowski (n 17), 87.
113 A. I. Ogus, Regulation: Legal Form and Economic Theory (OUP 1994), 1–2.
114 Sokołowski (n 17), 222–223.
Nevertheless, what is left for further analysis is the activity of an energy
regulator using AI. In reality, despite many examples of technological advances
in AI, public administration (including regulatory authorities) still provides
services in an old-fashioned way, even though the deployment of new tech-
nologies and solutions in public administration could be very beneficial.115 It
could increase both government effectiveness and efficiency, improve citizen
satisfaction, facilitate connectivity and interactivity, and reduce administrative
burdens.116 Such improvements in the electricity sector could help to facili-
tate greater changes much faster than those introduced by the administration
responsible for energy alone.
History shows that certain regulations (or a lack thereof) may cause issues. A good example is the deregulation of the Californian electricity sector in the early 2000s, which affected both the quality of services and prices.117 This may
also occur in other parts of the world, including Japan, if the energy market
reform fails. In order to avoid this, a solution could be the so-called model of
the ‘day-watchman’ regulator.118 The day-watchman bridges the gap between
total subordination and complete release.119 Under this model, the state nei-
ther actively participates in the market, nor is it completely absent (balance
between the market and the state, and between public and private interests).120
To provide this balance, the regulator is a vigilant, curious, and active observer
(but not an active participant). The regulator is the day-watchman who, acting
under the state’s power, may clarify the rules of the market, provide informa-
tion to the market players, and enforce the rules through the use of sanctions.121
However, in reality this model can prove to be inadequate because the
regulator is just a human, and the wrong person in the wrong position may
115 M. Wierzbowski, R. Galán Vioque, E. Gamero Casado, M. Grzywacz, M.M. Sokołowski, 'Challenges and Prospects of E-Governance in Poland and Spain' (2021) 17(1) Electron. Gov. Intl. J., 1.
116 W. Gomes de Sousa, E. Regina Pereira de Melo, P.H. De Souza Bermejo, R. Araújo Sousa Farias, A. Oliveira Gomes, 'How and Where is Artificial Intelligence in the Public Sector Going? A Literature Review and Research Agenda' (2019) 36(4) Gov. Inf. Q., 101392 <www.sciencedirect.com/science/article/pii/S0740624X18303113> accessed 27 May 2021.
117 The liberalisation (deregulation) of the electricity sector in California caused a sharp increase in final prices and led to a significant decline in the quality of services offered to energy users in the early 2000s. However, this miscalculation turned out to be an important lesson for the EU and its Member States, as well as for other countries, leading to the conclusion that the electricity sector should not be left to the market alone, see Sokołowski (n 17), 113.
118 M.M. Sokołowski, ‘Rozważania o istocie współczesnej regulacji’ [Considerations on the Essence
of Modern Regulation] in A. Walaszek-Pyzioł (ed) Regulacja: innowacja w sektorze energetycznym
[Regulation: Innovation in the Electricity Sector] (CH Beck 2013), 318, M.M. Sokołowski,‘Aks-
jologia europejskiego prawa energetycznego’ [Axiology of European Energy Law], in J. Zimmer-
man (ed), Aksjologia prawa administracyjnego [Axiology of administrative law] vol. 2 (Wolters Kluwer
Polska 2017), 642.
119 M.M. Sokołowski,‘Balancing Energy Regulation: A Day-Watchman Approach’ in R.t Grzeszczak
(ed) Economic Freedom and Market Regulation: In Search of Proper Balance (Nomos 2020), 174.
120 ibid., 175.
121 Sokołowski (n 17), 89.
lead to inadequate and ineffective regulation.122 Every good solution needs
floodgates, e.g. collegiate bodies in regulatory systems (commissions, boards,
etc.), which can correct the mistakes of the regulator.123 Nevertheless, as past experience shows, no one can entirely rule out the appointment of three, five, seven, or any other number of unsuitable candidates as commissioners of such a regulatory commission.124
When it comes to alternatives, one solution could be to apply AI to support the day-watchman model. New technologies and solutions can
make this model work by making it smart enough to regulate a smart electricity
sector. This, however, introduces other issues. If this alternative is feasible, how
can this model be developed within the legal system? How can AI enhance tra-
ditional regulatory tools in the electricity sector? Could AI be treated as a sepa-
rate tool or a point of regulatory validation? What kinds of risks could emerge?
Can these risks be mitigated, and if so what is required to mitigate them? These
questions need further research125 on energy regulators and the application of
AI, both in the electricity sector and in their regulatory duties.
Conclusions
In October 2020, the European Council decided to dedicate at least 20% of the funds under the Recovery and Resilience Facility to the digital transition. This, along with the amounts under the MFF, should help to advance
EU environmental and climate objectives by ‘unleashing the full potential of
digital technologies’.126 Moreover, the new European Research Area (ERA)
will improve Europe’s recovery and help its green and digital transformations
by fostering innovation-based competitiveness and promoting technological
sovereignty in key strategic areas like AI, data, microelectronics, quantum
computing, 5G, energy storage, renewable energy, hydrogen, and zero-emission and smart mobility.127
In this context, it must not be overlooked that issues related to energy con-
sumption are a permanent part of the discussion on the use of AI in the EU.133
Propelled by the climate action, steered by the growth of renewable energy
sources, reduction of emissions, and the enhancement of energy efficiency,
European energy policy moves towards climate neutrality, set to be reached
by 2050.134 In Japan, Prime Minister Suga's administration promised to cut greenhouse gas emissions in Japan to net zero by the same year.135
However, this transition will not happen by itself, and much effort is still
needed. For instance, by setting up a new agency in 2021, Japan plans to
enhance the digitalisation of government functions.136 AI, robotics, IoT, cloud
128 Changes were introduced, inter alia, by Directive (EU) 2018/2002 of the European Parliament
and of the Council of 11 December 2018 amending Directive 2012/27/EU on energy efficiency
[2018] OJ L328/210, see Sokołowski (n 28), 198–200.
129 Regulation (EU) 2017/1369 of the European Parliament and of the Council of 4 July 2017 setting
a framework for energy labelling and repealing Directive 2010/30/EU [2017] OJ L198/1.
130 Commission Regulation (EU) 2019/424 of 15 March 2019 laying down ecodesign requirements
for servers and data storage products pursuant to Directive 2009/125/EC of the European Parlia-
ment and of the Council and amending Commission Regulation (EU) No 617/2013 [2019] OJ
L74/46.
131 ‘[A] server which is designed and optimized to execute highly parallel applications, for higher
performance computing or deep learning artificial intelligence applications’, see ibid, annex I (11).
132 ibid., recital 4.
133 see STOA (n 2), 28–29.
134 European Council,‘European Council meeting (12 December 2019)’ (Conclusions) EUCO 29/19.
135 S. Sasaki,‘Japan PM Suga Vows Goal of Net Zero Emissions by 2050’ Kyodo News (Tokyo, 26 Octo-
ber 2020) <https://english.kyodonews.net/news/2020/10/7a5539cd0324-japan-pm-suga-vows
-goal-of-net-zero-emissions-by-2050.html> accessed 27 May 2021.
136 ibid.
computing, big data, and many other technological innovations have a great
potential for improving this process, making the transition smart, sustainable,
green, and efficient. Incentives similar to the Green Deal or post-COVID-19
pandemic actions are the right framework for not only unlocking the capac-
ity of AI for this transformation, but also for managing and regulating it in a
smart way.137 This also applies to the ongoing COVID-19 pandemic, where
AI is finding a variety of applications to manage changes in electricity demand
(increased residential – decreased industrial/commercial) due to shifts in job
patterns and lifestyles.138 They cover, inter alia, real-time monitoring of energy
production and consumption from various sources, including renewables, data
analytics, and the development of predictive methods for analysing volatile
energy use trends.139
In this light, AI should also be clearly recognised as a tool for achieving
climate neutrality, so its development must be low-emission, renewable-
energy-driven, and energy-efficient. This also concerns the recent proposal for a regulation laying down harmonised rules on AI (the Artificial Intelligence
Act) in the EU.140 Despite addressing AI as a key competitive advantage for supporting socially and environmentally beneficial outcomes in areas such as energy efficiency and climate change mitigation and adaptation, and briefly covering environmental sustainability, the proposed Artificial Intelligence Act does not include specific provisions on enhancing climate and energy issues through the
use of AI.141 If we really want to prepare for the realities of tomorrow, we must
act now and fast, since yesterday has become today in many areas related to AI.
137 M.M. Sokołowski,‘Regulation in the Covid-19 Pandemic and Post-Pandemic Times: Day-Watch-
man Tackling the Novel Coronavirus’ (2020) 15(2) TGPPP, 206 <DOI: 10.1108/TG-07-2020-
0142>.
138 R. Madurai Elavarasan, G.M. Shafiullah, K. Raju, V. Mudgal, M.T. Arif, T. Jamal, S. Subramanian, V.S. Sriraja Balaguru, K.S. Reddy, U. Subramaniam, 'Covid-19: Impact Analysis and Recommendations for Power Sector Operation' (2020) 279 Applied Energy, 115739.
139 Ch. Ghenai, M. Bettayeb, 'Data Analysis of the Electricity Generation Mix for Clean Energy Transition During Covid-19 Lockdowns' (2021) Energy Source Part A <DOI: 10.1080/15567036.2021.1884772>.
140 Commission ‘Regulation of the European Parliament and of the Council Laying Down Harmo-
nised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union
legislative acts’ (Proposal) COM (2021) 206 final.
141 See Article 63(3), ibid.
11 The regulation of militarised
artificial intelligence
Protecting civilians through legal reviews
of new weapons and precautions
Tsvetelina van Benthem
Introduction
According to the International Court of Justice, the legal principles of the law
of armed conflict (LOAC) apply ‘to all forms of warfare and to all kinds of
weapons, those of the past, those of the present and those of the future.’1 One
of the most difficult and complex discussions on the application of the inter-
national law to novel technologies is that on autonomous weapons systems
(AWS). For years, this discussion has been unfolding from the pages of aca-
demic journals and the halls of Palais des Nations in Geneva, where the Group
of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on
LAWS) regularly convenes.2 This Group has not only greatly contributed to
the sophistication of the debate on these weapons, but it has also facilitated a
dynamic exchange of State positions on the scope of international legal rules
and their application to emergent military capabilities with varying degrees of
autonomy. The Group has adopted, by consensus, eleven Guiding Principles,
the first of which affirms that ‘[i]nternational humanitarian law continues to
apply fully to all weapons systems, including the potential development and use
of lethal AWS.’3 While there is agreement that international humanitarian law4
1 Legality of the Threat or Use of Nuclear Weapons (ICJ Advisory Opinion 1996) at 86.
2 This Group was established within the framework of the Convention on Certain Conventional Weap-
ons and has been convening since 2017. Before the establishment of the Group, the Meeting of High
Contracting Parties, in 2013, agreed on a mandate on lethal AWS to be carried out by the Meet-
ing of Experts on Lethal Autonomous Weapons Systems. The Meeting of Experts held meetings in
2014, 2015 and 2016. More information on the sessions of the Group can be found on this website:
<https://unog.ch/80256EE600585943/(httpPages)/5535B644C2AE8F28C1258433002BBF14?Op
enDocument>.
3 Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous
Weapons Systems, Report of the 2019 session of the Group of Governmental Experts on Emerg-
ing Technologies in the Area of Lethal Autonomous Weapons Systems (25 September 2019) CCW/
GGE.1/2019/3, Annex IV, available at: <https://documents.unoda.org/wp-content/uploads/2020
/09/CCW_GGE.1_2019_3_E.pdf> accessed 5 May 2021 (2019 GGE on LAWS Report).
4 In this Chapter, ‘international humanitarian law’ and ‘the law of armed conflict’ will be used inter-
changeably.
DOI: 10.4324/9781003246503-13
The regulation of militarised artificial intelligence 157
applies to all weapons systems, the difficult question that remains is how exactly
it applies to those systems. This question of ‘how’ is related to the scope of the
rules of international humanitarian law, including the precise contours of the
duties imposed on parties to armed conflict.
The aim of this Chapter is to provide an overview of State positions on the
regulation of autonomous military capabilities relying on AI, and to outline
three obligations of particular importance in ensuring the safe deployment of
weapons with autonomy in decision-making: the obligation to conduct legal
reviews of new weapons, the obligation to take precautions in attack, and the
obligation to take precautions against the effects of attacks.
5 ICRC,‘How does Law Protect in War? – Means of Warfare’, available at: <https://casebook.icrc.org
/glossary/means-warfare> accessed 27 May 2021.
6 Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection
of Victims in International Armed Conflicts (adopted 8 June 1977, entered into force 7 December
1978) 1125 UNTS 3 (‘Additional Protocol I’),Art. 35(1).
7 ibid.,Art. 35(2).
8 ibid.,Art. 35(3).
9 ibid.,Art. 51(4)(b).
10 ibid.,Art. 51(4)(c).
11 Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel
Mines and on their Destruction, done in Oslo on 18 September 1997, entered into force 1 March
1999, 2056 UNTS 211.
12 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons (CCW)
which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (as amended
on 21 December 2001) 1342 UNTS 137, Protocol IV.
human fallibility, to protect one’s own forces, and to minimise costs.13
Autonomy has, in recent years, come to be seen as an attractive feature of
weapons, not least because it promises high levels of precision achieved from
a safe distance. It is therefore unsurprising that many states have made signifi-
cant advances in the research and development of such weapons and weapon
systems, and that these advances only signal the beginning of battlefield auto-
mation. In a recently updated report prepared for Members and Committees
of the United States Congress entitled ‘Artificial Intelligence and National
Security’, the US Congress confirmed that the US is already integrating AI
into combat through Project Maven, and that it does not consider that LOAC
imposes a prohibition on the development of lethal AWS.14 Russia has also
demonstrated its commitment to the harnessing of AI across sectors through
its 2019 AI Strategy.15 On the military front, the Kalashnikov company, a sub-
sidiary of the Russian state-owned Rostec corporation, acknowledged its cur-
rent research and development of AI-based weapons,16 and President Vladimir
Putin, in his 2018 address to the Federal Assembly, announced that Russia had
developed unmanned submersible vehicles that could carry either conventional
or nuclear warheads.17 China is also seeking to modernise its military by opera-
tionalising AI.18
But what exactly is the autonomy that these and other states are seeking
to achieve? Autonomy in weapons can manifest itself in a multitude of ways:
autonomy in the gathering of information, autonomy in identifying a target,
autonomy in deploying force towards a person or object. Of these types of
13 For an overview of the arguments in favour of deploying autonomous weapons systems, see G.P. Noone and D.C. Noone, 'The Debate over Autonomous Weapons Systems' [2015] 47 Case Western Reserve Journal of International Law, 25.
14 Congressional Research Service,‘Artificial Intelligence and National Security’ (updated 10 Novem-
ber 2020), available at: <https://fas.org/sgp/crs/natsec/R45178.pdf> accessed 27 May 2021, 1, 15.
The importance the US places on the development of AI for military purposes was also outlined
in the Summary of the ‘2018 Department of Defence Artificial Intelligence Strategy: Harnessing
AI to Advance our Security and Prosperity’, available at: <https://media.defense.gov/2019/Feb/12
/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF> accessed 27 May 2021.
15 'Указ Президента РФ от 10 октября 2019 г. № 490 "О развитии искусственного интеллекта в Российской Федерации"' (translation by author: Proclamation by the President of the Russian Federation of 10 October 2019, No 490 'On the development of artificial intelligence in the Russian Federation'), available at: <http://www.kremlin.ru/acts/bank/44731> accessed 27 May 2021.
16 ‘В России тестируют оружие с искусственным интеллектом’ (translation by author: ‘Weapons
equipped with AI being tested in Russia’), Tass, 26 February 2019, available at: <https://tass.ru/
armiya-i-opk/6157919> accessed 27 May 2021.
17 Presidential Address to the Federal Assembly, 1 March 2018, available at: <http://en.kremlin.ru/
events/president/news/56957> accessed 27 May 2021.
18 E.B. Kania,‘Chinese Military Innovation in Artificial Intelligence’,Testimony before the U.S.-China
Economic and Security Review Commission Hearing on Trade, Technology, and Military-Civil
Fusion (7 June 2019), available at: <https://s3.us-east-1.amazonaws.com/files.cnas.org/documents
/June-7-Hearing_Panel-1_Elsa-Kania_Chinese-Military-Innovation-in-Artificial-Intelligence.pdf
?mtime=20190617115242&focal=none> accessed 27 May 2021.
autonomy, the one that has been singled out as particularly problematic is the
autonomy that relies on sensor suites and computer algorithms to indepen-
dently identify a target and employ force towards that target without manual
human control and concrete human decision-making for the engagement of
the particular target.
AI is likely to play a key role in the development of such autonomy for the
direct engagement of persons and objects.19 This is because AI is seen as par-
ticularly well-suited to address the needs of modern battlefields: an algorithm
that can learn from its environment and adapt to changing circumstances even
in communications-degraded or -denied environments,20 and that can further
enable precision strikes in areas that may be inaccessible to ground troops. In
particular, advances in machine learning have made it possible to envisage an entity that is capable of learning, of improving with experience, by finding statistical comparisons in past data.21 Such learning-based military applications would allow states to bypass detailed hand programming, which can be time-consuming and research-heavy.22 Boothby gives the example of a machine
which ‘may observe the pattern of life in the area of interest and may then
adjust a target list fed into it in advance of the mission to take account of the
observations that it has made’.23
That the introduction of AI in decision-making on the direct engagement
of targets will necessitate changes in the procedures for training, legal reviews,
and deployment is uncontested.24 What is, however, contested is whether the
introduction of such AI capabilities will lead to a fundamental reappraisal of
19 V. Boulanin (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume I: Euro-
Atlantic Perspectives (SIPRI 2019), 14.
20 Congressional Research Service,‘Artificial Intelligence and National Security’ (updated 10 Novem-
ber 2020), available at: <https://fas.org/sgp/crs/natsec/R45178.pdf> accessed 27 May 2021, 15.
21 SIPRI 2019 (n 19), 16.
22 ibid., 15–16.
23 W.H. Boothby,‘Highly Automated and Autonomous Technologies’ in W.H. Boothby (ed.), New Tech-
nologies and the Law in War and Peace (CUP 2018), 150.
24 According to the 2020 Statement of Sweden to the GGE on LAWS, ‘Weapon systems need to be
sufficiently predictable and reliable allowing for the operators to be certain that the systems will
function in accordance with the intention of the operator. It is essential that any complex system
has rigorous handling regulations, including manuals, procedures of use and methods for training.
These must all form part of the essential toolbox required for human-machine interaction and for
lethal AWS to be used in compliance [with] International Law. Furthermore, legal advisors specialized
in international law can play a valuable advisory role in military decision-making related to the
interpretation and application of International Humanitarian Law’, Statement by Sweden at the
CCW GGE on Lethal Autonomous Weapons (LAWS), Geneva 21–25 September 2020, available at
<https://documents.unoda.org/wp-content/uploads/2020/09/200921-Anforande-LAWS-human
-control_.pdf> accessed 27 May 2021. France acknowledged the importance of technical certifica-
tions, training on the system and of the system itself, training on using AI-based command systems
(CCW GGE on LAWS, Operationalization of the 11 guiding principles at national level – Com-
ments by France (2020), available at: <https://documents.unoda.org/wp-content/uploads/2020/07
/20200610-France.pdf> accessed 27 May 2021).
160 Regulating artificial intelligence in industry
the ways in which weapons are conceptualised and regulated.25 To consider
this question, it is worth comparing these new military applications to previous
weapons. This study focuses on landmines and guided missiles as examples.
Landmines produce a blast upon exertion of a certain kinetic force.26 After
their placement in a given location, there is no need to make decisions over
specific impacts. In this sense, landmines have a degree of autonomy vis-à-vis
those using them to kill, injure, damage, or destroy enemy forces or mate-
riel. Precisely for that reason, anti-personnel landmines have been banned by
the Convention on the Prohibition of the Use, Stockpiling, Production and
Transfer of Anti-Personnel Mines and on Their Destruction.27 However, the
lack of concrete decision-making on specific impacts by armed forces does
not mean that landmines make their own 'decisions'. Certainty of the conditions for engagement is the key characteristic of this weapon: the exact condition that triggers the blast is precisely known in advance.
Guided missiles can be deployed across vast geographical distances, without physical proximity to the target.28 What characterises these weapons is, first, the fact that they are guided throughout their course and, second, the fact that they have no independence in the choice of targets. That said, geographical distance
and temporal lapses between launch and impact can have an effect on some
intervening factors, including sudden changes in weather conditions. As with
landmines, there is a certain understanding of the factors that may affect the
performance of these weapons, including their relation to the environment.
What is it, then, that makes us uneasy about the deployment of weapons
that can engage in the direct targeting of persons and objects without specific
targeting decisions made by humans? There are two factors that can explain
this unease: the first is the acceptance that these weapons can learn, thereby
acting in ways that are not specifically predicted prior to target engagement,
and the second is the potential for unexplainable AI decision-making. These
two factors are closely related.
25 H.-Y. Liu considers that ‘the prospect for weapons systems autonomy is clearly something genuinely
new, even if human soldiers can be considered as close analogues or historical precedents’ – see H.-Y.
Liu,‘From the Autonomy Framework towards Networks and Systems Approaches for ‘Autonomous’
Weapons Systems’ [2019] 10 Journal of International Humanitarian Legal Studies 89, 93.
26 See T. Krupiy,‘Of Souls, Spirits and Ghosts:Transposing the Application of the Rules of Targeting to
Lethal Autonomous Robots’ [2015] 2(16) Melbourne Journal of International Law, 1, 34–35.
27 The Anti-Personnel Mines Convention (n 11). The impact of these weapons on innocent civilians is highlighted in the first preambular paragraph of the Convention: 'Determined to put an end to the
suffering and casualties caused by anti-personnel mines, that kill or maim hundreds of people every
week, mostly innocent and defenceless civilians and especially children, obstruct economic develop-
ment and reconstruction, inhibit the repatriation of refugees and internally displaced persons, and
have other severe consequences for years after emplacement’.
28 S.J. Freedberg, ‘Army Says Long Range Missiles Will Help Air Force, Not Compete’ (Breaking
Defense, 16 July 2020), available at: <https://breakingdefense.com/2020/07/army-says-long-range
-missiles-will-help-air-force-not-compete/> accessed 27 May 2021.
With regards to the first point, all weapons carry some degree of unpredict-
ability in concrete engagements. For instance, rifles can misfire, and laser beams
can get deflected due to weather conditions. It is nonetheless predictable that
inclement weather conditions can affect laser beams, even if we cannot fully
predict rapid changes in weather conditions for particular attacks. With AI
weapons, there may be a point where it will become accepted that the weap-
on’s method of target engagement need not be fully predictable to the armed
personnel deploying it if it can nevertheless be predicted that it will stay within
certain acceptable pre-programmed parameters. Spatial and temporal distance
are not, as such, determinative of increased risk of unpredictability. One could
deploy a long-range missile with a full prediction of the circumstances that
can affect its trajectory. However, as with traditional weapons, the spatial and
temporal distance could increase the risk of factors interfering with the opera-
tion of the weapon.
With regards to the second point, the possibility of accepting a degree of
unpredictability in the method of operation carries additional risks of ‘unex-
plainability’. A lack of full understanding of the ways in which an AI weapon
can interact with its environment may inhibit the detection of warning signs
by those fielding it, as well as subsequent efforts to understand the reason for
particular unintended target engagements. The need to ensure ‘full under-
standing of the weapons’ capabilities and limitations’ has been affirmed by state
delegations.29
With more tasks and decision-making transferred to weapons, the distinc-
tion between means of warfare and human agents becomes harder to sustain.
This is because a weapon is traditionally understood as an ‘extracorporeal
instrument’,30 ‘a device, system, munition, implement, substance, object, or
piece of equipment’,31 a ‘mechanism to achieve purpose’.32 In essence, this
seems to imply no more than an instrument to channel human will. Even
though AI is impacted by the will of teams of programmers, legal advisors, and
commanders who set the weapon's purpose, tasks, and parameters, an emerging space of independent decision-making becomes noticeable. In this emerging
space, there can be decisions on optimal flight patterns to a particular target,
decisions sent to human operators on target identification, or even independ-
ent decisions to directly engage a particular person or object. According to
Liu, ‘the autonomy framework foregrounds the liminal status of AWS between
33 Liu (n 25) 97. For an early examination of this blurring of boundaries between weapons and com-
batants, see H.Y. Liu, ‘Categorization and legality of autonomous and remote weapons systems’
[2012] 94 IRRC 627, 636.
34 T. Chengeta, ‘Are Autonomous Weapon Systems the Subject of Article 36 of Additional Protocol I
to the Geneva Conventions?’ [2016] 23(1) UC Davis Journal of International Law and Policy, 65, 77.
35 R. Michalczak,‘Animals’ Race against the Machines’ in V.A.J. Kurki and T. Pietrzykowski (eds), Legal
Personhood:Animals,Artificial Intelligence and the Unborn (Springer 2017), 96.
36 This is evidenced by the very name of the Group of Governmental Experts on Lethal Autonomous
Weapons Systems.
37 2019 GGE on LAWS Report (n 3), para. 17(b). In the 2020 Statement of the United States to the
GGE on LAWS, we read that ‘anthropomorphizing emerging technologies in the area of LAWS can
lead to legal and technical misunderstandings that could be detrimental to the efficacy of potential
policy measures. From a technical perspective, anthropomorphizing emerging technologies in the
area of LAWS can lead to mis-estimating machine capabilities. From a legal perspective, anthropo-
morphizing emerging technologies in the area of LAWS can obscure the important point that IHL
imposes obligations on States, parties to a conflict, and individuals, rather than machines. “Smart”
weapons cannot violate IHL any more than “dumb” weapons can’, Agenda item 5(b) Characteriza-
tion of the systems under consideration in order to promote a common understanding on concepts
and characteristics relevant to the objectives and purposes of the Convention, UNCLASSIFIED//
FINAL// 09 22 2020, available at: <https://documents.unoda.org/wp-content/uploads/2020/09
/LAWS-GGE-TPs-AGenda-item-5b-Characteristics-09-22-2020-FINAL-FOR-TRANSLA-
TORS.pdf> accessed 27 May 2021.
38 Additional Protocol I (n 6), Art. 91.
course39 and bullets can ricochet,40 and that such accidents will not be captured
by the prohibitive rules of the LOAC. AWS, like all weapons, will be sus-
ceptible to malfunctions.41 To take the prohibition of directing attacks against
civilians,42 it is conceivable that a commander could aim to direct an attack
against a lawful military objective and yet, through malfunction, the act of
violence mediated through the weapon will hit a target different from the one
sought by the commander. The consequences of the attack will not, as such,
requalify the initial act of direction. In such cases, to determine whether there
had been a breach of International Humanitarian Law, it would be necessary to
examine the reasonableness of fielding this particular capability, all risks consid-
ered, in the circumstances of the attack.
Drawing from the above, certain degrees of unpredictability may indicate
that a weapon is prohibited per se, as it is incapable of being directed at specific
military objectives. At the same time, all weapons carry a certain degree of
unpredictability of malfunction, and what is unpredictable in particular deploy-
ments (for instance, the ricochet of a bullet) is, in fact, a predictable feature of
the weapon. Whether a weapon is inherently prohibited under the LOAC falls
within the questions asked by ‘weapons law’.43 However, the real difficulty in
determining whether a particular employment of AI-based weapons has been
conducted in breach of humanitarian law will lie in the area of uncertainty
between weapons that are indiscriminate and weapons that operate within the
regular and tolerated boundaries of expected malfunctions.
Unless it can be shown that there is something about weapons relying on
AI for targeting that places them, as a category, in a certain regulatory box
under International Humanitarian Law,44 the analysis would have to be more
fine-grained and based on the characteristics of specific models of AI weapons.
The broader category of aircraft bombs is not per se indiscriminate, yet within it we have modified air bombs that are. It is therefore necessary to assess each weapon system on its own merits. And even if a given
39 J. Hruska, ‘US Patriot Missile Defense System Malfunctions, Crashes in Saudi Arabia’s Capital’
(ExtremeTech, 28 March 2018) <https://www.extremetech.com/extreme/266523-us-patriot-mis-
sile-defense- system-malfunctions-crashes-saudi-arabias-capital> accessed 27 May 2021.
40 ICTY, Prosecutor v Stanislav Galić, IT-98-29, Trial Chamber Judgement, 5 December 2003, para. 251.
41 M.N. Schmitt,‘Autonomous Weapon Systems and International Humanitarian Law:A Reply to the
Critics’ [2013] Harvard National Security Journal Features, 7.
42 According to Art. 51(2) of Additional Protocol I, ‘The civilian population as such, as well as indi-
vidual civilians, shall not be the object of attack’ and, according to the ICRC Customary International
Humanitarian Law Study, Rule 1, ‘The parties to the conflict must at all times distinguish between
civilians and combatants. Attacks may only be directed against combatants. Attacks must not be
directed against civilians’.
43 B.T.Thomas,‘Autonomous Weapon Systems:The Anatomy of Autonomy and the Legality of Lethal-
ity’ [2015] 37(1) Houston Journal of International Law, 235, 247.The author provides a good overview
of the differences between ‘weapons law’ and ‘targeting law’.
44 This would be the case if they are found to be, as a class of weapons, inherently indiscriminate or
otherwise failing the test for means of warfare permitted under international humanitarian law.
AWS is not inherently indiscriminate, it may well be that such a weapon can
only be fielded in non-urban areas. AI-based weapons will not emerge as a
monolithic group. In the future, the careful consideration of different types of
autonomous capabilities and their intended use will become a crucial factor for
ensuring meaningful constraints in deployment, including in the further clari-
fication of procedural guarantees for the mitigation of risk.
45 Meeting of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the
Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to
Have Indiscriminate Effects, Final Report (13 December 2019) CCW/MSP/2019/9,Annex III.
46 See, for example, para. 24(a) of the 2019 GGE on LAWS Report.
47 2019 GGE on LAWS Report (n 3), para. 22(a), (b).
48 For instance, Merel Ekelhof has argued that a thicker understanding of distributed control should
find its way into the discussions on AWS. According to Ekelhof, 'Whereas meaningful human control theoretically assumes the existence of an ultimate decision (e.g., the trigger pull moment), military practice shows that discretion is typically distributed' – see M. Ekelhof, 'Autonomous weapons: Operationalizing meaningful human control' (Humanitarian Law and Policy, 15 August 2018), available
at: <https://blogs.icrc.org/law-and-policy/2018/08/15/autonomous-weapons-operationalizing
-meaningful-human-control/> accessed 27 May 2021.
49 2019 GGE on LAWS Report (n 3), para. 22(c). For a detailed overview of the positions of States on
the human element expressed at the GGE on LAWS, see E.T. Jensen, ‘The (Erroneous) Require-
ment for Human Judgment (and Error) in the Law of Armed Conflict’ [2020] 96 International Law
Studies, 26.
compliant targeting without direct human input on specific targets. Some delega-
tions have considered precautionary measures – such as testing, training, and
establishing procedures – to be an adequate way of ensuring that the weapon operates consistently with the rules of IHL; others have raised serious doubts
about the possibility of operating in complex operational environments with-
out ‘human judgment and context-based assessments’.50
At a very high level of generality, it can be said that all parties at the GGE
on LAWS agree that AWS stand in a special relation to battlefield risks. But
while some delegations see these weapons as a way to avoid traditional risks,
others consider the weapons themselves a significant and unprecedented risk
to civilians. For instance, in the 2020 report ‘Commonalities in National
Commentaries on Guiding Principles’, the then Chair of the Group of
Governmental Experts, His Excellency Mr Jānis Kārkliņš, noted that ‘several
commentaries argued that emerging technologies in the area of lethal AWS
could support the implementation of International Humanitarian Law due to,
inter alia, the reduction of human-related errors and risks, improved preci-
sion and accuracy, and the ability to incorporate self-destruct, self-deactivation,
or self-neutralisation mechanisms. Others argued that this outcome was not
assured and should not be assumed’.51
Moreover, the International Committee of the Red Cross (ICRC) and
the Stockholm International Peace Research Institute (SIPRI) published an
important report on the concept of meaningful human control,52 which was
subsequently cited by state delegations.53 It explored the degree of human control that needs to be ensured with a view to mitigating the risks posed by AWS.
To mitigate their inherent unpredictability, the report considered the impor-
tance of controls on the weapon system’s parameters of use (for example, the
existence of fail-safe mechanisms), controls on the environment (for example,
temporal and spatial constraints in deployment), and controls through human–
machine interaction (for example, human supervision and capacity to inter-
vene in the operation of the weapon).
Concerns over unacceptable levels of risk lie at the core of these discus-
sions. This risk, manifested through the introduction of yet another element
of battlefield uncertainty and unpredictability, may generate new dangers for
protected persons and objects. Some delegations at the GGE consider that
international law, and in particular LOAC, provides significant constraints on
the development and deployment of AWS. These views have been fleshed
out in a range of statements and working papers submitted by participants in
Protective obligations
Under Art. 51(1) of Additional Protocol I, ‘[t]he civilian population and indi-
vidual civilians shall enjoy general protection against dangers arising from mili-
tary operations’55 and under Art. 57(1), ‘[i]n the conduct of military operations,
constant care shall be taken to spare the civilian population, civilians and civil-
ian objects’.56 Civilians are exposed to a range of threats in times of conflict:
being intentionally terrorised57 or used as human shields,58 or becoming collat-
eral damage in the context of attacks against lawful military objectives,59 among
54 Recent examples are the Working paper by the Bolivarian Republic of Venezuela on behalf of the Non-Aligned Movement (NAM) and Other States Parties to the Convention on Certain Conventional Weapons (CCW), submitted in 2020, available at: <https://documents.unoda.org/wp-content/uploads
/2020/09/G2022909.DOC4.pdf>; Statement by Costa Rica submitted on 22 September 2020,
available at: <https://documents.unoda.org/wp-content/uploads/2020/09/Intervencion-22-seti.
-2020-ESP-PDF.pdf> (in Spanish) accessed 27 May 2021.
55 Additional Protocol I, Art. 51(1). According to the 1987 Commentary to Additional Protocol I, this first paragraph of Art. 51 acknowledges 'that armed conflicts entail dangers for the civilian population, but these should be reduced to a minimum'. This reduction of dangers is considered to be the aim of the remaining paragraphs of that article.
56 Additional Protocol I, Art. 57(1). This rule is also reflected in the ICRC Study on Customary International Humanitarian Law – see Rule 15. Principle of Precautions in Attack, available at: <https://ihl
-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule15> accessed 27 May 2021.
57 ICTY, Prosecutor v. Stanislav Galić, IT-98-29-T, Judgement and Opinion, 5 December 2003, para. 769.
At para. 597, the Majority affirmed that ‘a campaign of sniping and shelling was conducted against
the civilian population of ABiH-held areas of Sarajevo with the primary purpose of spreading ter-
ror’.
58 According to Rule 97 of the ICRC Study on Customary International Humanitarian Law, 'the use of human shields is prohibited', available at: <https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule97>. A particular instance of condemnation of the practice of using civilians as human shields is found in the Fifth periodic report on the situation of human rights in the territory of the former Yugoslavia submitted by Mr. Tadeusz Mazowiecki, Special Rapporteur of the Commission on Human Rights, pursuant to paragraph 32 of Commission resolution 1993/7 of 23 February 1993, para. 84: 'Other Muslims, Croats and Roma (gypsies), have been arrested to provide a labour force in
conflict zones, or to act as “human shields”’.
59 The protections of international humanitarian law do not guarantee that civilians will not be
harmed. According to Rule 14 of the ICRC Study on Customary International Humanitarian
Law, ‘Launching an attack which may be expected to cause incidental loss of civilian life, injury to
civilians, damage to civilian objects, or a combination thereof, which would be excessive in rela-
tion to the concrete and direct military advantage anticipated, is prohibited’. A contrario, when the
expected loss, injury and damage of an attack launched against a lawful military objective is not
excessive, the attack will not be deemed a violation of the law under the proportionality principle.
The difficulties of assessing the rule on proportionality were highlighted in ICTY, Final Report to
the Prosecutor by the Committee Established to Review the NATO Bombing Campaign Against
others. Not only are the dangers faced by civilians wide-ranging, but they are
also continuously evolving. The urbanisation of conflict is an obvious example
of a changing landscape of dangers, and so are technological advancements in
the area of weaponry. In this context, certain obligations of a procedural char-
acter may gain particular significance, as they seek to guarantee the elimination
or at least the minimisation of battlefield risk. Recognising the importance of
such measures, states participating in the GGE on LAWS agreed by consensus
to Guiding Principle (g), which provides that ‘[r]isk assessments and mitigation
measures should be part of the design, development, testing and deployment
cycle of emerging technologies in any weapons systems’.60 In the Commonalities
Report mentioned above, His Excellency Mr Jānis Kārkliņš noted that:
It was suggested that the GGE LAWS should catalogue potential risks and
mitigation measures that should be considered in the design, development,
testing and deployment of weapon systems based on emerging technolo-
gies in the area of lethal AWS.61
the Federal Republic of Yugoslavia: ‘The main problem with the principle of proportionality is not
whether or not it exists but what it means and how it is to be applied. It is relatively simple to state
that there must be an acceptable relation between the legitimate destructive effect and undesirable
collateral effects. For example, bombing a refugee camp is obviously prohibited if its only military
significance is that people in the camp are knitting socks for soldiers. Conversely, an air strike on
an ammunition dump should not be prohibited merely because a farmer is ploughing a field in the
area. Unfortunately, most applications of the principle of proportionality are not quite so clear cut’.
60 2019 Report GGE on LAWS, Guiding Principle (g).
61 GGE on LAWS, Commonalities in national commentaries on guiding principles (2020), para. 16.
The ICRC, in its 2006 ‘Guide to the Legal Review of New Weapons, Means
and Methods of Warfare’, highlighted that this obligation flows ‘logically from
the truism that states are prohibited from using illegal weapons, means and
methods of warfare or from using weapons, means and methods of warfare in
an illegal manner’, and thus it applies irrespective of whether a state is a party
to Additional Protocol I.62 The GGE on LAWS provides a reference to the wording of Art. 36 in Guiding Principle (e).63
Additionally, the same report acknowledges that states are free to ‘indepen-
dently determine the means to conduct legal reviews’.64 The benefit of shar-
ing best practices was also acknowledged, not just in the report,65 but also
in the submissions of individual delegations.66 As a step in the direction of
standardising state practices in the area of legal reviews, Argentina submitted
a ‘Questionnaire on the Legal Review Mechanisms of New Weapons, Means
and Methods of Warfare’.67 On substantive matters, the questionnaire raises
important questions on whether states take into account examinations already
done by other countries, whether they conduct their own tests, and whether
they have ever rejected the acquisition of a new weapon.68
According to the Australian National Commentary submitted to the GGE
on LAWS in 2020, ‘strengthening compliance with existing IHL, including
through Art. 36 reviews, is the most effective way to manage new weapons
systems, including the potential development of LAWS'.69 Due to the distance between direct human input on concrete targeting decisions and the final act of targeting, legal reviews need to check whether the weapon is capable of
operating within the parameters of the LOAC.70 When it comes to the mate-
rial scope of legal reviews under Art. 36, the ICRC considers that it is broad
62 ICRC,‘A Guide to the Legal Review of New Weapons, Means and Methods of Warfare: Measures
to Implement Article 36 of Additional Protocol I of 1977’ (Geneva, 2006), p. 4.
63 2019 Report GGE on LAWS, Guiding Principle (e).
64 ibid., para. 17(i).
65 ibid.
66 Working Paper Submitted by Argentina to the GGE on LAWS, 'Strengthening of the review mechanisms of a new weapon, means or methods of warfare' (4 April 2018), CCW/GGE.1/2018/WP.2.
67 Argentina,‘Questionnaire on the Legal Review Mechanisms of New Weapons, Means and Methods
of Warfare’ submitted to the GGE on LAWS (29 March 2019), CCW/GGE.1/2019/WP.6.
68 ibid., p. 4.
69 CCW GGE on LAWS, National commentaries on the 11 guiding principles of the GGE on
LAWS – Australia, available at: <https://documents.unoda.org/wp-content/uploads/2020/08
/20200820-Australia.pdf> accessed 27 May 2021.
70 Boothby (n 23) 147.
and that it encompasses ‘an existing weapon that is modified in a way that
alters its function, or a weapon that has already passed a legal review but that
is subsequently modified’.71 For AWS relying on algorithmic decision-making,
it can be expected that there will be updates to their parameters. An important boundary will be that of 'modification': which updates qualify as a 'modification' of the weapon, and which fall short of that threshold.
Legal reviews are, and should be, not only an initial but a continuous check on the development and fielding of military capabilities. In addition to providing the necessary framework for the responsible introduction of
new weapons, this process is a source of information about the system, with all
its attendant risks and limitations. Being information-generating in nature, it
creates a pool of knowledge that can then feed into the understanding of those fielding the weapon on the battlefield. In this way, the obligation to undertake reviews
impacts our assessment under the prohibitive rules of LOAC, as the informa-
tion available to the decision-maker is crucial in judging the lawfulness of their
conduct.72
Precautions in attack
An important aspect of the protection of civilians is the obligation to take
precautions in attack.73 An aspect of this obligation applies to all military opera-
tions74 and requires the attacking party to take constant care to spare the civilian
population and civilian objects. Additionally, we have a list of specific precau-
tionary obligations in attack, including: (a) to verify that the target is a military
objective and that the attack is compliant with the principle of proportionality;
(b) to take precautions in choosing the means and methods of attack in view of
avoiding, or minimising, civilian harm; (c) to cancel or suspend an attack if it
becomes apparent that the target is a protected one or that the incidental harm will be excessive in relation to the military advantage anticipated; (d) to give effective
advance warning; and (e) in cases of choice between several military objectives
71 ICRC Guide Legal Reviews (n 62), citing Australian Instruction, section 2 and subsection 3(b) and
footnote 3 thereof; Belgian General Order, subsection 5(i) and (j); Norwegian Directive, subsec-
tion 2.3 in fine; US Air Force Instruction, subsections 1.1.1, 1.1.2, 1.1.3; and US Army Regulation,
subsection 6(a)(3).
72 See, for instance, the declarations made by states under the heading of ‘Rule for decision-making by
commanders’ in J. Gaudreau, ‘The reservations to the Protocols additional to the Geneva Conven-
tions for the protection of war victims’ [2003] 849 IRRC, p. 14.
73 Additional Protocol I, Art. 57, also considered a rule of customary international law – ICRC Study
on Customary International Humanitarian Law, Rule 15.
74 As explained by Quéguiner, the obligation of ‘constant care’ has a broader scope than the specific
obligations of Art. 57. This is because the concept of 'military operations' includes troop movements,
manoeuvres and other deployment or retreat activities carried out by armed forces before actual
combat, and is thus broader than the term ‘attack’ – see J-F. Quéguiner, ‘Precautions under the law
governing the conduct of hostilities’ [2006] 88 International Review of the Red Cross, 797.
yielding similar military advantage, the obligation to choose the one expected
to cause the least civilian harm.75
By taking precautions, attacking forces create the conditions for observing
the principle of distinction between civilians and civilian objects, on the one
hand, and combatants and military objectives, on the other.76 In this sense, they
are an important ingredient in the protective architecture of LOAC. Despite
the linkages between precautionary obligations and other targeting rules, the
regime on precautions has autonomous significance, as it sets its own con-
straints on the conduct of attacks.77
A potential for tension between the obligations of the parties to the conflict
and the deployment of AI-based weapons for direct targeting arises through
the possibility of the independent decision-making of the weapon on the basis
of its own assessment of the changing circumstances. For instance, if an attack is
launched against a building deemed to be used for the storage of ammunition
by the opposing side, new information on the use of the building could arrive,
indicating its civilian rather than military use. The attacking party may have the
capacity to cancel the attack or, depending on the means employed, to redi-
rect the weapon post-deployment.78 With AI-based weapons, the assessment
of new information and any respective decision may ultimately be left to the
algorithm. Any possibility to intervene in the operation of the weapon could
also be limited. Therefore, some states, including Austria and Costa Rica, have
emphasised the need for a human override over the autonomous system.79 In
the area of precautions, the United States has been consistent in affirming that
AWS can provide a whole new range of measures to protect civilians. For
example, according to the 2019 Working Paper submitted by the United States
to the GGE, ‘[e]ven if the risk of civilian casualties was not expected to be
excessive in relation to the military advantage expected to be gained, it would
be important to take further feasible precautions. For example, warnings, mon-
itoring, and self-destruct, self-deactivation, or self-neutralisation mechanisms
are all precautions that have been usefully employed to reduce the risk of
80 CCW GGE on LAWS, Implementing International Humanitarian Law in the Use of Autonomy
in Weapon Systems, submitted by the United States of America, CCW/GGE.1/2019/WP.5, 28
March 2019, available at: <https://undocs.org/en/CCW/GGE.1/2019/WP.5> accessed 27 May
2021, 8(c).
81 ibid. In the US Department of Defence Law of War Manual, there is a section on requirements
to take precautions regarding specific weapons (5.2.3.4), but the weapons listed do not include
weapons relying on artificial intelligence. The line of argumentation that autonomy in weapons can
contribute to the implementation of international humanitarian law is also present in the Law of
War Manual, where we read that ‘in many cases, the use of autonomy could enhance the way law of
war principles are implemented in military operations. For example, some munitions have homing
functions that enable the user to strike military objectives with greater discrimination and less risk
of incidental harm. As another example, some munitions have mechanisms to self-deactivate or to
self-destruct, which helps reduce the risk they may pose generally to the civilian population or after
the munitions have served their military purpose’ (6.5.9.2).
82 Additional Protocol I, Art. 57(2)(a)(i). Emphasis added.
83 Additional Protocol I, Art. 57(2)(a)(ii). Emphasis added.
84 Théo Boutruche,‘Expert Opinion on the Meaning and Scope of Feasible Precautions under Inter-
national Humanitarian Law and Related Assessment of the Conduct of the Parties to the Gaza Con-
flict in the Context of the Operation “Protective Edge”’ (2015), available at: <https://www.diakonia.se/globalassets/blocks-ihl-site/ihl-file-list/ihl--expert-opionions/precautions-under-international-humanitarian-law-of-the-operation-protective-edge.pdf> accessed 27 May 2021, 28.
forces may deploy weapons that retain an element of supervision as a guarantee
against unintended engagements. While this may go some way towards alleviating
fears of unchecked autonomous targeting, one must not lose sight
of the difficulties that remain: the need to ensure meaningful understanding
between the supervisor and the system.85 Of particular importance is the need
to ensure that those tasked with supervision have the ‘necessary knowledge of
risks’.86 The need for understanding within the human–machine dynamic was
made explicit by the United Kingdom in its 2020 Working Paper submitted
to the GGE:
85 N. Sharkey, ‘Guidelines for the human control of weapons systems’ (2018) International Commit-
tee for Robot Arms Control, available at: <https://www.icrac.net/wp-content/uploads/2018/04/
Sharkey_Guideline-for-the-human-control-of-weapons-systems_ICRAC-WP3_GGE-April-2018
.pdf> accessed 27 May 2021, 2-3.
86 M. Sassòli, A. Quintin, ‘Active and Passive Precautions in Air and Missile Warfare’ [2014] 44 Israel
Yearbook on Human Rights, 69, 111.
87 CCW GGE on LAWS, United Kingdom Expert paper: The human role in autonomous warfare,
CCW/GGE.1/2020/WP.6, 18 November 2020, para. 10.
88 ibid., 87.
The regulation of militarised artificial intelligence 173
endeavour to remove the civilian population, individual civilians, and civilian
objects under their control from the vicinity of military objectives; (2) avoid
locating military objectives within or near densely populated areas; and (3) take
the other necessary precautions to protect the civilian population, individual
civilians, and civilian objects under their control against the dangers result-
ing from military operations. All three obligations are subject to the standard
of ‘to the maximum extent feasible’ found in the chapeau of the provision.89
According to the ICRC, the obligation to take precautions against the effects
of attacks is part of customary International Humanitarian Law.90 This obliga-
tion is seen as complementary to the one binding on attacking forces, and the
parallel operation of these two obligations provides the foundation for imple-
menting the principle of distinction.91
However, some of these obligations could place a very heavy burden on
parties to the conflict, which explains the language conditioned through a fea-
sibility standard. For instance, the obligation to avoid locating military objec-
tives within or near densely populated areas could be particularly difficult for
densely populated states or states with particular geographic features.92 The
term ‘feasibility’ is undefined in Additional Protocol I and has to be assessed
depending on the individual circumstances of each case. It is nevertheless clear
that feasibility is not to be appraised in hindsight, and that the decisions of the
defending forces are to be judged by reference to the circumstances and infor-
mation at the time.93
Additional difficulties arise when one considers the interplay between the
obligations of attacking forces and the obligations of defenders. These obliga-
tions have an independent existence, and the violation of one party does not
relieve the other from its own obligations. It may nevertheless be the case that
the way in which one party carries out its precautionary obligations will affect
the effectiveness of those taken by the other party, or the strategy that it should
adopt. Such questions of interplay are likely to gain particular prominence in
the context of technological developments. In his article ‘Precautions Against
the Effects of Attacks in Urban Areas’, Eric Talbot Jensen persuasively argues
that new technologies ‘will allow the defender greater situational awareness
It can also be used to warn civilians through ‘visual signals, such as flashing
lights or brightly coloured spray paint of some pre-designated colour’.96 While
it may be true that technological advancements could assist in minimising the
risks faced by civilians, a new risk could also emerge through the interaction
of measures taken by attacking and defending forces. As an example, let us
consider an AI-based missile trained on data from the neighbourhood towards
which it is being deployed, including the habits of resident civilians and the
previously employed tactics. If the defender decides to employ signalling or
marking measures that are new, i.e. not part of the information the algorithm
was trained on, then these measures could increase the risk to civilians rather
than decrease it. That is because there would be a new environmental factor to
which the weapon may react in unintended ways. This example highlights the need for
at least a basic level of understanding of how different measures may affect the
functioning of AI systems. Otherwise, certain precautionary measures could
become not only ineffective but detrimental to the safety of civilians.
Conclusions
Risk is inherent in armed conflict. Its effect on civilians can be felt in many
ways, and their vulnerability requires robust legal protection. Under LOAC,
civilians are protected through a complex web of interrelated obligations. With
the introduction of AI, these well-established rules have come under careful
scrutiny. Much more work is needed to move past the stage of general affirmation
that international law applies to AWS, and into the concrete ways in which
it applies. The discussions at the GGE on LAWS have been instrumental in
narrowing down the most challenging questions and in laying bare the disa-
greements between the interpretations that states give to specific rules of inter-
national law. While there is common ground on a general level, that ground
gets pixelated when one enters the concrete questions of determining the per-
sons who should exercise control over these weapons, the stage of that control,
Introduction
As discussed in Chapter 11, the debate over AWS has revealed many defects in
the regulations of the LOAC and state responsibility. AWS may also affect the
balance between considerations of humanity and military necessity. Moreover,
the mens rea of LOAC violations is constructed on the basis of the intended behaviour
of the perpetrator, which would be difficult to prove in the context of
AWS. Consequently, the law on state responsibility cannot address all the major
challenges posed by AWS. This chapter presents state liability regimes adopted
by NATO and the United States, which, arguably, increase the effectiveness of
accountability mechanisms, transparency in international law, and the victims’
sense of justice. These models allow individuals to ask for compensation for
damages and harms that occurred in the conduct of military operations.
This chapter is divided into two main parts. The first examines the impact
of the law on state responsibility on the secondary rules of LOAC, in particular
the Articles on Responsibility of States for Internationally Wrongful Acts of
20011 (ARSIWA). The second part covers the liability regime for the effects of
hostilities on the civilian side, based on the analysis of regulations from NATO
and the United States.
DOI: 10.4324/9781003246503-14
The use of AI in armed conflicts 177
state responsibility is of a secondary character to the LOAC regulations, and it
applies once the primary obligation has been violated. However, a violation of
the LOAC does not necessarily equal an international crime.2
The character of state responsibility – either civil or criminal – is, however,
not entirely clear. For example, in the Fifth Report on State Responsibility,
Arangio-Ruiz concluded that international responsibility should be perceived
from both a civil and criminal perspective. The civil aspect of the responsibility
means that in some cases it can lead to compensation, while the criminal aspect
may lead to prosecution.3 However, it is important to note that, principally,
states are not under the compulsory jurisdiction of international courts. Even if
such jurisdiction were applicable, international courts do not provide enforce-
ment mechanisms. Although there is a mechanism for sanctions, it is based on
the UN Charter, and it is oriented toward international peace and security.
The emergence of new regimes of international responsibility (namely inter-
national liability and individual criminal responsibility) has shed new light on
state responsibility, with the two former perceived as a function of the lat-
ter.4 Nevertheless, growing interdependence and the rise of global issues
have strongly affected the nature of state responsibility. The emergence of
new technologies, and AWS among others, challenges the universal values of
humankind, thus calling for a global response.5
The nature of state responsibility further depends on the obligation violated.
Pursuant to art. 42 of ARSIWA, international obligations are divided
into the following categories: (1) owed to an injured state individually; (2)
owed to a group of states (obligations erga omnes partes); and (3) owed to the
international community as a whole (obligations erga omnes). An independent
category of international obligations arises under art. 40 of ARSIWA, which
refers to peremptory norms of general international law and a corresponding seri-
ous breach of the obligation in question. Therefore, the character of the viola-
tion determines the subject and scope of admissible reaction to the violation.6
2 See art. 25(4) of the Rome Statute of the International Criminal Court (ICC), as well as Application of
the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia and Herzegovina v. Serbia
and Montenegro) (Judgment) [2007] ICJ Rep., 43, 173.
3 ILC,‘Fifth Report on State Responsibility by Mr Gaetano Arangio-Ruiz, Special Rapporteur’ [1993]
II ILC Yearbook 1, UN Doc. A/CN.4/453 and Add. 1–3, 253–256. See also: A. Pellet, ‘The Definition
of Responsibility in International Law’, in J. Crawford, A. Pellet, S. Olleson (eds), The Law of Interna-
tional Responsibility (OUP 2010), 13.
4 K. Creutz, State Responsibility in the International Legal Order:A Critical Appraisal (Cambridge University
Press 2020) 31.
5 Crawford refers to communitarian norms entailing obligations erga omnes that enable invocation of
responsibility in the public interest. See: J. Crawford, ‘Responsibility for Breaches of Communitarian
Norms: an Appraisal of Article 48 of the ILC Articles on Responsibility of States for Internationally
Wrongful Acts’, in U. Fastenrath, R. Geiger, D.E. Khan, A. Paulus, S. von Schorlemer, Ch. Vedder (eds),
From Bilateralism to Community Interest: Essays in Honour of Bruno Simma (OUP 2011), 224-240.
6 A. Cassese, ‘The Character of the Violated Obligation’, in: J. Crawford, A. Pellet, S. Olleson (eds), The
Law of International Responsibility (OUP 2010), 415–416.
In cases where the obligation is owed to an injured state individually, only an
injured state possesses the legal interest in the realisation of the responsibility.
If there is a violation of an obligation owed to the international community
as a whole, other states (not only the injured one) can invoke the responsi-
bility.7 Consequently, it is suggested that the use of certain technologies in an
armed conflict, such as ‘weapon systems that select and engage targets based
on sensor processing’,8 is a matter of concern to the international community
as a whole.9 Examples of such weapons are: the Israeli-made Harpy system
(described as an AWS ‘designed to detect, attack and destroy radar emitters’10),
the US Phalanx Close-In Weapon System for Aegis-class cruisers,11 the UK
Taranis jet-propelled combat drone, or the Samsung Techwin security guard robot.12 The list of
LOAC obligations amounting to obligations owed to the international com-
munity can be found in art. 85(1) of the Additional Protocol I, describing the
grave breaches of the I–IV Geneva Conventions. They cover, for example, the
wilful killing of protected persons, wilfully causing great suffering or serious
injury to body or health, and the extensive and unjustified destruction and appropria-
tion of property. Pursuant to the 2006 report of the ILC, a significant part of
the LOAC consists of erga omnes obligations because of their non-reciprocal character
and protection performed in the interest of all states.13 However, the ILC gives
no examples of such obligations in the field of the LOAC. In several cases the
ICJ referred to ‘certain’ or a ‘great many’ rules of the LOAC as creating erga
omnes obligations.14 None of these cases provides an exhaustive list of exactly
which LOAC rules give rise to obligations of this kind. This situation increases
the ambiguity and vagueness of obligations owed to the international community as a whole.15
7 ILC, ‘Report of the Study Group of the International Law Commission: Fragmentation of Inter-
national Law: Difficulties Arising From the Diversification and Expansion of International Law’ (13
April 2006) A/CN.4/L.682, 395.
8 Human Rights Watch, ‘New Weapons, Proven Precedent. Elements of and Models for a Treaty on
Killer Robots’ (Human Rights Watch, 20 October 2020) <https://www.hrw.org/report/2020/10/20/
new-weapons-proven-precedent/elements-and-models-treaty-killer-robots> accessed 27 May 2021.
9 It is not clear who can be considered a member of the international community as a whole. This
community exists in legal terms, i.e. in the context of the general law of state responsibility. See: A.L.
Vaurs-Chaumette, ‘The International Community as a Whole’, in: J. Crawford, A. Pellet, S. Olleson
(eds), The Law of International Responsibility (OUP 2010), 1023–1024.
10 Human Rights Council, ‘Report of the Special Rapporteur on extrajudicial, summary or arbitrary
executions, Christof Heyns’, 9 April 2013, UN Doc A/HRC/23/47, 45.
11 Phalanx systems have been deployed by the U.S. since the 1980s. They are defence systems that auto-
matically detect, evaluate and engage anti-ship missiles and high-speed aircraft threats.
12 Human Rights Council Report (n 10), 45.
13 ILC Report (n 7) 391-392.
14 Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion) [1996] ICJ Rep., 257 [79]. See also:
Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory (Advisory Opinion)
[2004] ICJ Rep., 136, 155.
15 According to Crawford, IHL obligations are characterised as erga omnes partes and not erga omnes. This
is due to consequences flowing from their violation, as opposed, for example, to violation of the
Nonetheless, such LOAC obligations are derived from art. 1 common to the
I–IV Geneva Conventions, which stipulates the obligation to ensure respect for the
LOAC. This obligation was initially understood as a duty directed at a
state’s own armed forces and public authorities. Nowadays, it is considered that the
erga omnes obligations impose a duty to claim performance by other states.16
In some instances, this obligation would involve the duty to ensure respect
from another state.17 In the case of AWS, the obligation to ensure respect by
another state would involve lawful measures such as abstaining from transferring AWS
to a state violating the LOAC, or imposing economic sanctions on
the violating state.18 A state is therefore obliged not to encourage persons or
groups to act in violation of the LOAC.19
right to self-determination. The latter should be of interest not only to all states but to the international
community as a whole, whereas a violation of an obligation erga omnes partes entails reactions of state-
parties to a particular treaty (in this sense to I-IV Geneva Conventions). See: Crawford (n 5), 233.
16 K. Zemanek, ‘New Trends in the Enforcement of erga omnes Obligations’ [2000] Max Planck UNYB,
4, 1, 5.
17 Palestinian Wall Case (n 14) 159. Similarly, the content of the obligation to ensure respect has been
confirmed by the ICJ in Military and Paramilitary Activities In and Against Nicaragua (Nicaragua v. USA)
(Merits) [1986] ICJ Rep., 14, 220. The Court stated that this obligation derived not only from the
I-IV Geneva Conventions, but from the general principles of IHL.
18 Diakonia International Humanitarian Law Resource Centre, ‘Accountability for violations of Inter-
national Humanitarian Law. An introduction to the legal consequences stemming from violations
of international humanitarian law’ (Diakonia, October 2013) <https://www.diakonia.se/globalassets
/documents/ihl/ihl-resources-center/accountability-violations-of-international-humanitarian-law
.pdf> accessed 27 May 2021, 4.
19 The judgment included references to violations of Art. 3 common to the I-IV Geneva Conventions.
See: Nicaragua case (n 17), 220.
20 The ICJ in Corfu Channel case concluded that the elementary considerations of humanity are ‘more
exacting in peace than in war’. Corfu Channel Case (UK v. Albania) (Merits) [1949] ICJ Rep., 4, 22.
21 M. Bothe, ‘Rights and obligations of the EU and its Member States to ensure compliance with IHL
and IHRL in relation to the situation of the occupied Palestinian territory. Legal Expert Opinion’
(Diakonia, June 2018) <https://www.diakonia.se/globalassets/blocks-ihl-site/ihl-file-list/ihl-
-expert-opionions/bothe.-july-18.-eu-obligations-under-il.pdf> accessed 27 May 2021, 11.
treaties, but also from customary international law.22 A state shall ensure com-
pliance with the LOAC by preventing violations, and by prosecuting and
punishing LOAC violations committed by its citizens, or committed on the
territory of that state. In the context of AWS, the guarantee of compliance
can be achieved by conducting a weapons review under art. 36 of Additional
Protocol I,23 which states that:
22 See art. 1 of the Basic Principles and Guidelines on the Right to a Remedy and Reparation for Vic-
tims of Violations of International Human Rights and Humanitarian Law, adopted and proclaimed
by General Assembly resolution 60/147 of 16 December 2005.
23 Some scholars suggest that there is a corresponding, albeit not identical, customary rule in relation
to the weapons review. Jevglevskaja indicates that under the Harvard Manual and Tallinn Manual 2.0, the
core of the obligation seems to have become a customary rule. The core there is the obligation to
conduct a pre-deployment analysis of weapons. N. Jevglevskaja, ‘Weapons Review Obligation under
Customary International Law’ [2018] ILS, 94, 186, 213–214. In support of the implied character of
the obligation, Boothby recalls the practice of certain states (namely the U.S. and Sweden) concerning
weapons reviews that preceded the adoption of Additional Protocol I, and the existence of some
weapons law provisions that are in fact customary. Therefore, in the pre-deployment phase a non-
state party to the Protocol has to review the legality of using a weapon by verifying compliance with
its customary obligations. See: W.H. Boothby, Weapons and the Law of Armed Conflict (OUP 2009), 341.
24 Pursuant to art. 84 of the Additional Protocol I, State parties shall communicate to one another not
only their official translations, but also the laws and regulations adopted to ensure the application
of the Protocol. However, this is rather a rudimentary mechanism for ensuring IHL compliance
concerning new weapon systems.
25 Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapon Systems, 12–16 December
2016, UN Doc CCW/CONF.V/2 (10 June 2016), 48–51. Furthermore, the Stockholm International
Peace Research Institute prepared a case study concerning several weapons reviews. See: V.
Boulanin, M. Verbruggen, ‘SIPRI Compendium on Article 36 Reviews’ [2017] SIPRI Background
Paper, passim. For publicly available national legislation see, for example, U.S. Air Force, Legal
Reviews of Weapons and Cyber Capabilities, 27 July 2011, Instruction 51-402; Belgium, Défense, Etat-
Major de la Défense, Ordre Général – J/836, establishing La Commission d'Evaluation Juridique
des nouvelles armes, des nouveaux moyens et des nouvelles méthodes de guerre, 18 July 2002; New Zealand, Manual of Armed Forces Law, Volume 4: Law of Armed Conflict, New Zealand Defence Force DM 69 (2nd ed., 2019), Chapter 7, section 4.
of Certain Autonomous or Semi-Autonomous Weapon Systems provide a
two-fold duty of assessing AWS legality.26 A double-check is required in both
the development and deployment phases. States are not willing, however, to
present the results of their review procedures concerning AWS, even though
the procedures themselves are publicly communicated.
States are under an obligation to introduce procedural guarantees applicable
to LOAC violations. These guarantees shall involve: (1) effective penal sanc-
tions; (2) prosecution or extradition of alleged perpetrators; and (3) universal
jurisdiction for serious LOAC violations.27 With regard to the first obligation,
effective penal sanctions shall be introduced into domestic legislation. If a state
develops or possesses AWS which it plans to use in armed conflict, it should
consider passing relevant legislation regulating the use of such tech-
nologies. In the context of AWS, states developing and deploying such tech-
nologies in military operations include Israel and the USA.28
The second point refers to prosecution and extradition, which imposes
an obligation erga omnes partes, i.e. one established for the protection of col-
lective interests.29 A good example is the case of Israel, where individuals were charged
with designing, manufacturing, and selling AWS to an unknown
state.30 Nevertheless, extradition is a challenging aspect because of several excep-
tions that can be applied, as well as the need for an extradition treaty. It is therefore
suggested that the obligation to prosecute violations shall prevail over extradi-
tion.31 These problems run even deeper in cases of alleged war crimes involving AWS
in conduct constituting a breach of the LOAC. This aspect has been
discussed in the previous chapter. Practice relating to other weapons reveals the
inefficiency of international protection against the illegal use of weapon systems.
For example, despite attacks with incendiary and chemical weapons in Syria,
there is no information on attempts to hold the state responsible or trying the
26 USA, Department of Defense, Directive 3000.09: Autonomy in Weapon Systems, 21 November 2012, as
changed 8 May 2017, Department of Defense Documents No. 3000.09.
27 J.M. Henckaerts, ‘The Grave Breaches Regime as Customary International Law’ [2009] 7 Journal of
International Criminal Justice, 4, 683, 693.
28 States like Germany, Spain, and the United Kingdom have stated that they do not plan to use weapons
wholly uncontrolled by humans. However, they do not opt for a prohibition of AWS. See: R. Liivoja,
‘Why It’s so Hard to Reach an International Agreement on Killer Robots’ (The Conversation, 12
September 2018) <https://theconversation.com/why-its-so-hard-to-reach-an-international-agree-
ment-on-killer-robots-102637> accessed 27 May 2021.
29 Obligation erga omnes partes, contrary to the obligation erga omnes, imposes a duty based on a
multilateral treaty. In international criminal law it is e.g. art. 90 of the Rome Statute of the ICC.
30 T. Newdick, ‘A Group of Israelis Secretly Built And Tested Suicide Drones For An Unknown Asian
Customer’ (The Drive, 11 February 2021) <https://www.thedrive.com/the-war-zone/39209/a
-group-of-israelis-secretly-built-and-tested-suicide-drones-for-an-unknown-asian-customer>
accessed 27 May 2021.
31 J. Nowakowska-Małusecka, ‘Indywidualna odpowiedzialność karna za zbrodnie popełnione w byłej
Jugosławii i Rwandzie’ [Individual Criminal Responsibility for Crimes Committed in the Former Yugoslavia and Rwanda] (Wydawnictwo Uniwersytetu Śląskiego 2000), 27.
alleged perpetrators.32 This leaves victims with no recourse against the state organs
responsible for deciding whether a case should be investigated.
The final point, concerning the rule of universal jurisdiction, covers exer-
cising jurisdiction for serious LOAC violations33 beyond a state’s boundaries,34
irrespective of the relationship between a state and the location, perpetrator,
or act. It is exercised in the common interest of all states. Universal jurisdiction
has not only been incorporated into the I-IV Geneva Conventions (art.
49, 50, 129, 146, respectively35), but also into other treaties that apply to the
situation of armed conflict.36 It therefore also covers acts committed with the
use of AWS in armed conflicts. Some of these acts would amount to grave breaches
or serious LOAC violations, assuming that they were committed intentionally
and directed against civilian populations or objects. The rationale is that the
use of AWS constitutes a use of the means and methods of warfare. An
exercise of universal jurisdiction contributes to a culture of responsibility by
preventing any avoidance of responsibility by means of procedural inefficien-
cies in the national legal order.37 It is a preventive tool against accidents with
AWS committed by inadequately trained personnel or in an environment in
which the system itself has not been sufficiently tested.
The law on state responsibility does not specify which
LOAC violations give rise to state responsibility or what consequences should
42 Art. 8 of ARSIWA.
43 Nicaragua Case (n 17).
44 Nicaragua Case (n 17) 115-116. See also: A. Cassese, ‘The Nicaragua and Tadić Tests Revisited in Light
of the ICJ Judgment on Genocide in Bosnia’ [2007] 18 EJIL, 4, 649, 660–661.
45 Interestingly, a different perspective was taken by the International Criminal Tribunal for Former
Yugoslavia in the case Prosecutor v. D. Tadić. For the attribution of the conduct of a group to a State,
the Tribunal was satisfied with the overall control of a State upon an armed group, irrespective of the
factual control, demand or imposition of the conduct. However, this conclusion was reached for the examination of the situ-
ation specifically in the Former Yugoslavia, and not of the responsibility of a State for the conduct of
an armed group in general. See: Tadić Case (Judgment) ICTY-94-1-A (15 July 1999), 120-121. See
also The Application of the Convention on the Prevention and Punishment of the Crime of Genocide (Bosnia
and Herzegovina v. Serbia and Montenegro) (Judgment) [2007] ICJ Rep., 43, at 400-404.
46 J. Magid, ‘Israeli dronemaker said to have bombed Armenians for Azerbaijan faces charges’ (Times of
Israel, 29 August 2018) <https://www.timesofisrael.com/israeli-dronemaker-said-to-have-bombed
-armenians-for-azerbaijan-faces-charges/> accessed 27 May 2021; J. Ari Gross, ‘Licences suspended
for dronemaker accused of bombing Armenia for Azerbaijan’ (Times of Israel, 27 January 2019)
<https://www.timesofisrael.com/licenses-suspended-for-dronemaker-accused-of-bombing-arme-
nia-for-azerbaijan/> accessed 27 May 2021.
47 The Genocide Case (n 2).
that state and the responsibility of the state.48 As a consequence, an act of an
individual is attributable to the state if the individual, in fact, acted in complete
dependence on the state.49 Therefore, there is a gap in the provisions on
state responsibility in terms of the mere delivery of AWS and the lack of effec-
tive control over those equipped with AWS by the state. The borderline between
non-effective and effective control over AWS ceded to an armed group would
be difficult to establish. The control test adopted by the ICJ narrows state respon-
sibility exclusively to the specific conduct of that individual upon which the
state exercised effective control. It means that each conduct requires a separate
examination.50 The use of AWS by a non-state armed group would therefore
be assessed from the effective control perspective. Another challenge is related
to private military and security companies that would deliver or train armed
forces on how to use AWS. The argument for attributing the conduct of indi-
viduals who were instructed by the state to create or control AWS is that states
cannot avoid responsibility by actually delegating their functions to private
individuals.51 If contractors are functionally linked to a state, this state bears an
obligation to ensure respect for the LOAC by the contractors.52 Finally, pursu-
ant to art. 16 of ARSIWA, a state is responsible for aiding or assisting in the
unlawful conduct of another state. The attribution in this case arises if the state
is aware of the conduct and the aid or assistance is intended to facilitate the
conduct.53 This provision can be of tremendous value to the transfer of AWS
to another state or to AI-related surveillance-sharing. The UN Register of Conventional Arms records transfers of weapon systems among states, including in the category of combat aircraft and unmanned aerial vehicles.54 According to the UN, the Register captures over 90% of the global arms trade.55 Nonetheless, none of its categories refers to AI-weapon technologies specifically.
56 J.M. Sorel, ‘The Concept of “Soft Responsibility”’, in J. Crawford, A. Pellet, S. Olleson, The Law of International Responsibility (OUP 2010), 167.
57 M. Montjoie, ‘The Concept of Liability in the Absence of an Internationally Wrongful Act’, in J. Crawford, A. Pellet, S. Olleson, The Law of International Responsibility (OUP 2010), 503–504.
58 Boyle argues for an emerging principle of ‘polluter pays’. See: A. Boyle, ‘Polluter Pays’ [2009] MPEPIL, passim. See also: A. Duhan, ‘Liability for Environmental Damage’ [2019] MPEPIL, 8.
59 R. Crootof, ‘War Torts: Accountability for Autonomous Weapons’ [2016] U. Pa. L. Rev., 164, 1347,
1388-1389.
60 D.M. Grütters, ‘NATO, International Organisations and Functional Immunity’ [2016] International
Organisations Law Review, 13, 211, 212.
must be proved that the organisation exercised effective control over the conduct of an organ or an agent placed at the disposal of that organisation.61 This can apply to the use of AWS in a NATO-led operation if, for example, a state delivers AWS that are then used and controlled by the NATO-led forces.
A case study of NATO involvement in Afghanistan illustrates the matter of liability well.62 In order to avoid any links to responsibility, the term ‘compensation’ was very often replaced with terms such as ‘ex gratia’, battle damage, honour, or solatium payments. Consequently, the International Security Assistance Force
(ISAF) in Afghanistan established the so-called Civilian Casualty Tracking Cell.
Its aim was to collect and maintain data on harm caused to civilians (but not to
their property). Due to the lack of transparency in military procedures leading
to the identification of the responsible unit, civilians were usually not aware
of the role of the Cell.63 Due to its ineffectiveness, the Cell was later replaced
by the Civilian Casualty Mitigation Team.64 The Team was responsible for the
coordination of subject-specific studies and recommendations to the chain of
command, as well as for the coordination of working groups that addressed
the establishment of guidelines and standard operating procedures concern-
ing civilian harm tracking processes.65 The working groups were composed of
ISAF members and of international and Afghan organisations. Additionally,
NATO states adopted a set of non-binding Guidelines on Monetary Payments
to Civilian Casualties in Afghanistan.66 Pursuant to the Guidelines, when deter-
mining the appropriate response to a particular civilian casualty, armed forces
should take into account local customs and norms, including potential ex gratia
payments. The Guidelines were useful in Afghanistan, but were not adopted in
later conflicts, e.g. in Libya.67
68 USA, Foreign Claims Act: 10 U.S. Code par. 2734. Property loss; personal injury or death: incident to non-
combat activities of the armed forces; foreign countries, 10.8.1956, <https://uscode.house.gov/statviewer
.htm?volume=70A&page=154> accessed 27 May 2021.
69 C.V. Daming, ‘When in Rome: Analyzing the Local Law and Custom Provision of the Foreign
Claims Acts’ [2012] Wash. U. J. L. & Pol’y, 39, 311.
70 USA, Foreign Claims Act: 10 U.S. Code par. 2735. Settlement: final and conclusive, 10 August 1956
(Office of the Law Revision Counsel) <https://www.govinfo.gov/app/details/USCODE-2011-title10
/USCODE-2011-title10-subtitleA-partIV-chap163-sec2735/context> accessed 27 May 2021.
71 J.Walerstein,‘Coping with combat claims: an analysis of the Foreign Claims Act’s combat exclusion’,
[2009] 11 Cardozo Journal of Conflict Resolution 1, 319, 345.
72 United States Government Accountability Office, ‘Report to Congressional Requesters: Mili-
tary Operations. The Department of Defense’s Use of Solatia and Condolence Payments in Iraq
and Afghanistan’, 27 May 2007, GAO-07-699 (US Government Accountability Office, 31 May 2007)
<https://www.gao.gov/assets/gao-07-699.pdf> accessed 27 May 2021, 50.
73 U.S. Army, The Judge Advocate General’s Legal Center & School, National Security Law Depart-
ment,‘Operational Law Handbook’, Charlottesville 2018 (Library of Congress, 2018) <https://www
.loc.gov/rr/frd/Military_Law/pdf/operational-law-handbook_2018.pdf> accessed 27 May 2021, at
243-234.
the local customs. Moreover, solatia payments and FCA-based claims are mutually exclusive. As the Operational Law Handbook of 2018 indicates, ‘the individual or unit involved in the damage has no legal obligation to pay’.74 This type of compensation is only an expression of goodwill. The Commander’s Emergency Response Program funds have been used to make condolence payments in Afghanistan and in Iraq as a recognition of loss.75 Unpredicted effects of using AWS in hostilities that do not amount to LOAC violations could be addressed under this regime. The term ‘unpredicted’ refers, for example, to a situation in which an AWS made decisions autonomously or in which the circumstances forming the basis for the AWS’s decision changed.
Conclusions
The enforcement of the LOAC is mainly focused on individual criminal
responsibility. However, victims, presumably protected by the LOAC, possess
few effective measures addressing harm or damage caused by the use of AWS
in armed conflicts. Although LOAC provides state responsibility through the
obligation to pay compensation for some LOAC violations, the responsibility
would be difficult to invoke in addressing the use of AWS, mainly due to the
problem of the attribution of AWS’ actions to the state.
The autonomous behaviour of AWS can be both unpredictable and ultra-hazardous. States’ acceptance of this risk should be followed by the acceptance of liability for some acts not prohibited by international law. Compensation programmes for the effects of hostilities contribute to the general success of, and civilian support for, military operations. At the domestic level, such programmes, albeit not expressly addressing the use of AWS, have been in place for more than 60 years. They lead to differentiated, and sometimes divergent, protection systems for victims, since they depend on the character of the conflict and the parties involved. Moreover, compensation programmes encounter significant challenges in the implementation of national laws, including the interpretation of combat exclusion or combat zones. All of this further limits their application to AWS cases.
74 ibid 320.
75 Amsterdam International Law Clinic, ‘Monetary Payments for Civilian Harm in International and
National Practice’ (Nuhanovic Foundation, 2013) <http://www.nuhanovicfoundation.org/user/file
/2013_civic_report_on_monetary_payments.pdf> accessed 27 May 2021, at 13-15.
13 The problematisation of human
control over lethal autonomous
weapons
A case study of the US Department of State
Mikolaj Firlej
Introduction
The requirement of human control over the use of autonomous weapons systems (AWS) is a widely discussed topic, yet it remains elusive. Many policy advocates and academics argue that AWS could potentially comply with the law of armed conflict (LOAC) only if guided by human control, as discussed in the preceding chapters.1 They propose recognising the requirement of human control as being on a par with other key principles of LOAC, that is, the principles of distinction, proportionality, humanity, and military necessity.
The US government has adopted Directive 3000.09 on AWS, which states
that autonomous weapon systems ‘shall be designed to allow commanders and
operators to exercise appropriate levels of human judgment over the use of force’.2
The US formulation of the role of humans over the use of AWS has gained
significant traction, and authors started to equate this concept with the requirement of ‘human control’.3 However, the US government did not support such a description and opposed any international effort to regulate AWS on the basis of the concept of ‘human control’. This chapter explores in more depth the difference between the concepts of ‘human judgement’ and ‘human control’ by studying the assumptions that underpin these seemingly similar policy concepts. It also identifies key points that have been left out of the US Department of Defence (DoD) policy problematisation but are nonetheless important considerations that can inform a critical assessment of the US DoD approach.
1 T. Chengeta, ‘Defining the emerging notion of “meaningful human control”’ (2016) NYU J. Int’l L.
& Pol., 839.
2 DoD, Directive 3000.09 Autonomy in Weapon Systems (2012).
<https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf> accessed
27 May 2021.
3 ICRC, ‘Ethics and autonomous weapon systems: An ethical basis for human control?’ (Report)
(2018), 2, 8.
DOI: 10.4324/9781003246503-15
The problematisation of control over autonomous weapons 191
The problem addressed by US DoD policy
The US policy reflected in Directive 3000.09 delineates three types of robotic systems that get the ‘green light’ for approval.4 These are: (1)
semi-autonomous weapons, such as homing munitions; (2) defensive super-
vised autonomous weapons, such as the ship-based Aegis weapon system; and
(3) non-lethal, non-kinetic autonomous weapons, such as electronic warfare
to jam enemy radars.5 These three classes of AWS are in wide use today, and
the policy confirms that developers can use autonomy according to existing
practices.6 These weapons are subject to normal acquisition rules and do not
require any additional approval. However, any future weapons that would use
autonomy in a novel way outside those three types get a ‘yellow light’. Those systems are subject to a lengthy review process, focusing primarily on tests and evaluations, both before development and before fielding.
A potential novel use of autonomy that falls outside the specified instances is any kind of autonomous weapon used for offensive purposes.7
While offensive AWS are not prohibited by Directive 3000.09, the additional
restrictions aim to mitigate risks associated with the potential development
of such weapons. The reason why these restrictions are in place is to ‘mini-
mise the probability and consequences of failures in autonomous and semi-
autonomous weapon systems that could lead to unintended engagement’.8 An
unintended engagement is defined as ‘the use of force resulting in damage to
persons or objects that human operators did not intend to be the targets of US
military operations’.9 The result could lead to ‘unacceptable levels of collateral
damage’ that go against the law of war or Rules of Engagement (ROE).10 The US delegation to the UN provides an example: accidental attacks that killed civilians or friendly forces would be considered ‘unintended engagements’ under DoD Directive 3000.09.11
Failures are defined as ‘an actual or perceived degradation or loss of intended
functionality or inability of the system to perform as intended or designed’.12
Humans should therefore retain control over the choice to use deadly
force. Eliminating human intervention in the choice to use deadly force
could increase civilian casualties in armed conflict.21
At present, military officials generally say that humans will retain some
level of supervision over decisions to use lethal force, but their statements
often leave open the possibility that robots could one day have the ability
to make such choices on their own power.22
20 The concept of human control with the adjective ‘meaningful’ was formulated in a more structured way in 2013 by Article 36 and then adopted by HRW. See: Article 36, ‘Killer Robots: UK Government Policy on Fully Autonomous Weapons’ (2013) <http://www.article36.org/wp-content/uploads/2013/04/Policy_Paper1.pdf> accessed 27 May 2021.
21 HRW and others, ‘Losing Humanity. The Case Against Killer Robots’ (Report) (April 2015) 978-
1-6231-32408, 37.
22 ibid., 1.
23 ICRC (n 3).
24 US Government (n 11).
194 Regulating artificial intelligence in industry
too restrictive and may imply so-called direct human control. Historically, humans exercised ‘direct control’ over weapons because weapons were seen merely as tools in the hands of fighters. In a sense, humans were ‘masters’ of
traced to the 1949 Geneva Conventions and their 1977 Additional Protocols,
whose provisions invoke the idea that without human control or use, a weapon
is nothing but a mere tool.26 As an example, in armed conflict, participating
in hostilities is shown by the ‘bearing of arms’. Thus, persons ‘who have laid
down their arms’ are considered to be ‘taking no active part in the hostilities’.27
Such interpretation of LOAC would suggest that all types of weapons should
be guided by ‘direct control’ to be fully compliant with the law.28 This view is
reflected by ICRC and HRW, who argue that without having direct human
control AWS would violate the principle of proportionality, among others.29
The US delegation does not agree with this argument and cites various examples to support a broader understanding of human factors on the battlefield. One such example is the Automatic Ground Collision Avoidance
System developed by the US Air Force (USAF) that has helped prevent so-
called ‘controlled flight into terrain’ accidents. The system assumes control
of the aircraft when an imminent collision with the ground is detected, and
returns control back to the pilot when the collision is averted.30 Therefore,
they prefer to place emphasis on the human commander or operator and their
capacity to judge the likely effect of using AWS, rather than on the notion of
‘control’, which is often limited and interpreted too rigidly.31
Let us highlight this distinction between human control and human judge-
ment in the most radical context, that is, in the context of offensive AWS,
weapons that fall outside three types of robotic systems that get the ‘green light’
for approval in Directive 3000.09. According to the requirement of human
control, humans must always retain control over life and death decisions; this means that delegating lethal authority to a machine to make its own decision shall not be allowed. That is in contrast with the requirement of human judgement, whereby the development of such weapons, while subject to additional scrutiny, is in principle allowed. It is worth emphasising how such a subtle semantic difference plays a transformative role. By using the word
‘judgement’, Directive 3000.09 steers the focus from direct control at the level
of engagement and targeting to the design requirement of weapons that allows
25 Chengeta (n 1) 839.
26 Chengeta (n 1) 840.
27 Art. 3, Geneva Convention Relative to the Treatment of Prisoners of War (Third Geneva Conven-
tion), 12 August 1949, 75 UNTS 135.
28 Chengeta (n 1), 840.
29 HRW, ‘Shaking the Foundations. The Human Rights Implications of Killer Robots’ (Report)
(2014), 12.
30 US Government (n 11).
31 ibid.
humans to make informed decisions about the potential deployment of weapons. The appreciation of design requirements generates two subsequent positive obligations: (1) humans deploying the systems must understand how the weapons operate in realistic environments so that they can make informed decisions regarding their use; and (2) to satisfy the first obligation, AWS require adequate levels of operational testing, verification, validation, and evaluation.
The design-oriented requirement does not, however, generate the obligation not to delegate lethal authority to a machine to make its own decision as long as the two positive obligations are satisfied. By contrast, the requirement of human control explicitly states that ‘humans should retain control over the choice to use deadly force’.
To restate, both policies – the concepts of human control and human judgement – recognise the challenges that lethal autonomy poses, yet they come with different propositions. Both approaches affirm that the development of
robotic weapons has arrived at the point that militaries are able to delegate
lethal authority to machines, and that this represents a novel challenge for
policy-makers. Yet one approach argues for the prohibition of weapons that
make their own targeting decisions, while the second approach leaves the door
open for the potential development of such weapons. Does it mean that both
the Campaign and DoD identified the same problem, but only the remedies
differ? In order to investigate this further, one has to consider what presupposi-
tions underlie both representations of the ‘problem’.
40 5 U.S.C. § 553(b)(A).
41 Attorney General’s Manual on the Administrative Procedure Act (1947), 30.
42 5 U.S.C. §553(b), (c).
43 ibid (a) (1).
44 J. Cole, T. Garvey (Congressional Research Service), ‘General Policy Statements: Legal Overview’
(2016), 9.
45 American Bus Ass'n v. United States 627 F.2d 531 (D.C. Cir. 1980).
46 Guardian Federal Savings & Loan Ass'n v. Federal Savings & Loan Insurance Corp. 589 F.2d 658,
666 (D.C. Cir. 1978)
47 DoD (n 2), 4d.
48 ibid., 4a.
‘shall’ to establish mandatory requirements, the US Supreme Court has held that ‘shall’ can also mean ‘may’.49 Thus, the wording of Directive 3000.09 does not make clear whether the requirement of human judgement over the use of AWS is a legislative rule or a soft statement of policy intent that leaves the US military departments with a wide degree of discretion. Given that AWS are still a nascent area of research and it is uncertain how such weapons should be regulated, the ambiguity could be purposeful and strategic. The policy of human judgement seeks to balance at times competing interests – ensuring the operational safety of weapons of which we still know relatively little, while not restricting their potential use for combat advantage. The policy of human control, on the other hand, has left out of its problem representation the military competition among countries and the desire to achieve asymmetric combat advantage.
49 Gutierrez de Martinez v. Lamagno, 515 U.S. 417, 434 n.9 (1995) (‘Though ”shall” generally means
“must,” legal writers sometimes use, or misuse,“shall” to mean “should,”“will,” or even “may.”’)
50 R.Work, Interview (22 June 2016) in Scharre (n 4), 98.
51 ibid 99.
52 FedScoop, ‘Pentagon unveils strategy for military adoption of artificial intelligence’ <https://www.fedscoop.com/artificial-intelligence-pentagon-military-unclassified/> accessed 27 May 2021.
of delegating an autonomous machine to make an offensive, lethal decision. This is an existing phenomenon; in other words, weapons capable of making their own lethal decisions already exist. This research does not refer here to the three types of weapons that receive the ‘green light’ from Directive 3000.09, i.e. semi-autonomous weapons, defensive supervised autonomous weapons, and non-lethal, non-kinetic autonomous weapons – they are all in use today. It refers to weapons that fall beyond the three categories and use autonomy capabilities in a novel way – weapons that have occasionally been used before and are potentially in use today. One example is the advanced loitering munition, such as the Israeli Harpy, where no human approves the specific target before engagement.53 The Harpy is in the arsenal of various
countries today, including China, India, South Korea, Chile, and Turkey. It
is also reported that the Chinese have reverse-engineered their own variant.54
Harpy loitering munitions were used a number of times in 2018 and 2019 by
the Israel Defence Forces to destroy Syrian Pantsir-S1 SAM batteries. The US
DoD currently owns a miniature, high-precision loitering munition called the
AeroVironment Switchblade.55 The Switchblade was used by the US Army
in Afghanistan to target ‘high value targets’, such as insurgent leaders, mortar
teams, or insurgents travelling in vehicles.56 The Switchblade still keeps humans in the loop via a functioning radio link to approve targets before engagement, making it a semi-autonomous weapon, but it could potentially be deployed without direct human intervention. A more contested example of delegating decision-making to a machine to make a lethal decision is the Long-Range Anti-Ship Missile AGM-158C (LRASM). The LRASM is a stealthy anti-ship
cruise missile developed by Lockheed Martin and based on the Joint Air-to-Surface Standoff Missile (AGM-158B JASSM-ER); it incorporates a multi-mode radio frequency sensor, a new weapons datalink and altimeter, and an uprated power system.57 These enable the LRASM to have a significant degree
of independence from a human operator. The LRASM is initially directed by
pilots, but then halfway to its destination, it severs communication with its
operators. The weapon itself decides which of the selected targets to attack
among the given pool.58 It is thus capable of selecting its own target based
on processing information and a continuous stream of data. The LRASM is already available to the US military, and in February 2020 the US State Department approved the sale of 200 LRASM missiles to Australia.59
Moreover, some of the key architects of DoD Directive 3000.09, such as Work,
are going even further by arguing that if adversaries start to develop AGI the
US will be at an operational disadvantage and the DoD will need to rethink its
position.63 This means that the architects of Directive 3000.09 leave open the
possibility to develop and use AGI if there is ‘an urgent military need’.
This is not to say that the DoD’s goal is to develop and deploy military
AGI. None of the official strategies evokes this concept, and even among the
architects of Directive 3000.09 there is a degree of scepticism towards such
59 Navy Recognition, ‘US approves a sale to Australia for 200 AGM-158C Long Range Anti-Ship
Missiles LRASM’ <http://www.navyrecognition.com/index.php/news/defence-news/2020/feb-
ruary/8029-us-approves-a-sale-to-australia-for-200-agm-158c-long-range-anti-ship-missiles-lrasm
.html> accessed 27 May 2021.
60 See e.g. D. Lewis, G Blum, N Modirzadeh,‘War-Algorithm Accountability’ (Report) (2016), 18.
61 J. Searle, ‘Minds, Brains and Programs’ [1980] 3 Behavioral and Brain Sciences 417–424; S. Beckers, ‘AAAI: an Argument Against Artificial Intelligence’ in V. Müller (ed), Philosophy and Theory of Artificial Intelligence 2017 (Springer 2018), 235–247.
62 US Government (n 11).
63 R.Work, Interview (22 June 2016) in Scharre (n 4), 99.
advancements.64 However, it seems that the DoD leaves the door open for fur-
ther significant improvements of already existing AI applications, including in
the targeting and engagement phase, and that the global competition between
countries further accelerates this process, which does not currently have any
hard red-lines.
69 ibid.
70 J.Thurnher, ‘The Law that Applies to Autonomous Weapon Systems’ (2013) 17 ASILI; M Schmitt,
J Thurnher, ‘“Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict’
(2013) HNSJ, 243.
71 US Government (n 69).
Campaign authors argue in one place that AGI, ‘which would try to mimic
human thought’, is a potential option to promote autonomous weapons’ compliance with LOAC.72 Later in the report, however, they seem to cast doubt on whether technology can indeed truly emulate human action:
However, in order to move beyond the debate about the technical feasibility of
designing a weapon to comply with LOAC, the Campaign turned to another
important problem – the issue of responsibility for a weapon’s wrongdoing.
79 In the debate about AWS the words ‘responsibility’ and ‘accountability’ are often used interchange-
ably. Both the UN GGE and the Campaign refer to the term ‘accountability’ as an umbrella term
to describe various forms of legal responsibility, i.e. state responsibility, administrative proceedings
undertaken in response to violations of IHL, civil liability, and individual criminal responsibility. For
the purpose of this article, I refer to the broad notion of ‘responsibility gap’, but I discuss both state
and individual responsibility. See: Campaign, ‘Mind the Gap:The Lack of Accountability for Killer
Robots’ (Report) (2015); UN GGE, ‘Report of the 2018 session of the Group of Governmental
Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems’ (Report
CCW/GGE.1/2018/3) (2018)
80 Campaign (n 79).
81 US Government (n 69); See also: R. Arnold, ‘Legal Challenges Posed by LAWS: Criminal Liability
for Breaches of IHL by (the Use of) LAWS’ in R Geiß, Lethal Autonomous Weapon Systems (2016), 10.
82 Arnold (n 81), 10.
83 Campaign (n 79); UN GGE (n 80).
84 Campaign (n 79).
85 US Government (n 69).
86 Arnold (n 81), 10.
neither programmer nor commander can be initially identified, the impossibility of ascribing criminal responsibility to a person is not caused by the fact that the harmful conduct was committed by AWS; rather, the problem is the impossibility of collecting evidence allowing for the proper identification of the relevant human element responsible for the machine’s wrongdoing.87
Thus, the discussion about responsibility started as legal discourse, but then
developed into a debate about the reasonable risk threshold and risk standards.
The risk analysis in turn is largely dominated by the technical knowledge of the
current and potential future performance of AWS, their testing and training speci-
fications. Here again, as it was with the legal problem of distinction, the DoD
and their experts shift the focus from legal and moral arguments towards almost
entirely technical considerations. ‘The standard of care or regard that is due in
conducting military operations with regard to the protection of civilians is a com-
plex question to which the law of war does not provide a simple answer’,88 we
read. The DoD further states that this standard must be assessed based on the general practice of states and common standards of the military profession in conducting operations. In particular, the best measures to promote ‘accountability’ are:
training on the weapon system and rigorous testing of the weapon system
can help commanders be advised of the likely effects of employing the
weapon system.89
Taking the above into account, while the DoD recognises the increased risks associated with the current use of autonomy, it does not necessarily consider AWS as weapons that inherently pose excessive risks. The potential limit on military development is AGI, although not without reservation. That said, a significant leap in technical progress is required to close the gap between existing weapons utilising narrow AI and weaponised AGI. This means that the US military, subject to all precautions, leaves itself open to further advancements of AWS, fuelled by international competition, which only accelerates in pace.90 The DoD takes a qualitatively different view of the legal problems of discrimination and responsibility relative to the Campaign. It shifts the legal ramifications of the problems onto a technical issue and builds confidence in its view on the basis of its technological prowess and relevant testing and monitoring capabilities. Thus, the potential novel legal and ethical challenges for human–machine controls arising from AWS are predominantly, if not exclusively, subordinated to technical expertise and military ‘know-how’.
87 ibid.
88 US Government (n 69).
89 ibid.
90 See Congressional Research Service, ‘Renewed Great Power Competition: Implications for
Defence—Issues for Congress’ (Report) (2020).
What has been left out of the US DoD
problem representation?
In this part, it is argued that, by shifting from ‘human control’ to ‘human judgement’, the policy challenges the notion of an active human role in the operation of AWS. Its response to the problem of the increasing risk associated with the use of AWS is ‘more autonomy, less human’, but this assumption has been largely left out of the problem representation discourse, hidden behind the vague notion of ‘human judgement’. Further, Directive 3000.09 exempts the cyber domain from its scope and thus creates a legal and policy vacuum when it comes to the potential development and use of offensive autonomous cyber weapons – a threat arguably more persistent than the still incidental uses of kinetic AWS. Finally, the problem formulated by the DoD is framed primarily as technical in nature. While the US policy recognises specific challenges associated with the development and use of AWS, it leaves unproblematised the consideration that such weapons, even if they pass all necessary technical reviews, may still be considered unacceptable solely on an ethical basis. It is further argued that these three arguments may inform an alternative problem and policy representation of AWS.
Increasingly humans will no longer be ‘in the loop’ but rather ‘on the
loop’ – monitoring the execution of certain decisions. Simultaneously,
advances in AI will enable systems to make combat decisions and act
within legal and policy constraints without necessarily requiring human
input.91
We can see how the description of AI-based systems prepares the setting for the
removal of direct human engagement. The Plan also emphasises the changing
91 US Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047 (Report) (2009) <https://fas.org
/irp/program/collect/uas_2009.pdf> accessed 27 May 2021.
role of human operators moving from being in the loop to being on the loop.
Their role is to ‘monitor the execution of operations’ and retain ‘the ability
to override the system during the mission’.92 In the same Plan we read about
the embedding of human qualities within machines at the level of design: ‘the
systems programming will be based on human intent’.93
The second important transformation was the removal of reference to direct
human control after DoD Directive 3000.09. In 2009 the USAF asked for
policy to guide the development of future weapons capabilities, including fully
autonomous systems. The office of Defence Policy started to receive questions
about legal and ethical issues associated with the use of lethal autonomy, while
different military branches responded without coordination. The US Army
initially claimed that it ‘will never delegate use-of-force decisions to a robot’.94
The USAF had a different perspective.95 After two years, in 2011, the DoD
responded with a temporary Unmanned Systems Roadmap stating:
In the same document, however, we read that the DoD envisions unmanned systems operating with manned systems ‘while gradually reducing the degree of human control [MF emphasised] and decision making required for the unmanned portion of the force structure’.97 In DoD Directive 3000.09 from 2012 the notion of human control is replaced by ‘human judgement’, signalling a wider shift towards controls at the level of design rather than direct human control. The notion of human control appears again in the Unmanned Systems Roadmap from 2013.98
92 ibid.
93 ibid.
94 P. Scharre, ‘Interview with Dan Saxon’, qtd. in D. Saxon, ‘A human touch: autonomous weapons, DoD Directive 3000.09 and the interpretation of “appropriate levels of human judgment over the use of force”’ in N. Bhuta and others (eds), Autonomous Weapons Systems: Law, Ethics, Policy (CUP 2016), 195.
95 US Air Force (n 92), 41, 51.
96 DoD, Unmanned Aircraft Systems Integrated Roadmap FY2011-2036 (Report) (2011).
97 ibid.
98 DoD, ‘Unmanned Aircraft Systems Integrated Roadmap FY2013-2038’ (Report) (2013), 15.
208 Regulating artificial intelligence in industry
Military practitioners argue today that DoD Directive 3000.09 was transformational because it was the first policy on AWS. The most significant change, however, went unnoticed: the DoD removed the notion of ‘human control’ and, by including the phrase ‘appropriate levels of human judgement over the use of force’, left the door open for the exercise of no direct human control at all over the use of AWS, including in offensive settings.
The third aspect of changing human–machine controls between 2005 and 2012 relates to the introduction of ‘levels of autonomy’ by the DoD as a framework to inform the interaction with human operators.99 In the 2005 Roadmap, the DoD defined ten levels of autonomy, ranging from remotely guided systems to fully autonomous ones.100 The 2009 Roadmap specifically asked operators and commanders to ‘retain the ability to refine [MF emphasised] the level of autonomy’.101 And later: ‘The level of autonomy should be dynamically adjusted [MF emphasised] based on workload and the perceived intent of the operator’.102 This guidance attempted to aid the development and use of weapons by grouping functions for generalised scenarios. It assumes a separation of duties between humans and machines: humans are expected to respond to dynamic situations and shift to the right ‘mode of automation’ rather than co-actively interface with machines to achieve the best capabilities. The concept of levels of autonomy has been criticised by practitioners who argue that the framework only represents situations where increased automation must come with less human control, which creates an unnecessary trade-off and does not accurately capture the complexity of human–machine interactions embedded in AWS. The criticism has also been levied inside the DoD. A few months after the introduction of Directive 3000.09, which reaffirms the idea of levels of autonomy, the DSB published a report strongly criticising this concept. The DSB argued for the replacement of ‘levels of autonomy’ with a framework that focuses on the explicit allocation of cognitive functions and responsibilities between humans and machines to achieve specific capabilities.103 Despite these criticisms, however, the concept of ‘levels of autonomy’ remains in use within the US military.
To sum up, in the academic debate, many authors equate the concept of
human control with the notion of human judgement, although the DoD
distances itself from such claims. The DoD established Directive 3000.09 as
a response to the problem of the increasing risk associated with the use of
AWS, but grounded the response in the wider approach of ‘unmanning’ and
99 Saxon (n 94).
100 DoD, Unmanned Aircraft Systems Roadmap 2005-2030 (Report) (2005), 48.
101 US Air Force (n 92).
102 ibid.
103 DSB,‘The Role of Autonomy in Weapon Systems’ (Report) (2012) <https://fas.org/irp/agency/
dod/dsb/autonomy.pdf> accessed 27 May 2021.
disavowing human control. Directive 3000.09 even leaves the door open for the exercise of no human judgement at all. The current DoD conviction of ‘more autonomy, less human’ as a response to operational risks is particularly problematic but has been largely left out of the problem representation. An alternative way to see the problem is to appreciate the necessity of human factors by arguing that the more advanced a weapon is, the more complex its control system, and thus, paradoxically, the more crucial the contribution of human operators.104 This alternative approach could stem from the Human-Centred AI framework, which advocates designing technologies that offer both high levels of human control and high levels of computer automation.105 According to this perspective, irrespective of the system’s level of autonomy, a human should always have the possibility to override the machine’s actions. Moreover, humans should also conduct active oversight of weapon systems in order to avoid ‘algorithmic hubris’, that is, unreasonable expectations regarding the machine’s performance.106 These are only the most common examples, but the key argument is that there is an alternative to the DoD formulation of the problem of delegating lethal autonomy to machines, one that complicates the seemingly natural technology development towards more autonomy and less human engagement.
104 See L. Bainbridge, ‘Ironies of automation’ (1983) 19/6 Automatica 775–779; J. Bradshaw et al., ‘The seven deadly myths of autonomous systems’ (2013) 28/3 IEEE Intelligent Systems 54–61.
105 B. Shneiderman, ‘Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy’ (2020) 36/6 International Journal of Human–Computer Interaction 495–504.
106 ibid., 497.
107 DoD (n 2).
108 F. Delerue, Cyber Operations and International Law (CUP 2020) 160.
109 Scharre (n 4) 224.
originate with humans, security experts already warn about more advanced
autonomous offensive cyberweapons with the ability to self-replicate.110
As is the case with kinetic weapons, here again DoD representatives fear that adversaries may prompt the US to launch offensive AWS, despite the lack of clear rules regarding their use. We read in the National Security Strategy that US competitors have not only achieved significant progress in integrating data analytic capabilities into military operations, but are also developing advanced weapons that could threaten the current US command-and-control architecture.111 Advanced command-and-control instructions are used to identify attackers, correlate attacks, and disrupt ongoing malicious activity. However, there are at least four operational challenges to maintaining an effective command-and-control architecture in the cyber domain. First, there is the problem of offensive unpredictability.112 An intelligent malware agent with self-learning capacities can learn and override defensive acts in order to exploit any potential vulnerabilities of a system.113 The second challenge concerns the undetectability of offensive operations, known in cyber studies as ‘the attribution problem’.114 Complex malware is difficult to detect and, even when recognised, one can only account for known intrusions. The challenge is to confront the prospect of permanent intrusion into the defender’s infrastructure, where the scale and scope of infiltration can raise significant security issues. The third challenge relates to the complexity of the defence system.115 While the attacker usually only needs to understand the procedures of entry, the defender must protect the entire network against many interrelated points of vulnerability. Lastly, traditional command-and-control architecture is under pressure from supply chain risks, such as manufacturers introducing vulnerabilities into specific components of a system.116
That said, Directive 3000.09 exempts the cyber domain and thus creates a legal and policy vacuum when it comes to the potential development and use of offensive autonomous cyber weapons. Hence, an alternative way to see the formulated problem is to consider the currently separate frameworks for the cyber domain and kinetic AWS jointly, and to explore whether the existing controls from Directive 3000.09 are congruent with autonomous cyber weapon operations. If not, it is worth exploring what makes kinetic weapons so special that they require increased regulatory oversight. A potential answer is that kinetic AWS could be lethal, while cyber weapons could not. However, cyber weapons can cause lethal or related harm as a side effect, and they can cause significant damage even when not lethal. Thus, one can elevate these insights, expose
110 P. Scharre, interview with Bradford Tousley (27 April 2016), qtd. in Scharre (n 4), 228.
111 DoD, National Security Strategy (2017) 8.
112 L. Kello, The Virtual Weapon and International Order (Yale University Press 2017), 68–69.
113 M. Brundage and others, ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’ (Report) (2018), 20.
114 Kello (n 112), 69–72, 129–132.
115 ibid., 72–73.
116 ibid., 73–74.
complexities related to cyber weapons, and challenge the DoD’s representation of the problem in its current form.
1 It is worth noting that on 21 April 2021, the European Commission published its proposal for a comprehensive regulatory framework for AI in the EU. See: European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, Brussels, 21 April 2021, COM (2021) 206 final, 2021/0106 (COD).
rights. A good example of this is the copyright system that applies to many creative industries, such as music. However, as presented in Chapter 7, it does not seem to deal with some of the fundamental problems of that industry.
Any attempts at the future harmonisation of regulatory matters of AI should aim at providing the legal certainty necessary to facilitate innovation and investment in AI, while also safeguarding fundamental rights and ensuring that AI applications are used safely. Moreover, they should provide supervisory and enforcement mechanisms that allow for national or regional harmonisation and the implementation of corrective actions and sanctions. For any regulatory framework to be effective, its impact and effectiveness would also need to be systematically reviewed. This will require greater collaboration between engineers, researchers, scholars, regulators, and industry representatives. Such diverse teams are more likely to contribute to a meaningful regulatory framework for AI in industry.
Bibliography
Table of Cases
China
Feilin v Baidu (2018) Beijing Internet Court <https://www.bjinternetcourt.gov.cn/cac/zw
/1556272978673.html> accessed 27 May 2021.
Shenzhen Tencent v Yingxun Tech (2019) Shenzhen Nanshan District People’s Court, Yue
Min Chu No. 14010. zhongguo caipan wenshuwang (China Judgements Online)
<https://wenshu.court.gov.cn/website/wenshu/181107ANFZ0BXSK4/index.html
?docId=30ba2cab36054d80a864ab8000a6618a> accessed 27 May 2021.
European Union
Case 120/78 Rewe-Zentral AG v Bundesmonopolverwaltung für Branntwein (‘Cassis de Dijon’)
[1979] ECR 649.
Case C-470/93 Verein gegen Unwesen in Handel und Gewerbe Köln e.V. v Mars GmbH [1995]
ECR I-1923.
Case C-55/94 Reinhard Gebhard v Consiglio dell’Ordine degli Avvocati e Procuratori di Milano
[1995] ECR I-4165.
Case C-210/96 Gut Springenheide and Tusky [1998] ECR I-4657.
Case C-604/10 Football Dataco Ltd and Others v Yahoo! UK Ltd and Others [2012] ECJ 115.
Case C-112/00 Eugen Schmidberger v Austria [2003] ECR I-5659.
Case C-322/01 Deutscher Apothekerverband eV v 0800 DocMorris NV and Jacques Waterval
[2003] ECLI:EU:C:2003:664.
Case C-110/05 Commission v Italy (‘Trailers’) [2009] ECR I-519.
Joined Cases C-402/07 and C-432/07 Sturgeon and Others [2009] ECR I-10923.
Case C-5/08 Infopaq International A/S v Danske Dagbaldes Forening [2009] ECJ 465.
Case C-497/13 Froukje Faber v Autobedrijf Hazet Ochten BV [2015] ECLI:EU:C:2015:357.
Case C-99/16 Jean-Philippe Lahorgue v Ordre des avocats du barreau de Lyon [2017]
ECLI:EU:C:2017:107, Opinion of AG Wathelet.
Case T-48/19 Smart Things Solutions GmbH v European Union Intellectual Property Office
(EUIPO) [2020] ECLI:EU:T:2020:483.
Case C-410/19 The Software Incubator Ltd v Computer Associates UK Ltd [2020]
ECLI:EU:C:2020:1061, Opinion of AG Tanchev.
International Courts and Tribunals
Application of the Convention on the Prevention and Punishment of the Crime of
Genocide (Bosnia and Herzegovina v. Serbia and Montenegro) (Judgment), [2007] ICJ
Rep 43.
Certain Questions Relating to Settlers of German Origin in the Territory Ceded by
Germany to Poland (Advisory Opinion) 1923, [1923] PCIJ Publications Series B. No. 6.
Corfu Channel Case (UK v. Albania) (Merits), [1949] ICJ Rep 4.
Military and Paramilitary Activities in and Against Nicaragua (Nicaragua v. USA) (Merits) (Judgment), [1986] ICJ Rep 14.
Legal Consequences of the Construction of a Wall in the Occupied Palestinian Territory
(Advisory Opinion), [2004] ICJ Rep 136.
Legal Status of Eastern Greenland (Judgment) 1933, [1933] PCIJ Publications Series A/B
No 53.
Legality of the Threat or Use of Nuclear Weapons (Advisory Opinion), [1996] ICJ Rep 257.
Military Tribunal V, 19 February 1948, in Trials of War Criminals Before the Nuremberg
Military Tribunals Under Control Council Law No. 10, Vol. 11, 1950.
Prosecutor v. Stanislav Galić, IT-98-29-T, ICTY Judgement and Opinion, 5 December 2003.
Prosecutor v Tadić, ICTY-94–1-A (Judgement 15 July 1999).
United States Military Tribunal, Nuremberg, United States v. Wilhelm List et al., Case
No. 47, 1948.
Israel
Israel, Supreme Court of Justice, Adalah Legal Center for Arab Minority Rights in Israel
et al v. Minister of Defense et al., judgment, HCJ 8276/05, 12 December 2006, [2006]
Israel Law Reports 2, 352, 353.
United Kingdom
Montgomery v Lanarkshire Health Board (Scotland) [2015] UKSC 11.
NHS Trust v T [2004] EWHC 1279 Fam.
Nova Productions Ltd v Mazooma Games Ltd & Ors (CA). Reference: [2007] EWCA Civ 21.
R (On the Application of Edward Bridges) v The Chief Constable of South Wales [2019] EWHC
2341.
Re AK (Medical Treatment: Consent) [2001] 1 FLR 129.
Re T (Adult: Refusal of Medical Treatment) [1992] 4 All ER 649.
W Healthcare NHS Trust v KH [2004] EWCA Civ 1324.
United States
American Bus Ass’n v. United States 627 F.2d 531 (D.C. Cir. 1980).
Guardian Federal Savings & Loan Ass’n v. Federal Savings & Loan Insurance Corp. 589 F.2d 658,
666 (D.C. Cir. 1978).
Gutierrez de Martinez v. Lamagno, 515 U.S. 417, 434 No.9 (1995).
Naruto v Slater 888 F3d 418 (9th Cir 2018).
State v Loomis 881 N.W.2d 749 (Wis. 2016).
Table of International Instruments
International Treaties
Additional Protocol to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (adopted 8 June 1977, entered into force 7 December 1978) 1125 UNTS 3.
Berne Convention for the Protection of Literary and Artistic Works, signed on 19 September
1886, entered into force 5 December 1887.
CEPEJ, European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, adopted at the 31st plenary meeting of the CEPEJ, Strasbourg, 3–4 December 2018.
Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or
Punishment (adopted 10 December 1984, entered into force 26 June 1987), 1645
UNTS 85.
Convention for the Protection of Human Rights and Fundamental Freedoms (European
Convention on Human Rights, as amended), opened for signature in Rome on 4
November 1950, came into force on 3 September 1953.
Convention for the Protection of Individuals with Regard to Automatic Processing of
Personal Data, ETS No. 108 as amended by the CETS amending protocol No. 223, 1981.
Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons
(CCW) Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate
Effects (as amended on 21 December 2001) 1342 UNTS 137.
Convention on the International Liability for Damage Caused by Space Objects (adopted 29
March 1972, entered into force 1 September 1972) 961 UNTS 187.
Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-
Personnel Mines and on Their Destruction, done in Oslo on 18 September 1997,
entered into force 1 March 1999, 2056 UNTS 211.
Convention on the Safety of United Nations and Associated Personnel (adopted 9 December
1994, entered into force 15 January 1999), 2051 UNTS 363.
Geneva Convention Relative to the Treatment of Prisoners of War (Third Geneva Convention), 12 August 1949, 75 UNTS 135.
ICAO Convention on International Civil Aviation (Chicago Convention), 7 December
1944, (1994) 15 UNTS 295.
Inter-American Convention on Forced Disappearance of Persons (adopted 6 September
1994, entered into force 28 March 1996).
International Convention for the Safety of Life at Sea of 1974, entered into force on 25 May
1980, 1184 UNTS 2.
International Convention on Civil Liability for Oil Pollution Damage (adopted 29
November 1969, entered into force 19 June 1975) 973 UNTS 3.
OECD Council Recommendation on Artificial Intelligence, adopted on 22 May 2019, C/
MIN(2019)3/FINAL.
Paris Agreement Under the United Nations Framework Convention on Climate Change
adopted on 12 December 2015, entered into force on 4 November 2016.
Kyoto Protocol to the United Nations Framework Convention on Climate Change, adopted on 11 December 1997, entered into force on 16 February 2005, 2303 UNTS 162.
Rome Statute of the International Criminal Court (adopted 17 July 1998, entered into force 1 July 2002), 2187 UNTS 90.
United Nations Convention on the Elimination of Discrimination against Women,
18 December 1979, 1249 UNTS 13.
United Nations Framework Convention on Climate Change, adopted 9 May 1992, entered into force 21 March 1994, 31 ILM 849.
Vienna Convention on Civil Liability for Nuclear Damage (adopted 21 May 1963, entered
into force 12 November 1977), 1063 UNTS 266.
EU Law
Commission Directive on the Provisions of Article 33 (7), on the abolition of measures
which have an effect equivalent to quantitative restrictions on imports and are not
covered by other provisions adopted in pursuance of the EEC Treaty (1969) OJ
L13/29.
Consolidated Version of the Treaty on the Functioning of the European Union (2012) OJ
C326/01.
Consolidated Version of the Treaty on the European Union (2012) OJ C326/15.
Council Decision (EEC) 93/389 of 24 June 1993 for a monitoring mechanism of
Community CO2 and other greenhouse gas emissions (1993) OJ L167/31.
Council Decision 94/571/EC adopting a specific programme for research and technological
development, including demonstration, in the field of industrial and materials
technologies (1994 – 1998) (1994) L222/23.
Council Directive 85/374/EEC on the approximation of the laws, regulations and
administrative provisions of the Member States concerning liability for defective products
(1985) OJ L210/29.
Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on
the legal protection of databases (1996) OJ L77/20.
Directive (EU) 2018/2002 of the European Parliament and of the Council of 11 December
2018 amending Directive 2012/27/EU on energy efficiency (2018) OJ L328/210.
Directive 2009/24/EC of the European Parliament and of the Council of 23 April 2009 on
the legal protection of computer programs (2009) OJ L 111/16.
Directive 2009/103/EC of the European Parliament and of the Council of 16 September
2009 relating to insurance against civil liability in respect of the use of motor vehicles, and
the enforcement of the obligation to insure against such liability, OJ L 263, 7 October
2009.
Directive 2010/30/EU of the European Parliament and of the Council of 19 May 2010 on
the indication by labelling and standard product information of the consumption of
energy and other resources by energy-related products (2010) OJ L153/1 (Energy
Labelling Directive).
Directive 2012/27/EU of the European Parliament and of the Council of 25 October
2012 on energy efficiency, amending Directives 2009/125/EC and 2010/30/EU and
repealing Directives 2004/8/EC and 2006/32/EC (2012) OJ L315/1 (Energy Efficiency
Directive).
Directive 2014/104/EU of the European Parliament and of the Council of 26 November
2014 on certain rules governing actions for damages under national law for infringements
of the competition law provisions of the Member States and of the European Union,
OJ L 349, 5.12.2014.
Regulation (EC) 725/2004 of the European Parliament and of the Council of 31 March
2004 on enhancing ship and port facility security.
Rome II Regulation (EC) 864/2007 of the European Parliament and of the Council of
11 July 2007 on the law applicable to non-contractual obligations (Rome II), OJ L 199,
31.7.2007.
Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing
of personal data and on the free movement of such data, and repealing Directive 95/46/
EC (General Data Protection Regulation) (2016) OJ L119.
Regulation (EU) 2017/1369 of the European Parliament and of the Council of 4 July
2017 setting a framework for energy labelling and repealing Directive 2010/30/EU
(2017) OJ L198/1.
Regulation (EU) 2018/1807 of the European Parliament and of the Council of
14 November 2018 on a framework for the free flow of non-personal data in the
European Union.
EU/EC Documents
Commission ‘Regulation of the European Parliament and of the Council Laying Down
Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending
Certain Union Legislative Acts’ (Proposal) COM (2021) 206 final.
Commission Regulation (EU) 2019/424 of 15 March 2019 laying down ecodesign
requirements for servers and data storage products pursuant to Directive 2009/125/EC
of the European Parliament and of the Council and amending Commission Regulation
(EU) No 617/2013 (2019) OJ L74/46.
Commission, ‘A New ERA for Research and Innovation’ (Communication) COM (2020)
628 final.
Commission, ‘A Policy Framework for Climate and Energy in the Period from 2020 to
2030’ (Communication) COM (2014) 15 final.
Commission, ‘Advancing the Internet of Things in Europe’ (Working document) SWD
(2016) 110 final.
Commission, ‘An Overall View of Energy Policy and Actions’ (Communication) COM
(1997) 167 final.
Commission, ‘Artificial Intelligence for Europe’ (Communication) COM (2018) 237 final.
Commission, ‘Communication to the European Parliament, the Council, the European
Economic and Social Committee and the Committee of the Regions. Tackling Online
Disinformation: A European Approach’ COM (2018) 236 final.
Commission, ‘Digitising European Industry: Reaping the full benefits of a Digital Single
Market’ (Communication) COM (2016) 180 final.
Commission, ‘European Climate Law’ (Proposal) COM (2020) 80 final.
Commission, ‘Free Flow of Data and Emerging Issues of the European Data Economy’
(Working document) SWD (2017) 2 final.
Commission, ‘Proposal for a Regulation of the European Parliament and of the Council.
Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)
and Amending Certain Union Legislative Acts’ COM (2021) 206 final 2021/0106
(COD).
Commission, ‘Research and Technological Development Activities of the European Union’
(Annual report) COM (1998) 439 final.
Commission, ‘Report to the European Parliament, the Council and the European Economic
and Social Committee, on the Safety and Liability Implications of Artificial Intelligence,
the Internet of Things and Robotics’ COM (2020) 64 final.
Commission, ‘Strengthening Environmental Integration within Community Energy Policy’ (Communication) COM (1998) 571 final.
Commission, ‘A Sustainable Europe for a Better World: A European Union Strategy for Sustainable Development’ (Communication) COM (2001) 264 final.
Commission, ‘The Common Policy in the Field of Science and Technology’
(Communication) COM (1977) 283 final.
Commission, ‘The European Green Deal’ (Communication) COM (2019) 640 final.
Commission, ‘The Future Activities of the Joint Research Centre’ (Communication) COM
(1983) 107 final.
Commission, ‘The Greenhouse Effect and the Community’ (Communication) COM
(1988) 656 final.
Commission, ‘The S and T Content of the Specific Programmes Implementing the 4th
Framework Programme for Community Research and Technological Development
(1994 – 1998) and the Framework Programme for Community Research and Training
for the European Atomic Energy Community (1994 – 1998)’ (Working document)
COM (1993) 459 final.
Commission, ‘White Paper on Artificial Intelligence – A European Approach to Excellence
and Trust’ COM (2020) 65 final.
Ethics Guidelines for Trustworthy AI, High-Level Expert Group on Artificial Intelligence
(Independent), European Commission B-1049 (Brussels, 8.4.2019).
European Commission, ‘Investor Citizenship and Residence Schemes in the European
Union’ Report from the Commission to the European Parliament, The Council, the
European Economic and Social Committee and the Committee of the Regions, Brussels,
23 January 2019, COM (2019) 12 final.
European Council, ‘European Council Meeting (12 December 2019)’ (Conclusions)
EUCO 29/19.
European Council, ‘Special Meeting of the European Council (1 and 2 October 2020)’
(Conclusions) EUCO 13/20.
Eurostat, ‘Primary and Final Energy Consumption Slowly Decreasing’ (28 January 2021)
<https://ec.europa.eu/eurostat/web/products-eurostat-news/-/ddn-20210128-1
?redirect=%2Feurostat%2Fweb%2Fenergy%2Fpublications> accessed 27 May 2021.
Eurostat, ‘Share of Renewable Energy in the EU up to 18.0%’ (23 January 2020) <https://
ec.europa.eu/info/news/share-renewable-energy-eu-180-2020-jan-23_en> accessed
27 May 2021.
Opinion of the European Economic and Social Committee on ‘The Perspectives of
European Coal and Steel Research’ (2005) OJ C294/7.
Proposal for a Council Decision adopting a research programme on reactor safety (1984 –
1987) (1983) OJ C250/6.
Public Consultation on AI White Paper: Final Report, European Commission, DG for
Communications Networks, Content and Technology (November 2020).
STOA, ‘Legal and Ethical Reflections Concerning Robotics’ (Policy Briefing) PE (2016)
563.501 <https://www.europarl.europa.eu/RegData/etudes/STUD/2016/563501/
EPRS_STU(2016)563501(ANN)_EN.pdf> accessed 27 May 2021.
STOA, ‘The Ethics of Artificial Intelligence: Issues and Initiatives’ (Study) PE (2020)
634.452.
Other documents
Basic Principles and Guidelines on the Right to a Remedy and Reparation for Victims
of Violations of International Human Rights and Humanitarian Law, adopted and
proclaimed by General Assembly resolution 60/147 of 16 December 2005.
Human Rights Council, ‘Report of the Independent International Commission on Inquiry
on the Syrian Arab Republic’, 9 August 2018, UN Doc A/HRC/39/65.
Human Rights Council, ‘Report of the Special Rapporteur on Extrajudicial, Summary or
Arbitrary Executions, Christof Heyns’, 9 April 2013, UN Doc A/HRC/23/47.
ILC, ‘Fifth Report on State Responsibility, by Mr. Gaetano Arangio-Ruiz, Special
Rapporteur’ (1993) II ILC Yearbook 1, UN Doc. A/CN.4/453 and Add. 1–3.
ILC, ‘Final Report of the International Law Commission: The Obligation to Extradite
or Prosecute (aut dedere aut iudicare) of its 66th Session’ (2014) YILC, vol. II (Part
Two).
ILC, ‘Report of the Study Group of the International Law Commission: Fragmentation
of International Law: Difficulties Arising from the Diversification and Expansion of
International Law’ (13 April 2006) A/CN.4/L.682.
International Law Commission, ‘Draft Articles on the Responsibility of International
Organisations, UN Docs A/66/10’ (2011) Yearbook of the International Law
Commission II, Part Two.
NATO, ‘NATO Nations Approve Civilian Casualty Guidelines’ (NATO, 6 August 2010)
<https://www.nato.int/cps/en/SID-9D9D8832-42250361/natolive/official_texts
_65114.htm> accessed 27 May 2021.
NATO, ‘Military Decision on MC 362/1 – NATO Rules of Engagement (Military
Decision)’, 30 June 2003, NATO UNCLASSIFIED.
Organisation on the Prohibition of Chemical Weapons, ‘Note by the Technical Secretariat,
Second Report by the OPCW Investigation and Identification Team Pursuant to
Paragraph 10 of Decision C-SS-4/Dec.3 <<Addressing the Threat from Chemical
Weapons Use>>’, 12 April 2021, OPCW Official Series S/1943/2021.
Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapon Systems,
12–16 December 2016, UN Doc CCW/CONF.V/2 (10 June 2016).
UN General Assembly, Resolution 56/83: Articles on Responsibility of States for
Internationally Wrongful Acts, (adopted 28.01.2002), UN Doc. A/RES/56/83 (2002).
Table of Legislation
Automated and Electric Vehicles Act 2018 (England & Wales)
Belgium, Défense, Etat-Major de la Défense, Ordre Général - J/836, establishing La
Commission d’Evaluation Juridique des nouvelles armes, des nouveaux moyens et des
nouvelles méthodes de guerre, 18 July 2002.
Copyright, Designs and Patents Act 1988 (England & Wales).
Copyright Law of the People’s Republic of China 2010 (as amended).
‘Decree of the President of the Russian Federation on the Development of Artificial
Intelligence in the Russian Federation’ (CSET, 28 October 2019).
French Decree n° 2018-211 of 28 March 2018 on Experimentation with Automated
Vehicles on Public Roads.
French Justice Reform Act 2019.
German Road Traffic Act (Straßenverkehrsgesetz).
Israeli Civil Wrongs (Liability of the State) Law, 5712 (1952).
President of the Russian Federation, ‘Decree of 10.10.2019 No. 490 on the Development
of Artificial Intelligence in the Russian Federation’.
U.S. Accountability Act of 2019.
U.S. Administrative Procedure Act (5 U.S.C. Subchapter II).
U.S. DoD, Directive 3000.09 Autonomy in Weapon Systems (2012).
U.S. Foreign Claims Act: 10 U.S. Code par. 2734. Property loss; personal injury or death:
incident to noncombat activities of the armed forces; foreign countries, 10 August 1956.
U.S. Foreign Claims Act: 10 U.S. Code par. 2735. Settlement: final and conclusive, 10
August 1956 (Office of the Law Revision Counsel).
U.S. Malicious Deep Fake Prohibition Act of 2018.
U.S. National Defense Authorization Act for Fiscal Year 2020.
Books
Abeyratne R., Megatrends and Air Transport (Springer 2017).
Abeyratne R., Strategic Issues in Air Transport: Legal, Economic and Technical Aspects (Springer
2012).
Adams M., de Waele H., Meeusen J., Straetmans G. (eds), Judging Europe’s Judges (Hart
Publishing 2013).
Arvind T.T., Contract Law (2nd ed., OUP 2019).
Bacchi C., Analysing Policy: What’s the Problem Represented to Be? (Pearson 2009).
Barfield W., Pagallo U. (eds), Research Handbook on the Law of Artificial Intelligence (Edward
Elgar 2018).
Barton B., Lucas A., Barrera-Hernández L., Rønne A. (eds), Regulating Energy and Natural
Resources (OUP 2006).
Bazarkina D., Pashentsev E., Simons G. (eds), Terrorism and Advanced Technologies in
Psychological Warfare: New Risks, New Opportunities to Counter the Terrorist Threat (Nova
Science Publishers 2020).
Beauchamp T., Childress J., Principles of Biomedical Ethics (5th ed., OUP 2001).
Beck G., The Legal Reasoning of the Court of Justice of the EU (Hart Publishing 2012).
Bektaş T., Coniglio S., Martinez-Sykora A., Voß S. (eds), Computational Logistics (Springer
2017).
Bell J., Boyron S., Whittaker S., Principles of French Law (2nd ed., OUP 2008).
Bell S., McGillivray D., Pedersen O., Environmental Law (8th ed., OUP 2013).
Bernstein P.L., Against the Gods: The Remarkable Story of Risk (Wiley 1998).
Besson S., Tasioulas J. (eds), The Philosophy of International Law (OUP 2010).
Bhuta N., et al. (eds), Autonomous Weapons Systems: Law, Ethics, Policy (CUP 2016).
Boothby W.H. (ed.), New Technologies and the Law in War and Peace (CUP 2018).
Boothby W.H., Weapons and the Law of Armed Conflict (2nd ed., OUP 2016).
Bothe M., Partsch K.J., Solf W.A., New Rules for Victims of Armed Conflicts: Commentary on
the Two 1977 Protocols Additional to the Geneva Conventions of 1949 (2nd ed., Martinus
Nijhoff 2013).
Boulanin V. (ed.), The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk,
Volume I: Euro-Atlantic Perspectives (SIPRI 2019).
Morgan B., Yeung K., An Introduction to Law and Regulation: Text and Materials (CUP
2007).
Buyers J., Artificial Intelligence: The Practical Legal Issues (Law Brief Publishing 2018).
Cappelletti M., Seccombe M., Weiler J.H.H. (eds), Integration Through Law: Europe and the
American Federal Experience (Walter de Gruyter and Co 1986).
Clayton G., Firth G., Immigration & Asylum Law (8th ed., OUP 2018).
Corn G.S., Van Landingham R.E., Reeves S.R. (eds), U.S. Military Operations (OUP 2016).
Corrales M., Fenwick M., Forgo N. (eds), Robotics, AI and the Future of Law (Springer 2018).
Crawford J., Pellet A., Olleson S. (eds), The Law of International Responsibility (OUP 2010).
Creutz K., State Responsibility in the International Legal Order: A Critical Appraisal (CUP 2020).
Croley S.P., Regulation and Public Interests: The Possibility of Good Regulatory Government (PUP
2008).
Dawson M., de Witte B., Muir E. (eds), Judicial Activism at the European Court of Justice
(Edward Elgar Publishing 2013).
Delerue F., Cyber Operations and International Law (CUP 2020).
Dennett A., Public Law Directions (OUP 2019).
Dignum V., Responsible Artificial Intelligence, in Artificial Intelligence: Foundations, Theory, and
Algorithms (Springer 2019).
Embley J., Goodchild P., Shephard C., Slorach S., Legal Systems & Skills (4th ed., OUP 2020).
Evon A-T., El Sheikh A., Jafari M. (eds), Technology Engineering and Management in Aviation:
Advancements and Discoveries (IGI Global 2012).
Fastenrath U., Geiger R., Khan D.E., Paulus A., Schorlemer S. von, Vedder Ch. (eds), From
Bilateralism to Community Interest: Essays in Honour of Bruno Simma (OUP 2011).
Forlati S., Franzina P. (eds), Universal Civil Jurisdiction (Brill Nijhoff 2020).
Foster N., Sule S., German Legal System and Laws (4th ed., OUP 2011).
Gates K.A., Our Biometric Future: Facial Recognition Technology and the Culture of Surveillance
(NYU Press 2011).
Giliker P., Vicarious Liability in Tort: A Comparative Perspective (CUP 2010).
Grzeszczak R. (ed.), Economic Freedom and Market Regulation: In Search of Proper Balance
(Nomos 2020).
Guldahl C.C., NATO Rules of Engagement: On ROE, Self-Defence and the Use of Force during
Armed Conflict (Brill Nijhoff 2019).
Herring J., Legal Ethics (2nd ed., OUP 2017).
Higgins R., Problems and Process: International Law and How We Use It (Clarendon Press
1994).
Infantino M., Zervogianni E. (eds), Causation in European Tort Law (CUP 2017).
Kaplan J., Artificial Intelligence: What Everyone Needs to Know (OUP 2016).
Kearns M., Roth A., The Ethical Algorithm (OUP 2019).
Kello L., The Virtual Weapon and International Order (Yale University Press 2017).
Kindt E., Privacy and Data Protection Issues of Biometric Applications (1st ed., Springer 2013).
Kinsey C., Corporate Soldiers and International Security. The Rise of Private Military Companies
(Routledge 2006).
Koch B.A., Koziol H., Unification of Tort Law: Strict Liability (Kluwer International Publishing
2002).
Kötz H., Wagner G., Deliktsrecht (Franz Vahlen 2016).
Koulu R., Kontiainen L., How Will AI Shape the Future of Law? (Legal Tech Lab 2019).
Kurki V.A.J., Pietrzykowski T. (eds), Legal Personhood: Animals, Artificial Intelligence and the
Unborn (Springer 2017).
Kushan M.G. (ed.), Aircraft Technology (IntechOpen 2018).
Legg M., Bell F., Artificial Intelligence and the Legal Profession (Hart 2020).
Leveringhaus A., Ethics and Autonomous Weapons (Palgrave Macmillan 2016).
LexisNexis PSL TMT Team (ed.), An Introduction to Technology Law (LexisNexis 2018).
Lohsse S., Schulze R., Staudenmayer D. (eds), Liability for Artificial Intelligence and the Internet
of Things (Hart 2019).
Luger G.F., Stubblefield W.A., Artificial Intelligence: Structures and Strategies for Complex
Problem Solving (6th ed., Pearson 2008).
Macdonald E., Atkins R., Koffman & Macdonald’s Law of Contract (9th ed., OUP 2018).
Maglogiannis I., Iliadis L., Pimenidis E. (eds), Artificial Intelligence Applications and Innovations
(Springer 2020).
Majumdar M.C., Majumdar D., Sackett J.I. (eds), Artificial Intelligence and Other Innovative
Computer Applications in the Nuclear Industry (Springer 1988).
Markesinis B.S., Unberath H., The German Law of Torts: A Comparative Treatise (Hart 2002).
Marszałek-Kawa J. (ed.), Economic and Energy Stability in Asia: Perspectives and Scenarios
(Marszałek Publishing House 2016).
Massai L., European Climate and Clean Energy law and Policy (Earthscan 2012).
Matthews H., Pioneer Aviators of the World: A Biographical Dictionary of the First Pilots of
100 Countries (McFarland 2003).
Mikanagi Y., Japan’s Trade Policy: Action or Reaction? (Routledge 1996).
Morgera E. (ed.), The External Environmental Policy of the European Union: EU and International
Law Perspectives (CUP 2012).
Müller V. (ed.), Philosophy and Theory of Artificial Intelligence 2017 (Springer 2018).
Hatti M. (ed.), Artificial Intelligence in Renewable Energetic Systems: Smart Sustainable Energy
Systems (Springer 2018).
Newton M., May L., Proportionality in International Law (OUP 2014).
Neznamov A. (ed.), Novye zakony robototehniki. Reguljatornyj landshaft. Mirovoj opyt
regulirovanija robototehniki i tehnologij iskusstvennogo intellekta (New Laws of Robotics. The
Regulatory Landscape. Global Experience in Regulating Robotics and Artificial Intelligence
Technologies) (Infotropic Media 2018).
Nowakowska-Małusecka J., Indywidualna odpowiedzialność karna za zbrodnie popełnione w byłej
Jugosławii i Rwandzie [Individual Criminal Responsibility for Crimes Committed in the Former
Yugoslavia and Rwanda] (Wydawnictwo Uniwersytetu Śląskiego 2000).
O’Connell R.L., Of Arms and Men: A History of War, Weapons and Aggression (OUP 1990).
Ogus A.I., Regulation: Legal Form and Economic Theory (OUP 1994).
Parpworth N., Constitutional and Administrative Law (11th ed., OUP 2020).
Paunio E., Legal Certainty in Multilingual EU Law: Language, Discourse and Reasoning at the
European Court of Justice (Ashgate Publishing 2013).
Pellegrino F., The Just Culture Principles in Aviation Law: Towards a Safety-Oriented Approach
(Springer 2019).
Posner R.A., Economic Analysis of Law (8th ed., Wolters Kluwer 2011).
Runco M., Pritzker S. (eds), Encyclopaedia of Creativity (3rd ed., Elsevier 2020).
Scharre P., Army of None (W. W. Norton & Company 2018).
Segal G., Goodman D.S.G. (eds), Towards Recovery in Pacific Asia (Routledge 2000).
Shelton D. (ed.), The Oxford Handbook of International Human Rights Law (OUP 2015).
Shmelova T., Sikirda Y., Sterenharz A. (eds), Handbook of Research on Artificial Intelligence
Applications in the Aviation and Aerospace Industries (IGI Global 2020).
Sokołowski M.M., European Law on Combined Heat and Power (Routledge 2020).
Sokołowski M.M., Regulation in the European Electricity Sector (Routledge 2016).
Sonnefeld R. (ed.), Odpowiedzialność państwa w prawie międzynarodowym [State Responsibility in
International Law] (Polski Instytut Spraw Międzynarodowych 1980).
Stefano C. de, Attribution in International Law and Arbitration (OUP 2020).
Twidell J., Weir T., Renewable Energy Sources (3rd ed., Routledge 2015).
Hacioglu U. (ed.), Digital Business Strategies in Blockchain Ecosystems: Transformational Design
and Future of Global Business (Springer 2020).
Van den Bergh R., The Roundabouts of European Law and Economics (Eleven International
Publishing 2018).
Visvizi A., Lytras M.D., Mudri G. (eds), Smart Villages in the EU and Beyond (Emerald 2019).
Waisberg N., Hudek A., AI for Lawyers: How Artificial Intelligence is Adding Value, Amplifying
Expertise, and Transforming Careers (Wiley 2021).
Weller M., The Oxford Handbook of the Use of Force in International Law (OUP 2015).
Wilson S., Rutherford H., Storey T., Wortley N., Kotecha B., English Legal System (4th ed.,
OUP 2020).
Zeben van J., Rowell A., A Guide to EU Environmental Law (University of California Press
2020).
Zimmerman J. (ed.), Aksjologia prawa administracyjnego [Axiology of Administrative Law] vol. 2
(Wolters Kluwer Polska 2017).
Law enforcement 21, 28–29, 40, 44, 57
Law firms 84
Law of Armed Conflict (LOAC) 156, 158, 162–170, 174–183, 189, 194, 201–204
Lawful objectives 163, 166, 202
Lawful measures 25, 31, 179
LAWS see Lethal Autonomous Weapons Systems
Legal: Advisors 159, 161; Aid 91; Framework 49, 64, 77, 213; Practice 83–88, 90; professional privilege; research 83–84; sector 18, 90, 91, 96; services 95
Lethal Autonomous Weapons Systems 156, 180, 190
LexisNexis 84
Liability: fault-based 133; product 10, 132, 135; strict / risk-based 133, 136; vicarious 133, 136
Litigation 55, 85, 93, 133
Litigation analytics 84
Long-Range Anti-Ship Missile (LRASM) 199–200
Patient: care 66–67, 73, 78–80; doctor-patient relationship 69, 71, 73, 75–76, 80–82; end-of-life care 66–68
Pattern 6, 8, 59–60, 67, 70, 84–85, 103, 108, 120, 130, 144, 155, 159, 161
Personal data 8, 9, 11, 23–24, 27–33, 46, 49, 95
Phishing 36, 39–40, 51
Piracy (copyright) 99
Police 29–30, 55, 57
Pre-trial 86–87, 96
Pre-Trial Risk Assessment Instrument (PTRA) 86–87
Predictive analytics 38, 62, 83–85, 91, 116, 121
Principle of distinction 161, 170, 173, 190, 201–202, 205
Privacy 21, 25, 28–35, 42, 44–45, 77, 80–81, 89–92, 213
Privilege: travel 61; legal professional 93–95
Proportionality 16–17, 166–167, 169, 190, 194
Psychological: security 36–37, 46; warfare 37–39
Public interest 30, 32, 177
Public safety assessment (PSA) 86–87
Racial bias 8, 24, 41, 44
Ranking technologies 38–39
Real-time 27, 60, 121, 131, 139, 144, 155, 203
Reasoning 18, 62, 104, 149, 192, 201
Regulatory: analytics 213; compliance 5–7, 25, 30–34, 54–59, 78, 89, 95, 115, 136, 159, 164, 168, 180, 201–203, 211; framework 9–11, 41, 77, 80, 122–125, 134, 214; precautions 200
Reliability 32, 71–75, 80, 91, 192
Renewable energy sources 139, 141–147, 153–155
Responsibility gap 201, 203–204, 212
Risk: assessment 35, 64, 86–87, 143; in healthcare 70–82; management 115, 118–124, 129
Robotics 40–41, 78, 108, 112, 138–139, 143, 149–151, 154
Rule of attribution 183
Rule of law 42, 90–91, 96
Rules of engagement 191
Russia: AI strategy 158, 212; legal response to AI 45–49; Ministry of Defence 47
Safeguards 8, 33, 48, 73–74, 77–81, 91, 96, 175, 180, 209, 214
Safety standards 118, 123, 136–137
SARP see Standards and Recommended Practices
Seaport activities 127
Secrecy 55, 95
Sensitive: data 24, 96; information 86–87, 93
Sentencing 74
Smart manufacturing 144
Smart port ecosystem 129–134
Smart ships 129, 133
Social engineering 40
Social media 101
Soft law 33
Solatia payments 188
Solicitors 90, 95–96
Solicitors Regulations Authority (SRA) for England and Wales 90, 95–96
Standards and Recommended Practices (SARP) 124
State: Liability 176, 186; Responsibility 162, 176–177, 182–183, 185, 189, 204
STOA see European Parliament’s Science and Technology Options Assessment
Stockholm International Peace Research Institute (SIPRI) 159, 165
Stop Killer Robots Campaign 192–193, 195, 200, 202–205, 211–212
Stuxnet 209
Supreme Court of India 86, 92
Supreme Court of the United States 198
Surveillance 60, 120, 185
Sustainable development 52, 79, 129, 140, 147
Swarm 196
Sweden: Analytic Imaging Diagnostic Arena 79; healthcare in 77–79; National Strategy for AI 78; statement on lethal autonomous weapon systems 159; weapons review 179
Syria: attacks in 181; Pantsir-S1 SAM batteries 199
Tax 55–56, 64, 142
Technology: audio-visual 42, 46, 91; blockchain 147; communications 50, 96; facial recognition 21–28, 31–35; information 130, 149
Templates 21–22, 24, 26, 30, 32, 34–35, 111
Third-party suppliers 131
Tort 132–134, 136 see also war torts
Trade-off 26–27, 208
Trade secrets 55
Transparency (principle) 3–13, 19, 25–33, 42, 49, 57, 79, 88–90, 92–93, 95–96, 187, 213
Trial 41, 86–87, 89, 91–92, 109
Unintended engagements (military) 161, 172, 191
United Kingdom (UK): AI Committee of the House of Lords 78; Centre for Data Ethics and Innovation 78; Copyright, Designs and Patents Act 1988 (CDPA) 106–108; Immigrant Investor Programme 54; use of uncontrolled weapons 181; Taranis combat drone 178
United Nations: Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment 182; Convention on the Elimination of Discrimination Against Women 64; Convention on the Safety of United Nations and Associated Personnel 182; Framework Convention on Climate Change 141–142; High Commissioner for Human Rights 36; Register of Conventional Arms 185; Sustainable Development Goals 52, 147
United States (US): Air Force 169, 194, 206–208; Army 207; Congress 44, 158; copyright law 105, 109; Copyright Office 105; Defense Science Board 192, 208; Foreign Claims Act 188; Immigrant Investor Programme 54; Legal Reviews of Weapons and Cyber Capabilities 180; National Cyber Security Centre 96; National Defense Strategy 196; Operational Law Handbook 188; Phalanx systems 178; policy statements 197–198; solatia and condolence payments 188–189; Supreme Court 198; Third Offset Strategy 196, 212; weapons review 179
Universal jurisdiction 181–182
Unlawful acts 30, 132, 185, 204
Unmanned aerial vehicle (UAV) 36, 46
Virtual agents 116
Virtual reality and AI 46, 60
Voice recognition 70, 115
War torts 186
Weapons review 180
Westlaw 84
White Paper on AI 3–4, 7–19, 41, 78, 89, 93, 116
Wireless 128–129
World Intellectual Property Organisation (WIPO) 99, 104–106, 112