Algorithms permeate our lives in numerous ways, performing tasks that until recently
could only be carried out by humans. Artificial Intelligence (AI) technologies, based on
machine learning algorithms and big-data-powered systems, can perform sophisticated
tasks such as driving cars, analyzing medical data, and evaluating and executing complex
financial transactions – often without active human control or supervision. Algorithms
also play an important role in determining retail pricing, online advertising, loan
qualification, and airport security. In this work, Martin Ebers and Susana Navas bring
together a group of scholars and practitioners from across Europe and the US to analyze
how this shift from human actors to computers presents both practical and conceptual
challenges for legal and regulatory systems. This book should be read by anyone
interested in the intersection between computer science and law, how the law can better
regulate algorithmic design, and the legal ramifications for citizens whose behavior is
increasingly dictated by algorithms.
martin ebers is Associate Professor of IT Law at the University of Tartu, Estonia and
permanent research fellow at the Humboldt University of Berlin. He is co-founder and
president of the Robotics & AI Law Society (RAILS). In addition to research and
teaching, he has been active in the field of legal consulting for many years. His main
areas of expertise and research are IT law, liability and insurance law, and European and
comparative law. In 2016, he published the monograph Rights, Remedies and Sanctions
in EU Private Law. Most recently, he co-edited the book Rechtshandbuch Künstliche
Intelligenz und Robotik (C.H. Beck 2020).
susana navas is Professor of Private Law at the Autonomous University of Barcelona,
Spain. Her main fields of interest are very broad, comprising matters as varied as child
law, copyright law, and European private law. In recent years she has focused on the
study of digital law. Her most recent publications in this field are Inteligencia artificial.
Tecnología. Derecho (Tirant Lo Blanch 2017), El ciborg humano (Comares 2018), and
Nuevos desafíos para el Derecho de autor. Robótica, Inteligencia artificial y Derecho
(Reus 2019). She has been involved in a number of research projects and has been a key
speaker at many conferences and workshops at national and European level. She has
enjoyed research stays at European and North American institutes and universities
and has supervised several doctoral theses that have been published.
Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:34:19, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms.
https://www.cambridge.org/core/product/E88C575B4D859FBE71C9B0BA97B9EF80
Algorithms and Law
Edited by
MARTIN EBERS
Humboldt University of Berlin
University of Tartu
SUSANA NAVAS
Autonomous University of Barcelona
University Printing House, Cambridge cb2 8bs, United Kingdom
One Liberty Plaza, 20th Floor, New York, ny 10006, USA
477 Williamstown Road, Port Melbourne, vic 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906
www.cambridge.org
Information on this title: www.cambridge.org/9781108424820
doi: 10.1017/9781108347846
© Cambridge University Press 2020
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2020
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
names: Ebers, Martin, 1970– author. | Navas, Susana, 1966– author.
title: Algorithms and law / Martin Ebers, Susana Navas.
description: 1. | New York : Cambridge University Press, 2020. | Includes bibliographical
references and index.
identifiers: lccn 2019039616 (print) | lccn 2019039617 (ebook) | isbn 9781108424820 (hardback) |
isbn 9781108347846 (epub)
subjects: lcsh: Law–Data processing. | Information storage and retrieval systems–Law. |
Computer networks–Law and legislation.
classification: lcc k87 .e24 2020 (print) | lcc K87 (ebook) | ddc 343.09/99–dc23
LC record available at https://lccn.loc.gov/2019039616
LC ebook record available at https://lccn.loc.gov/2019039617
isbn 978-1-108-42482-0 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.
Contents
List of Figures and Tables
figures
1.1 Overview of available mobile robotic systems page 28
1.2 Overview of existing and upcoming service-oriented humanoid
systems 31
1.3 Telemedicine case scenario 35
tables
9.1 Six legal requirements to achieve and enforce wake neutrality 243
9.2 Legal risks of AI agent-contracting processes 258
Notes on Contributors
Aachen University. He has published more than 130 articles in international journals
and for conferences. Awards received include the George Giralt PhD Award (2012),
the RSS Early Career Spotlight (2015) and IEEE/RAS Early Career Award (2015),
the Alfred Krupp Award for Young Professors (2015), the German Future Prize of the
Federal President (2017), and the Leibniz Prize (2019).
Ruth Janal is Professor of Civil Law, Intellectual Property and Commercial Law at
the University of Bayreuth, Germany. She has authored and co-authored several
books on consumer protection, unfair commercial practices, comparative IP law,
and international civil procedure. Her current research focuses on the interplay
between the digital transformation and private law. She has given presentations and
written articles on commercial communication in the digital space, data access in
connected cars, liability of internet intermediaries, data protection in the Internet of
Things, and algorithmic decision-making.
Dennis Knobbe is a PhD student in the Department of Robotics and System
Intelligence at the Technical University of Munich (TUM), Germany. In 2016 he
was awarded an MSc in electrical engineering and information technology, with a
focus on control and systems theory, from the Christian-Albrecht University of Kiel.
His research interests are modeling, analysis and control of complex dynamic
systems, optimal and adaptive control, as well as collective intelligence, systems
biology, and bioinformatics.
Mario Martini holds the Chair of Administrative Science, Constitutional Law,
Administrative Law and European Law at the German University of Administrative
Sciences Speyer and is head of the Transformation of the State in the Digital Age
program at the German Research Institute for Public Administration, a fellow at the
Center for Advanced Internet Studies and a member of the German government’s
Data Ethics Commission. Since 2016, he has directed the Digitization program at the
German Research Institute for Public Administration. Until April 2010, he held a
chair in constitutional and administrative law at the Ludwig Maximilian University in
Munich. Mario Martini habilitated at the Bucerius Law School in 2006 and received
his PhD from the Johannes Gutenberg University, Mainz in 2000. His research
focuses in particular on the internet, data protection, media and telecommunications
law, law and economics, as well as open government and artificial intelligence.
Susana Navas is Professor of Private Law at the Autonomous University of Barce-
lona, Spain. Her main fields of interest are very broad, comprising matters as varied
as child law, copyright law, and European private law. In recent years she has
focused on the study of digital law. She is author or editor of more than 13 books
and over 80 articles, reviews, and chapters for national and international publishing
houses and journals. She has been involved in a range of research projects and has
been a key speaker at many conferences and workshops at national and European
level. She has enjoyed research stays at European and North American institutes and
Gerald Spindler studied law and economics in Frankfurt am Main, Hagen, Geneva,
and Lausanne. He is Professor of Civil Law, Commercial and Economic Law,
Comparative Law, Multimedia and Telecommunications Law at the University of
Göttingen, Germany, where he is occupied, among other topics, with the legal
aspects of e-commerce. He was elected a full tenured member of the German
Academy of Sciences, Göttingen in 2004. He has published over 300 articles in
law reviews, as well as expert legal opinions. He serves as general rapporteur regarding
privacy and personality rights on the internet for the biennial German Law
Conference. He is editor of two of the best-known German law reviews covering the whole
field of cyberspace law and telecommunications law, as well as co-editor of
international journals on copyright law. He is also the founder and editor of JIPITEC,
an open-access journal for intellectual property rights and e-commerce. In
2007 he was commissioned by the EU to review the e-commerce directive and
currently acts as an expert on data economy for the single market (2017).
Björn Steinrötter was recently appointed junior professor of IT law and media law
at the University of Potsdam, Germany. Prior to that he was a postdoctoral
researcher at the Institute for Legal Informatics, Leibniz University Hanover, Ger-
many. His research activities focus on private law, its European and international
implications, IT law, in particular data protection and data economy law, and IP
law. He is a founding and board member of the Robotics and Artificial Intelligence
Law Society (RAILS; www.ai-laws.org).
Brian Subirana is Director of the MIT Auto-ID lab, and teaches at both MIT and
Harvard University, USA. Prof. Subirana’s research centers on fundamental
advances at the intersection of the Internet of Things (IoT) and Artificial
Intelligence, focusing on use-inspired applications in industries such as sports, retail,
health, manufacturing, and education. He wants to contribute to a world where
spaces can have their own “brain” with which humans can converse. His Harvard
classes on artificial intelligence and the science of intelligence are the first MIT-run
non-residential online classes ever to offer academic credits. His MIT Sloan class
was the first course ever to offer a recorded lecture on MIT Open Courseware. He
obtained his PhD in computer science at the MIT Artificial Intelligence Laboratory
(now CSAIL) and his MBA at MIT Sloan, and has been affiliated with MIT for over 20
years in various capacities, including visiting professor at the MIT Sloan School of
Management. He has founded three start-ups and earlier in his career he worked at
The Boston Consulting Group. He has over 200 publications, including three
books, one of them on legal programming, and is currently working on publishing
the MIT Voice Name System, a conversational commerce open standard that can
be used in multiple industries such as health, education and retail.
Preface
1 For definitions of the terms “algorithms”, “artificial intelligence”, “robotics”, “machine learning”, etc., used in this volume, see 1.2.1, 1.2.3 and 2.1.2.
In this volume, German and Spanish scholars have collaborated to study the
practical and legal implications that algorithms present for individuals, society, and
political and economic systems – discussing the various policy options for future
regulation and ethical codes.
Acknowledgments
The editors are grateful to the Autonomous University of Barcelona (Spain) for
providing access to relevant materials during the preparation of this book.
This book was supported by the Estonian Research Council’s grant no. PRG124
and by the Research Project “Machine learning and AI powered public service
delivery”, RITA1/02-96-04, funded by the Estonian Government.
1
introduction
The rise of artificial intelligence is mainly associated with software-based robotic
systems such as mobile robots, unmanned aerial vehicles, and increasingly, semi-
autonomous cars. However, the large gap between the algorithmic and physical
worlds leaves existing systems still far from the vision of intelligent and human-
friendly robots capable of interacting with and manipulating our human-centered
world. The emerging discipline of machine intelligence (MI), unifying robotics and
artificial intelligence, aims for trustworthy, embodiment-aware artificial intelligence
that is conscious both of itself and its surroundings, adapting its systems to the
interactive body it is controlling. The integration of AI and robotics with control,
perception and machine-learning systems is crucial if these truly autonomous
intelligent systems are to become a reality in our daily lives. Following a review of
the history of machine intelligence dating back to its origins in the twelfth century,
this chapter discusses the current state of robotics and AI, reviews key systems and
modern research directions, outlines remaining challenges and envisages a future of
man and machine that is yet to be built.
golem was described as a harmless creature used by its creator as a servant. In the
legend of the golem of Prague, first written down at the beginning of the nineteenth
century, Rabbi Löw created the golem to relieve him of heavy physical work and to
serve humans in general.2 The real-world realization of this idea had a long way to
go. Some of the earliest scientific writings relating to machine intelligence date back
to the fifteenth century, the period of the Renaissance. Leonardo da Vinci
(1452‒1519), the universal savant of his time,3 decisively influenced both art and
science with a variety of inventions, including, for example, a mechanical jumper,
hydraulic pumps, musical instruments, and many more. However, the two inven-
tions that stand out from a robotics point of view were Leonardo’s autonomous flying
machine and his mechanical knight, also known as Leonardo’s robot.4 The latter is a
mechanism integrated into a knight’s armor, which could be operated via rope pulls
and deflection pulleys, enabling it to perform various human-like movements ‒
clearly first steps in robotics. Wilhelm Schickard (1592‒1635)5 developed and built
the first known working mechanical calculator. It was a gear-based multiplication
machine that was also used for some of Kepler’s lunar orbit calculations.
Sir Isaac Newton (1642‒1726), one of the world’s greatest physicists, is best known
for laying the foundations of classical physics by formulating the three laws of
motion.6 He was also an outstanding mathematician, astronomer and theologian.
In the field of mathematics, he developed a widely used technique for solving
optimization problems (nowadays called Newton’s method) and founded the field
of infinitesimal calculus. Gottfried Wilhelm Leibniz (1646‒1716) worked in parallel
with Newton on this topic but conceived the ideas of differential and integral
calculus independently of Newton.7 Leibniz, who is known for various other
contributions to science, is often referred to as one of the first computer scientists
due to his research on the binary number system. Slightly later, Pierre Jaquet-Droz
(1721‒1790) built amazing mechanical inventions such as The Writer, The Musi-
cian and The Draughtsman.8 The Draughtsman, for example, is a mechanical doll
that draws with a quill pen and real ink on paper. The input device was a cam disk
that essentially functions as a programmable memory defining the picture to be
drawn. With three different cam disks, The Draughtsman was able to draw four
different artworks. In addition to these fascinating machines, Jaquet-Droz and his
2 Grün and Müller, Der hohe Rabbi Löw und sein Sagenkreis (Verlag von Jakob B Brandeis 1885).
3 Grewenig and Otto, Leonardo da Vinci: Künstler, Erfinder, Wissenschaftler (Historisches Museum der Pfalz 1995).
4 Moran, “The da Vinci Robot” (2006) 20(12) Journal of Endourology 986–990.
5 Nilsson, The Quest for Artificial Intelligence (Cambridge University Press 2009).
6 Westfall, Never at Rest. A Biography of Isaac Newton (Cambridge University Press 1984).
7 Nilsson (n 5).
8 Soriano, Battaïni, and Bordeau, Mechanische Spielfiguren aus vergangenen Zeiten (Sauret 1985).
Robotics and Artificial Intelligence 3
business partner Jean-Frédéric Leschot later started to build prosthetic limbs for
amputees.
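Newton’s method, mentioned above, refines a guess x by the update x ← x − f(x)/f′(x) until the step becomes negligible. A minimal sketch (the function and variable names here are illustrative, not from this chapter):

```python
def newton(f, df, x, tol=1e-10, max_iter=50):
    """Iterate x <- x - f(x)/df(x) until the step is below tol."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: a root of f(x) = x^2 - 2, i.e. the square root of 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x=1.0)
```

Applied to the derivative of a cost function rather than to the function itself, the same iteration solves the optimization problems the text refers to.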
Another memorable figure in the history of machine intelligence is Augusta Ada
Byron King (1815‒1852).9 The Countess of Lovelace is known to be one of the first to
recognize the full potential of a computing machine. She wrote the first computer
program in history, which was designed to be used for the theoretical analytical
engine proposed by Charles Babbage. The programming language Ada was named
after her. These fundamental technological advances in the areas of mechanics,
electronics, communications and computation paved the way for the introduction
of the first usable computing machines and control systems, which began around
1868. The first automatic motion machines were systematically analyzed, docu-
mented, reconstructed, and taught via collections of mechanisms.
A mechanism can be defined as an automaton that transforms continuous,
typically linear, movements into complex spatial motions. Ludwig Burmester
(1840‒1927) was a mathematician, engineer and inventor, and the first person to
develop a theory for the analysis and synthesis of motion machines.10 Later in this
period, Czech writer and dramatist Karel Čapek (1890‒1938) first used the word
“robot” in his science-fiction work. The word “robot” is derived from robota, which
originally meant serfdom, but is now used in Czech for “hard work.” Through his
1920 play R.U.R. (Rossum’s Universal Robots), Čapek spread his definition of robot to
a wider audience.11 In this play, the robots were manufactured to industry standards
from synthetic organic materials and used as workers in industry to relieve people
from heavy and hard work.
We now come to the pre-eminent philosopher and mathematician Norbert
Wiener (1894‒1964). From his original research field of stochastic and mathematical
noise processes, he and his colleagues Arturo Rosenblueth, Julian Bigelow and
others founded the discipline of cybernetics in the 1940s.12 Cybernetics combines
the analysis of self-regulatory processes with information theory to produce new
concepts, which can be said to be the precursors of modern control engineering,
thus building significant aspects of the theoretical foundations of robotics and AI.
Wiener developed a new and deeper understanding of the notion of feedback,
which has significantly influenced a broad spectrum of natural science disciplines.
Alan Turing (1912‒1954) worked in parallel with Wiener in the field of theoretical
computer science and artificial intelligence.13 Most people interested in artificial
intelligence today are familiar with his name through the Turing test. This test was
9 Nilsson (n 5).
10 Koetsier, “Ludwig Burmester (1840–1927)” in Ceccarelli (ed), Distinguished Figures in Mechanism and Machine Science, History of Mechanism and Machine Science, vol 7 (Springer 2009) 43–64.
11 Nilsson (n 5).
12 Ibid.
13 Ibid.
4 Sami Haddadin and Dennis Knobbe
14 Ibid.
15 Von Neumann and Burks, “Theory of Self-Reproducing Automata” (1966) 5(1) IEEE Transactions on Neural Networks 3.
16 Shannon, “A Mathematical Theory of Communication” (1948) 27(3) Bell System Technical Journal 379‒423.
17 Shannon, “Communication in the Presence of Noise” (1949) 86 Proceedings of the IRE 10–21. 10.1109/JRPROC.
18 Shannon, “Communication Theory of Secrecy Systems” (1949) 28(4) Bell System Technical Journal 656‒715.
19 Bauer et al., Die Rechenmaschinen von Konrad Zuse (Springer 2013).
20 Eibisch, “Eine Maschine baut eine Maschine baut eine Maschine. . .” (2011) 1 Kultur und Technik 48‒51.
21 Zuse, “Gedanken zur Automation und zum Problem der technischen Keimzelle” (1956) 1(1) Unternehmensforschung 160‒165.
22 Ibid.
23 Feynman, “There’s Plenty of Room at the Bottom,” talk given on 29 December 1959 (1960) 23(22) Science and Engineering 1–13.
micro- and nanotechnology, which speaks for the high regard in which his early
vision is held in expert circles.
Very few people had the knowledge and skills to program complex early comput-
ing machines like the Z3 computer. Unlike today’s programming languages that use
digital sequence code, these machines were programmed with the help of strip-
shaped data carriers made of paper, plastic or a metal-plastic laminate, which store
the information or the code lines in the punched hole patterns. One person who
mastered and shaped this type of programming was American computer scientist
Grace Hopper (1906‒1992).24 She did not work with the Z3, but on the Mark I and II
computers, and she designed the first compiler, called A-0. A compiler is a program that
translates human-readable programming code into machine-readable code. She also
invented the first machine-independent programming language, which led to high-
level languages as we know them today.
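Hopper’s compilers were of course far more elaborate, but the core idea — translating human-readable code into machine-readable instructions — can be sketched with a toy compiler for arithmetic expressions targeting an invented stack machine (all names below are illustrative, not any historical system):

```python
import ast

def compile_expr(source):
    """Translate an arithmetic expression into postfix stack-machine instructions."""
    ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

    def emit(node):
        if isinstance(node, ast.BinOp):
            # Compile both operands, then the operator (postfix order).
            return emit(node.left) + emit(node.right) + [ops[type(node.op)]]
        if isinstance(node, ast.Constant):
            return [("PUSH", node.value)]
        raise ValueError("unsupported construct")

    return emit(ast.parse(source, mode="eval").body)

def run(program):
    """A tiny stack machine executing the compiled instructions."""
    stack = []
    for instr in program:
        if isinstance(instr, tuple):          # ("PUSH", value)
            stack.append(instr[1])
        else:                                 # binary operator
            b, a = stack.pop(), stack.pop()
            stack.append({"ADD": a + b, "SUB": a - b,
                          "MUL": a * b, "DIV": a / b}[instr])
    return stack[0]

result = run(compile_expr("2 + 3 * 4"))
```

The two halves mirror the division of labor the text describes: `compile_expr` plays the compiler, `run` the machine.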
Returning to robotics in literature, a short story that still exerts a powerful influ-
ence on real-world implementation of modern robotics and AI systems as we know
them today is Isaac Asimov’s (1920‒1992) science-fiction story “Runaround,” pub-
lished in 1942, which contained his famous “Three Laws of Robotics”:25
One, a robot may not injure a human being, or, through inaction, allow a human
being to come to harm. [. . .] Two, a robot must obey the orders given it by human
beings except where such orders would conflict with the First Law. [. . .] And three,
a robot must protect its own existence as long as such protection does not conflict
with the First or Second Laws.
Asimov’s early ideas, including his vision of human‒robot coexistence, paved the
way for the concept of safety in robotics. Asimov’s Three Laws, formulated as basic
guidance for limiting the behavior of autonomous robots in human environments,
are enshrined, for example, in the Principles of Robotics of the UK’s Engineering
and Physical Sciences Research Council (EPSRC)/Art and Humanities Research
Council (AHRC), published in 2011.26 These principles lay down five ethical
doctrines for developers, designers and end users of robots, together with seven
high-level statements for real-world applications.
Shortly before the vast technological advancements in the second half of the
twentieth century began, the first rudimentary telerobotic system was developed in
1945 by Raymond Goertz at the Argonne National Laboratory.27 It was designed to
control, from a shelter, a robot that could safely handle radioactive material. From
the 1950s on, the first complex electronics were developed, further optimized
and miniaturized, and modern concepts of mechanics were created. The first
24 Beyer, Grace Hopper and the Invention of the Information Age (BookBaby 2015).
25 Asimov, Astounding Science Fiction, chapter “Runaround” (Street & Smith 1942).
26 Prescott and Szollosy, “Ethical Principles of Robotics” (2017) 29(2) Connection Science 119‒123.
27 Goertz and Thompson, “Electronically Controlled Manipulator” (1954) 12 Nucleonics (US) 46‒47.
28 Milecki, “45 Years of Mechatronics – History and Future” in Szewczyk, Zieliński, and Kaliczyńska (eds), Progress in Automation, Robotics and Measuring Techniques (Springer 2015).
29 Nilsson (n 5).
30 Denavit and Hartenberg, “A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices” Trans. of the ASME (1955) 22 Journal of Applied Mechanics 215‒221.
31 Nilsson (n 5).
32 Ibid.
33 Boltyanskii, Gamkrelidze, and Pontryagin, “Towards a Theory of Optimal Processes” (in Russian) (1956) 110(1) Reports Acad Sci USSR 1–10.
34 Pontryagin et al., Mathematical Theory of Optimal Processes (in Russian) 1961.
35 Bellman, Dynamic Programming, vol 295 (Rand Corp Santa Monica CA 1956); Bellman, Dynamic Programming (Princeton University Press 1957).
36 Van Wagenen, Murphy, and Nodland, An Unmanned Self-Propelled Research Vehicle for Use at Mid-Ocean Depths (University of Washington 1963); Widditsch, “SPURV – The First Decade” No APL-UW-7215, Washington University Seattle Applied Physics Lab 1973.
37 Kalman, “A New Approach to Linear Filtering and Prediction Problems” Transaction of the ASME (1960) 82(1) Journal of Basic Engineering 35–45.
38 Kalman, “On the General Theory of Control Systems” (1960) Proceedings First International Conference on Automatic Control, Moscow, USSR.
39 Nilsson (n 5).
40 Chervonenkis, Early History of Support Vector Machines. Empirical Inference (Springer 2013); Vapnik and Chervonenkis, “Об одном классе алгоритмов обучения распознаванию образов” (On a Class of Algorithms of Learning Pattern Recognition) (1964) 25(6) Avtomatika i Telemekhanika.
41 Boser, Guyon, and Vapnik, “A Training Algorithm for Optimal Margin Classifiers” Proceedings of the Fifth Annual Workshop on Computational Learning Theory (ACM 1992) 144–152.
42 Cortes and Vapnik, “Support-Vector Networks” (1995) 20(3) Machine Learning 273‒297.
Back in 1966, the computer program ELIZA was developed and introduced at MIT's Artificial Intelligence Laboratory under the direction of Joseph Weizenbaum.43 ELIZA is a natural language processing program that uses pattern matching and substitution methodologies to demonstrate communication between humans and machines by simulating a coherent conversation. Three years later the American engineer Victor Scheinman (1942‒2016) designed the first successful electrically operated, computer-controlled manipulator.44 This robotic arm had six degrees of freedom and was light, multi-programmable and versatile in its motion capabilities. The robot was later adapted for industrial uses such as spot welding in the automotive industry. In the field of machine learning, David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams introduced the modern version of the backpropagation algorithm in 1986.45 This method is used to train artificial neural networks and is a standard tool in the field today.
50 Witten, "An Adaptive Optimal Controller for Discrete-Time Markov Environments" (1977) 34(4) Information and Control 286‒295.
51 Samuel, "Some Studies in Machine Learning Using the Game of Checkers" (1959) 3(3) IBM Journal of Research and Development 210‒229.
52 Barto, Sutton, and Anderson, "Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems" (1983) 5 IEEE Transactions on Systems, Man, and Cybernetics 834‒846.
53 Watkins, Learning from Delayed Rewards, PhD Thesis, King's College 1989.
54 Raibert and Craig, "Hybrid Position/Force Control of Manipulators" (1981) 103(2) Journal of Dynamic Systems, Measurement, and Control 126‒133.
55 Hogan, "Impedance Control: An Approach to Manipulation: Part I ‒ Theory, Part II ‒ Implementation, Part III ‒ Applications" (1985) 107 Journal of Dynamic Systems, Measurement, and Control 1‒24.
56 Khatib, "Real-Time Obstacle Avoidance for Manipulators and Mobile Robots" in Autonomous Robot Vehicles (Springer 1986).
57 Khatib, "A Unified Approach for Motion and Force Control of Robot Manipulators: The Operational Space Formulation" (1987) 3(1) IEEE Journal on Robotics and Automation 43‒53.
58 Hirose and Ogawa, "Honda Humanoid Robots Development" (2006) 365(1850) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 11‒19.
59 Hirzinger et al., "Sensor-Based Space Robotics ‒ ROTEX and Its Telerobotic Features" (1993) 9(5) IEEE Transactions on Robotics and Automation 649‒663.
60 Dickmanns, "Computer Vision and Highway Automation" (1999) 31(5‒6) Vehicle System Dynamics 325‒343; Dickmanns, "Vehicles Capable of Dynamic Vision" (1997) 97 IJCAI.
61 Nilsson (n 5).
62 Thrun, Burgard, and Fox, Probabilistic Robotics (The MIT Press 2005).
63 Ibid.
64 Burgard et al., "The Interactive Museum Tour-Guide Robot" AAAI/IAAI 1998.
65 Thrun et al., "Stanley: The Robot that Won the DARPA Grand Challenge" (2006) 23(9) Journal of Field Robotics 661‒692.
66 Knight, "With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots" (2015) MIT Technology Review <https://www.technologyreview.com/2015/09/16/247936/the-roomba-now-sees-and-maps-a-home/> accessed May 2020.
67 Hirose and Ogawa (n 58).
68 Hockstein et al., "A History of Robots: From Science Fiction to Surgical Robotics" (2007) 1(2) Journal of Robotic Surgery 113‒118.
69 Leung and Vyas, "Robotic Surgery: Applications" (2014) 1(1) American Journal of Robotic Surgery 1‒64.
70 Hirzinger et al., "DLR's Torque-Controlled Light Weight Robot III ‒ Are We Reaching the Technological Limits Now?" (2002) 2 Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat No 02CH37292), Washington, DC 1710‒1716; Albu-Schäffer, Haddadin, Ott, Stemmer, Wimböck, and Hirzinger, "The DLR Lightweight Robot: Design and Control Concepts for Robots in Human Environments" (2007) 34(5) Industrial Robot: An International Journal 376‒385.
71 Haddadin et al., "Collision Detection and Reaction: A Contribution to Safe Physical Human‒Robot Interaction" 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IEEE 2008) 3356‒3363.
72 Squyres, Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Hachette Books 2005).
In 2010, the French company Parrot released its Parrot AR Drone, the first ready-to-fly drone available on the open market.73
After years of basic research in the field of safe physical human‒robot interaction,
ranging from standardized dummy crash tests to injury analysis of human‒robot
impacts by soft-tissue experiments, in 2011 Sami Haddadin published a comprehen-
sive study of how robots could for the first time meet Asimov’s First Law in everyday
situations.74 The study developed the injury analysis, design paradigms and
collision-handling algorithms to ensure that robots could interact safely with
humans. It laid the foundations for the essential international safety standardization
and regulation of physical human‒robot interaction, paving the way for robotics in
everyday life.
In the same year, a new AI system was introduced by IBM.75 Watson was the first
computer system that could answer questions on the American quiz show Jeopardy!
In 2013, IBM made the Watson API available for software application providers. The
system is frequently used today as an assistive system in medical data analysis, for
example in cancer research.76
73 Bristeau et al., "The Navigation and Control Technology inside the AR.Drone Micro UAV" (2011) 44(1) IFAC Proceedings 1477‒1484.
74 Haddadin, Towards Safe Robots: Approaching Asimov's 1st Law, PhD Thesis, RWTH Aachen 2011; published by Springer 2014.
75 Markoff, "Computer Wins on 'Jeopardy!': Trivial, It's Not" New York Times (16 February 2011).
76 Somashekhar et al., "Watson for Oncology and Breast Cancer Treatment Recommendations: Agreement with an Expert Multidisciplinary Tumor Board" (2018) 29(2) Annals of Oncology 418‒423.
77 Parloff, "Why Deep Learning Is Suddenly Changing Your Life" (2016) Fortune.
78 Ivakhnenko and Lapa, "Cybernetic Predicting Devices" (1965) CCM Information Corporation.
79 Krizhevsky, Sutskever, and Hinton, "ImageNet Classification with Deep Convolutional Neural Networks" (2012) Advances in Neural Information Processing Systems.
80 LeCun, Bottou, Bengio, and Haffner, "Gradient-Based Learning Applied to Document Recognition" (1998) 86(11) Proceedings of the IEEE 2278‒2324; LeCun, Bengio, and Hinton, "Deep Learning" (2015) 521(7553) Nature 436.
Boston Dynamics, founded by ex-MIT professor Marc Raibert, first made the
news in 2012 with its four-legged robot BigDog.81 BigDog was a dynamically stable
four-legged military robot that could withstand strong physical hits and remain
stable. In 2013 Boston Dynamics unveiled their two-legged humanoid robot, Atlas.82
Its humanoid shape was designed to allow it to work with tools and interact with the
environment. The system has since been further developed and equipped with
increasingly complex acrobatic skills.
In the same year a team from Johns Hopkins University and DLR conducted a
telepresence experiment in which a Da Vinci master console in Baltimore, USA
controlled a DLR lightweight robot in Oberpfaffenhofen, Germany, over 4,000
miles away.83 This marked a milestone in telerobotics by combining telepresence
via standard internet with the slave robot system’s local AI capabilities.
In 2014, a major step forward in the certification and standardization of safety requirements for personal care robots was taken with the publication of the ISO 13482 standard, a catalogue of requirements, protective measures and guidelines for the safe design and use of personal care robots, including mobile servant robots, physical assistant robots and person-carrier robots ‒ generally earthbound robots for nonmedical use.84
The next step in software-based AI was demonstrated a year later, in 2015, by DeepMind's AlphaGo system.85 AlphaGo's learning algorithms included a self-improvement capability through which it could master highly complex board games, such as Go, chess and shogi, by playing the games against itself.
By 2016, virtual assistants had finally arrived in everyday life.86 In 2011, Apple
started to deliver smartphones with a beta version of their virtual assistant Siri.
Further systems have since been launched, including Cortana from Microsoft, Alexa from
Amazon and finally Google Assistant from Google. Virtual assistants in general
are designed to perform tasks given by a user, usually by voice command, and
reflect current state-of-the-art speech-based human‒machine communication
technologies.
81 Playter, Buehler, and Raibert, "BigDog" Unmanned Systems Technology VIII, vol 6230 (International Society for Optics and Photonics 2006).
82 Fukuda, Dario, and Yang, "Humanoid Robotics ‒ History, Current State of the Art, and Challenges" (2017) 13(2) Science Robotics eaar4043.
83 Bohren, Papazov, Burschka, Krieger, Parusel, Haddadin, Shepherdson, Hager, and Whitcomb, "A Pilot Study in Vision-Based Augmented Telemanipulation for Remote Assembly over High-Latency Networks" (2013) Proceedings of IEEE ICRA 3631‒3638.
84 ISO, ISO 13482:2014: Robots and Robotic Devices ‒ Safety Requirements for Personal Care Robots (International Organization for Standardization 2014); Jacobs and Virk, "ISO 13482: The New Safety Standard for Personal Care Robots" ISR/Robotik, 41st International Symposium on Robotics 2014.
85 Silver et al., "Mastering the Game of Go without Human Knowledge" (2017) 550 Nature 354‒359.
86 Goksel and Emin Mutlu, "On the Track of Artificial Intelligence: Learning with Intelligent Personal Assistants" (2016) 13(1) Journal of Human Sciences 592‒601.
The next level of underwater robotics and telerobotics was introduced by Khatib and his research team at Stanford University in 2016. The teleoperated underwater humanoid robot system OceanOne demonstrated its bimanual manipulation capabilities in an underwater research mission to study the wreck of La Lune, King Louis XIV's flagship, off the Mediterranean coast of France.87 In 2017, Franka Emika's human-centered industrial robot system Panda was introduced.88 This next-generation industrial robot is the first sensitive, networked, cost-effective and adaptive tactile robot. It is operated via simple apps on personal devices such as tablets or smartphones. Panda is also the first mass-produced robot assembled by robots of its own kind, showing the potential for versatile manufacturing and marking a first step toward self-replicating machines.89
One year later, Skydio launched its Skydio R1 drone, a further step in the direction of intelligent flying robots. This system can fly stably in windy environments and can follow its user reliably while avoiding obstacles in its path.90
A new concept in neural networks was also published in 2018.91 First-order principles networks (FOPnet) use basic physical assumptions to build a physically informed neural network. With this new concept, it has already been shown that both the body structure and the dynamics of a humanoid can be learned on the basis of basic kinematic laws and the balance of forces and moments acting on this kind of multi-body system. This can be regarded as a first step toward machines able to learn self-awareness.
The lighthouse initiative Geriatronics from the School of Robotics and Machine
Intelligence at the Technical University of Munich was launched in 2018 with the
aim of developing robot assistants for independent living for the elderly.92 This
initiative is sustainably supported by the Bavarian State Ministry of Economic
Affairs, Energy and Technology and LongLeif GaPa Gemeinnützige GmbH.
In early 2019, Haddadin, Johannsmeier, and Ledezma published a paper in which
they discussed a concept they called Tactile Internet as the next-generation Internet
of Things.93 They propose that 5G communication infrastructures combined with
rich tactile feedback and advanced robotics provide the potential for a meaningful
87 Khatib et al., "Ocean One: A Robotic Avatar for Oceanic Discovery" (2016) 23(4) IEEE Robotics & Automation Magazine 20‒29.
88 Franka Emika GmbH, Franka Emika <https://www.franka.de/> accessed 17 January 2019.
89 Franka Emika GmbH, "Franka Emika R:Evolution" <https://www.youtube.com/watch?v=_FbhNsRjqdQ> accessed 4 May 2019.
90 Skydio Inc, Skydio <https://www.skydio.com> accessed 4 May 2019.
91 Díaz Ledezma and Haddadin, "FOP Networks for Learning Humanoid Body Schema and Dynamics" (2018) IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Beijing, China 1‒9.
92 Technische Universität München, MSRM ‒ Munich School of Robotics and Machine Intelligence, "Lighthouse Initiative Geriatronics" <https://www.msrm.tum.de/en/geriatronics/> accessed 5 May 2019.
93 Haddadin, Johannsmeier, and Díaz Ledezma, "Tactile Robots as a Central Embodiment of the Tactile Internet" (2019) 107(2) Proceedings of the IEEE 471‒487.
and immersive connection to human operators via advanced “smart wearables” and
Mixed Reality devices, effectively making real avatars a reality.
94 Haddadin and Croft, "Physical Human‒Robot Interaction" in Siciliano and Khatib (eds), Springer Handbook of Robotics (Springer 2016) 1835‒1874.
95 For a deeper insight into this topic, please refer to Haddadin and Croft (n 94).
Lightweight concepts involve the whole system: the moving parts are designed to be as light as possible to reduce the severity of potential collisions. Generally, there are two major approaches, the mechatronic approach and the tendon-based approach.96 In both, the robot structure consists of light and strong materials such as light metal alloys or composites. In order to optimize power consumption and to meet safety standards, both motors and moving parts are designed to have low inertia.
The mechatronic approach is based on a highly modular structure. To achieve
this, the majority of the robot’s electronics are integrated into its joints. This
modularity enables the development of highly complex, self-contained robotic
systems that can be controlled efficiently. An important feature of the motors used
in this approach is that they can generate high torque, enabling the system to act and
react fast and dynamically. One characteristic that stands out in the mechatronic approach is the use of redundant sensing. Normally only motor-position sensors are used, but in this concept additional sensors for measuring torque, force or current are integrated into the system. These additional sensors can be used to increase measuring accuracy and/or to provide certain safety features.
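The safety use of such redundant sensing can be illustrated with a minimal sketch: compare the torque a simple joint model predicts with the torque actually measured, and flag a contact when the residual exceeds a threshold. The joint model and all numeric values here are hypothetical, chosen only for illustration.

```python
def expected_torque(position, velocity, stiffness=2.0, damping=0.5):
    """Very simplified joint model: the torque the motor should see
    in free motion (no external contact). Parameters are hypothetical."""
    return stiffness * position + damping * velocity

def detect_collision(measured_torque, position, velocity, threshold=0.8):
    """Flag a contact when the measured joint torque deviates from the
    model prediction by more than a safety threshold (residual method)."""
    residual = measured_torque - expected_torque(position, velocity)
    return abs(residual) > threshold

# Free motion: measured torque matches the model, no collision flagged.
print(detect_collision(measured_torque=2.5, position=1.0, velocity=1.0))  # False
# Unexpected extra torque, e.g. the arm touches an obstacle: collision flagged.
print(detect_collision(measured_torque=4.0, position=1.0, velocity=1.0))  # True
```

In a real system the model would include the full rigid-body dynamics, but the residual-thresholding principle is the same.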
In contrast to the mechatronic approach, tendon-based robots use remotely located
motors to reduce weight. The motors are connected to the parts to be moved via a
cable. One disadvantage of this approach is that the motors required to move such a
system are quite large: the weight of the moving parts is reduced but the total weight
of the system remains relatively high. Further information on robot design concepts
and other important classes of robot structures can be found in the literature.97
96 Bicchi and Tonietti, "Fast and 'Soft-Arm' Tactics: Dealing with the Safety-Performance Trade-off in Robot Arms Design and Control" (2004) 11 IEEE Robotics and Automation Magazine; Albu-Schäffer et al., "Soft Robotics" (2008) 15(3) IEEE Robotics and Automation Magazine 20‒30.
97 Khatib, "Inertial Properties in Robotic Manipulation: An Object-Level Framework" (1995) 14(1) International Journal of Robotics Research 19‒36; Bicchi and Tonietti (n 96); Haddadin and Croft (n 94).
98 Siciliano and Khatib (eds), Springer Handbook of Robotics (Springer 2016).
most commonly used sensing techniques are strain gauges within a measuring bridge
or implicit deflection-based measurement. This perceptual technique enables force-
regulated manipulations and sensitive haptic interactions with humans.
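One classic way such torque measurements are turned into compliant behavior is impedance control (Hogan, n 55), which makes a joint behave like a virtual spring-damper pulled toward a desired position. The following is a minimal single-joint sketch with hypothetical stiffness and damping values and a unit-inertia joint, not the controller of any particular robot.

```python
def impedance_torque(q, qd, q_des, stiffness=50.0, damping=10.0):
    """Impedance control law: command a torque so the joint behaves like
    a virtual spring-damper pulled toward q_des. Gains are hypothetical."""
    return stiffness * (q_des - q) - damping * qd

# Tiny simulation of one joint (unit inertia) converging to the target angle.
q, qd, dt = 0.0, 0.0, 0.001
for _ in range(10000):
    tau = impedance_torque(q, qd, q_des=1.0)
    qd += tau * dt   # integrate acceleration (inertia = 1)
    q += qd * dt     # integrate velocity

print(round(q, 2))  # 1.0 -- the joint settles at the target like a damped spring
```

Because the commanded torque drops as the joint approaches the target, an external force (a human hand, for instance) can always deflect the joint against the virtual spring, which is what makes the behavior compliant rather than rigidly position-controlled.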
The tactile perception approach is inspired by the properties of human skin. Here, the entire robot is enveloped in a tactile skin consisting of many small, networked sensor elements. In contrast to the previous type of sensing, contacts occurring in close proximity to each other can be measured individually by the sensor skin during the completion of a task. The skin can give robots significant sensory capabilities, but it also increases complexity and computational cost. Distributed data processing could help here: if each sensor element were equipped with its own microcontroller that prepared the sensor data in such a way that the central computer only had to process simple high-level signals, the computing effort for the main controller could be reduced. Such systems still require a great deal of research before they are fully mature and robust.
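The distributed-processing idea can be sketched as follows: each skin element filters its raw reading locally and forwards only high-level contact events to the central computer. The element class, pressure values and threshold below are hypothetical, chosen purely for illustration.

```python
class SkinElement:
    """One tactile cell with its own (hypothetical) microcontroller:
    it preprocesses raw pressure locally and reports only contact events."""
    def __init__(self, element_id, threshold=0.3):
        self.element_id = element_id
        self.threshold = threshold

    def process(self, raw_pressure):
        # Local preprocessing: emit a high-level event only on contact,
        # so the central computer never sees the raw data stream.
        if raw_pressure > self.threshold:
            return {"id": self.element_id, "pressure": raw_pressure}
        return None

# A small patch of skin; only touched cells burden the main controller.
skin = [SkinElement(i) for i in range(4)]
raw_readings = [0.05, 0.02, 0.9, 0.4]   # cells 2 and 3 are touched
events = [e.process(p) for e, p in zip(skin, raw_readings)]
contacts = [ev for ev in events if ev is not None]
print([ev["id"] for ev in contacts])  # [2, 3]
```

With thousands of cells, this event-driven pattern keeps the main controller's load proportional to the number of contacts rather than the number of sensors.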
Visual perception is a common non-contact sensing technology, often used for the autonomous execution of robotic tasks without interaction with humans, or for preparatory activities in connection with a human‒robot interaction, such as identifying humans or objects in the environment. One technique in this field, marker-based visual sensing, is used as a high-resolution tracking system, for example to navigate drones safely through a room. These systems usually consist of infrared cameras that measure the positions of highly reflective markers in a room, even during very fast movements. Such a system is not always practicable or universally applicable, since markers must always be positioned and calibrated beforehand. In addition, this principle is often sensitive to interference, for example from sunlight, and can suffer from sensor shading. Another type of visual perception is the use of inexpensive 3D RGB depth cameras in combination with AI algorithms, for the visual tracking of objects or people or for general navigation in space during everyday operations. However, from a robustness and performance point of view, visual perception with 3D RGB depth cameras still needs several years of research before it can be used reliably in all everyday conditions.
in the reference system. There are several techniques for doing this: Global Positioning System (GPS)-based techniques are quite accurate for outdoor self-localization but are not suitable for indoor applications. For indoor navigation, visual perception-based techniques combined with inertial sensors are more promising. Once the robot knows its position, it must plan the route to the target position. The first step is to
calculate the distance between the robot’s position and its destination. The next step
is map generation, which in general terms means the analysis of the environment
between the robot’s own position and the destination. The subsequent interpretation
of this generated map is crucial in order to execute the overall task of movement.
Here, the algorithm performs a semantic recognition of the environment, for
example recognizing obstacles on the map as non-movable areas between the robot’s
own position and the target.
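The mapping and interpretation steps above can be sketched as a search over an occupancy grid in which obstacles are marked as non-movable cells. The breadth-first search below is a minimal illustration, not a production planner, and the map values are hypothetical.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid: 1 marks an obstacle
    (non-movable area), 0 free space. Returns a shortest 4-connected path."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    parents = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:      # walk back to the start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# Semantic map: the obstacle wall in the middle must be routed around.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = plan_path(grid, (0, 0), (2, 0))
print(path)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

Real planners add cost weighting, heuristics (A*) and continuous-space smoothing, but the obstacle-as-blocked-cell interpretation is the same.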
A more specific application area is indoor navigation and cartography without a
comprehensive decentralized tracking system. A quite simple and robust method of
solving the navigation problem is the use of line markings on the ground that are
recognized and tracked by the robotic system’s sensors and controls. This is a rather
static method, since the predefined paths ‒ the environment map ‒ are fixed on an
abstract level. Dynamic changes, which can occur frequently when interacting with
humans, are difficult to update online with this approach.
The SLAM algorithm100 is more suitable for use in environments with fast, dynamically changing conditions. This algorithm can simultaneously determine the robot's own position and create an online map of the previously unknown environment, using sensing systems such as 3D RGB depth cameras or LIDAR (light detection and ranging) systems.101 The robot performs relative measurements of its own motion and of features in its environment to obtain the necessary information for navigation. Both kinds of measurement are often noisy due to disturbances, so the SLAM algorithm tries to reconstruct a map of the environment from these noisy measurements and to estimate the distance the robot has covered during the measurement.102 The biggest issue with SLAM is that the complexity of constantly changing dynamic environments leads to a high computing effort, so the real-time capability of the overall system cannot always be guaranteed.
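Full SLAM is beyond a short example, but its core idea ‒ fusing noisy motion estimates with noisy measurements of environmental features ‒ can be illustrated in one dimension with a Kalman filter of the kind cited earlier in this chapter (n 37). All noise values and readings below are hypothetical.

```python
def kalman_step(mean, var, motion, motion_var, measurement, measurement_var):
    """One predict/update cycle of a 1D Kalman filter: the same
    fuse-noisy-motion-with-noisy-measurement idea that underlies SLAM."""
    # Predict: dead reckoning shifts the estimate and grows the uncertainty.
    mean += motion
    var += motion_var
    # Update: fuse the external measurement, weighted by relative confidence.
    gain = var / (var + measurement_var)
    mean += gain * (measurement - mean)
    var *= (1 - gain)
    return mean, var

mean, var = 0.0, 1.0
# The robot commands a 1 m move each step and ranges its position each time.
for z in (1.2, 2.1, 2.9, 4.2, 5.1):
    mean, var = kalman_step(mean, var, motion=1.0, motion_var=0.5,
                            measurement=z, measurement_var=0.5)
print(round(mean, 1), round(var, 2))  # 5.1 0.31
```

Note how the variance settles at a finite value: neither odometry nor measurement alone is trusted, but their fusion keeps the position estimate bounded; SLAM applies the same principle jointly to the robot pose and every mapped feature.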
103 Hogan (n 55); Craig and Raibert, "A Systematic Method for Hybrid Position/Force Control of a Manipulator" (1979) IEEE Computer Software Applications Conference 446‒451.
104 Haddadin and Croft (n 94).
105 Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics) (Springer 2006).
106 Sutton and Barto, Reinforcement Learning: An Introduction (MIT Press 2018).
107 Haykin, Neural Networks: A Comprehensive Foundation (Prentice Hall PTR 1994); Bishop, Neural Networks for Pattern Recognition (Oxford University Press 1995).
108 Rosenblatt, "The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain" (1958) 65(6) Psychological Review 386.
109 Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Elsevier 2014).
110 Simon, Evolutionary Optimization Algorithms (John Wiley & Sons 2013).
111 Rosenblatt, "Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms" No VG-1196-G-8, Cornell Aeronautical Lab Inc, Buffalo, NY 1961; Minsky and Papert, Perceptrons (MIT Press 1969).
between the input and output layer of this type of network. If a specific network structure is designed for a desired application, the network can be trained to a desired behavior by setting the parameters of the network accordingly, using the backpropagation algorithm and training data. In this context, a deep neural network is a more complex variant of a normal neural network in which, for example, a higher number of hidden layers is used.112 The hidden layers can generally be seen as layers that are not directly observable and that encode information after the training phase. The dynamics and properties of these layers are not yet fully understood.
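A minimal illustration of training by backpropagation, assuming a tiny 2-2-1 network (two inputs, one hidden layer of two units, one output) learning the logical AND function; all hyperparameters are arbitrary choices made for the sketch.

```python
import math, random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialized parameters of the 2-2-1 network.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0

# Training data: the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = loss()
lr = 1.0
for _ in range(10000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: output error first, then the hidden-layer errors.
        dy = (y - t) * y * (1 - y)
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh[j]
            for i in range(2):
                w1[j][i] -= lr * dh[j] * x[i]
        b2 -= lr * dy

print(loss_before > loss())  # True: training reduced the squared error
```

The same gradient propagation, applied layer by layer from the output backwards, is what trains the much larger deep networks described above.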
Another machine-learning model is the support-vector machine, which is often used as a classifier or regressor for pattern-recognition tasks. This mathematical algorithm tries to calculate so-called hyperplanes (decision boundaries) to separate, and therefore classify, two or more classes of objects in the feature space, using labeled training data. The most important training data points are those lying close to the transition from one class to its neighbor, and only these points are needed to span the hyperplane mathematically. These data points are called support vectors and give this model its name.
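The resulting decision function can be sketched directly: it sums contributions only from the support vectors. The trained coefficients below are hypothetical values, chosen so that the decision boundary is the line x1 + x2 = 1.

```python
def svm_decision(x, support_vectors, labels, alphas, bias):
    """Decision function of a linear-kernel support-vector machine:
    f(x) = sum_i alpha_i * y_i * <x_i, x> + b. Only the support vectors
    -- the points spanning the hyperplane -- enter the sum."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    score = sum(a * y * dot(sv, x)
                for sv, y, a in zip(support_vectors, labels, alphas))
    return score + bias

# Toy trained model (hypothetical values): two support vectors on either
# side of the boundary x1 + x2 = 1 separating class -1 from class +1.
support_vectors = [(0.0, 0.0), (1.0, 1.0)]
labels = [-1, +1]
alphas = [1.0, 1.0]
bias = -1.0

print(1 if svm_decision((2.0, 2.0), support_vectors, labels, alphas, bias) > 0 else -1)  # 1
print(1 if svm_decision((0.1, 0.2), support_vectors, labels, alphas, bias) > 0 else -1)  # -1
```

Training an SVM means finding the alphas and bias by solving a constrained optimization problem; once found, every other training point can be discarded, which is what makes the model compact.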
Bayesian networks are used for decision-making. They are basically directed acyclic graphs in which each node represents a conditional probability distribution of a random variable, and each edge represents the associated conditional relationship or dependency between the random variables. Consider a random variable that is not conditionally independent, that is, one that has relations to other random variables represented by the connected edges: this node receives input values for its probability function via the edges directed toward it, and outputs the probability of the random variable according to that probability function. If this is calculated for the whole network, one obtains a compact representation of the joint probability distribution of all variables involved. From this, conclusions or inferences about complex problems, such as unobserved variables, can be drawn. Not every Bayesian network is fully specified, because some conditional probability distributions may be unknown. These missing pieces can be obtained by learning the probability distribution parameters from data, for example by maximum likelihood estimation (MLE). Sometimes the relations between the random variables are also unknown. In this case, structure learning is applied to estimate the structure of the network and the parameters of the local probability distributions from data. Various optimization-based search approaches, such as Markov chain Monte Carlo algorithms, can be used.
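A minimal two-node network illustrates how the joint distribution factorizes along the edges and how an unobserved variable can be inferred by enumeration; the probability tables are hypothetical.

```python
# A two-node Bayesian network: Rain -> WetGrass.
# Each node carries a conditional probability table (CPT).
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True: {True: 0.9, False: 0.1},    # P(Wet | Rain)
                    False: {True: 0.2, False: 0.8}}   # P(Wet | no Rain)

def joint(rain, wet):
    """The joint probability factorizes along the edges of the graph:
    P(Rain, Wet) = P(Rain) * P(Wet | Rain)."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

# Inference on an unobserved variable: P(Rain | Wet) by enumeration.
p_wet = joint(True, True) + joint(False, True)
p_rain_given_wet = joint(True, True) / p_wet
print(round(p_rain_given_wet, 3))  # 0.529
```

With many variables, the same factorization is what keeps the representation compact: each node stores only its local CPT rather than the full joint table, and exact or approximate inference (such as MCMC) works on these local factors.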
112 Goodfellow et al., Deep Learning, vol 1 (MIT Press 2016).
The last machine-learning model to be presented here is the genetic algorithm, which belongs to the class of evolutionary algorithms. This algorithm works with metaheuristics and is based on the idea of natural selection. In general, the algorithm starts with a population of possible solutions, where each solution has certain parameters that can be used to mutate or vary it. At the beginning, individuals are randomly selected from the starting population, and the strongest individuals are then selected from these using an objective function. The parameters of these individuals are then varied to a degree determined by the number of individuals remaining in this generation. From this, the new generation is created, from which the fittest individuals are selected again. This continues until a previously defined number of generations or a specific fitness level is reached.
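The loop described above can be sketched as follows, maximizing a simple objective (the number of ones in a bit string); population size, mutation rate and generation count are arbitrary choices made for the sketch.

```python
import random

random.seed(1)

def fitness(bits):
    """Objective function: count of ones -- the all-ones string is optimal."""
    return sum(bits)

def mutate(bits, rate=0.1):
    """Flip each bit with a small probability (the variation operator)."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

def genetic_algorithm(pop_size=20, length=16, generations=60):
    # Start with a random population of candidate solutions.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fittest half of the generation.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        # Variation: mutated copies of the survivors fill the next generation.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = genetic_algorithm()
print(fitness(best))
```

Real genetic algorithms usually add crossover (recombining two parents) and more elaborate selection schemes, but the select-vary-repeat loop is the defining structure.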
113 Asfour, Azad, Gyarfas, and Dillmann, "Imitation Learning of Dual-Arm Manipulation Tasks in Humanoid Robots" (2008) 5(2) International Journal of Humanoid Robotics 183‒202; Ijspeert, Nakanishi, and Schaal, "Learning Attractor Landscapes for Learning Motor Primitives" in Becker, Thrun, and Obermayer (eds), Advances in Neural Information Processing Systems 15 (MIT Press 2003).
114 Theodorou, Buchli, and Schaal, "A Generalized Path Integral Control Approach to Reinforcement Learning" (2010) 11 Journal of Machine Learning Research 3137‒3181; Peters and Schaal, "Reinforcement Learning of Motor Skills with Policy Gradients" (2008) 21(4) Neural Networks 682‒697.
115 Van Hoof, Kroemer, Ben Amor, and Peters, "Maximally Informative Interaction Learning for Scene Exploration" (2012) Proceedings of the International Conference on Robot Systems (IROS); Petsch and Burschka, "Representation of Manipulation-Relevant Object Properties and Actions for Surprise-Driven Exploration" (2011) Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems 1221‒1227.
116 Mair, Hager, Burschka, Suppa, and Hirzinger, "Adaptive and Generic Corner Detection Based on the Accelerated Segment Test" (2010) Computer Vision ‒ ECCV 2010 183‒196; Burschka and Hager, "V-GPS (SLAM): Vision-Based Inertial System for Mobile Robots" (2004) 1 Robotics and Automation ICRA'04, IEEE International Conference 409‒415.
117 Wilkinson, Bultitude, and Dawson, "Oh Yes, Robots! People Like Robots; The Robot People Should Do Something: Perspectives and Prospects in Public Engagement with Robotics" (2011) 33(3) Science Communication 367‒397; Pineau, Montemerlo, Pollack, Roy, and Thrun, "Towards Robotic Assistants in Nursing Homes: Challenges and Results" (2003) 42(3) Robotics and Autonomous Systems 271‒281.
generally valid models would reduce the development time and the costs of these
systems. Another problem is to find an elegant and at best purely model-based
approach to distinguish aerodynamic forces from collision and interaction forces.
A safe physical interaction interface between humans and flying robots still requires a great deal of research before it can enter the market in a product. Flight time would also have
to be extended to reach a level suitable for everyday use. This could be achieved, for
example, by further reducing the total weight with new materials or structural
approaches. This development would also increase the safety of human‒robot
interaction, since less energy would be transferred to the human body in the event
of a collision. It is clear from all these factors that there is still a long way to go before
small and affordable fully autonomous flying robots become ubiquitous.
[Table excerpt: for each system compared, the mobility row reads “leveled floor” and the usability row reads “expert knowledge required.”]
inertia and high active compliance have been developed and implemented. The
result is systems such as the Barrett WAM arm118 and the DLR lightweight robot
series,119 whose arm technology later led to the LWR iiwa robot from the company
KUKA. One of the most modern, human-centered lightweight robot systems
developed to date is Franka Emika’s Panda system.120 A high-precision force and
impedance control system allows the system to perform sensitive and accurate
manipulation and enables a high degree of compliance, which, in conjunction with
safety aspects already considered in the design phase of this robot, guarantees safe
human‒robot collaboration. One of the most important pragmatic aspects of human‒robot
collaboration besides safety is the operating, programming and interaction interface
between human and robot. Many collaborative robots use a tablet computer and
complex software as operating, programming and interaction interface. The Panda
system offers an elegantly designed interface in which the human can interact with the
robot in a natural way via haptic interactions such as tapping on the robot gripper to
stop the robot or to give a process confirmation. In addition, in the teaching mode, it is
possible to teach the compliant robot various work processes by taking it by the hand
and guiding it extremely smoothly through the process. Once the process has been
shown, it can be replayed by simply pressing a button. This kind of programming
is extended by apps, which offer two levels of interaction with the robot: the
expert level for robot-app programmers, and the user level, which requires no
special robotics knowledge. The expert provides the basic robot capabilities, which are
assembled and operated by the user to create complex processes and solutions. These basic
robot apps will be shared over a cloud-based robotic app store and made available to a
broad range of users. With the growth of this robotics skills database, many new
applications will emerge, bringing robotics more and more into our daily lives.
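The teaching workflow described above (hand-guiding the compliant arm, then replaying the recorded motion at the press of a button) can be sketched in a few lines. The `TeachableArm` class and its methods are hypothetical illustrations, not Franka Emika's actual programming interface.

```python
# Hypothetical sketch (not Franka Emika's real API) of teach-by-demonstration:
# joint configurations are sampled while a human guides the compliant arm,
# then replayed on demand, e.g. after a button press.

class TeachableArm:
    def __init__(self):
        self.trajectory = []          # recorded joint configurations

    def record_sample(self, joints):
        """Store one sampled joint configuration while the arm is hand-guided."""
        self.trajectory.append(tuple(joints))

    def replay(self, move_to):
        """Replay the demonstrated motion through a motion command callback."""
        for q in self.trajectory:
            move_to(q)

# usage: "teach" three poses, then replay them
arm = TeachableArm()
for pose in [(0.0, 0.1), (0.2, 0.3), (0.4, 0.5)]:
    arm.record_sample(pose)

played = []
arm.replay(played.append)
print(played)  # the same three poses, in demonstration order
```

In a real system the sampling would run at a fixed rate against the robot's joint encoders, but the record-then-replay structure is the essence of this programming mode.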
118
Townsend and Salisbury, “Mechanical Design for Whole-Arm Manipulation, Robots and
Biological Systems: Towards a New Bionics?”; Barrett Technology, “Barrett Arm” <http://
barrett.com/products-arm.htm>, 25 September 2017.
119
Hirzinger et al., “A Mechatronics Approach to the Design of Lightweight Arms and Multi-
fingered Hands” Robotics and Automation, 2000. Proceedings, ICRA’00. IEEE International
Conference on Robotics and Automation, vol 1 IEEE, 2000; Albu-Schäffer et al., “The DLR
Lightweight Robot: Design and Control Concepts for Robots in Human Environments” (2007)
34(5) Industrial Robot: An International Journal 376‒385.
120
Franka Emika GmbH, Franka Emika, <https://www.franka.de/>, 17 January 2019.
finding extraordinarily intelligent service robots, which can act in a similar social
manner to humans, for example while supporting elderly people in their everyday
life, still presents a gap in the technology. Furthermore, technologies available today
are not able to adapt to short-term changes, are not user friendly in terms of
“programmability” and do not learn from experience. In addition, unlike the case
of industrial robots, safety aspects have not been considered in these systems.
Early approaches in this direction can already be seen, for example in the user
interface developed by Franka Emika for their robotic arm system. Nevertheless,
what is still missing in these systems is the possibility of improving learned abilities
autonomously. Intelligent service robots have to be able to adapt to new conditions.
They have to meet the “lifelong learning” paradigm in order also to be accepted by
older people, who may be more skeptical about new technologies. In addition,
specific design and technology decisions regarding the acceptance and usability of
these robots need to be made in the development phase of these systems if they are
to be usable in the private sector.
A promising subfield of service robots are humanoids. As we have seen, service
robots should be human centered from the beginning of their development, espe-
cially from the point of view of safety. For this reason, systems like the NASA
Robonaut, DLR’s Justin or Boston Dynamics’ Atlas system are not considered
here. Figure 1.2 gives a current overview of existing service-oriented humanoid
systems or those under development.
One of the first complex service humanoids available was the PR2 system from
Willow Garage.121 It consists of a mobile motion platform, two gripper arms and
numerous sensors to navigate in space by using position control. In addition to
“pick-and-place” tasks, the user can teach this humanoid simple motion sequences.
PR2 has relatively simple interaction channels such as motion control via a gamepad
or tablet. Other service robots such as the Care-O-Bot 4 from Fraunhofer IPA,122 the
Tiago system from PAL Robotics123 and the HSR robot from Toyota124 have similar
capabilities to the PR2, but some systems also have additional human interaction
channels such as voice command input. The Care-O-Bot 4 can even gesticulate and
interact with people via facial expressions or via touch input on its built-in display.
Furthermore, all of the humanoids mentioned here can be teleoperated to a certain
extent. Two systems that stand out here are the Twendy-One robot from Waseda
121
Willow Garage Inc, PR2, <www.willowgarage.com/pages/pr2/overview>, 25 September 2017;
Bohren et al., “Towards Autonomous Robotic Butlers: Lessons Learned with the pr2” 2011 IEEE
International Conference on Robotics and Automation (ICRA) 2011.
122
Fraunhofer-Gesellschaft, Fraunhofer-Institut für Produktionstechnik und Automatisierung,
Care-O-Bot 4, <www.care-o-bot-4.de/>, 25 September 2017.
123
PAL Robotics, SL, TiaGo, <http://tiago.pal-robotics.com/>, 25 September 2017.
124
Toyota Motor Corporation, Human Support Robot (HSR), <www.toyota-global.com/innov
ation/partner_robot/family_2.html>, 25 September 2017; Hashimoto et al., “A Field Study of
the Human Support Robot in the Home Environment” 2013 IEEE Workshop on Advanced
Robotics and Its Social Impacts (ARSO) 2013.
University125 and the RIBA II robot from Riken.126 Both systems have special features
making human‒robot interactions possible. Twendy-One has the ability to actively
help a person to stand up from a seated position. It also has a tactile skin, which enables
complex tactile manipulations. The RIBA II system is designed to be able to lift and
relocate bedridden people, reducing the burden on medical staff.
In general, service robots in nursing have the potential to partially offset the shortage
of care staff and to enable older people to live independently for as long as possible. The
value of direct human‒robot interactions, apart from these approaches to physical
interaction with the patient, has so far gone largely unnoticed. The systems pre-
sented here are not yet equipped with the necessary capabilities to perform smaller
pick-up and delivery services or even sensitive manipulation tasks such as tying shoe
laces. In general, there is great potential for helping humans in daily tasks and for
human‒robot communication through haptic gestures.
The company Franka Emika is currently working on a humanoid service robot
called GARMI, which is intended to enable sensitive human‒robot interaction.
GARMI will be equipped with two multi-sensorial robotic arms with soft-robotic
features and the solutions required for direct and safe human‒robot interaction.
In addition, the small robot will have a multisensory “head” and an agile platform,
allowing it to move from a standing position in any desired direction. It should be
able not only to perform simple tasks and pick-up services, but also to be remotely
controlled by relatives and professional helpers.
125
Sugano Laboratory, TWENDY-ONE <www.twendyone.com/concept_e.html>, 25 September
2017; Iwata and Sugano, “Design of Human Symbiotic Robot TWENDY-ONE” ICRA’09.
IEEE International Conference on Robotics and Automation 2009.
126
Riken, RIBA-II, <www.riken.jp/en/pr/press/2011/20110802_2/>, 25 September 2017; Mukai
et al., “Development of a Nursing-Care Assistant Robot RIBA That Can Lift a Human in Its
Arms” 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems
(IROS) 2010.
In the coming decades, the development of autonomous vehicles will also create
major changes. Autonomous vehicles are already widely used today, but mostly in
closed warehouses or in confined areas that have been completely mapped in
advance. These application areas are also predominantly shielded from dynamic
sources of interference such as humans. One example is American online retailer
Amazon’s warehouse system. In a fully structured environment, hundreds of
robots arrange themselves autonomously to select goods or goods shelves and drive
them to the parcel assembly. Simply put, these robots are nothing more than
powerful cleaning robots that can carry up to 300 kg. A lot of research will still be
required to move this technology on from the retail environment to transporting
people autonomously in our world. However, this next generation of autonomous
ground and air vehicles will not only be able to navigate safely in the real world, but
will also provide much more energy-efficient and environmentally friendly drives.
The interconnectedness of these systems makes it possible to automate complete
logistics chains, and passengers can be transported on demand, optimally in terms
of both time and energy. Through the temporary networking and coordination of
heterogeneous vehicle fleets, the fundamental principles of public transport
are being redefined.
127
Hirzinger et al., “Sensor-Based Space Robotics-ROTEX and Its Telerobotic Features” (1993)
9(5) IEEE Transactions on Robotics and Automation 649‒663.
128
Bohren et al. (n 83).
models generated from knowledge gained a priori and updated step by step during
manipulation.129 Thus, the teleoperation remains applicable even in the presence of
delays of up to 4 seconds, as an approach with model-based teleoperation and haptic
feedback has shown.130 Franka Emika goes one step further with the market launch
of the first cloud and distributed telerobotic-capable commercial robot system
Panda. The possibilities of this system were demonstrated in late 2018, when
thirty-seven Panda systems were connected in real time, with twelve operating in Düsseldorf
(Germany) and twenty-five in Munich (Germany). As a result, thirty-six robots could
be successfully teleoperated with one robot as an input device, with a maximum
distance of approximately 600 km between them.
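The delay-robustness that model-based teleoperation provides can be illustrated with a toy simulation. This is a sketch under assumed numbers (a spring-wall environment model, a 4-second channel delay at a 10 Hz rate), not the cited system: the operator's haptic feedback is rendered from a local copy of the environment model, so it does not lag behind the delayed communication channel.

```python
from collections import deque

# Illustrative sketch of model-mediated teleoperation (assumed numbers):
# the operator feels forces from a LOCAL model of the remote environment,
# so haptic feedback stays immediate even when the channel adds seconds
# of delay.
DELAY_STEPS = 40                  # e.g. 4 s of delay at a 10 Hz update rate
WALL_POS, STIFFNESS = 0.5, 100.0  # assumed spring-wall environment model

def wall_force(x):
    """Spring-wall contact model shared by operator and remote site."""
    return -STIFFNESS * (x - WALL_POS) if x > WALL_POS else 0.0

channel = deque([0.0] * DELAY_STEPS)   # commands in transit to the robot

immediate, delayed = [], []
for step in range(100):
    x_cmd = step * 0.01                # operator slowly pushes forward
    # model-mediated scheme: the local model renders force with no delay
    immediate.append(wall_force(x_cmd))
    # naive scheme: force computed from the delayed remote-side position
    channel.append(x_cmd)
    x_remote = channel.popleft()
    delayed.append(wall_force(x_remote))

first_contact_local = next(i for i, f in enumerate(immediate) if f != 0.0)
first_contact_naive = next(i for i, f in enumerate(delayed) if f != 0.0)
print(first_contact_local, first_contact_naive)  # naive lags by DELAY_STEPS
```

In the real systems the local model is also updated step by step from delayed measurements, as the text describes; the sketch only isolates why the operator's force feedback can remain stable despite the transmission delay.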
The future benefits of this technology will be available in various applications
enabled by its high level of robustness, such as operating in space, defusing bombs,
firefighting or rescue and containment in the event of a nuclear catastrophe.
131
Cavallaro, Rosen, Perry, and Burns, “Real-Time Myoprocessors for a Neural Controlled
Powered Exoskeleton Arm” (2006) 53(11) IEEE Transactions on Biomedical Engineering
2387–2396; Jäntsch, Non-linear Control Strategies for Musculoskeletal Robots, PhD Thesis,
Technische Universität München 2014.
Molecular robots are small autonomous synthetic systems that can be used for
numerous medical purposes. Different molecule chains can map both structural
and functional properties of the molecular robot. Internal sensors will make it
possible to explore the human body and examine areas of medical interest. Through
controlled movement, they can penetrate the body, move to the treatment site (such
as a tumor) and perform medical treatment only where it is needed. In addition,
these robots will be able to take tissue samples and control the delivery of drugs
based on sophisticated micro sensors. The movement and control mechanisms used
here can be chemical, electromagnetic, bio-hybrid cell-driven or completely new
mechanisms that are yet to be researched. Robotic theory should be translated to
molecular and cellular-level systems, the dynamics of which are explained via
first-principles-based machine-learning algorithms. In addition, the practical closed-
loop control and analysis of these systems via macro-robotic human‒machine
interaction technologies should be explored, enabling a multitude of applications
ranging from basic understanding of cellular dynamics and control to various
medical applications such as targeted drug transportation.
Cellular manipulation is one field of research that will serve as an indispensable
basis for molecular robotics. The mechanisms to be researched may be used to
communicate with cells in a natural way and, if necessary, to control them. For
example, it will be possible to make cells target certain positions, proliferate or
produce certain proteins, or, if a cell is harmful to the body, to have it removed
through the body’s own degradation system. This research field combines concepts
from biology research (cell biology, genetics, biochemistry, biophysics, etc.) with
approaches from modern engineering sciences (systems theory, control engineering,
computer science, information theory, robotics, AI, etc.) to create a standardized
analysis environment for cell research. Over the next few years, this field will provide
completely new insights into how cells function or communicate and can be
expected to deliver new technologies.
1.5 conclusion
This chapter has shown the current technological status of robotics and AI and has
examined current problems, as well as providing an insight into the possible future
of these technologies in the age of machine intelligence. MI will change our
everyday life and our society. It offers a lot of potential to deal with existing problems
as well as those that society can already anticipate. The responsibility that comes
with this technology should not be underestimated. The focus must be on a
trustworthy, safe and human-centered development of this technology. Framework
conditions, for example, must be created that prohibit the exploitation of this
technology to the detriment of individuals and humanity as a whole.
2
Regulating AI and Robotics
Martin Ebers*
introduction
Rapid progress in AI and robotics is challenging the traditional boundaries of law.
Algorithms are widely employed to make decisions that have an increasingly far-
reaching impact on individuals and society, potentially leading to manipulation,
biases, censorship, social discrimination, violations of privacy and property rights,
and more. This has sparked a global debate on how to regulate AI and robotics.
This chapter outlines some of the most urgent ethical and legal issues raised by
the use of self-learning algorithms in AI systems and (smart) robotics and provides an
overview of key initiatives at the international and European levels on forthcoming
regulation and ethics. The chapter does not aim at definitive answers; indeed, the
policy debate is better served by refraining from rushing to solutions. What is needed
is a more precise inventory of the concrete ethical and legal challenges that can
strengthen the foundations of future evidence-based AI governance.
2.1 scenario
* This work was supported by Estonian Research Council grant no PRG124 and by the Research
Project “Machine learning and AI powered public service delivery”, RITA1/02-96-04, funded by
the Estonian Government. The chapter was submitted to the publisher in April 2019 and has
not been updated since, apart from all internet sources which were last accessed in April 2020.
38 Martin Ebers
1
Christl, “Corporate Surveillance in Everyday Life. How Companies Collect, Combine, Ana-
lyze, Trade, and Use Personal Data on Billions.” A Report by Cracked Labs, June 2017 <http://
crackedlabs.org/en/corporate-surveillance>.
2
Chen, Mislove, and Wilson, “An Empirical Analysis of Algorithmic Pricing on Amazon
Marketplace” (2016) Proceedings of the 25th International Conference on World Wide Web
1339–1349 <www.ccs.neu.edu/home/amislove/publications/Amazon-WWW.pdf>.
3
BI Intelligence, “The Evolution of Robo-Advising: How Automated Investment Products Are
Disrupting and Enhancing the Wealth Management Industry” (2017); Finance Innovation and
Cappuis Holder & Co., “Robo-Advisors: une nouvelle réalité dans la gestion d’actifs et de
patrimoine” (2016); OECD, “Robo-Advice for Pensions” (2017).
4
Citron and Pasquale, “The Scored Society: Due Process for Automated Predictions” (2014) 89
Washington Law Review 1.
5
DeBarr and Harwood, “Relational Mining for Compliance Risk,” Presented at the Internal
Revenue Service Research Conference, 2004 <www.irs.gov/pub/irs-soi/04debarr.pdf>
<https://perma.cc/Y9F8-RWNK>.
Regulating AI and Robotics 39
6
Barrett, “Reasonably Suspicious Algorithms: Predictive Policing at the United States Border”
(2017) 41(3) NYU Review of Law & Social Change 327; Ferguson, “Predictive Policing and
Reasonable Suspicion” (2012) 62 Emory Law Journal 259, 317; Rich, “Machine Learning,
Automated Suspicion Algorithms, and the Fourth Amendment” (2016) 164 University of
Pennsylvania Law Review 871; Saunders, Hunt, and Hollwood, “Predictions Put into Practice:
A Quasi Experimental Evaluation of Chicago’s Predictive Policing Pilot” (2016) 12 Journal of
Experimental Criminology 347.
7
Such processes are used at least once during the course of criminal proceedings in almost every
US state; Barry-Jester, Casselman, and Goldstein, “The New Science of Sentencing,” The
Marshall Project, 4 April 2015, <www.themarshallproject.org/2015/08/04/the-new-science-of-
sentencing#.xXEp6R5rD>. More than 60 predictive tools are available on the market, many
of which are supplied by companies, including the widely used COMPAS system from
Northpointe.
8
Hvistendahl, “In China, a Three-Digit Score Could Dictate Your Place in Society” Wired (14
December 2017) <www.wired.com/story/age-of-social-credit>; Botsman, “Big Data Meets Big
Brother as China Moves to Rate Its Citizens” Wired UK (21 October 2017) <https://www.wired
.co.uk/article/chinese-government-social-credit-score-privacy-invasion>; Chen, Lin, and Liu,
“‘Rule of Trust’: The Power and Perils of China’s Social Credit Megaproject” (2018) 32(1)
Columbia Journal of Asian Law 1 <https://ssrn.com/abstract=3294776>, pointing out that the
Social Credit System has not – at least for now – employed AI technologies, real-time data or
automated decisions, despite foreign media reports to the contrary.
9
Abu-Nasser, “Medical Expert Systems Survey” (2017) 1(7) International Journal of Engineering
and Information Systems 218; Gray, “7 Amazing Ways Artificial Intelligence Is used in Health-
care,” 20 September 2018 <www.weforum.org/agenda/2018/09/7-amazing-ways-artificial-intelli
gence-is-used-in-healthcare>.
10
The combination of AI, advanced robots, additive manufacturing, and the Internet of Things
will combine to usher in the Fourth Industrial Revolution; World Economic Forum, “Impact
of the Fourth Industrial Revolution on Supply Chains,” October 2017 <www.weforum.org/
whitepapers/impact-of-the-fourth-industrial-revolution-on-supply-chains>.
Last but not least, new devices make it possible to connect the human
brain to computers. Brain‒computer interfaces (BCIs) enable informa-
tion to be transmitted directly between the brain and a technical circuit.
In this way, it is already possible for severely paralyzed people to com-
municate with a computer solely through brain activity.11 Researchers at
Elon Musk’s company Neuralink predict that machines will be con-
trolled in the future solely by thoughts.12 What is more, Facebook is
researching a technology that sends thoughts directly to a computer in
order to make it possible to “write” one hundred words per minute
without any muscle activity.13 Thus, the boundary between man and
machine is becoming blurred. Human and machine are increasingly
merging.
The technological changes triggered by AI and smart robotics raise a number of
unresolved ethical and legal questions which will be discussed in this chapter.
Before addressing these issues more fully, it is important to take a closer look at
the question of what we actually mean when we speak of “algorithms, AI and
robots,” whether common definitions are necessary from a legal point of view,
and, more generally, how AI systems and advanced robotics differ fundamentally
from earlier technologies, making it so difficult for legal systems to cope with them.
11
Blankertz, “The Berlin Brain – Computer Interface: Accurate Performance from First-Session
in BCI-naïve Subjects” (2008) 55 IEEE Transactions on Biomedical Engineering 2452; Nicolas-
Alonso and Gomez-Gil, “Brain Computer Interfaces” (2012) 12(2) Sensors 1211 <www.ncbi.nlm
.nih.gov/pmc/articles/PMC3304110>.
12
<www.theverge.com/2017/2/13/14597434/elon-musk-human-machine-symbiosis-self-driving-cars>.
13
<https://techcrunch.com/2017/04/19/facebook-brain-interface/?guccounter=1>.
14
Kitchin, “Thinking Critically about and Researching Algorithms” (2017) 20(1) Information,
Communication and Society 1‒14. According to Miyazaki, the term “algorithm” emerged in
Spain during the twelfth century when scripts of the Arabian mathematician Muḥammad
ibn Mūsā al-Khwārizmī were translated into Latin. These scripts describe “methods of addition,
subtraction, multiplication and division with the Hindu-Arabic numeral system.” Thereafter,
“algorism” meant “the specific step-by-step method of performing written elementary arith-
metic”; Miyazaki, “Algorhythmics: Understanding Micro-temporality in Computational
This definition is on the one hand too broad and on the other too narrow, since a
purely technical understanding of algorithms as computer code does not go far
enough in assessing their legal and social implications. As Kitchin15 points out,
algorithms “cannot be divorced from the conditions under which they are
developed and deployed.” Rather, “algorithms need to be understood as relational,
contingent, contextual in nature, framed within the wider context of their socio-
technical assemblage.”16
Popular definitions of AI are equally unrefined.17 AI is a catch-all term referring to
the broad branch of computer science that studies and designs intelligent
machines.18 The spectrum of applications using AI is already enormous, ranging
from virtual assistants, automatic news aggregation, image and speech recognition,
translation software, automated financial trading, and legal eDiscovery to self-driving
cars and automated weapon systems.
From a legal standpoint, this lack of definitional clarity is sometimes regarded as
problematic. Scholars emphasize that any regulatory regime must define what
exactly it is that the regime regulates, and that we must therefore find a common
definition for the term “artificial intelligence.”19 Others believe that an all-
encompassing definition is not necessary at all, at least for the purposes of legal
research and regulation.20 After all, AI systems pose very different problems
depending on who uses them, where, and for what purpose. For example, an
autonomous weapon system can hardly be compared to a spam filter, even though
both are based on an AI system. Indeed, this example alone illustrates the futility of
lawmakers considering a general Artificial Intelligence Act that would regulate the
whole phenomenon top down, administered by an Artificial Intelligence Agency.
Accordingly, there is no need for a single all-encompassing definition for “algo-
rithms” and “AI.” Rather, it is more important to understand the different character-
istics of various algorithms and AI applications and how they are used in practice.
The same applies to the term “robot,” for which no universally valid definition
has yet emerged.21 Admittedly, at the international level some definitions can be
found. For example, the International Organization for Standardization defines a robot as an
“actuated mechanism programmable in two or more axes with a degree of auton-
omy, moving within its environment, to perform intended tasks.”22 This interpret-
ation, however, is a functional rather than legal definition for the purpose of
technical standards. Ultimately, all attempts at providing an encompassing defin-
ition are a fruitless exercise because of the extremely diverse nature of robots,
ranging from driverless cars, prosthetic limbs, orthotic exoskeletons, and manufac-
turing (industrial) robots to care robots, surgical robots, lawn mowers, and vacuum
cleaners. Rather than finding a common definition, greater insight can be gained
from keeping all these robots separate, looking at their peculiarities and the differ-
ences between them.
For our purposes, it is therefore sufficient to use a broad definition according to
which a robot is a machine that has a physical presence, can be programmed, and
has some level of autonomy depending, inter alia, on the AI algorithms used in such
a system; it is, in short, “AI in action in the physical world.”23
In the absence of a universally accepted characterization, this chapter uses the
terms AI/algorithmic/self-learning/intelligent/smart/autonomous and/or robotic
systems/machines interchangeably to refer to AI-driven systems with a high degree
of automation.
20
Jabłonowska, Kuziemski, Nowak, Micklitz, Pałka, and Sartor, “Consumer law and artificial
intelligence. Challenges to the EU consumer law and policy stemming from the business’s use
of artificial intelligence.” Final report of the ARTSY project, European University Institute
(EUI) Working Papers, LAW 2018, 11, p 4.
21
By contrast, the EU Parliament calls for a uniform, Union-wide definition of robots in its 2017
resolution; European Parliament, Resolution of 16 February 2017 with recommendations to the
Commission on Civil Law Rules on Robotics, P8_TA(2017)0051. Critical Lohmann, “Ein
europäisches Roboterrecht – überfällig oder überflüssig?” (2017) 168 Zeitschrift für Rechtspolitik
(ZRP) 169.
22
ISO 8373, 2012, available at <www.iso.org/obp/ui/#iso:std:iso:8373:ed-2:v1:en>. Additionally,
ISO makes a distinction between industrial robots and service robots, as well as between
service robots for personal use and service robots for professional use.
23
Cf. AI HLEG, A Definition of AI (n 17) 4.
24
The idea of “learning machines” was raised as early as 1950 by Turing, “Computing Machinery
and Intelligence” (1950) 59 Mind 433, 456 (suggesting that machines could simulate the child-
brain which is “subjected to an appropriate course of education”). Just a few years later, in 1952,
Samuel would then go on to create the first computer learning program, a Checkers-playing
program which improved itself through self-play; Samuel, “Some Studies in Machine Learning
Using the Game of Checkers” (1959) 3 IBM Journal of Research and Development 210.
25
Anitha, Krithka, and Choudhry (2014) 3(12) International Journal of Advanced Research in
Computer Engineering & Technology 4324 <http://ijarcet.org/wp-content/uploads/IJARCET-
VOL-3-ISSUE-12-4324-4331.pdf>; Buchanan and Miller, “Machine Learning for Policymakers.
What It Is and Why It Matters” Harvard Kennedy School, Belfer Center for Science and
International Affairs, Paper, June 2017; Mohri, Rostamizadeh, and Talwalkar, “Foundations of
Machine Learning” (2012). Cf. also Haddadin and Knobbe, Chapter 1 in this book.
26
Anitha, Krithka, and Choudhry (n 25), 4325 et seq.
27
Anitha, Krithka, and Choudhry (n 25), 4328 et seq.
yield the best (scalar) reward.28 ML applications based on this approach are used especially in dynamic environments, such as driving a vehicle or playing a game (for example, DeepMind’s AlphaGo).
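The trial-and-error logic of reinforcement learning can be sketched in a few lines of code. The toy “corridor” environment, the reward values, and all parameter choices below are invented purely for illustration and bear no relation to AlphaGo’s actual training setup:

```python
import random

# Toy Q-learning sketch: an agent learns by trial and error which
# actions yield the best scalar reward in a small "corridor" world.
# States run 0..4; action +1 moves right, -1 moves left; reaching
# state 4 pays a reward of 1.0, every other step pays 0.

N_STATES = 5
ACTIONS = (1, -1)  # right-first so ties break deterministically

def step(state, action):
    """Apply an action, returning (next_state, reward, done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    return (nxt, 1.0, True) if nxt == N_STATES - 1 else (nxt, 0.0, False)

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with an epsilon-greedy exploration policy."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Mostly exploit the best-known action, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Nudge the estimate toward the reward plus the discounted
            # value of the best follow-up action (the Q-learning update).
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned policy: the best-valued action in each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

The point of the sketch is that no one “programs” the rightward policy: it emerges solely from the scalar reward signal, which is what makes the resulting behavior hard to attribute to any explicit instruction.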
2.1.3 Overview
Before considering the legal and ethical problems posed by autonomous systems in
detail, it is worth taking a broader look at the general characteristics of algorithmic
systems, which are ultimately responsible for the irritations and disruptive effects we
are currently observing worldwide in all legal systems.
suspended. When such an event occurs, assumptions about the individuality of actors, which are constitutive for the attribution of action and responsibility, break down. Both the actor and the causal relationships become difficult, if not impossible, to identify.
In order to address these problems, various solutions have been proposed. For contractual claims, there have been discussions as to whether the doctrine of privity of contract must be overcome, for example by accepting linked contracts31 or through the concept of a contractual network.32 For non-contractual claims, some scholars propose pro-rata liability for all those involved in the network, requiring the actors themselves to answer for the unlawful behavior of the networked algorithms,33 whereas others favor attributing legal responsibility not to people, organizations, networks, software agents, or algorithms, but rather to risk pools and the decisions themselves.34
2.2.3 Autonomy
Probably the biggest problem is the growing degree of autonomy of AI systems and
smart robotics.40 Self-learning systems are not explicitly programmed; instead, they
37 Zarsky, “Correlation versus Causation in Health-Related Big Data Analysis. The Role of Reason and Regulation” in Cohen et al. (n 36) 42, 50.
38 Silver, The Signal and the Noise. Why So Many Predictions Fail – but Some Don’t (The Penguin Press 2012) 162.
39 Marcus and Davis, “Eight (No, Nine!) Problems with Big Data” New York Times (6 April 2014) <www.nyti.ms/1kgErs2>. Cf. also Kosinski, Stillwell, and Graepel, “Private Traits and Attributes are Predictable from Digital Records of Human Behavior” (2013) Proceedings of the National Academy of Sciences of the United States of America (PNAS) 5802 <www.pnas.org/content/110/15/5802.full>, stating a correlation between high intelligence and Facebook likes of “thunderstorms,” “The Colbert Report,” and “curly fries,” while users who liked the “Hello Kitty” brand tended to be higher in openness and lower in conscientiousness, agreeableness, and emotional stability.
40 In the discussion, various criteria are offered as the starting point from which an AI system can be regarded as autonomous. What is clear, however, is that autonomy seems to be a gradual phenomenon. On the different concepts of autonomy cf. Bertolini, “Robots as Products: The Case for a Realistic Analysis of Robotic Applications and Liability Rules” (2013) 5(2) Law, Innovation and Technology 214, 220 et seq.; Floridi and Sanders, “On the Morality of Artificial Agents” in Anderson and Anderson (eds), Machine Ethics (Cambridge University Press 2011) 184, 192; Zech, “Zivilrechtliche Haftung für den Einsatz von Robotern: Zuweisung von Automatisierungs- und Autonomierisiken” in Gless and Seelmann (eds), Intelligente Agenten
are trained on thousands or even millions of examples, so that the system develops by learning from experience. The increasing use of ML systems poses great challenges for legal systems. Beyond a certain level of automation, it seems impossible to ascertain whether the programmer, the producer, or the operator is responsible for actions caused by such systems. Specific problems arise in particular from the point of view of foreseeability and causation.
As regards foreseeability, we have already seen numerous instances of AI making
decisions that a person would not have made or would have made differently.
A particularly fascinating example, highlighted by Matthew Scherer,41 comes from C-Path, a machine-learning program for the detection of cancer. Pathologists had believed that the study of tumor cells was the best method for diagnosing cancer, whereas studying the supporting tissue (stroma) might only aid in cancer prognosis. But in a
large study, C-Path found that the properties of stroma were actually a better prognostic
indicator for breast cancer than the properties of the cancer cells themselves – a
conclusion that contradicted both common sense and predominant medical think-
ing.42 Another example concerns AlphaGo, a computer program developed by Google
DeepMind that defeated Lee Sedol, the South Korean world champion Go player, in a
five-game match in March 2016. As DeepMind noted on their blog, “during the games
AlphaGo played a handful of highly inventive winning moves, one of which–move
37 in game 2–was so surprising it overturned hundreds of years of received wisdom and
has been intensively examined by players since. In the course of winning, AlphaGo
somehow taught the world completely new knowledge about perhaps the most studied
game in history.”43 Both examples show that AI systems may act in unforeseeable ways,
as they come up with solutions that humans may not have considered, or that they
considered and rejected in favor of more intuitively appealing options.
The experiences of a self-learning AI system can also be viewed, as Scherer
correctly points out, as a superseding cause – that is, “an intervening force or act
that is deemed sufficient to prevent liability for an actor whose tortious conduct was
a factual cause of harm”44 – of any harm that such systems cause. This is especially
true when an AI system learns not only during the design phase, but also after it has
already been launched on the market. In this case, even the most cautious designers,
und das Recht, (Nomos Verlagsgesellschaft 2016) 163, 170 et seq., fn. 16. For the different levels
of automation for self-driving cars, see the categories proposed by SAE International (Society of
Automotive Engineers) and DOT (US Department of Transportation); DOT, “Federal Auto-
mated Vehicles Policy” (September 2016) 9, available at <www.nhtsa.gov/technology-innov
ation/automated-vehicles-safety>.
41 Scherer (2016) 29(2) Harvard Journal of Law and Technology 353, 363‒364.
42 Beck et al., “Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features Associated with Survival” (2011) 3(108) Science Translational Medicine 1.
43 Hassabis, “The Mind in the Machine: Demis Hassabis on Artificial Intelligence” Financial Times (21 April 2017) <www.ft.com/content/048f418c-2487-11e7-a34a-538b4cb30025>.
44 Restatement (Third) of Torts: Phys. & Emot. Harm § 34 cmt. b (AM. LAW INST. 2010).
The process level refers to the different steps an AI system has gone through in order
to make an autonomous decision, usually beginning with the data-acquisition phase,
followed by data pre-processing, the selection of features, the training and application
of the AI model, and the post-processing phase, in which steps are taken to improve
and revise the output of the AI model. Exact knowledge of these steps is necessary to
understand decisions. If, for example, a discriminatory decision is based on biased
training data, precise knowledge of the data-acquisition phase is required. The model
level, on the other hand, refers to the different types of algorithms that are used for
decision-making, for example, decision trees, Bayesian networks, support-vector
machines, k-nearest neighbors, or neural networks. This must be distinguished from
the classification level, which provides information about which attributes (e.g.,
gender, age, salary) are used in the model and what weight is given to each attribute.
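The classification level can be made concrete with a short sketch. Everything below is invented for illustration (the applicant data, the feature names, and the training routine): it trains a plain logistic-regression classifier and then reads off which attributes carry weight in its decisions.

```python
import math
import random

# Sketch of the "classification level": for a linear model such as
# logistic regression, the learned per-attribute coefficients show
# directly which attributes the model uses and with what weight.
# All data and feature names here are invented for illustration.

def train_logistic(rows, labels, features, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression; returns (weights, bias)."""
    w = {f: 0.0 for f in features}
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(w[f] * x[f] for f in features)
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            err = p - y                      # gradient of the log-loss
            for f in features:
                w[f] -= lr * err * x[f]
            b -= lr * err
    return w, b

# Synthetic "loan applicants": by construction the label depends
# only on (normalized) salary, never on age.
rng = random.Random(1)
features = ("salary", "age")
rows, labels = [], []
for _ in range(200):
    applicant = {"salary": rng.uniform(0, 1), "age": rng.uniform(0, 1)}
    rows.append(applicant)
    labels.append(1 if applicant["salary"] > 0.5 else 0)

w, b = train_logistic(rows, labels, features)
# Inspecting w reveals that "salary" dominates the decision, while the
# irrelevant "age" attribute carries comparatively little weight.
```

For such a linear model, the classification level is fully inspectable: the coefficient attached to each attribute answers directly which attributes matter and by how much. As the following paragraphs show, precisely this inspection becomes much harder for more complex model classes.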
Opacity in ML algorithms can have quite different causes.51 First, algorithms might be kept secret intentionally for the sake of competitive advantage,52 national security,53 or privacy.54 Keeping an AI system opaque can also be important to ensure its effectiveness, for example by preventing spambots from using the disclosed algorithm to attack the system.55 Moreover, corporations might wish to protect their ADM system to avoid or confound regulation, and/or to conceal the manipulation of or discrimination against consumers.56 Second, opacity can be an expression of technical illiteracy. Writing and reading code, as well as designing algorithms, requires expertise that the majority of the population does not have. Third, opacity may arise from the unavoidable complexity of ML models. As Burrell notes, in the era of big data, “Billions or trillions of data examples or tens of thousands of properties of the data (termed ‘features’ in ML) may be analyzed. . . . While datasets may be extremely large but possible to comprehend, and code may be written with clarity, the interplay between the two in the mechanism of the algorithm is what yields the complexity (and thus opacity).”57
Apart from that, it is important to understand that different classes of ML
algorithms have different degrees of transparency as well as performance.58 Thus,
51 Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms” (2016 January‒June) Big Data & Society 1.
52 Kitchin (n 14).
53 Leese, “The New Profiling: Algorithms, Black Boxes, and the Failure of Anti-discriminatory Safeguards in the European Union” (2014) 45(5) Security Dialogue 494.
54 Mittelstadt, Allo, Taddeo, Wachter, and Floridi, “The Ethics of Algorithms: Mapping the Debate” (2016 July‒September) Big Data & Society 1, 6.
55 Sandvig, Hamilton, Karahalios, and Langbort, “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms” in Annual Meeting of the International Communication Association (2014) <http://social.cs.uiuc.edu/papers/pdfs/ICA2014-Sandvig.pdf>, 1, 9.
56 Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press 2015) 2.
57 Burrell (n 51) 5.
58 Waltl and Vogl (n 49).
for example, deductive and rule-based systems (such as decision trees) have a high degree of transparency: since each node represents a decision, the path to the respective leaf can be read as an explanation for a concrete decision. By comparison, artificial neural networks (ANN), especially deep learning systems, show a very high degree of opacity. In such a network, learned information is not stored at a single point but is distributed across the entire net, by modifying the architecture of the network and the strength of the individual connections between neurons (represented as input “weights” in artificial networks). ANN systems therefore possess a high degree of unavoidable complexity and opacity. When it comes to performance, on the other hand, it is precisely ANNs that show a much higher degree of accuracy and effectiveness than decision trees.59
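The transparency of rule-based systems can be illustrated with a short sketch. The loan-decision tree, its attributes, and its thresholds below are hypothetical, invented only to show how the root-to-leaf path doubles as an explanation:

```python
# Minimal sketch of why decision trees are considered transparent:
# the root-to-leaf path taken for a concrete input can be read back
# verbatim as the explanation of the decision.

class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right
        self.label = label  # set only on leaves

def decide(node, sample, trace=None):
    """Walk the tree, collecting each test on the way as the explanation."""
    if trace is None:
        trace = []
    if node.label is not None:
        return node.label, trace
    value = sample[node.feature]
    if value <= node.threshold:
        trace.append(f"{node.feature} = {value} <= {node.threshold}")
        return decide(node.left, sample, trace)
    trace.append(f"{node.feature} = {value} > {node.threshold}")
    return decide(node.right, sample, trace)

# Hypothetical loan-decision tree.
tree = Node("salary", 30_000,
            left=Node(label="reject"),
            right=Node("debt_ratio", 0.4,
                       left=Node(label="grant"),
                       right=Node(label="reject")))

label, explanation = decide(tree, {"salary": 42_000, "debt_ratio": 0.55})
# label == "reject"; explanation lists the two tests that produced it:
# "salary = 42000 > 30000", then "debt_ratio = 0.55 > 0.4"
```

No comparably compact, faithful explanation exists for a deep network, whose “reasons” are smeared across thousands or millions of connection weights.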
We are therefore faced with a dilemma: How can human-interpretable systems be
designed without sacrificing performance?
59 Waltl and Vogl (n 58).
60 Melzer, Targeted Killing in International Law (Oxford University Press 2008); Wagner, “The Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implications of Autonomous Weapons Systems” (2014) 47 Vanderbilt Journal of Transnational Law 1371; Crawford, “The Principle of Distinction and Remote Warfare” (2016) Sydney Law School Research Paper No 16/43; Ohlin, “Remoteness and Reciprocal Risk” (2016) Cornell Legal Studies Research Paper No 16–24.
be that the decision to kill a human person in a concrete combat situation cannot be
delegated to a machine.61
The question as to whether decisions should be delegated to machines also arises
in many other cases, especially when decisions by states are involved:
61 European Parliament, Resolution of 12 September 2018 on autonomous weapon systems, P8_TA-PROV(2018)0341; Scharre, “The Trouble with Trying to Ban ‘Killer Robots,’” World Economic Forum, 4 September 2017 <www.weforum.org/agenda/2017/09/should-machines-not-humans-make-life-and-death-decisions-in-war/>.
62 Cf. Coglianese and Lehr, “Regulating by Robot: Administrative Decision Making in the Machine-Learning Era” (2017) 105 Georgetown Law Journal 1147 <https://ssrn.com/abstract=2928293>.
63 Cf. Council of Europe, European Commission for the Efficiency of Justice (CEPEJ), “European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment,” adopted by the CEPEJ during its 31st Plenary meeting (Strasbourg, 3‒4 December 2018), CEPEJ(2018)14 (Council of Europe, Ethical Charter).
64 Pagallo and Durante, “The Pros and Cons of Legal Automation and Its Governance” (2016) 7 European Journal of Risk Regulation 323.
65 Porat and Strahilevitz, “Personalizing Default Rules and Disclosure with Big Data” (2014) 112 Michigan Law Review 1417; Ben-Shahar and Porat, “Personalizing Negligence Law” (2016) 91 NYU Law Review 627; Hacker, “Personalizing EU Private Law. From Disclosures to Nudges and Mandates” (2017) 25 European Review of Private Law (ERPL) 651. Moreover, see (2019) 86.2 University of Chicago Law Review, a special issue on “Personalized Law.”
66 Möslein, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in Barfield and Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (2018) <https://ssrn.com/abstract=3037403>.
67 Cf. Section 2.7.1.4.
At present, there is no legal system in the world that provides satisfactory answers
to these questions. In the European Union, Art 22 GDPR68 prohibits fully auto-
mated decisions. However, this provision has a rather limited scope of application.
First, it establishes numerous exceptions in Art 22(2) GDPR. And second, it only
covers decisions “based solely on automated processing” of data (Art 22(1) GDPR).
Since most algorithmically prepared decisions still involve a human being, the majority of ADM procedures are not covered by the prohibition of Art 22 GDPR.69
The policy decision as to which decisions must be reserved for humans is by no
means an easy one,70 as the transfer of decision-making power to machines brings
great advantages, especially in terms of efficiency and costs. The political decision
not to transfer certain tasks to machines can thus lead to economic loss. Moreover,
in most cases it is impossible to make a clear distinction between purely machine
and purely human decisions. Rather, many decisions are made in a more or less
symbiotic relationship between humans and machines. For this reason, it is very
difficult to determine at what point in this continuum the “essence of humanity” is
compromised.
healthy person connects his body with a BCI in order to be more efficient (BCI
enhancement). The blurring of the distinction between man and machine makes it
more difficult to assess the limits of the human body and raises questions concerning
free will and moral responsibility.71
Should everyone be free to expand and influence their cognitive, mental, and
physical abilities beyond the boundaries of the natural? Is such a fusion socially
desirable and ethically acceptable? If we restrict individual enhancement, should
those limits include only biological considerations (in order to restore the body to a
“normal” state) or psychological ones as well? Does our existing liability framework
provide appropriate remedies for those who suffer injuries caused by BCI systems,
especially in cases where users may be able to send thoughts or commands to other
people, including unintended commands? Is the existing data protection law suffi-
cient or do we need to protect highly sensitive personal BCI data emanating from
the human mind in a particular way? What precautions must be taken against brain
spyware?
Leading international neuroscientists confronted with such questions are calling for ethical and legal guidelines for the use of BCIs.72
71 Schermer, “The Mind and the Machine. On the Conceptual and Moral Implications of Brain‒Machine Interaction” (2009) 3(3) Nanoethics 217.
72 Clausen et al., “Help, Hope, and Hype: Ethical Dimensions of Neuroprosthetics” (2017) 356 Science 1338 et seq. <http://science.sciencemag.org/content/356/6345/1338>; Bostrom and Sandberg, “Cognitive Enhancement: Methods, Ethics, Regulatory Challenges” (2009) 15 Sci Eng Ethics 311; Holder et al., “Robotics and Law: Key Legal and Regulatory Implications of the Robotics Age (Part II of II)” (2016) 32 Computer Law & Security Review 557, 570 et seq.
73 Bostrom, Superintelligence (Oxford University Press 2014); Russell, “3 Principles for Creating Safer AI” (2017), retrieved from <www.youtube.com/watch?v=EBK-a94IFHY>; Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk” in Bostrom and Cirkovic (eds), Global Catastrophic Risks (Oxford University Press 2008) 308–348.
74 Turchin and Denkenberger, “Classification of the Global Solutions of the AI Safety Problem,” PhilArchive copy v1 <https://philarchive.org/archive/TURCOT-6v1>; Sotala and Yampolskiy, “Responses to Catastrophic AGI Risk: A Survey,” last modified 13 September 2013 <https://iopscience.iop.org/article/10.1088/0031-8949/90/1/018001>.
highlighting that the suitability of the Directive may be tested when it comes to AI-powered advanced robots and autonomous self-learning systems.78 In the same vein, the UK Science and Technology Committee maintains that so far, according to experts, “no clear paths exist for the verification and validation of autonomous systems whose behavior changes with time.”79 Another report notes that regulation lags behind and is not yet consolidated, resulting in gaps and overlaps between standards.80
International standard-setting organizations also see a need for action. Work in this area has already started in the Joint Technical Committee 1 of ISO and IEC (JTC 1) and its subcommittee (SC) 42 (JTC 1/SC 42),81 which is led by the American National Standards Institute (ANSI)82 as its US secretariat. Similar initiatives have been taken since 2018 by the European standardization organizations CEN and CENELEC.83
78 Commission Staff Working Document, “Evaluation of the Machinery Directive,” SWD (2018) 161 final 38.
79 UK Science and Technology Committee, “Robotics and Artificial Intelligence, Fifth Report,” Session 2016‒17, HC 145 <www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/145/145.pdf>.
80 Jacobs, “Report on Regulatory Barriers, Robotics Coordination Action for Europe Two,” Grant Agreement Number: 688441, 3 March 2017.
81 <www.iso.org/committee/6794475.html>.
82 <www.ansi.org/>.
83 Schettini Gherardini, “Is European Standardization Ready to Tackle Artificial Intelligence?,” 19 September 2018 <www.linkedin.com/pulse/european-standardization-ready-tackle-artificial-bardo/>.
84 Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” arXiv preprint arXiv:1802.07228, 2018. Cf. also King et al., “Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions” (2019) Science and Engineering Ethics <https://doi.org/10.1007/s11948-018-00081-0>.
85 Cf. the extensive references in note 94. For the discussion in the USA, cf. moreover Geistfeld, “A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and Federal Safety Regulation” (2017) 105(6) California Law Review 1611; Hubbard, “‘Sophisticated Robots’: Balancing Liability, Regulation, and Innovation” (2014) 66(5) Florida Law Review 1803; Karnow, “The Application of Traditional Tort Theory to Embodied Machine Intelligence” in Calo, Froomkin, and Kerr (eds), Robot Law (Edward Elgar Publishing 2016) 51 et seq.; Selbst, “Negligence and AI’s Human Users,” Boston University Law Review, forthcoming <www.ssrn.com/abstract=3350508>. For the European discussion cf. Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer 2013); Ebers, “La utilización de agentes electrónicos inteligentes en el tráfico jurídico: ¿Necesitamos reglas especiales en el Derecho de la responsabilidad civil?,” InDret 3/2016 <www.indret.com/pdf/1245.pdf>; Ebers, “Autonomes Fahren: Produkt- und Produzentenhaftung” in Oppermann and Stender-Vorwachs (eds), Autonomes Fahren (CH Beck 2017) 93 et seq. <https://ssrn.com/abstract=3192911>; Wagner, “Produkthaftung für autonome Systeme” (2017) 217 Archiv für die civilistische Praxis (AcP) 707. Cf. also Navas, Chapter 5, and Janal, Chapter 6 in this book.
system of strict liability, that is, liability without fault, for producers when a defective
product causes physical or material damage to the injured person. Whether this
directive is sufficient to take into account the special features of AI systems and
robots is controversial.
First of all, it is not clear whether the directive, with its definition of “product,”86
also covers non-tangible AI software and especially cloud technologies. Second, the
directive only applies to products and not to services.87 Companies providing
services such as (real-time) data services, data access, data-analytics tools, and
machine-learning libraries are therefore not liable under the Product Liability
Directive88 so that national (non-harmonized) law decides whether the (strict)
liability rules developed for product liability can be applied accordingly to services.
Third, there is the problem that, under Art 4 Product Liability Directive, the injured party must prove that the product was defective when it was put into circulation. This is precisely what is difficult with learning AI systems. Is unintended autonomous behavior of an AI system or an advanced robot a defect? Can the producer invoke the “development risks defence” admitted by Art 7(e) of the directive and claim an exemption from liability, arguing that he could not have foreseen that the product would not provide the safety a person could expect? How can a defect be proven at all,89 if the product’s behavior changes over its lifetime through learning experiences over which the manufacturer no longer has any influence once the product is launched onto the market? And what about cybersecurity? Could a software vulnerability (caused, for instance, by a cyber-attack, a failure to update security software, or a misuse of information) be considered a defect?
86 According to Art 2(1) Product Liability Dir., “product” means all movables even if incorporated into another movable or into an immovable. The directive, however, is silent on whether movables need to be tangible. Given that Art 2(2) explicitly includes an intangible item like electricity, this could mean that tangibility is not a relevant criterion in terms of the directive. On the other hand, it could be argued that electricity is an exception which cannot be generalized.
87 Cf. ECJ, 21.12.2011, case C-495/10 (Dutrueux), ECLI:EU:C:2011:869; Commission Staff Working Document, Evaluation of Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations, and administrative provisions of the Member States concerning liability for defective products, SWD(2018) 157 final 7. Cf. also the failed proposal for a Council Directive on the Liability of Suppliers of Services, COM(90) 482 final, OJ 1990 C 12/8. The new Digital Content Directive (DCD) does not change this either, as damages are left to national law; cf. Art 3(10) DCD.
88 Service providers could only be liable if they manufacture the product as part of their service; if they put their name, trade mark, or other distinguishing feature on the product; or if they import the product into the EU. However, they do not incur any product liability for the service rendered by them.
89 According to Borghetti, “How Can Artificial Intelligence Be Defective?” in Lohsse, Schulze, and Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos 2019) 63, 71, “defectiveness is not an adequate basis for liability,” because in most circumstances “it will be too difficult or expensive to prove the algorithm’s defect.”
Finally, the question arises whether the definition of damages laid down in the directive is adequate, since it does not cover all types of possible damage, especially damage that can be caused by new technological developments, such as economic losses, privacy infringements, or environmental damage.
With these factors in mind, the European Commission is currently assessing whether national and EU safety and liability frameworks are fit for purpose in light of these new challenges, or whether any gaps should be addressed. Within the coming months, a report is to be drawn up on this subject, supplemented by guidance on the interpretation of the Product Liability Directive in light of technological developments, to ensure legal clarity for consumers and producers in the event of defective products.90
90 European Commission, Communication “Coordinated Plan on Artificial Intelligence,” COM (2018) 795 final 8.
91 Selbst (n 85).
92 European Parliament, Resolution (n 21), Nos 2, 53, 57, 58.
93 Critically, Lohmann (n 21) 170.
This proposal has been strongly criticized, including in an open letter from a group of “Artificial Intelligence and Robotics Experts” in April 2018,96 which called for the idea of creating the legal status of an “electronic person” to be discarded from both a technical perspective and a normative, in other words legal and ethical, viewpoint.
Indeed, the introduction of a legal personhood for AI systems and/or robots is
problematic for several reasons. First, it is questionable how AI systems and/or robots
can be identified at all. Should personhood be conferred on the hardware, the
software, or some combination of the two? To make matters worse, the hardware
and software may be dispersed over several sites and maintained by different
individuals. They may be copied, deleted, or merged with other systems at very
low cost. Even if software agents and/or robots had to be registered in the future,
there would be a number of cases in which the “acting” machine could not be
94 Solum, “Legal Personhood for Artificial Intelligence” (1992) 70 North Carolina Law Review 1231; Karnow, “Liability for Distributed Artificial Intelligence” (1996) 11 Berkeley Technology Law Journal 147; Allen and Widdison, “Can Computers Make Contracts?” (1996) 9 Harvard Journal of Law & Technology 26; Sartor, “Agents in Cyber Law” in Proceedings of the Workshop on the Law of Electronic Agents, CIRSFID (LEA02) (Gevenini 2002) 7; Teubner, “Rights of Non-humans? Electronic Agents and Animals as New Actors in Politics and Law” (2006) 33 Journal of Law & Society 497, 502; Matthias, Automaten als Träger von Rechten. Plädoyer für eine Gesetzesänderung, PhD Thesis, Berlin 2007; Chopra and White, A Legal Theory for Autonomous Artificial Agents, 2011. For an overview of the different concepts cf. Koops, Hildebrandt, and Jaquet-Chiffelle, “Bridging the Accountability Gap: Rights for New Entities in the Information Society?” (2010) 11(2) Minnesota Journal of Law, Science & Technology 497; Pagallo, “Apples, Oranges, Robots: Four Misunderstandings in Today’s Debate on the Legal Status of AI Systems” (2018) Philosophical Transactions of the Royal Society A376.
95 European Parliament, Resolution (n 21), No 59.
96 <www.robotics-openletter.eu/>.
identified as a person at all. The introduction of a specific legal status for machines
would therefore by no means solve all liability problems.
The second problem is that the electronic agent would have to be equipped with
its own assets in order to compensate victims. Such a solution raises, first of all, the
question of who should make the assets available: The manufacturer? The operator?
The keeper/owner or the user? All of them? Or the robot itself, depending on the
profit it makes? Additionally, it remains unclear how the relevant funds should be
paid out in the event of damages. If strict liability were applied here, it is not clear
what advantages the introduction of legal personhood would bring over introducing
a stricter tort law. All these considerations show that creating legal personhood
for machines does not seem economically very efficient, as the same purpose
can be achieved more easily simply by introducing strict liability and/or requiring
insurance.97
Last but not least, many fear that the agenthood of artificial agents could be a
means of shielding humans from the consequences of their conduct.98 Damages
provoked by the behavior and decisions of AI systems would not fall on the
manufacturers, keepers, etc. Instead, only AI systems would be liable. Moreover,
there is the danger of machine insolvency: “Money can flow out of accounts just as
easily as it can flow in; once the account is depleted, the robot would effectively be
unanswerable for violating human legal rights.”99
All in all, the decision to confer a legal personality on an autonomous system
would most likely lead to more questions and problems than solutions.
97 Nevejans, "European Civil Law Rules in Robotics," Study for the European Parliament, Policy
Department C: Citizens' Rights and Constitutional Affairs – Legal Affairs, European Union 2016
<https://www.europarl.europa.eu/RegData/etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf> 15;
Keßler, "Intelligente Roboter – neue Technologien im Einsatz" (2017) MultiMedia und Recht (MMR) 593.
98 Bryson, Diamantis, and Grant, "Of, for, and by the People: The Legal Lacuna of Synthetic
Persons” (2017) 25 Artificial Intelligence and Law 273.
99 Bryson, Diamantis, and Grant (n 98) 288.
62 Martin Ebers
of ML method used.100 Or, as Peter Norvig, chief scientist at Google, puts it: “We
don’t have better algorithms than anyone else. We just have more data.”101 This is
precisely one of the reasons why some of the most successful companies today are
those that have the most data on which to train their algorithms.
The race for AI is particularly influenced by the network effects that are already
known from the platform economy: the more users a company has, the more
personal data can be collected and processed to train the algorithms. This in turn
leads to better products and services, which results in more customers and more
data. In view of these network effects, some fear that the market for AI systems will
become oligopolistic with high barriers to entry.102 According to Pedro Domingos,
“Control of data and ownership of the models learned from it is what many of the
twenty-first century’s battles will be about – between governments, corporations,
unions, and individuals.”103
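The feedback loop sketched in this paragraph (more users yield more data, more data yields better models, and better models attract more users) can be made concrete with a deliberately simplified simulation. Everything in it, from the square-root quality function to the growth rule and the pool of one million potential users, is an illustrative assumption rather than an empirical model of any real market:

```python
# Toy model of the data-driven network effect: a firm's product quality
# grows with its data stock (with diminishing returns), and its user base
# grows with quality, feeding back into the data stock.

def simulate(initial_users: float, steps: int = 20, pool: float = 1_000_000) -> float:
    """Return the user base after `steps` rounds of the feedback loop."""
    users = initial_users
    for _ in range(steps):
        data = users                # data collected scales with the user base
        quality = data ** 0.5       # better models, but with diminishing returns
        # relative quality advantage translates into user growth, capped by the pool
        users = min(pool, users * (1 + quality / (quality + 1000)))
    return users

incumbent = simulate(100_000)       # firm starting with a large user base
entrant = simulate(1_000)           # firm starting with a small user base
print(incumbent / entrant)          # the incumbent's relative lead grows over time
```

Under these assumptions the incumbent saturates the market within a few rounds while the entrant grows only slowly, so the initial 100:1 gap widens rather than closes. That self-reinforcing divergence is the intuition behind the oligopoly concern discussed in the text.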
A number of very different questions arise from consideration of these points:
When should companies and governments be allowed to process personal data
using big-data analysis? Is (European) data protection law compatible with big-
data and AI systems? Who “owns” personal and non-personal data? How can
companies protect investments that flow into big-data analysis? Should we recognize
“data ownership” or “data producer’s rights”? To what extent must competitors be
given the opportunity to gain access to data from other companies?
personal data is increasingly both the source and the target of AI applications.
Accordingly, AI technologies create strong incentives to collect and store as much
additional data as possible in order to gain meaningful new insights. This trend is
further reinforced by the shift to ubiquitous tracking and surveillance through
“smart” devices and other networked sensors omnipresent in the IoT. AI amplifies
large-scale surveillance through techniques that analyze video, audio, images, and
social media content across entire populations. The spread of smart robots in
everyday life contributes to this development. As Ryan Calo104 points out, robots
not only greatly facilitate direct surveillance; they also introduce new points of
access to historically protected spaces. Moreover, through becoming increasingly
human-like, the social nature of robots may lead to new varieties of highly sensitive
personal information.
In light of this development, there is growing doubt as to whether the existing data
protection rules are sufficient to ensure adequate protection. This is particularly the
case in countries such as the USA, where data protection legislation is a patchwork
of sector-specific laws that fail to adequately protect privacy.105
104 Calo, "Robots and Privacy" in Lin, Abney, and Bekey (eds), Robot Ethics: The Ethical and
Social Implications of Robotics (MIT Press 2012) 187 et seq.
105 According to Solove, "Privacy and Power: Computer Databases and Metaphors for Information
Privacy” (2001) 53 Stanford Law Review 1393, 1430, the US system of data protection is one
which “uses whatever is at hand [. . .] to deal with the emerging problems created by the
information revolution.”
106 In the era of big data, anonymous information can be de-anonymized by employing related and
non-related data about a person; Barocas and Nissenbaum, “Big Data’s End Run around
Anonymity and Consent” in Julia Lane et al. (eds), Privacy, Big Data and the Public Good
(Cambridge University Press 2014) 49 et seq.; Floridi, The 4th Revolution (Oxford University
Press 2014) 110; Rubinstein and Hartzog, “Anonymization and Risk” (2016) 91 Washington Law
Review 703, 710‒711.
obligations concerning the ML models themselves in the period after they have
been built but before any decisions have been taken about using them. As a rule, ML
models do not contain any personal data, but only information about groups and
classes of persons.107 Although algorithmically designed group profiles may have a
big impact on a person,108 (ad hoc) groups are not recognized as holders of privacy
rights. Hence, automated data processing by which individuals are clustered into
groups or classes (based on their behavior, preferences, and other characteristics)
creates a loophole in data protection law, pointing toward the need to recognize in
the future some type of “group privacy” right.109
Beyond the issue of group privacy there is a series of further issues that show how
little the GDPR takes into account the peculiarities of AI systems, self-learning
algorithms, and big-data analytics, as many basic concepts and rules are in tension
with these practices.110
First of all, the principle of purpose limitation (Art 5(1)(b) GDPR) is at odds with
the prospect of big-data analyses.111 According to this principle, personal data must
be collected for specified, explicit, and legitimate purposes and not further processed
in a way incompatible with those purposes. However, analyzing big data quite often
involves methods and usage patterns which neither the entity collecting the data nor
the data subject considered or even imagined at the time of collection. Additionally,
107 This could change due to evolving technologies. Cf. in particular Veale, Binns, and Edwards,
“Algorithms that Remember: Model Inversion Attacks and Data Protection Law,” Philosophical
Transactions of the Royal Society A 376: 20180083, <http://dx.doi.org/10.1098/rsta.2018.0083>,
with the assumption that new forms of cyber attacks are able to reconstruct training data (or
information about who was in the training set) in certain cases from the model.
108 As Hildebrandt, "Slaves to Big Data. Or Are We?" (2013) IDP Revista De Internet, Derecho y
Política 27, 33 et seq., notes: “If three or four data points of a specific person match inferred data
(a profile), which need not be personal data and thus fall outside the scope of data protection
legislation, she may not get the job she wants, her insurance premium may go up, law
enforcement may decide to start checking her email or she may not gain access to the
education of her choosing.”
109 For further discussion, see Mittelstadt, "From Individual to Group Privacy in Biomedical Big
Data” in Cohen, Lynch, Vayena, and Gasser (eds), Big Data, Health Law, and Bioethics
(Cambridge University Press 2018) 175 et seq.; Taylor, Floridi, and van der Sloot (eds), Group
Privacy: New Challenges of Data Technologies (1st edn, Springer 2017).
110 Zarsky, "Incompatible: The GDPR in the Age of Big Data" (2017) 47(4) Seton Hall Law Review
995; Humerick, “Taking AI Personally: How the E.U. Must Learn to Balance the Interests of
Personal Data Privacy & Artificial Intelligence” (2018) 34 Santa Clara High Tech Law Journal
393. In contrast, the Information Commissioner’s Office (ICO) in the UK does “not accept the
idea that data protection, as currently embodied in legislation, does not work in a big data
context,” ICO, “Big Data, Artificial Intelligence, Machine Learning, and Data Protection,”
20170904 (Version 2.2) 95. Cf. also Pagallo, “The Legal Challenges of Big Data: Putting
Secondary Rules First in the Field of EU Data Protection” (2017) 3 European Data Protection
Law Review 36, with reference to two possible solutions to make the collection and use of Big
Data compatible with the GDPR: the use of pseudonymization techniques and the exemption
of data processing for statistical purposes.
111 Forgó, Hänold, and Schütze, "The Principle of Purpose Limitation and Big Data" in Corrales,
Fenwick, and Forgó (eds), New Technology, Big Data and the Law (Springer 2017) 17 et seq.
112 Noto La Diega, "Against the Dehumanisation of Decision-Making. Algorithmic Decisions at
the Crossroads of Intellectual Property, Data Protection, and Freedom of Information” (2018)
9(1) jipitec (Journal of Intellectual Property, Information Technology and E-Commerce Law) 1.
113 Cf. Section 2.7.1.
114 Council of Europe, "Report on Artificial Intelligence. Artificial Intelligence and Data Protection:
Challenges and Possible Remedies,” report by Alessandro Mantelero, T-PD(2018)09Rev <https://
rm.coe.int/artificial-intelligence-and-data-protection-challenges-and-possible-re/168091f8a6> 7.
To address these issues, legal scholars have highlighted the potential role of transparency, or risk
assessment as well as more flexible forms of consent, such as broad consent and dynamic consent;
Mantelero, “Regulating Big Data. The guidelines of the Council of Europe in the Context of the
European Data Protection Framework” (2017) 33(5) Computer Law and Security Review 584.
respect, two (extreme) scenarios are conceivable.115 On the one hand, the GDPR
might allow EU citizens to benefit from enhanced data protection, while still
enjoying the innovations data analytics bring about.116 On the other hand, the
GDPR could threaten the development of AI, creating high market-entry barriers
for companies developing and/or using AI systems. According to this view, overregulation
of personal data would lead to limited research and use of AI products.
Recent surveys show that such a scenario is not unlikely: many companies see data
protection as an obstacle to competition and are already complaining that AI
products cannot be developed and distributed in the EU due to the strict rules.117
For all these reasons, a thorough balancing seems necessary. If the EU wants to
keep up with the global race to AI, it must carefully balance its interest in protecting
personal data against its interest in developing new AI technologies.
commodities has long been recognized.121 However, the problem with every "contractual"
approach is that contractual obligations are only binding inter partes.
Consequently, third parties cannot be prevented legally by contracts from using the
data. In light of these considerations, there is an intensive discussion, especially in
Europe, about whether a(n) (intellectual) property right in personal and/or non-
personal data with erga omnes effect should be recognized.122
Personal data. The discussion about possible property rights in data is not new. US
scholars have been debating whether personal information should be viewed as
property since the early 1970s.123 The current debate, however, is based on very
different premises. As Purtova points out, the propertization of personal information
was viewed in the USA mainly as an alternative to the existing data protection regime
and one of the ways to fill in the gaps in the US data protection system.124 It is different
in Europe, where the GDPR provides a comprehensive set of data protection rules
that in the end would interfere with the recognition of property rights in personal data.
First of all, as the European Commission points out, such a property right would be
incompatible with the fact that “the protection of personal data enjoys the status of a
fundamental right in the EU.”125 In addition, a property right in personal data would
be inconsistent with Art 7(3) GDPR, according to which consent can be withdrawn
even against the will of the entitled legal entity. Finally, even if a right to one's data were
constituted, it would remain a challenge to assign such a right to one single person, as
most personal data relates to more than one data subject.126
for a Directive on certain aspects concerning contracts for the supply of digital content (14
March 2017) 7‒9 and 16‒17.
121 COM (2017) 228 final, under 3.2; SWD (2017) 2 final 16; cf. Berger, "Property Rights to Personal
Data? – An Exploration of Commercial Data Law” (2017) Zeitschrift für geistiges Eigentum
(ZGE) 340: “data contract law lies at the heart of commercial data law.”
122 For an overview of the academic discussion in several countries cf. Osborne Clarke LLP,
"Legal Study on Ownership and Access to Data," Study prepared for the European Commission
DG Communications Networks, Content & Technology, 2016.
123 Westin, Privacy and Freedom (Atheneum 1967); Lessig, "Privacy as Property" (2002) 69(1)
Social Research: An International Quarterly of Social Sciences 247; Schwartz, “Property, Privacy
and Personal Data” (2004) 117(7) Harvard Law Review 2055.
124 Purtova, "Property Rights in Personal Data: Learning from the American Discourse" (2009) 25
Computer Law & Security Review 507.
125 European Commission, "Staff Working Document on the free flow of data and emerging
issues of the European data economy accompanying the document Communication Building
a European data economy,” 10 January 2017, SWD (2017) 2 final 24.
126 Purtova, "Do Property Rights in Personal Data Make Sense after the Big Data Turn?: Individual
Control and Transparency" (2017) 10(2) Journal of Law and Economic Regulation 64.
127 Raw machine-generated data are not protected by existing IP rights since they are not deemed
to be the result of an intellectual effort and/or have no degree of originality. Likewise, the
argued in favor of the creation of a new property right with the objective of
enhancing the tradability of anonymized machine-generated data.128 The European
Commission also temporarily considered the introduction of a “data producer’s
right” with the aim of “clarifying the legal situation and giving more choice to the
data producer, by opening up the possibility for users to utilize their data.”129
There are serious concerns about the introduction of such a right, however. First,
there is no practical need for such a property right, since companies can effectively
control the access to "their" data by technical means. Second, companies "possessing"
data are protected through a number of other legal instruments (e.g., tort and
criminal law) against destruction, certain impediments to access and use, and against
compromising their integrity.130 Third, the legal discussion has shown that the
specification of the subject matter and the scope of protection seems to be extremely
difficult in regard to data.131 Last but not least, the introduction of an exclusive right to
data carries the serious risk of an inappropriate monopolization of data.132 Granting
data holders an absolute (intellectual) property right over data would strengthen their
(dominant) position, increasing entry barriers for competitors.
It is therefore fitting that the European Commission no longer appears to be
pursuing the discussion on the introduction of data ownership rights and is instead
concentrating on the question of how to deal with data-driven barriers to entry.
Database Directive 96/9/EC does not protect data as such, but only data originating from a
protected database. Similarly, the Trade Secrets Directive 2016/943, does not grant an absolute
right to data but is based on the maintenance of factual secrecy; as Wiebe, “Protection of
Industrial Data – A New Property Right for the Digital Economy?” (2016) Gewerblicher
Rechtsschutz und Urheberrecht, Internationaler Teil (GRUR Int.) 877, points out: “Once
secrecy is lost, legal protection is lost as well.”
128 Cf. in particular Zech, "Data as a Tradeable Commodity" in de Franceschi (ed), European
Contract Law and the Digital Single Market. The Implications of the Digital Revolution
(Intersentia 2016) 51 et seq.; Becker, “Rights in Data. Industry 4.0 and the IP-Rights of the
Future” (2017) 9 ZGE/Intellectual Property Journal (IPJ) 253.
129 European Commission, "Communication 'Building a European Data Economy,'" COM
(2017) 2 final 13; cf. moreover “Commission Staff Working Document” (n 125) 33 et seq.
130 Kerber, "A New (Intellectual) Property Right for Non-personal Data? An Economic Analysis"
(2016) GRUR Int. 989.
131 Wiebe (n 127) 881‒883.
132 Max Planck Institute for Innovation and Competition, "Position Statement of 26 April 2017 on
the European Commission’s ‘Public consultation on Building the European Data Economy’”
6; Drexl, "Neue Regeln für die Europäische Datenwirtschaft? Ein Plädoyer für einen
wettbewerbspolitischen Ansatz – Teil 1" (2017) Neue Zeitschrift für Kartellrecht (NZKart) 339, 343.
133 EU Commissioner Vestager, "Competition in a Big Data World," paper presented at the
Digital Life Design (DLD) Conference, 2016. Cf. moreover Rubinfeld and Gal, "Access
Barriers to Big Data" (2017) 59 Arizona Law Review 339; Vezzoso, "Competition Policy in a
Organisation for Economic Cooperation and Development (OECD) points out that
larger incumbents – due to the network effects previously discussed134 – are likely to
benefit from significant advantages over smaller firms and “second movers” in
collecting, storing, and analyzing large and heterogeneous types of data.135 Smaller
firms and new entrants might therefore face barriers to entry, preventing them from
developing algorithms that can effectively exert competitive pressure.
Some argue that we only need to apply competition law and split up internet
giants, as was the case with Standard Oil or AT&T in decades past.136 Others believe
that the appropriate remedy against a concentration of data in too few hands is
aggressive anti-trust action and a mandate for companies to share proprietary data
proportional to market share. In this spirit, Mayer-Schönberger and Ramge propose
in their book Reinventing Capitalism a progressive data-sharing mandate which
would require Facebook (and any similarly structured powerful player) to share
proprietary data proportional to their market share.137 However, neither demand can
in practice be realized on the basis of current competition law. According to many
legal systems, an unbundling of an entire company is only permissible – if at all – in
cases where it repeatedly violates competition law in a particularly serious
manner.138 The essential facility doctrine, under which a company in a dominant
position must grant access to a facility under specific conditions,139 does not help
either, because this doctrine only applies under “extraordinary circumstances.”140
World of Big Data” in Olleros and Zhegu (eds), Research Handbook on Digital Transform-
ations (Edward Elgar Publishing 2016) 400 et seq.
134 Cf. Section 2.6.1.
135 OECD, "Big Data: Bringing Competition Policy to the Digital Era," 2016, <https://www.oecd
.org/competition/big-data-bringing-competition-policy-to-the-digital-era.htm>.
136 In this sense, for example Galloway, "Silicon Valley's Tax-Avoiding, Job-Killing, Soul-Sucking
Machine,” Esquire (March 2018) <www.esquire.com/news-politics/a15895746/bust-big-tech-sil
icon-valley/?src=nl&mag=esq&list=nl_enl_news&date=020818>.
137 Mayer-Schönberger and Ramge (n 102).
138 For the EU, cf. Regulation 1/2003, recital (12): "Changes to the structure of an undertaking as it
existed before the infringement was committed would only be proportionate where there is a
substantial risk of a lasting or repeated infringement that derives from the very structure of the
undertaking.” For the USA, cf. Sec 2 of the Sherman Antitrust Act 1890: “Every person who
shall monopolize, or attempt to monopolize, or combine or conspire with any other person or
persons, to monopolize any part of the trade or commerce among the several States, or with
foreign nations, shall be deemed guilty [. . .].”
139 For the US, see MCI Commc'ns Corp v American Tel & Tel Co, 708 F.2d 1081, 1132–33 (7th
Cir. 1983); Maurer and Scotchmer, “The Essential Facilities Doctrine: The Lost Message of
Terminal Railroad” 10 March 2014, UC Berkeley Public Law Research Paper No 2407071,
<https://ssrn.com/abstract=2407071>; Pitofsky, Patterson, and Hooks, “The Essential Facilities
Doctrine under US Antitrust Law” (2002) 70 Antitrust Law Journal 443, 448. For the EU, see
ECJ, 6.4.1995, joined cases C‑241–242/91 P (RTE and ITP/Kommission – “Magill”), ECLI:EU:
C:1995:98; 29.4.2004, case C‑418/01 (IMS Health), ECLI:EU:C:2004:257; CFI, 17.9.2007, case
T‑201/04 (Microsoft/Commission), ECLI:EU:T:2007:289; Evrard, “Essential Facilities in the
European Union: Bronner and Beyond” (2004) 10 Columbia Journal of European Law 491.
140 On the question of whether data can be regarded as an essential facility, cf. from a US
perspective Sokol and Comerford, “Antitrust and Regulating Big Data” (2016) 23 George Mason
Apart from this, anti-trust law is a very limited tool for mandating access to data,
for three main reasons. First, in dynamic multi-sided markets it is very difficult to
prove the existence of a monopolistic position and/or market dominance141 and
establish clear criteria for exploitative abuse in regard to data. Second, competition
law is generally unable to limit the price that can be set by the data monopolist in
exchange for access. And third, anti-trust law does not deal effectively with situations
in which market power arises from oligopolistic coordination.142
For all these reasons, it seems more promising to create specific statutory data
access rights. In the European Union, such rights already exist in specific contexts.143
Accordingly, there are models upon which the European legislature could build.
A general right of access to data applicable to all sectors, on the other hand, does not
seem appropriate. Rather, a targeted approach is to be preferred144 which, depending
on the sector, attempts to balance the legitimate interest of persons in access to
external data with the legitimate interest of data generators (or data holders) in the
protection of their investment and – where personal data is involved – the interests of
data subjects.
Law Review 1129, 1158 et seq.; Balto, "Monopolizing Water in a Tsunami: Finding Sensible
Antitrust Rules for Big Data,” 2016 <http://ssrn.com/abstract=2753249>. For the European
perspective cf. Graef, “Data as Essential Facility. Competition and Innovation on Online
Platforms,” PhD Thesis, KU Leuven 2016 <https://core.ac.uk/download/pdf/34662689.pdf>;
Lehtioksa, “Big Data as an Essential Facility: The Possible Implications for Data Privacy,”
Master’s Thesis, University of Helsinki 2018 <https://www.paulo.fi/sites/default/files/inline-files/
Lehtioksa%20Jere_pro%20gradu.pdf>; Telle, “Kartellrechtlicher Zugangsanspruch zu Daten
nach der essential facility doctrine” in Hennemann and Sattler (eds), Immaterialgüter und
Digitalisierung (Nomos 2017) 73‒87.
141 Traditional approaches to market definition fail with digital platforms because many platforms
(i) work with free goods and services and (ii) are characterized by having several market sides,
which makes it very difficult to assess the competitive powers at play; cf. Podszun and Kreifels,
“Digital Platforms and Competition Law” (2016) EuCML 33.
142 OECD, Directorate for Financial and Enterprise Affairs Competition Committee, "Competition
Enforcement in Oligopolistic Markets," Issues paper by the Secretariat, 16‒18 June 2015,
DAF/COMP(2015)2.
143 Cf. for example Art 6‒9 Regulation 715/2007/EC, Art 35‒36 Directive 2015/2366/EU, Art 27,
30 Regulation 1907/2006/EC, Art 30, 32 Directive 2009/72/EC and Recital 11 Directive 2010/40/
EU. The right to portability embodied in Art 20 GDPR is also based on the ratio to avoid lock-
in effects and to improve the switching process from one service provider to another.
144 Similarly, Max Planck Institute for Innovation and Competition, "Position Statement of
26 April 2017 on the European Commission’s ‘Public consultation on Building the European
Data Economy’” 11.
145 Calo, "Digital Market Manipulation" (2014) 82(4) The George Washington Law Review 995,
1015 et seq.; O’Neil, Weapons of Math Destruction (Crown 2016) 194 et seq.; European Data
Protection Supervisor (EDPS), “Opinion 3/2018 on online manipulation and personal data,” 19
March 2018.
146 Kosinski, Stillwell, and Graepel, "Private Traits and Attributes Are Predictable from Digital
Records of Human Behavior" (2013) 110(15) PNAS 5802, <www.pnas.org/content/110/15/5802
.full>; Youyou, Kosinski, and Stillwell, “Computer-Based Personality Judgments Are More
Accurate than Those Made by Humans” (2015) 112(4) PNAS 1036 <www.pnas.org/content/112/
4/1036.full>.
147 Summarizing Grassegger and Krogerus, "The Data That Turned the World Upside Down,"
Motherboard, 28 January 2017 <https://motherboard.vice.com/en_us/article/mg9vvn/how-our-
likes-helped-trump-win>.
The processed data can be used, in a third step, in a variety of ways. Companies can
tailor their advertising campaigns but also their products and prices specifically to the
customer profile,148 credit institutions can use the profiles for credit rating,149 insurance
companies can better assess the insured risk,150 HR departments can pre-select candidates,151
and parties can use the data for political campaigns – a practice which in the
end led to the well-known Cambridge Analytica scandal.152 In the USA, the judicial
system is now using big-data analysis to predict the future behavior of criminals.153
148 Hofmann, "Der maßgeschneiderte Preis" (2016) Wettbewerb in Recht und Praxis (WRP) 1074;
Zuiderveen Borgesius, and Poort, “Online Price Discrimination and EU Data Privacy Law”
(2017) 40 Journal of Consumer Policy 347.
149 Cf. Citron and Pasquale, "The Scored Society: Due Process for Automated Predictions" (2014)
89 Washington Law Review 1; Zarsky, "Understanding Discrimination in the Scored Society"
(2014) 89 Washington Law Review 1375.
150 Cf. Swedloff, "Risk Classification's Big Data (R)evolution" (2014) 21(1) Connecticut Insurance
Law Journal 339; Helveston, “Consumer Protection in the Age of Big Data” (2016) 93(4)
Washington University Law Review 859.
151 Cf. O'Neil (n 145) 105 et seq.
152 Cf. the speech by Alexander Nix, ex-CEO of Cambridge Analytica, at the 2016 Concordia
Annual Summit in New York, <www.youtube.com/watch?v=n8Dd5aVXLCc>; moreover
Rubinstein, "Voter Privacy in the Age of Big Data" (2014) 5 Wisconsin Law Review 861;
Hoffmann-Riem, "Verhaltenssteuerung durch Algorithmen" (2017) 142 Archiv des öffentlichen
Rechts (AöR) 1.
153 Angwin et al., "Machine Bias," 23 May 2016 <www.propublica.org/article/machine-bias-risk-
assessments-in-criminal-sentencing>.
154 Goel, "Facebook Tinkers with Users' Emotions in News Feed Experiment, Stirring Outcry"
New York Times (29 June 2014) <www.nytimes.com/2014/06/30/technology/facebook-tinkers-
In early 2017, it also became known that Facebook Australia had offered
its advertisers software that could accurately locate psychologically
unstable, depressed teenagers.155
In 2012, Microsoft registered a patent on “Targeting Advertisements
Based on Emotion.”156 And in 2013, Samsung filed the patent “Apparatus
and methods for sharing user’s emotion.”157
claims that algorithms cause bubbles of like-minded content around news users.160
For these reasons, there are serious concerns both in the USA and in Europe that
(media) diversity could be drastically reduced.161 Moreover, AI systems create new
opportunities to enhance “fake news” by simplifying the production of high-quality
fake video footage, automating the writing and publication of fake news stories, and
microtargeting citizens by delivering the right message at the right time to maximize
persuasive potential.162
In light of these considerations, there are a number of (regulatory) issues for
discussion.163 Are information intermediaries such as Facebook and Google simply
hosts of user-created content, or have they already turned into media companies
themselves? At what point is it no longer justifiable to maintain the differences in
(self-) regulation between traditional media and these new platforms in terms of
advertising regulation, taxation, program standards, diversity, and editorial independence? What are the responsibilities of information intermediaries regarding fake
news and the filtering of information in general? Should users be (better) informed
about the personalization of (news) content? Do we want to legislate to limit the
personalization of information/communication? Is it perhaps even necessary to
regulate the algorithm itself in order to ensure adequate diversity of media and
opinion?
Although these questions certainly need to be addressed, it should also be noted
that there is still no established scientific evidence for the existence of echo
chambers and filter bubbles. Recently published studies claim that these fears might
be blown out of proportion, because most people already have media habits that
help them avoid “echo chambers” and “filter bubbles.”164 Moreover, it is unclear to
what extent political bots spreading fake news succeed in shaping public opinion,
especially as people become more aware of these bots’ existence.165 In this light, the call for legislation appears premature. What is needed above all are further empirical studies examining the effect of algorithm-driven information intermediaries more closely.
160 Pariser, The Filter Bubble: What the Internet Is Hiding from You (Penguin Books 2012).
161 Epstein, “How Google Could End Democracy” US News & World Report (9 June 2014) <www.usnews.com/opinion/articles/2014/06/09/how-googles-search-rankings-could-manipulate-elections-and-end-democracy>. See also the 2016 Report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, to the 32nd session of the Human Rights Council (A/HRC/32/38), noting that “search engine algorithms dictate what users see and in what priority, and they may be manipulated to restrict or prioritise content.”
162 Brundage et al. (n 84) 43 et seq.
163 Helberger, Kleinen-von Königslöw, and van der Noll, “Regulating the New Information Intermediaries as Gatekeepers of Information Diversity” (2015) 17(6) Info 50 <www.ivir.nl/publicaties/download/1618.pdf>.
164 Dubois and Blank, “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media” (2018) 21(5) Information, Communication & Society 1; Moeller and Helberger, “Beyond the Filter Bubble: Concepts, Myths, Evidence and Issues for Future Debates,” 25 June 2018 <http://hdl.handle.net/11245.1/478edb9e-8296-4a84-9631-c7360d593610>.
165 Nyhan, “Fake News and Bots May Be Worrisome, but Their Political Power Is Overblown” The New York Times (13 February 2018) <www.nytimes.com/2018/02/13/upshot/fake-news-and-bots-may-be-worrisome-but-their-political-power-is-overblown.html>; Brundage et al. (n 84) 46. Cf. also Kalla and Broockman, “The Minimal Persuasive Effects of Campaign Contact in General Elections: Evidence from 49 Field Experiments” (2018) 112(1) American Political Science Review 148‒166 <https://ssrn.com/abstract=3042867>.
166 Mik, “The Erosion of Autonomy in Online Consumer Transactions” (2016) 8(1) Law, Innovation and Technology 1 <http://ink.library.smu.edu.sg/sol_research/1736>; Sachverständigenrat für Verbraucherfragen (SVRV), “Verbraucherrecht 2.0, Verbraucher in der digitalen Welt,” December 2016, 58 et seq. <www.svr-verbraucherfragen.de/wp-content/uploads/Gutachten_SVRV-.pdf>.
167 Pritz, “Mood Tracking: Zur digitalen Selbstvermessung der Gefühle” in Selke (ed), Lifelogging (Springer VS 2016) 127, 140 et seq.
(UCPD). As Eliza Mik168 and others169 have pointed out, the main weaknesses of
the UCPD lie in the definitions and assumptions underlying the concepts of
“average” and “vulnerable” consumers (which disregard the findings in behavioral
economics and cognitive science), as well as the narrow definition of aggressive
practices such as undue influence, which requires the presence of pressure. It
therefore fails to address cases of subtler forms of manipulation. A similar picture
emerges for European data protection law, which suffers above all from an over-reliance on control and rational choice that vulnerable users are unlikely to exert.170
Whether these gaps in protection can be compensated by (national) contract law
is also questionable since it is difficult to subsume microtargeting under any of the
traditional protective doctrines – such as duress, mistake, undue influence, misrepresentation, or culpa in contrahendo.171 At the end of the day, the impact of
microtargeting on customer behavior appears to be too subtle to be covered by
common concepts of contract law, despite the fact that such a technique affects one
of its central values: autonomy.
Future regulation will therefore have to evaluate the extent to which customers
should be protected from targeted advertisements and offers that seek to exploit their
vulnerabilities. This is by no means an easy task because – as Natali Helberger172
rightly points out – there is a very fine line between informing, nudging, and
outright manipulation.
168 Mik (n 166).
169 Ebers, “Beeinflussung und Manipulation von Kunden durch ‘Behavioral Microtargeting’” (2018) MultiMedia und Recht (MMR) 423; Duivenvoorde, “The Protection of Vulnerable Consumers under the Unfair Commercial Practices Directive” (2013) 2 Journal of European Consumer and Market Law 69.
170 Hacker, “Personal Data, Exploitative Contracts, and Algorithmic Fairness: Autonomous Vehicles Meet the Internet of Things” (2017) 7 International Data Privacy Law 266 <https://ssrn.com/abstract=3007780>.
171 Cf. Mik (n 166).
172 Helberger, “Profiling and Targeting Consumers in the Internet of Things – A New Challenge for Consumer Law,” in Schulze and Staudenmayer (eds), Digital Revolution: Challenges for Contract Law in Practice (Harvard University Press 2016) 135 et seq., 152.
173 Executive Office of the [US] President, “Preparing for the Future of Artificial Intelligence” (Report, 2016) 30‒32; European Parliament, Resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law-enforcement (March 2017), Art 19‒22; and for Germany: “Wissenschaftliche Dienste des
number of examples show that ADM procedures are by no means neutral, but can
perpetuate and even exacerbate human bias in various ways.
Examples include a chatbot used by Microsoft that unexpectedly learned how to post racist and sexist tweets,174 image-recognition software used by Google which inadvertently classified black people as gorillas,175 and the COMPAS algorithm, which is increasingly being used by US courts to predict the likelihood of recidivism of offenders: As the news portal ProPublica revealed in 2016, COMPAS judged black and white prisoners differently. Among other things, it was found that the probability that black inmates were identified as high risk but did not re-offend was twice as high as for white inmates. Conversely, white inmates were more likely to be classified as low risk but later to re-offend.176
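ProPublica’s finding is, at bottom, a comparison of error rates across groups. As a rough illustration of the metric involved (the records below are invented, not ProPublica’s data), the false positive rate is the share of inmates who did not re-offend but were nevertheless labeled high risk, computed separately per group:

```python
# Illustrative only: invented records, not the real COMPAS data.
# Each record: (group, labeled_high_risk, reoffended)
records = [
    ("black", True, False), ("black", True, False), ("black", False, False),
    ("black", True, True), ("black", False, True),
    ("white", True, False), ("white", False, False), ("white", False, False),
    ("white", True, True), ("white", False, True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(g, false_positive_rate(records, g))
```

In these invented numbers the rate for the first group is twice that of the second, mirroring the kind of disparity ProPublica reported; the legal question is which of several competing error-rate metrics a court or regulator should treat as decisive.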
There can be various reasons for this type of discrimination.177
Discrimination occurs primarily at the process level178 when the algorithmic
model is fed with biased training data. Such bias can take two forms.179 One occurs
when errors in data collection lead to inaccurate depictions of reality due to
improper measurement methodologies, especially when conclusions are drawn
from incorrect, partial, or nonrepresentative data. This type of bias can be addressed
by “cleaning the data” or improving the data-collection process. The second type of
bias occurs when the underlying process draws on information that is inextricably
linked to structural discrimination, exhibiting long-standing inequality. This
happens, for example, when data on a job promotion is collected from an industry
in which men are systematically favored over women. In this scenario, the data basis
itself is correct. However, by using this kind of data in order to decide whether
employees are worthy of promotion, a discriminatory practice is perpetuated and
continued in the future.
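The promotion scenario just described can be made concrete with a minimal sketch (all field names and figures are invented for illustration): a system that learns group-specific promotion rates from historically skewed records will reproduce the skew for equally qualified candidates, even though every record accurately reflects a past decision.

```python
# Hypothetical records from an industry in which men were systematically
# favored; the data "correctly" records past decisions, yet encodes bias.
history = (
    [{"gender": "m", "rating": 4, "promoted": True}] * 8
    + [{"gender": "m", "rating": 4, "promoted": False}] * 2
    + [{"gender": "f", "rating": 4, "promoted": True}] * 3
    + [{"gender": "f", "rating": 4, "promoted": False}] * 7
)

def learned_promotion_rate(history, gender):
    """Promotion rate a naive model infers for a group from past outcomes."""
    group = [r for r in history if r["gender"] == gender]
    return sum(r["promoted"] for r in group) / len(group)

# Equally rated candidates inherit the historical disparity:
print(learned_promotion_rate(history, "m"))  # 0.8
print(learned_promotion_rate(history, "f"))  # 0.3
```

No “cleaning” of this data helps, because nothing in it is factually wrong; the discrimination lies in the process that generated it.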
Deutschen Bundestags, Einsatz und Einfluss von Algorithmen auf das digitale Leben,” Aktueller Begriff (27 October 2017).
174 See Vincent, “Twitter Taught Microsoft’s Friendly AI Chatbot to Be a Racist Asshole in Less than a Day” The Verge (24 March 2016) <www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist>.
175 Barr, “Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms” Wall Street Journal (1 July 2015).
176 See Larson et al., “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, 23 May 2016 <www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>; Kleinberg et al., “Inherent Trade-Offs in the Fair Determination of Risk Scores,” Working Paper (2016) <https://arxiv.org/abs/1609.05807> 5‒6.
177 See for example Barocas and Selbst, “Big Data’s Disparate Impact” (2016) 104 California Law Review 671, 680; Kroll et al., “Accountable Algorithms” (2017) 165 University of Pennsylvania Law Review 633, 680 et seq.
178 For the different dimensions (process, model, and classification level) cf. Section 2.2.4.
179 Crawford and Whittaker, “The AI Now Report, The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,” 2016.
Apart from biased training data, discrimination can also be caused at the classification level180 by feature selection, for example by using certain protected characteristics (such as race, gender, or sexual orientation) or by relying on factors that happen to serve as proxies for protected characteristics (e.g., using place of residence in areas that are highly segregated).181
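How a facially neutral feature can stand in for a protected one is easy to see in a toy model (the population and numbers below are invented): in a highly segregated population, place of residence alone reconstructs group membership with high accuracy, so omitting the protected attribute does not prevent a model from discriminating.

```python
# Hypothetical, highly segregated toy population: district strongly
# predicts group membership, so "district" acts as a proxy for "group".
people = (
    [{"district": "north", "group": "A"}] * 9
    + [{"district": "north", "group": "B"}] * 1
    + [{"district": "south", "group": "B"}] * 9
    + [{"district": "south", "group": "A"}] * 1
)

def proxy_accuracy(people):
    """How well district alone 'predicts' the protected attribute."""
    guess = {"north": "A", "south": "B"}  # majority group per district
    hits = sum(guess[p["district"]] == p["group"] for p in people)
    return hits / len(people)

print(proxy_accuracy(people))  # 0.9
```

A model given only the district variable therefore never “sees” the protected attribute, yet can treat the two groups almost as differently as one that does.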
2.7.2.3 Discussion
In view of this situation, various solutions are being discussed for both the USA and
the European Union.
With regard to individual enforcement, the following measures in particular are
proposed: (i) information rights regarding the scoring process; (ii) duties to provide
consumers with tools for interactive modeling; (iii) access rights to data sets; or, alternatively, (iv) a right to confidential review (e.g. by trusted third parties) of the logics of predictive scoring, including the source code, in order to challenge decisions based on ADM procedures. In the EU, it is disputed above all whether a right to explanation of automated decision-making can be derived from the GDPR itself.190
Demand Is a Function of Both Preferences and (Mis)perceptions,” 29 May 2018, The Harvard John M Olin Discussion Paper Series, No 05/2018; Harvard Public Law Working Paper No 18-32 <www.ssrn.com/abstract=3184533>; Zuiderveen Borgesius and Poort (n 148). EU competition law prohibits different prices only if a company abuses its dominant position; cf. esp Art 102(2)(a) and (c) TFEU. EU consumer protection rules, in particular the Unfair Commercial Practices Directive 2005/29/EC, also leave traders free to set prices as long as they inform consumers about the prices and how they are calculated; European Commission, “Guidance on the Implementation/Application of Directive 2005/29/EC on Unfair Commercial Practices,” SWD(2016) 163 final 134.
186 Martini, Chapter 3 in this book; Zuiderveen Borgesius, “Discrimination, artificial intelligence, and algorithmic decision-making,” Study for the Council of Europe, 2018 <https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73> 35 et seq.
187 Hacker (n 183) 1156 et seq.; Busch, “Algorithmic Accountability,” ABIDA Project Report (March 2018) <www.abida.de/sites/default/files/ABIDA%20Gutachten%20Algorithmic%20Accountability.pdf> 47.
188 Zuiderveen Borgesius (n 186) 36.
189 Hacker (n 183) 1160 et seq.; Zuiderveen Borgesius (n 186) 19 et seq.
In addition to individual remedies, a number of other measures have been
proposed, ranging from (i) controlling the design stage to (ii) licensing and auditing
requirements for scoring systems to (iii) ex-post measures by public bodies.
In this vein, some authors propose oversight for the USA by regulators such as the Federal Trade Commission (under its authority to combat unfair trading practices), with the possibility of accessing scoring systems, testing hypothetical examples with IT experts, issuing impact assessments evaluating a system’s negative effects, and identifying risk mitigation measures.191
For the EU, some scholars suggest that the enforcement apparatus of the GDPR
should be harnessed and used by national data protection authorities, making use of
algorithmic audits and data protection impact assessments to uncover the causes of
bias and enforcing adequate metrics of algorithmic fairness.192
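What such an audit might actually check can be illustrated with one widely used fairness metric (the choice of metric and the figures below are mine, for illustration only): the demographic parity ratio, i.e. the selection rate of one group divided by that of another, with values far below 1.0 flagging a system for closer review.

```python
# Hypothetical audit log of (group, favourable_decision) pairs.
decisions = (
    [("A", True)] * 6 + [("A", False)] * 4
    + [("B", True)] * 3 + [("B", False)] * 7
)

def selection_rate(decisions, group):
    """Share of favourable decisions within one group."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity ratio: one group's rate relative to the other's.
ratio = selection_rate(decisions, "B") / selection_rate(decisions, "A")
print(ratio)
```

The metric is deliberately simple; as the text notes, the harder regulatory question is which of several mutually incompatible fairness metrics an authority should enforce.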
Although (European) data protection law can surely help to mitigate risks of
unfair and illegal discrimination, the GDPR is no panacea. As Zuiderveen Borgesius
points out, there are five plausible reasons.193 First, data protection authorities have
limited financial and human resources to take effective action. Many authorities
may also lack the necessary expertise to detect and/or evaluate algorithmic discrimination. Second, the GDPR only covers personal data, not the ML models themselves.194 Third, the regulation is vaguely formulated, which makes it difficult to apply its norms. Fourth, a conflict between data protection and anti-discrimination law arises when the use of sensitive personal data is necessary for avoiding discrimination in data-driven decision models.195 And fifth, even if data protection authorities
190 Wachter, Mittelstadt, and Floridi (2017) 7(2) International Data Privacy Law 76. Cf. also Sancho, Chapter 4 in this book.
191 Citron and Pasquale (2014) 89 Washington Law Review 1. For a detailed overview of the various regulatory proposals, see Mittelstadt, Allo, Taddeo, Wachter, and Floridi (2016 July–September) Big Data & Society 13.
192 Hacker (n 183). Cf. also Mantelero, “Regulating Big Data” (2017) 33(5) The Computer Law and Security Review 584; Wachter, “Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR” (2018) 34(3) The Computer Law and Security Review 436; Wachter and Mittelstadt, “A Right to Reasonable Inferences: Rethinking Data Protection Law in the Age of Big Data and AI” (2019) Columbia Business Law Review 494 <https://ssrn.com/abstract=3248829>.
193 Zuiderveen Borgesius (n 186) 24 et seq.
194 Cf. Section 2.6.2.2.
195 Žliobaité and Custers, “Using Sensitive Personal Data May Be Necessary for Avoiding Discrimination in Data-Driven Decision Models” (2016) 24 Artificial Intelligence and Law 183.
are granted extensive powers of control, the black-box problem196 still remains. In
this respect, Lipton reminds us that the whole reason we turn to machine learning
rather than “handcrafted decision rules” is that “for many problems, simple, easily
understood decision processes are insufficient.”197
For all these reasons, data protection law is not a cure-all against discrimination.
Rather, further research is needed on the extent to which data protection law can
contribute to the fight against algorithmic discrimination, whether there are still
deficiencies to be addressed by other areas of law (such as consumer law, competition law, and – when ADM systems are used by public bodies – administrative law
and criminal law), or whether we need completely new rules.
196 Cf. Section 2.2.4.
197 Lipton, “The Myth of Model Interpretability,” KDnuggets, 27 April 2015 <www.kdnuggets.com/2015/04/model-interpretability-neural-networks-deep-learning.html>.
198 Stucke and Ezrachi, “Artificial Intelligence and Collusion: When Computers Inhibit Competition,” University of Tennessee, Legal Studies Research Paper Series #267, 2015 <https://ssrn.com/abstract=2591874>; Ezrachi and Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press 2016); id., “Two Artificial Neural Networks Meet in an Online Hub and Change the Future (of Competition, Market Dynamics and Society),” 2017 <https://ssrn.com/abstract=2949434>; Mehra, “Antitrust and the Robo-Seller: Competition in the Time of Algorithms” (2016) 100 Minnesota Law Review 1323; Oxera, “When Algorithms Set Prices: Winners and Losers,” 2017 <www.oxera.com/publications/when-algorithms-set-prices-winners-and-losers/>; Woodcock, “The Bargaining Robot,” CPI Antitrust Chronicle (May 2017) <https://ssrn.com/abstract=2972228>.
199 OECD, “Algorithms and Collusion: Competition Policy in the Digital Age,” 2017 <www.oecd.org/competition/algorithms-collusion-competition-policy-in-the-digital-age.htm> 51.
200 Stucke and Ezrachi (n 198).
201 US Department of Justice (DOJ) 2015, “Former E-Commerce Executive Charged with Price Fixing in the Antitrust Division’s First Online Marketplace Prosecution,” Justice News of the US Department of Justice, Office of Public Affairs <www.justice.gov/opa/pr/former-e-commerce-executive-charged-price-fixing-antitrust-divisions-first-online-marketplace>. For the UK see <www.gov.uk/government/news/cma-issues-final-decision-in-online-cartel-case>.
202 Ezrachi and Stucke, Virtual Competition (2016) 46 et seq. For the EU, cf. also the Eturas case, where a booking system was employed as a tool for coordinating the actions of the firms; ECJ, 21.1.2016, case C-74/14 (Eturas), ECLI:EU:C:2016:42.
203 Calvano, Calzolari, Denicolo, and Pastorello, “Artificial Intelligence, Algorithmic Pricing and Collusion” (20 December 2018) <https://ssrn.com/abstract=3304991>. In contrast, cf. also Schwalbe, “Algorithms, Machine Learning, and Collusion” (1 June 2018) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3232631> (“problem of algorithmic collusion rather belongs to the realm of legal sci-fi”).
2.8.1 Overview
The previous overview shows that the use of AI systems and smart robotics raises a
number of unresolved ethical and legal issues. Despite these findings, there is
currently not a single country in the world with legislation that explicitly takes into
account the problematic characteristics of autonomous systems209 in general. With a
few exceptions,210 there are also no special rules for AI systems and smart robotics in
particular.
204 Cf. Mehra (n 198) 1366 et seq. See also Sections 2.2.3 and 2.5.
205 Cf. Ezrachi and Stucke, “Algorithmic Collusion: Problems and Counter-Measures,” OECD Roundtable on Algorithms and Collusion, 31 May 2017 <www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/WD%282017%2925&docLanguage=En> 25: “Due to their complex nature and evolving abilities when trained with additional data, auditing these networks may prove futile. The knowledge acquired by a Deep Learning network is diffused across its large number of neurons and their interconnections, analogous to how memory is encoded in the human brain.”
206 For further information see Ezrachi and Stucke, Virtual Competition (2016) 203 et seq.
207 Fisher, Clifford, Dinshaw, and Werle, “Criminal Forms of High Frequency Trading on the Financial Markets” (2015) 9(2) Law and Financial Markets Review 113.
208 For the connectivity problem, see Section 2.2.1. On the problem of autonomy, see Sections 2.2.3 and 2.5.
209 Cf. Section 2.2.
210 Special regulation exists above all for self-driving vehicles, drones, and high-frequency trading. In the USA, most of the states have either enacted legislation or executive orders governing self-driving vehicles; cf. National Conference of State Legislatures, “Autonomous Vehicles State Bill Tracking Database” <www.ncsl.org/research/transportation/autonomous-vehicles-legislative-database.aspx>. In 2017, the House of Representatives passed a bill for a Self-Drive Act which was supposed to lay out a basic federal framework for autonomous vehicle regulation but, ultimately, failed to be considered on the Senate floor. In the EU, the Regulation on Civil Aviation 2018/1139 addresses issues of registration, certification, and general rules of conduct for operators of drones – however, without regulating civil liability directly; cf. Bertolini, “Artificial Intelligence and civil law: liability rules for drones,” Study commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the JURI Committee, PE 608.848, November 2018. In addition, the EU enacted provisions on High Frequency Trading, explained in this book by Spindler, Chapter 7. Moreover, in France, the Digital Republic Act (Loi No 2016-1321 du 7 octobre 2016 pour une République numérique) provides that, in the case of state actors taking a decision “on the basis of algorithms,” individuals have a right to be informed about the “principal characteristics” of the decision-making system. For more details see Edwards and Veale, “Enslaving the Algorithm: From a ‘Right to an Explanation’ to a ‘Right to Better Decisions’?” (2018 May/June) IEEE Security & Privacy 46.
211 Cf. for example Council of Europe, “Ethical Charter” (n 63).
212 Cf. Council of Europe, “Algorithms and Human Rights, Study on the Human rights dimensions of automated data processing techniques and possible regulatory implications,” Council of Europe study, DGI(2017)12, prepared by the Committee of Experts on Internet Intermediaries (MSI-NET), 2018; Berkman Klein Center, “Artificial Intelligence & Human Rights: Opportunities and Risks,” 25 September 2018.
213 Margulies, “The Other Side of Autonomous Weapons: Using Artificial Intelligence to Enhance IHL Compliance” (12 June 2018) <https://ssrn.com/abstract=3194713>.
214 On AI and administrative law cf. Oswald and Grace, “Intelligence, Policing and the Use of Algorithmic Analysis: A Freedom of Information-Based Study” (2016) 1(1) Journal of Information Rights, Policy and Practice; Cobbe, “Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making,” 6 August 2018 <https://ssrn.com/abstract=3226913>; Coglianese and Lehr (2017) 105 Georgetown Law Journal 1147 <https://ssrn.com/abstract=2928293>.
215 Dutton, “An Overview of National AI Strategies,” 28 June 2018 <https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd>. Cf. also the overview by Thomas, “Report on Artificial Intelligence: Part I – the existing regulatory landscape,” 14 May 2018 <www.howtoregulate.org/artificial_intelligence/>.
down specific and comprehensive AI strategies (China, the UK, France), some are integrating AI technologies within national technology or digital roadmaps (Denmark, Australia), while still others have focused on developing a national AI R&D strategy (USA).216
In the USA, most notably, the government already relied heavily on the liberal notion of the free market under the Obama administration.217 In its report
“Preparing for the Future of Artificial Intelligence,” published in October 2016,218
the White House Office of Science and Technology Policy (OSTP) explicitly
refrains from a broad regulation of AI research and practice. Instead, the report
highlights that the government should aim to fit AI into existing regulatory schemes,
suggesting that many of the ethical issues related to AI can be addressed through
increasing transparency and self-regulatory partnerships.219 The Trump administration, too, sees its role not in regulating AI and robotics but in “facilitating AI R&D, promoting the trust of the American people in the development and deployment of AI-related technologies, training a workforce capable of using AI in their occupations, and protecting the American AI technology base from attempted acquisition
by strategic competitors and adversarial nations” – thus maintaining US leadership
in the field of AI.220
By contrast, the AI strategy of the European Union, published in April 2018,221
focuses not only on the potential impact of AI on competitiveness but also on its
social and ethical implications.
The following sections provide a brief overview of the EU’s AI strategy, the efforts
of the most important international organizations in this field, and the individual and collective self-regulation efforts of companies and industry sectors. National AI strategies, by contrast, are beyond the scope of this chapter and are not discussed here.
216 Delponte (n 117) 22.
217 For a detailed discussion of the various AI strategies in the US, the EU, and the UK, see Cath, Wachter, Mittelstadt, Taddeo and Floridi, “Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach” (2018) 24(2) Science and Engineering Ethics 505.
218 Executive Office of the President National Science and Technology Council Committee on Technology, “Preparing for the Future of Artificial Intelligence” (OSTP report), 2016, Washington, DC, USA <https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf>. The report followed five workshops and a public request for information, cf. OSTP report 12.
219 OSTP report (n 218) 17.
220 Trump, Executive Order on Maintaining American Leadership in Artificial Intelligence, issued on 11 February 2019 <www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/>. Cf. also Shepardson, “Trump Administration Will Allow AI to ‘Freely Develop’ in US: Official,” Technology News, 10 May 2018 <www.reuters.com/article/us-usa-artificialintelligence/trump-administration-will-allow-ai-to-freely-develop-in-u-s-official-idUSKBN1IB30F>.
221 European Commission, “Communication ‘Artificial Intelligence for Europe,’” COM(2018) 237 final.
222 European Parliament, Resolution (n 21). The resolution does not include unembodied AI. Instead, AI is understood as an underlying component of “smart autonomous robots.” Critically, Cath et al. (n 217).
223 European Parliament, Resolution (n 21) No 16.
224 European Parliament, Resolution (n 21) No 17.
225 European Parliament, Resolution (n 21) No 2.
226 European Parliament, Resolution (n 21) 19.
227 European Parliament, Resolution (n 21) Nos 49 et seq. For details regarding the recommendations of the EP relating to liability, cf. Sections 2.5.2 and 2.4.
228 European Economic and Social Committee, “Opinion, Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society (own-initiative opinion), Rapporteur: Catelijne Muller, INT/806.”
229 Declaration “Cooperation on Artificial Intelligence,” Brussels, 10 April 2018 <https://ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence>.
230 European Commission, “Communication ‘Artificial Intelligence for Europe,’” COM(2018) 237 final.
231 European Commission, “Communication ‘Coordinated Plan on Artificial Intelligence,’” COM(2018) 795 final.
232 <https://ec.europa.eu/digital-single-market/en/high-level-group-artificial-intelligence>.
233 European Group on Ethics in Science and New Technologies, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems,” Brussels, 9 March 2018 <https://ec.europa.eu/info/news/ethics-artificial-intelligence-statement-ege-released-2018-apr-24_en>.
234 The European Union Agency for Fundamental Rights (FRA), an independent EU body funded by the EU budget, started a new project, Artificial Intelligence, Big Data and Fundamental Rights, in 2018 with the aim of helping create guidelines and recommendations in these fields. Cf. <https://fra.europa.eu/en/about-fra/introducing-fra>.
235 <https://ec.europa.eu/digital-single-market/en/european-ai-alliance>.
At the end of 2018, the AI HLEG presented its first draft, “Ethics Guidelines for
Trustworthy AI.”236 After an open consultation which generated feedback from
more than 500 contributors, the AI HLEG published the final version at the
beginning of April 2019.237 The guidelines are neither an official document from
the European Commission nor legally binding. They are also not intended as a
substitute for any form of policy making or regulation, nor to deter from the creation
thereof.238
One of the main goals of the guidelines is to ensure that the development and use of AI follows a human-centric approach, according to which AI is not an end in itself but a means to enhance human welfare and freedom. To this end, the AI HLEG propagates "trustworthy AI," which is (i) lawful, complying with all applicable laws and regulations; (ii) ethical, ensuring adherence to ethical principles and values; and (iii) robust, from both a technical and a social perspective. The document aims to offer guidance on achieving trustworthy AI by setting out in Chapter I the fundamental rights and ethical principles with which AI should comply. From those fundamental rights and principles, Chapter II derives seven key requirements (human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental wellbeing; and accountability), which then lead in Chapter III to a concrete but non-exhaustive assessment list for applying the requirements, offering AI practitioners practical guidance.
expert groups, (i) to issue a guidance document on the interpretation of the Product
Liability Directive in light of technological developments by mid-2019 and (ii) to
publish, also by mid-2019, a report on the broader implications for, potential gaps in,
and orientations for the liability and safety frameworks for AI, the IoT, and
robotics.240
240 European Commission, "Communication 'Artificial Intelligence for Europe,'" COM(2018) 237 final 16.
241 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, European Treaty Series – No 108.
242 Council of Europe, "The Protection of Individuals with Regard to Automatic Processing of Personal Data in the Context of Profiling," Recommendation CM/Rec(2010)13 and explanatory memorandum <https://rm.coe.int/16807096c3>.
243 Council of Europe, "Guidelines on the Protection of Individuals with Regard to the Processing of Personal Data in a World of Big Data," T-PD(2017)01 <https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806ebe7a>.
244 Council of Europe, "Practical Guide on the Use of Personal Data in the Police Sector," T-PD(2018)01 <http://rm.coe.int/t-pd-201-01-practical-guide-on-the-use-of-personal-data-in-the-police-/16807927d5>.
245 Council of Europe (n 114).
246 Council of Europe, "Guidelines on Artificial Intelligence and Data Protection," T-PD(2019)01 <https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8>.
247 Council of Europe, "Algorithms and Human Rights" (n 212).
248 Zuiderveen Borgesius (n 186).
2.8.3.2 OECD
The Organisation for Economic Co-operation and Development (OECD) has been
working on AI for several years.250 In 2018, it created an expert group (AIGO) to
provide guidance in scoping principles for AI in society. The expert group’s aim is to
help governments, business, labor, and the public maximize the benefits of AI and
minimize its risks. The expert group plans to develop the first intergovernmental
policy guidelines for AI, with the goal of presenting a draft recommendation to the
next annual OECD Ministerial Council Meeting in May 2019.251
Moreover, the OECD is planning to launch a policy observatory on AI in 2019: "a
participatory and interactive hub which would bring together the full resources of
the organization in one place, build a database of national AI strategies and identify
promising AI applications for economic and social impact.”252
249 Council of Europe, "Ethical Charter" (n 63).
250 <www.oecd.org/going-digital/ai/oecd-initiatives-on-ai.htm>.
251 <www.oecd.org/going-digital/ai/oecd-moves-forward-on-developing-guidelines-for-artificial-intelligence.htm>.
252 <www.oecd.org/going-digital/ai/oecd-moves-forward-on-developing-guidelines-for-artificial-intelligence.htm>.
253 Cf. especially "Report of the 2017 UN Group of Governmental Experts on Lethal Autonomous Weapons Systems," 20 November 2017. Moreover, see the European Parliament's resolution of 12 September 2018 on autonomous weapon systems, P8_TA-PROV(2018)0341.
254 <www.itu.int/pub/S-GEN-UNACT-2018-1>.
255 <https://ainowinstitute.org/>.
256 <https://ethics.acm.org/2018-code-draft-2/>.
257 <https://acm.org/public-policy/usacm>.
258 <https://futureoflife.org/ai-principles/>.
259 <http://responsiblerobotics.org>.
260 <www.blog.google/technology/ai/ai-principles/>.
261 <https://ethicsinaction.ieee.org/>.
262 <https://openai.com/>.
263 <www.partnershiponai.org/>. The Partnership on AI is an industry-led, non-profit consortium set up by Google, Apple, Facebook, Amazon, IBM, and Microsoft in September 2016 to develop ethical standards for researchers in AI in cooperation with academics and specialists in policy and ethics. The consortium has grown to over 50 partner organizations.
264 SIIA, "Ethical Principles for Artificial Intelligence and Data Analytics," 2017 <www.siia.net/LinkClick.aspx?fileticket=b46tNqJuiJA%3d&tabid=577&portalid=0&mid=17113>.
265 <www.weforum.org/center-for-the-fourth-industrial-revolution/areas-of-focus>.
266 IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, "Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems," Version 2 <http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html>.
They represent the collective input of several hundred participants from six continents. The goal of "Ethically Aligned Design" is "to advance a public discussion about how we can establish ethical and social implementations for intelligent and autonomous systems and technologies, aligning them to defined values and ethical principles that prioritize human well-being in a given cultural context."267
Finally, it should be noted that international standard-setting organizations are also currently in the process of developing guidance for AI systems. To this end, in 2018 the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created a joint committee on AI which will provide guidance to other committees that are developing AI applications.268
Similar efforts are currently being made by the three European standards institutions: CEN, CENELEC, and ETSI.
271 Nemitz, "Constitutional Democracy and Technology in the Age of Artificial Intelligence" (2018) Philosophical Transactions of the Royal Society A 376.
272 Nemitz (n 271) 11.
A third way to assess the risks of intelligent systems and the corresponding need for regulation is to carry out an algorithmic impact assessment.273 In this regard, inspiration can be drawn from Art 35(1) GDPR, which requires a data protection impact assessment when a type of processing is "likely to result in a high risk to the rights and freedoms of natural persons," especially when using new technologies. The introduction of such an impact assessment – combined with the obligation to monitor the risks of intelligent systems during their use – could strengthen the necessary dialog between companies and policymakers and at the same time help to implement a general culture of responsibility in the tech industry.274
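To make the idea tangible, a minimal sketch of the screening step of such an algorithmic impact assessment might look as follows. The questions, field names, and threshold are hypothetical illustrations loosely inspired by the "high risk" trigger of Art 35(1) GDPR, not a prescribed list:

```python
# Hypothetical screening step of an algorithmic impact assessment:
# answer a set of risk questions and derive whether a full assessment
# and ongoing monitoring of the system are required.
RISK_QUESTIONS = {
    "uses_new_technology": "Does the system rely on novel techniques (e.g. machine learning)?",
    "affects_legal_position": "Can outputs affect individuals' legal rights or access to services?",
    "processes_sensitive_data": "Are special categories of personal data processed?",
    "operates_at_scale": "Are decisions applied to a large number of people?",
}

def screen(answers: dict) -> dict:
    """Count affirmative answers and derive the (illustrative) obligations."""
    risk_score = sum(1 for key in RISK_QUESTIONS if answers.get(key, False))
    return {
        "risk_score": risk_score,
        "full_assessment_required": risk_score >= 2,  # illustrative threshold
        # Monitoring during use, as discussed in the text above:
        "continuous_monitoring": answers.get("uses_new_technology", False),
    }

result = screen({"uses_new_technology": True, "operates_at_scale": True})
print(result["full_assessment_required"])  # True
```

The point of such a structured screening is precisely the dialog mentioned above: the answers form a documented record that companies and supervisors can discuss and audit.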
273 Reisman et al. discuss "algorithmic impact assessments" in the US; Reisman, Schultz, Crawford, and Whittaker, "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability," AI Now Institute, 2018 <https://ainowinstitute.org/aiareport2018.pdf>.
274 The added value of such an algorithmic impact assessment compared to the procedure under Art 35 GDPR could lie especially in the fact that important aspects beyond data protection could be analyzed.
275 Cf. for example Elon Musk, quoted by Morris, "Elon Musk: Artificial Intelligence Is the 'Greatest Risk We Face as a Civilization,'" 2017 <http://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/>.
276 Turchin and Denkenberger (n 74).
277 European Parliament, Resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics, P8_TA-PROV(2019)0081, No 119.
278 Cf. for example Art 22(1) GDPR (prohibition of fully automated decisions).
279 This option is being considered in particular by the UK House of Lords Select Committee on AI; cf. Thomas (n 215).
280 Cf. Dignum et al., "Ethics by Design: Necessity or Curse?" in Conitzer, Kambhampati, Koenig, Rossi, and Schnabel (eds), AIES 2018, Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018, 60 et seq.; Leenes and Lucivero, "Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design" (2014) 6(2) Law, Innovation and Technology 194 <https://ssrn.com/abstract=2546759>.
281 Cavoukian, Privacy by Design: Take the Challenge (Information and Privacy Commissioner of Ontario, Canada 2009).
282 Tutt, "An FDA for Algorithms" (2017) 69 Administrative Law Review 83 <https://ssrn.com/abstract=2747994>.
283 For the US cf. Reisman, Schultz, Crawford, and Whittaker, "Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability," AI Now Institute, 2018 <https://ainowinstitute.org/aiareport2018.pdf>. For the EU cf. Martini, Chapter 3 in this book.
284 Adler, Falk, Friedler et al., "Auditing Black-Box Models for Indirect Influence," 2016 <http://arxiv.org/abs/1602.07043>; Diakopoulos, "Algorithmic Accountability: Journalistic Investigation of Computational Power Structures" (2015) 3(3) Digital Journalism 398; Kitchin (n 14); Sandvig, Hamilton, Karahalios, and Langbort (n 55).
285 For the Notice and Take-Down (N&TD) procedure in the USA, see Section 512(c) of the US Digital Millennium Copyright Act (DMCA). For the EU, see Art 15 E-Commerce Directive 2000/31/EC.
286 See Section 2.5.3.
287 Cf. in this respect the certification procedures envisaged in Art 42 GDPR.
288 Busch, "Towards a 'New Approach' in European Consumer Law: Standardisation and Co-Regulation in the Digital Single Market" (2016) 5 Journal of European Consumer and Market Law 197.
289 European Parliament, Resolution (n 21) No 16. For the USA, cf. Calo, "The Case for a Federal Robotics Commission," 1 September 2014 <https://ssrn.com/abstract=2529151>; Brundage and Bryson, "Smart Policies for Artificial Intelligence," 29 August 2016 <https://arxiv.org/abs/1608.08196>.
290 Cf. regarding these different (sub)categories Lipton (n 197).
291 Cf. AI HLEG, "Ethics Guidelines" (n 237) 34.
292 Ibid.
293 Marchant, Allenby, and Herkert (eds), The Growing Gap between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem (Springer 2011); Hagemann et al., "Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future" (2018) <https://ssrn.com/abstract=3118539> 24.
294 Collingridge, The Social Control of Technology (Pinter 1980) 11 et seq.
295 Reed, "Taking Sides on Technology Neutrality" (2007) 4(3) SCRIPTed 263.
296 Greenberg, "Rethinking Technology Neutrality" (2016) 100 Minnesota Law Review 1495.
297 Koops, "Should ICT Regulation Be Technology-Neutral?" in Koops et al. (eds), Starting Points for ICT Regulation (2006) <https://ssrn.com/abstract=918746>.
298 Fenwick, Kaal, and Vermeulen, "Regulation Tomorrow: What Happens When Technology Is Faster than the Law?" (2017) 6(3) American University Business Law Review 561 <www.aublr.org/wp-content/uploads/2018/02/aublr_6n3_text_low.pdf>; Guihot, Matthew, and Suzor, "Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence" (28 July 2017) 20 Vanderbilt Journal of Entertainment & Technology Law 385 <https://ssrn.com/abstract=3017004> 50.
299 The model for such a living lab is the Robot Tokku created by the Japanese government in the early 2000s; cf. Pagallo, "LegalAIze: Tackling the Normative Challenges of Artificial Intelligence and Robotics through the Secondary Rules of Law" in Corrales, Fenwick, and Forgó (eds), New Technology, Big Data and the Law (2017) 281 et seq., 293 et seq.
300 Cf. UK Financial Conduct Authority, "Regulatory Sandbox Lessons Learned Report," 2017 <www.fca.org.uk/publications/research/regulatory-sandbox-lessons-learned-report>.
2.10 outlook
These uncertainties call for further risk and technology assessment to develop a
better understanding of AI systems and robotics, as well as their social implications,
with the aim of strengthening the foundations for evidence-based governance.
Collaboration with computer science and engineering is necessary in order to assess
the potential drawbacks and benefits, identify and explore possible developments,
and evaluate whether ethical and legal standards can be integrated into autonomous
systems (ethics/legality by design). Likewise, expertise from economics, political
science, sociology, and philosophy is essential to evaluate more thoroughly how
AI technologies affect our society. Since technical innovations know no boundaries,
301 Marchant and Wallach, "Coordinating Technology Governance" (2015) XXXI(4) Issues in Science and Technology <https://issues.org/coordinating-technology-governance/>.
302 Kaal and Vermeulen, "How to Regulate Disruptive Innovation – From Facts to Data," 11 July 2016 <https://ssrn.com/abstract=2808044> 25.
303 Kaal and Vermeulen (n 302); Roe and Potts, "Detecting New Industry Emergence Using Government Data: A New Analytic Approach to Regional Innovation Policy" (2016) 18 Innovation 373.
3
Regulating Algorithms
Mario Martini*
introduction
Despite their profound and growing influence on our lives, algorithms remain a
partial "black box." Keeping the risks that arise from rule-based and learning systems in check is a challenging task for both society and the legal system. This chapter
examines existing and adaptable legal solutions and complements them with further
proposals. It designs a regulatory model in four steps along the time axis: preventive
regulation instruments; accompanying risk management; ex post facto protection;
and an algorithmic responsibility code. Together, these steps form a legislative
blueprint to further regulate artificial intelligence applications.
* The essay is part of the project “Algorithm Regulation in the Internet of Things” externally
funded by the German Federal Ministry of Justice and Consumer Protection. It summarizes
the central findings of the project – as well as the paper by Martini, “Algorithmen als
Herausforderung für die Rechtsordnung” (2017) 72 Juristenzeitung (JZ) 1017 and the book
Martini, Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz,
(Springer 2019) on which this article is based. The author thanks especially Michael Kolain,
Anna Ludin, Jan Mysegades, and Cornelius Wiesner for their very helpful participation. The
article was finished in June 2019. Internet sources referred to are also from this date.
1 The term "software application" is herein understood as a code-based overall system which has an external relationship to users.
2 "Algorithms" are step-by-step instructions for solving a (mathematical) problem. As such, they are not a phenomenon of the digital age. In this chapter references to "algorithms" mean computational algorithms in the sense of a formalized procedure that can be transformed into a programming language in finite time. See as well e.g. Güting and Dieker, Datenstrukturen und Algorithmen (4th edn, Springer 2018) 33; Zweig and Krafft, "Fairness und Qualität algorithmischer Entscheidungen" in Kar, Thapa, and Parycek (eds), (Un)Berechenbar? (ÖFIT 2018) 204, 207.
3 Regarding this development, see Coglianese and Lehr, "Regulating by Robot" (2017) 105 Georgetown Law Journal 1147, 1149 ff.; Tutt, "An FDA for Algorithms" (2017) 69 Administrative Law Review 83, 85 ff.
4 Leininger, "How to Keep Alexa from Buying a Dollhouse without Your OK" (CNN online, 6 January 2017) <http://edition.cnn.com/2017/01/06/tech/alexa-dollhouses-san-diego-irpt-trnd/index.html>.
5 Technical analysts' methods can draw conclusions about consumerism from it, see e.g. Epp, Lippold, and Mandryk, "Identifying Emotional States using Keystroke Dynamics" in Tan (ed), Proceedings of the 29th Annual ACM CHI Conference on Human Factors in Computing Systems (2011) 715.
Downloaded from https://www.cambridge.org/core. University of New England, on 06 Jul 2020 at 07:27:37, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.004
the soul of our personal data to eat from the tree of knowledge of good and evil in
digital paradise.
Algorithms can not only recognize the risk of depression or Parkinson’s disease in
the individual’s voice, compose music and copy a Rembrandt painting true to the
original. Their predictions have even made their way into law enforcement.6 The
US state of Wisconsin, for example, uses a system called COMPAS to calculate an
accused’s likelihood of recidivism. Judges incorporate the algorithmic evaluation
into their appraisal.7 The influence of algorithms (fortunately) does not yet reach
this far in European courts. However, German tax authorities are already operating
an automated decision-making system: starting in 2018, tax refunds in general are no
longer being processed by a human tax official, but by a computer system
(Section 155(4) Sentence 1 of the German Fiscal Code (Abgabenordnung – AO)).
The software divides each tax declaration into risk groups.8 It works like a traffic light system. Red signifies that a tax official should take a closer look. Green indicates "no in-depth examination necessary": the tax assessment reaches the citizen fully automated, without human oversight.
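A minimal sketch of such a traffic-light triage could look as follows. Every rule, threshold, and field name here is invented for illustration, since the actual selection parameters are, as discussed below, deliberately kept secret (Section 88(5) Sentence 4 AO):

```python
def classify_tax_declaration(declaration: dict) -> str:
    """Toy risk triage for a tax declaration.

    The risk signals below are purely hypothetical; they merely
    illustrate the traffic-light mechanism described in the text.
    """
    score = 0
    if declaration.get("deductions", 0) > 0.5 * declaration.get("income", 1):
        score += 2  # unusually high deductions relative to income
    if declaration.get("first_filing", False):
        score += 1  # no filing history to compare against
    if declaration.get("foreign_income", 0) > 0:
        score += 1  # cross-border income is harder to verify
    return "red" if score >= 2 else "green"

# "Green" cases would be assessed fully automatically, without a human
# official ever seeing them; "red" cases go to manual review.
print(classify_tax_declaration({"income": 40000, "deductions": 25000}))  # red
print(classify_tax_declaration({"income": 40000, "deductions": 2000}))   # green
```

The opacity problem discussed in the next section is precisely that the rules actually deployed, unlike this sketch, cannot be inspected by the persons they sort.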
3.2.1 Opacity
Algorithms have (at least from a lay person’s perspective) a lot in common with the
mysticism of Kabbalah. For most (advanced) software applications, the user cannot
see how they operate; underlying algorithms and their decision-making criteria
remain a magic formula. The criteria used by the algorithm of the German credit agency SCHUFA to assess the creditworthiness of customers are not known, nor are the parameters applied in the risk selection of the German automated tax return accessible.9 If the software used by the tax authorities flagged for further inspection all tax declarations of those who had already filed an objection, of those in which an advice-intensive tax consultant had participated, or those of a certain minority, the individual probably would not notice.
6 See for several examples Rieland, "Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?" (Smithsonian.com, 5 March 2018) <www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337>.
7 For the suggested discriminatory tendency of this particular software (especially based on race), see e.g. Angwin and others, "Machine Bias" (ProPublica, 23 May 2016) <www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>. For the rebuttal by the software developers see Dieterich, Mendoza, and Brennan, COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity (Northpointe Inc 2016).
8 See Section 88(5) AO; on that topic e.g. Martini and Nink, "Wenn Maschinen entscheiden – vollautomatisierte Verwaltungsverfahren und der Persönlichkeitsschutz" (2017) 36 NVwZ-Extra 10/2017 1, 8.
9 See Section 88(5) Sentence 4 AO.
However, tax officials and social security officers have developed audit routines
based on their intuition in the analogue past as well – and not always without
prejudice. No one can read a human officer’s free mind, which makes decisions
according to values that are beyond external control. Their decisions cannot be
technically reconstructed on the basis of stored data in order to detect discrimination
or other errors. But there is a difference: software decides not dozens or hundreds of
cases, but tens of thousands or more. The decision of an algorithm unfolds over an
enormous range.10
From a technical perspective, the supervision of algorithms becomes more and more like squaring the circle: machine-learning systems do not typically follow a fixed scheme.11 Their learning process requires a permanent dialogue between data and
models mutually affecting each other, in which the algorithms evolve to be faster
and more precise due to “learning” by experience.12 Neural networks, as one kind of
adaptive system, work by emulating the functions of the human brain. Their output
depends on countless weighted individual decisions of millions of network nodes.
They decide praeter propter autonomously how to react to new situations and how to
weigh different criteria.13 As a consequence, the results of a self-learning algorithmic
decision cannot easily be reproduced. Once in the world, even developers of
machine-learning systems do not necessarily understand exactly how or why the
algorithmic oracle acts in the way that it does.14 Errors in such an arcane system
cannot be prevented, traced or checked in the traditional way.
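A toy model (a hypothetical illustration, not any deployed system) makes the point concrete: even with every parameter of a network laid open, no individual weight corresponds to a human-readable decision criterion.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# A tiny two-layer network with fixed random weights stands in for a
# trained model; deployed networks have millions of such parameters.
INPUTS, HIDDEN = 4, 8
w1 = [[random.uniform(-1, 1) for _ in range(INPUTS)] for _ in range(HIDDEN)]
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]

def decide(x):
    """Binary decision obtained by summing many weighted node activations."""
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]  # ReLU
    return sum(w * h for w, h in zip(w2, hidden)) > 0

# Every weight is visible, yet no single entry of w1 or w2 states a rule
# such as "reject if the second feature exceeds 0.5" – the criterion is
# smeared across all 40 parameters at once.
print(decide([0.1, 0.9, 0.2, 0.4]))
```

With millions of parameters instead of forty, and with weights that keep shifting as the system continues to learn, reconstructing why a particular output occurred becomes correspondingly harder.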
10 There is another legally relevant difference. Unlike humans, whose decision programs cannot be programmed ex ante, software is necessarily dependent on such values. Society must give algorithms the frame for value decisions before they are applied in real life. Computer-generated decisions antedate the time of decision determination. Thus, the algorithm forces early decisions on what a legitimate evaluation should look like. The software system then implements these guidelines with relentless consistency.
11 To take an example, in 2016, Google announced that its translation service – based on several layers of neural networks – could now translate between two languages although it was never taught to do so. Even Google could not explain how this worked in detail, providing only theories and potential interpretations about what happened; Schuster, Johnson, and Thorat, "Zero-Shot Translation with Google's Multilingual Neural Machine Translation System" (Google AI Blog, 22 November 2016) <https://research.googleblog.com/2016/11/zero-shot-translation-with-googles.html>.
12 For a summary and several references see Tutt (n 3) 94 ff.; Surden, "Machine Learning and the Law" (2014) 89 Washington Law Review 87, 89 ff.; for more in-depth basics see the prologue of Flach, Machine Learning (CUP 2012) 1 ff.
13 For an older, but still quite instructive introduction see Chapter 1 of Haykin, Neural Networks (2nd edn, Pearson Education 1999) 23 ff.; see also Goodfellow, Bengio, and Courville, Deep Learning (MIT Press 2016) 164 f.
14 There are various reasons for this "technical opacity," e.g. the connection with other systems, libraries or data bases, cf. Kroll and others, "Accountable Algorithms" (2017) 165 University of Pennsylvania Law Review 633, 648; the sheer extent of variables and code, cf. Edwards and Veale, "Slave to the Algorithm?" (2017) 16 Duke Law & Technology Review 18, 59, 61; or the special properties of the respective systems, such as neural networks, cf. Yosinski and others, "Understanding Neural Networks through Deep Visualization" [2015] ArXiv 1, 9. For an overview see Burrell, "How the Machine 'Thinks': Understanding Opacity in Machine Learning Algorithms" (2016) 3 Big Data & Society 1.
If it remains unclear how, and on the basis of which data, an algorithm makes its decision, this opacity touches on what Art 8 of the Charter of Fundamental Rights of the European Union terms "protection of personal data."15 This fundamental right consists in protecting the decision about who is authorized to collect and use one's data.16 Lack of access to the software applications' mode of operation can impede the legal protection of these rights: if a person does not know and understand the data base, the sequence of actions, and the weighing of the decision criteria, they are not able to decide for themselves who can draw conclusions and for what purpose, or to check the legality of data processing that relates to them. Chilling effects may arise when it is not clear whether or not suspected surveillance of one's behavior is actually taking place, which can thus curtail the fundamental right to privacy.17
15 In German constitutional law this fundamental right is called "the right to informational self-determination." The historical starting point of the right to informational self-determination in Germany was the census of 1983. Thousands of people protested against it. "Don't count us, count your days," the protesters chanted. On the night before the classic Dortmund v Hamburg football match, activists even painted an appeal in big letters on the stadium turf: "Boycott and sabotage the census!" The text could not be removed in time for the game. With the approval of the Federal President, the text was promptly supplemented to read: "The Federal President: DO NOT boycott and sabotage the census." Germans are traditionally sceptical about bundling data in one hand. They are "world champions" in data protection. The experience of totalitarian dictatorships echoes particularly strongly in their collective consciousness. However, the amount of data collected by modern big-data collectors nowadays is far greater than what the Federal Republic and the former East German secret service, the Stasi, could ever have collected together. Over time, though, the German population seems to have become more and more relaxed about sharing and disclosing their personal data. In everyday use, they increasingly value the benefits provided by modern digitalization techniques more highly than the protection of their privacy.
16 However, (at least under German law) this fundamental right is not an absolute right. The right to informational self-determination is subject to prior rights of third parties or public interest pursuant to Art 2(1) German Basic Law.
17 The term "chilling effect" has its origin in the case law of the US Supreme Court. In Europe it first found its way into the jurisprudence of the European Court of Human Rights and was introduced into German law by the German Federal Constitutional Court (Bundesverfassungsgericht – BVerfG) as Einschüchterungseffekte (intimidation effects) after the court recognized those effects in several legal situations, directly transforming the ECtHR's judicature on discrimination; see BVerfG, 3.3.2004, BVerfGE 109, 279 (354 f.); BVerfG 2.3.2010, BVerfGE 125, 260 (335).
fluctuations in the blood sugar level before and after meals18 or any other circum-
stances beyond the essential facts to be decided on. Depending on their coding,
algorithms can make more consistent and unprejudiced decisions than the average
human. This raises the question: Are algorithms possibly even better decision-
makers, even if they are not transparent, e.g., in the equal allocation of places at a
public university?
Although the decisions of algorithms follow logical patterns, they are the product
of human programming and its preconditions and are therefore not free from bias.
They encode the values and assumptions of their creators. Hence, algorithms are
only as meticulous – not to say impartial – as the people who program them. Hidden
prejudices can creep into algorithms unnoticed not only through programming, but
also due to an inadequately selected data base.19
This effect was demonstrated involuntarily by the experimental beauty contest
Beauty Artificial Intelligence. It was the first beauty contest carried out exclusively
on the basis of the decision-making power of machine-learning algorithms. 6,000
people from 100 countries were judged by artificial intelligence. The result was
surprising in one respect: only one out of 44 winners was a person with dark skin.20
The algorithm turned out to at least partly gauge beauty by race.
Rationally, it should not come as a surprise that the system paid no attention to diversity: its machine-learning algorithm had been fed with images of white beauties. Microsoft’s self-learning chatbot Tay performed even worse in its test run on Twitter, where the bot mutated into a racist and sexist Holocaust denier after just a few hours of interaction with not always benevolent internet users.21
Complex algorithms, whether deterministic or with learning capacities, ultimately base their decisions on stochastic inferences that only determine correlations. Thus, by their very nature, algorithms do not offer explanations of cause and
18 See Danziger, Levav, and Avnaim-Pesso, “Extraneous Factors in Judicial Decisions” (2011) 108 PNAS 6889, 6889 f. According to the findings of the investigation, hungry judges tend to soften their sentences after the meal break. However, the results have not been empirically validated.
The study suffered from methodological shortcomings: in particular, it failed to take into
account the special features of judicial termination practice in the courts examined, which
may have distorted the study results. See Glöckner, “The Irrational Hungry Judge Effect
Revisited: Simulations Reveal That the Magnitude of the Effect Is Overestimated” (2016) 11
Judgment and Decision Making 601, 602 ff.; Weinshall-Margel and Shapard, “Overlooked
Factors in the Analysis of Parole Decisions” (2011) 108 PNAS E833.
19 Cf. Hacker, “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law” (2018) 55 Common Market Law Review 1143, 1147; Martini, Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz (Springer 2019) 47 ff., 239 ff.
20 Levin, “A Beauty Contest Was Judged by AI and the Robots Didn’t Like Dark Skin” The Guardian online (8 September 2016) <www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people>.
21 Gibbs, “Microsoft’s Racist Chatbot Returns with Drug-Smoking Twitter Meltdown” The Guardian online (30 March 2016) <https://www.theguardian.com/technology/2016/mar/30/microsoft-racist-sexist-chatbot-twitter-drugs>.
106 Mario Martini
22 Mayer-Schönberger and Cukier, Big Data (John Murray 2013) 248; Martini, “Big Data als Herausforderung für den Persönlichkeitsschutz und das Datenschutzrecht” (2014) 129 Deutsches Verwaltungsblatt (DVBl) 1481, 1485.
23 See also European Parliament resolution of 14 March 2017, margin nos 19 ff., 31; Mittelstadt and others, “The Ethics of Algorithms: Mapping the Debate” (2016) Big Data & Society 1, 5 ff.
24 O’Neil, Weapons of Math Destruction (Crown Random House 2016) 7: “downward spiral.”
25 For the risks and chances online personality tests pose for job applicants, see Lischka and Klingel, Wenn Maschinen Menschen bewerten (Bertelsmann Stiftung 2017) 22 ff.; see also Weber and Dwoskin, “Are Workplace Personality Tests Fair?” The Wall Street Journal (29 September 2014) <www.wsj.com/articles/are-workplace-personality-tests-fair-1412044257>; O’Neil, “How Algorithms Rule Our Working Lives” The Guardian (1 September 2016) <www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives> as a summary of the more exhaustive O’Neil (n 24).
26 See Mattioli, “On Orbitz, Mac Users Steered to Pricier Hotels” The Wall Street Journal (23 August 2012) <www.wsj.com/articles/SB10001424052702304458604577488822667325882>.
27 See Section 3.1.
28 Angwin and others (n 7) describe racial discrimination by the criminal prognosis software COMPAS. The manufacturer of the software has issued a statement claiming methodical
concerning whites. The predictive power of the software also proved weak in
relation to a randomly selected group of people. Its hit rate was 65 per cent; the rate
of the human comparison group was 63 per cent. In the end, COMPAS turned out
to be not much better than a coin toss.29
An algorithm does not ‒ and cannot ‒ know when it crosses the line of unlawful discrimination, as drawn by Art 21 of the Charter of Fundamental Rights of the European Union, Protocol 12 to the European Convention on Human Rights, and Art 3 of the German Constitution (Grundgesetz – GG).30 Algorithms do not recognize that there are ethical limits to evaluating personality profiles, nor are they aware of the thin line between acceptable and unlawful judgements on ethical parameters. Algorithms have no ethical compass; their approach is not to do justice to the individual, and they lack empathy and social skills. Thus, an algorithmic process cannot generally be considered better or worse than a human decision-maker. Rather, an algorithm must be programmed for the conditions under which it can exploit its advantages and avoid unethical decisions.
errors in this assessment: Dieterich, Mendoza, and Brennan (n 7) 2 f.; on the whole topic, see
also Martini (n 19) 55 ff.
29 Dressel and Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism” (2018) 4 Science Advances 1.
30 This constitutional framework bans discrimination (by state entities) based on sensitive traits such as race, religion, gender, disability, origin or language. Private entities are subject to the additional anti-discrimination provisions that exist in most countries.
the amount of data to be processed to solve a task. Attacking the market position of big-data providers, and thus exploiting the disruptive efficiency of the “market as a discovery process,”31 becomes more difficult for them. The risk that market power will become concentrated in the hands of a few suppliers is real. It can empower a “surveillance capitalism”32 in which data-collecting companies control processes and monitor people in a way that undermines market mechanisms.33
The use of algorithms not only changes the design of markets and the way
companies offer their products. It also creates new societal risks for democratic
societies. With the revelations about how Facebook and Twitter might have been
used to influence the electorate in previous US presidential elections, the public
realized that algorithmic decision-making can affect political equality of opportunity
and the democratic chances of participation.34 Social bots applied the rich supply of
big data to place targeted campaign messages far more precisely and expansively
than a human being could. In March 2018 it became known to the public that
Cambridge Analytica had used private information from the Facebook profiles of
more than 50 million users without their permission or knowledge to influence
voters.35 The data were mainly gathered by algorithms, and it was algorithms that allowed and contributed to their misuse. The discussion on how society should deal
with these new possibilities of automated IT systems is still in its infancy. Establishing fair competition and counteracting dominant market power is and remains one of the key tasks the res publica will have to accomplish.
31 Von Hayek, Der Wettbewerb als Entdeckungsverfahren (Kieler Inst. für Weltwirtschaft 1968) 1.
32 Zuboff, The Age of Surveillance Capitalism (PublicAffairs 2019) 128 ff.
33 See also Ebers, “Beeinflussung und Manipulation von Kunden durch Behavioral Microtargeting” (2018) 21 MultiMedia & Recht (MMR) 423, 424 ff.; Rainie and Anderson, Code-Dependent: Pros and Cons of the Algorithm Age (Pew Research Center 2017) 2 with many examples of unexpected and adverse algorithmic effects and outcomes.
34 Facebook was suspected of deliberately suppressing news from the conservative spectrum and manipulating the “trending topics” in favor of other political tendencies; Herrman and Isaac, “Conservatives Accuse Facebook of Political Bias” The New York Times (9 May 2016) <https://www.nytimes.com/2016/05/10/technology/conservatives-accuse-facebook-of-political-bias.html?mcubz=1>. In response to these allegations, Facebook published (at least partially) its internal selection guidelines.
35 See e.g. Ebers (n 33); Rosenberg, Confessore, and Cadwalladr, “How Trump Consultants Exploited the Facebook Data of Millions” The New York Times (17 March 2018) <www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html>; see also The Guardian online series regarding the scandal: <www.theguardian.com/news/series/cambridge-analytica-files>.
The evil, however, stems not from the technology, but from those who abuse it. Even in the analogue world, society has not banned knives; it punishes the malicious act of stabbing another person with them. The legislator should thus not suffocate new technologies, but ensure that they are applied in a manner compatible with the public interest, and prevent abuse.
36 Oberoi, “Exploring DeepFakes” (goberoi, 5 March 2018) <https://goberoi.com/exploring-deepfakes-20c9947c22d9>.
37 See e.g. Athey, Catalini, and Tucker, The Digital Privacy Paradox: Small Money, Small Costs, Small Talk (NBER 2018) 1 ff.; Dienlin and Trepte, “Is the Privacy Paradox a Relic of the Past?” (2015) 45 European Journal of Social Psychology 285, 286 f. with a sociological problem analysis.
However, scientifically founded empirical proof of this phenomenon, which corresponds to everyday observation, is still missing.
38 For an instructive overview of collective protection literature see Helm, “Group Privacy in Times of Big Data. A Literature Review” (2016) 2 Digital Culture & Society 137, 139 ff. The new European provisions of Art 7 General Data Protection Regulation (GDPR), especially para 4, are steps in the right direction. See also Martini, “Algorithmen als Herausforderung für die Rechtsordnung” (2017) 72 JZ 1017, 1019.
39 Among the broad scope of literature on possible algorithmic danger and potential tasks of algorithm regulation, see e.g. Busch, Algorithmic Accountability, ABIDA Project Report, March 2018, <http://www.abida.de/sites/default/files/ABIDA%20Gutachten%20Algorithmic%20Accountability.pdf>; Citron and Pasquale, “The Scored Society: Due Process for Automated Predictions” (2014) 89 Washington Law Review 1; Edwards and Veale (n 14); Ernst, “Algorithmische Entscheidungsfindung und personenbezogene Daten” (2017) 72 JZ 1026; Hoffmann-Riem, “Verhaltenssteuerung durch Algorithmen” (2017) 142 AöR 1, 24; Kroll, Accountable Algorithms (2015); Kroll and others (n 14) 636 with many examples; Lanier, Who Owns the
The legal system has a diverse arsenal of conceivable measures at its disposal. They can be applied at various points on the time axis: preventively (Section 3.3.2), in parallel to the use of the software applications (3.3.3), ex post in the shape of damages and legal protection (3.3.4), and through accompanying self-regulation (3.3.5).
However, legislators should not try to crack the regulatory walnut with a sledgehammer. Not every software application and not every machine-learning algorithm poses a threat to fundamental rights justifying regulation. All regulatory efforts should thus start by trying to determine the right scope of legal obligations: legislators should first seek a general and/or sector-specific list of classification criteria that form a threshold for particular means of regulation. Only certain types of algorithms, especially those which are sensitive to fundamental rights, should be captured by legislation.
The linchpin for different levels of obligation in a regulatory class system should always be the sensitivity to fundamental rights and the degree of risk in the individual case. Particularly important factors in this context are: the type of data processed by the system (public-sphere data, social-sphere data, special categories of personal data within the meaning of Art 9 and 10 GDPR); the number of affected persons; and the extent to which alternative products are available to the data subject. The class of sensitive products includes in particular: applications that process health data or can cause physical harm; applications that have a special impact on the formation of opinion and the democratic order (e.g. social bots);40 scoring and profiling software that is involved in decisions about participation in important aspects of life; new technologies that enable a particular degree of evaluation intensity (especially facial recognition, voice and sentiment analyses, and smart home applications); human‒machine collaboration (e.g. exoskeletons41 and cobots); systematic monitoring of work activities42 and publicly accessible areas; and algorithm-based decision-making procedures in the judiciary and administration.
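Such a risk-based class system can be sketched in pseudo-legislative form. The factors follow those named above, but the weights, thresholds and tier names below are purely illustrative assumptions, not a proposal from the text:

```python
# Hypothetical sketch of a risk-based "regulatory class system":
# risk factors map an application to an obligation tier.
# All weights and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Application:
    data_category: str              # "public", "social", "special" (Art 9/10 GDPR)
    affected_persons: int
    alternatives_available: bool
    fundamental_rights_sensitive: bool

def regulatory_tier(app: Application) -> str:
    """Assign an obligation tier from the risk factors named in the text."""
    score = 0
    score += {"public": 0, "social": 1, "special": 2}[app.data_category]
    score += 1 if app.affected_persons > 10_000 else 0
    score += 1 if not app.alternatives_available else 0
    score += 2 if app.fundamental_rights_sensitive else 0
    if score >= 4:
        return "high"    # e.g. scoring software for important aspects of life
    if score >= 2:
        return "medium"
    return "low"         # below the threshold for specific obligations

# A profiling system on special-category data, many affected persons,
# no alternatives: clearly in the most heavily regulated class.
print(regulatory_tier(Application("special", 50_000, False, True)))
```

The design point mirrors the text's argument: the tier is not attached to "algorithms" as such, but to the sensitivity of the data, the reach of the system and its impact on fundamental rights in the individual case.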
Future? (Simon & Schuster 2013) 204; O’Neil (n 24); Tufekci, “Algorithmic Harms Beyond
Facebook and Google: Emergent Challenges of Computational Agency” (2015) 13 Colorado
Technology Law Journal 203; Pasquale, The Black Box Society (Harvard UP 2015); Salamatian,
“From Big Data to Banality of Evil” (Heinrich-Böll-Stiftung, 12 April 2014) <https://soundcloud
.com/boellstiftung/vortrag-from-big-data-to-banality-of-evil>; Wischmeyer, Regulierung intelligenter Systeme (2018) 143 AöR 1.
40 See e.g. Libertus, “Rechtliche Aspekte des Einsatzes von Social Bots de lege lata und de lege ferenda” (2018) 62 Zeitschrift für Urheber- und Medienrecht (ZUM) 20; Steinbach, “Social Bots im Wahlkampf” (2017) 50 Zeitschrift für Rechtspolitik (ZRP) 101.
41 See Martini and Botta, “Iron Man am Arbeitsplatz? – Exoskelette zwischen Effizienzstreben, Daten- und Gesundheitsschutz” (2018) 35 Neue Zeitschrift für Arbeitsrecht (NZA) 625.
42 See e.g. Brecht, Steinbrück, and Wagner, “Der Arbeitnehmer 4.0?” (2018) 6 Privacy in Germany (PinG) 10; Byers and Wenzel, “Videoüberwachung am Arbeitsplatz nach dem neuen Datenschutzrecht” (2017) 72 Betriebs Berater (BB) 2036; Pärli, “Schutz der Privatsphäre am Arbeitsplatz in digitalen Zeiten – eine menschenrechtliche Herausforderung” (2015) 8 Europäische Zeitschrift für Arbeitsrecht (EuZA) 48.
43 See also Edwards and Veale (n 14) 44 ff.; Wachter, Mittelstadt, and Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation” (2017) 7 International Data Privacy Law 76, 95 f.
44 European Group on Ethics in Science and New Technologies, Statement on Artificial Intelligence, Robotics and “Autonomous” Systems (European Union 2018) 9; Martini, “Art 22 DSGVO” in Paal and Pauly (eds) Datenschutz-Grundverordnung Bundesdatenschutzgesetz: DS-GVO BDSG (2nd edn, CH Beck 2018) margin nos 1, 8 ff.
45 These exceptions apply in particular to the entering into and performance of a contract as well
as cases where explicit consent is given (Art 22(2) a and c GDPR). Nonetheless, the data
controller has to implement suitable measures to protect the data subject’s rights, freedoms
and legitimate interests including, as a minimum guarantee, the right to express one’s point of
view (Art 22(3) GDPR) e.g., in order to explain complicated contexts or cases of hardship. This
also includes the right of the person concerned to demand a reassessment of the content by the
person responsible. Furthermore, processing must be transparent and fair. The processor must
therefore use suitable mathematical procedures (Recital 71 sub-para 6). The result of the
calculation must be based on correct and up-to-date data: error management with verification
mechanisms is needed to check the data basis and its accuracy for integrity and authenticity.
The person responsible must also use technical and organizational measures to counter risks of
discrimination according to, e.g. gender, genetic or health status (see Recital 71 sub-para
1 sentence 4, sub-para 2 sentence). Compare also Section 3.3.2.2.
46 Art 22(1) refers to “a decision based solely on automated processing.” See also Edwards and Veale (n 14) 44 ff.; Martini (n 38) 1029; Wachter, Mittelstadt, and Floridi (n 43) 96.
As a result, Art 22 GDPR does not completely solve the regulatory tasks posed by the majority of algorithmic decision-making procedures. Rather, a regulatory regime covering the wider field of digital decision support is needed.
De lege ferenda
(1) Extended disclosure requirements for software applications used for administrative
purposes
When a state authority uses algorithm-based systems for public applications, the denial of information as provided in Section 6 German Federal Freedom of Information Act51 is not appropriate. Public law should include the obligation to disclose the source code of the software – trade secrets notwithstanding – if that is necessary to prove the correctness of government decisions in individual cases (e.g., on the allocation of places in public kindergartens or in colleges and universities). It is advisable to modify the software developer’s intellectual property protection from an absolute to a relative position: source code and other details must be disclosed if there is a predominant, legally protected public interest52 in the requested information.
48 Generalitat de Catalunya/Comissió de Garantia del Dret d’Accés a la Informació Pública (GAIP), Resolución de 21 de septiembre de 2016, de estimación de las Reclamaciones 123/2016 y 124/2016 (acumuladas); association Droits des lycéens, press release, 10 December 2017 <www.droitsdeslyceens.com/medias/files/dl-cp-17-12-13-tirage-sort-audience-ce.pdf>; for the first decision on the disclosure of algorithms used by state authorities, see Tribunal Administratif de Paris, decision of 10 March 2016, <www.legalis.net/jurisprudences/tribunal-administratif-de-paris-5eme-sec-2eme-ch-jugement-du-10-mars-2016/>.
49 See Art 14(1) German Basic Law (Grundgesetz); Art 17 Charter of Fundamental Rights of the European Union.
50 Exceptions for the protection of special public interests are incorporated in almost all freedom of information acts around the world. See e.g. Sec 3 IFG.
51 See Section 3.3.2.2.
52 There are cases in which it is justifiable not to disclose at least some critical parts of a software application, because an obligation to disclose could undermine its task fulfillment (e.g. requirements of official secrecy) if the integrity of its technical systems were at stake. For
When awarding procurement contracts for software, the state should thus stipulate in the contract terms that the software must comply with the transparency
requirements (as well as fairness, user control, accountability and responsibility)
that society places on the state’s software.53 The state can use its power as a buyer on
the market to influence the supply of ethically desirable artificial intelligence in the
interest of its values.
(2) Extension of the information duties beyond procedures within the meaning of Art
22 GDPR to software applications which can have a sensitive effect on the rights of
their users
The obligation to provide information about “the existence of automated decision-
making, [. . .] and [. . .] meaningful information about the logic involved, as well as
the significance and the envisaged consequences of such processing for the data
subject,” which Art 13(2) f, Art 14(2) g and 15(1) h GDPR establish, has no particularly
broad scope of application. It is – like Art 22 GDPR – limited to cases of “automated decision-making” – that is, decisions which are not substantially influenced by human behavior.
The parenthesis “at least in these cases” (“zumindest in diesen Fällen,” “au moins
en pareils cas”) of Art 13(2) f, 14(2) g and 15(1) h GDPR indicates at first glance that
the explanatory obligations may also extend to other forms of processing. The
wording suggests that, in exceptional cases, the data controller may be obliged to
inform data subjects about the decision-making logic of their algorithms beyond the
scope of Art 22 GDPR. But, since the duty to provide information is subject to an
administrative fine (Art 83(5) b GDPR), it is not justifiable under the rule of law to
leave unanswered the question of which cases, beyond Art 22 GDPR, the duty to
provide information extends to. The obligation to provide information on the logic
and scope of systems therefore only extends – at least with sufficient legal clarity and
under the legal sanction regime of the GDPR – to fully automated decisions which
are based on profiling or similarly use personal information for automated decision-
making. Under the GDPR now in force, the information duties of the data controller do not extend to decisions outside the scope of Art 22(1) GDPR.54
example, public details about the crucial limits of the algorithm-based tax return audit
mechanism could undermine the automated system: anyone who knew the limit above which
the audit system for the deduction of donations applies could easily circumvent it. For this
reason, a legal provision stipulates that details of the risk management systems applied may not
be published (Section 88(5) s 4 of the German Fiscal Code (Abgabenordnung – AO)). This
seems adequate. The same is equally true for software applications used by security agencies,
e.g., for purposes of anti-terrorism. The denial of access to the source code and details of
algorithm-based public applications should be limited to such cases. See also Martini and Nink
(n 8) 11.
53 The same applies in cases where the state uses its budget resources to finance investments in the development of software systems.
54 See also Recital 63 Sentence 3 GDPR and, in more detail, Martini (n 19) 182 ff.
The legislator should extend the scope of the labelling requirement to decisions that
are supported by algorithms and that tend to constrain fundamental rights. Thus, in
order to achieve transparency, the legislator should establish – beyond current law –
clear information duties for algorithm-based services that are not entirely based on
automated individual decision-making as in Art 22 GDPR. This concerns on the one
hand the duty to give “meaningful information about the logic involved” (at least in
areas that are sensitive to fundamental rights). On the other hand, a transparency
obligation should form part of the regulatory concept, allowing the user to identify
whether risky machine-learning algorithms are used, or if a decision is sensitive to their
fundamental rights for other reasons.55 This applies in particular to profiling procedures
as well as chatbots, social bots and dynamic pricing or blocking software that differentiates by features closely related to protected traits such as religion, race or gender.56 The
labelling obligation should require visual, easy-to-understand symbols that customers actually see and comprehend – otherwise, the information obligation will simply end up adding another paragraph to the largely unread privacy policies used today.
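One conceivable technical complement to such symbols is a machine-readable disclosure that a browser extension or regulator could render for the customer. The sketch below is purely illustrative; every field name and value is invented and does not correspond to any existing labelling standard:

```python
# Hypothetical sketch of a machine-readable transparency label;
# all field names and values are invented for illustration.
import json

label = {
    "service": "example-dynamic-pricing",          # hypothetical service name
    "uses_machine_learning": True,
    "automated_decision": "price differentiation",
    "protected_traits_excluded": ["religion", "race", "gender"],
    "contact_for_review": "https://example.com/appeal",
}

# Serialized, the label could be embedded in a page or API response and
# rendered by client software as a simple, easy-to-understand symbol.
print(json.dumps(label, indent=2))
```

A standardized schema of this kind would let the labelling duty operate at the machine level, rather than relying on each customer to read yet another policy document.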
57 See the experiments of Baron, Beattie, and Hershey, “Heuristics and Biases in Diagnostic Reasoning” (1988) 42 Organizational Behavior and Human Decision Processes 88, 100, 102 ff., 108 ff.; see as well Ben-Shahar and Schneider, More than You Wanted to Know (Princeton University Press 2014) 55 ff.; Baron, Thinking and Deciding (4th edn, reprinted, CUP 2009) 177;
Vaughan, The Thinking Effect (Nicholas Brealey Publishing 2013) 29.
58 See e.g. Kettner, Thorun, and Vetter, Wege zur besseren Informiertheit (conpolicy 2018) 31 ff. On freedom of information cases against Anglo-American states, compare Roberts, “Dashed Expectations: Governmental Adaptation to Transparency Rules” in Hood and Heald (eds), Transparency: The Key to Better Governance? (OUP 2006) 108, 109 ff.; Ben-Shahar and Bar-Gill, “Regulatory Techniques in Consumer Protection: A Critique of European Consumer Contract Law” (2013) 50 Common Market Law Review 109, 117 ff.
59 See in detail Edwards and Veale (n 14) 42 f., who doubt the reach of legal transparency obligations; Martini (n 19) 188 f.
and, if need be, to challenge it.60 The right to demand an explanation can also serve
to discover and prevent discriminatory tendencies that would otherwise be undetectable – and thus build trust in digital technologies.
Ideally, the software application should implement a tool in its algorithm-based
process which at least substantiates a decision rejecting (in whole or in part) users’
requests. The clarification should explain in an intelligible way why the unfavourable decision determined by the algorithm was taken.61 It should add information on
comparison groups, parameters and principles guiding the decision-making process.
Reflecting on an algorithmic process and implementing such information in the software will challenge programmers.62 Especially in the case of complex
machine-learning methods such as neural networks, their creators can often only
say that a decision has been made, but cannot explain the reasons why that
conclusion has been reached. However, technical challenges are no excuse, as long
as the solution is not impossible in a normative sense.63 Research efforts toward a
(more) “explainable artificial intelligence”64 are already underway.65
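What such an explanation tool might look like can be sketched in a few lines. The decision rule, threshold and field names below are invented for illustration and stand in for whatever model the controller actually uses; the point is only the structure: the outcome is returned together with intelligible reasons and comparison-group information, as the text demands:

```python
# Hypothetical sketch of an "explanation tool" built into an
# algorithm-based decision process. All thresholds, field names and
# the decision rule itself are invented for illustration.
from dataclasses import dataclass

@dataclass
class Decision:
    granted: bool
    reasons: list           # intelligible grounds for an unfavourable outcome
    comparison_group: dict  # how the applicant compares with similar cases

APPROVAL_THRESHOLD = 0.6    # invented decision parameter

def decide_with_explanation(score: float, group_median: float) -> Decision:
    """Return the outcome together with the parameters and comparison
    data that substantiate it, instead of the bare result alone."""
    granted = score >= APPROVAL_THRESHOLD
    reasons = []
    if not granted:
        # Substantiate the rejection in plain language, in the spirit of
        # the intelligibility requirement of Art 12(1) GDPR.
        reasons.append(
            f"score {score:.2f} below approval threshold {APPROVAL_THRESHOLD:.2f}"
        )
    return Decision(
        granted=granted,
        reasons=reasons,
        comparison_group={"median_score_of_similar_applicants": group_median},
    )

d = decide_with_explanation(0.45, 0.71)
print(d.granted, d.reasons)
```

For a rule-based system such a tool is straightforward; for neural networks, as the text notes, the explanatory layer would have to rely on post-hoc techniques (e.g. counterfactual explanations) rather than on reading reasons off the model directly.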
Nevertheless, the obligation to state reasons should not exist unconditionally and
without limits. Legislators intending to introduce an obligation to individually
explain algorithmic decisions to a subject must act with care in order not to interfere
disproportionately with the data controller’s private autonomy, professional freedom
and fundamental rights. In the analogue world, the individual cannot demand to
look into the neural (brain) structures of his contractual partner to obtain a scientific
explanation of every decision.
However, compared to humans, algorithm-based decision-making processes make
different, sometimes surprising mistakes. They operate on a quantitative basis of
similarities in the data that allows them to draw stochastic conclusions: Algorithms
recognize statistical correlations, but do not evaluate causal relations in the real
world. They have no worldview and lack the common sense capable of grasping a
60 See also Mittelstadt and others (n 23) 7; Tutt (n 3) 110.
61 See the normative requirement of Art 12(1)(1) GDPR.
62 Knight, “The Dark Secret at the Heart of AI” (MIT Technology Review online, 11 April 2017) <www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/>. The US DARPA (Defense Advanced Research Projects Agency) has started a research project titled “Explainable Artificial Intelligence”: <www.darpa.mil/program/explainable-artificial-intelligence>.
63 An explanation of decisions implemented in the software could even be helpful for programmers to detect mistakes and law infringements early on; Edwards and Veale (n 14) 54 with further evidence.
64 See also Wachter, Mittelstadt and Russell, “Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR” (2018) 31 Harvard Journal of Law & Technology 841, 841 ff., especially 860 ff.
65 For some early technical approaches, see Binder and others, “Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers” in Wilson, Kim, and Herlands (eds), Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex Systems (2016) 1 ff.; on methods of visualization, see Yosinski and others (n 14). So far, however, it has only been possible to get systems to explain specific aspects of their decisions.
(c) Right of access by the data subject (Art 15 GDPR): right to information about
profiling results?
Since algorithm-based decisions are only as good as the dataset on which they were
trained, it is particularly important for those affected by the decision to gain insight
into the data basis. This is often the only way for individuals to ensure that the process does not rest on an incorrect decision basis that distorts the result.
It is therefore consistent and meritorious that the GDPR guarantees individuals the
right to obtain information about the personal data that a controller processes
(Art 15(1) GDPR).67
66 See already Martini (n 38) 1020; Wachter, Mittelstadt, and Russell (n 64) 863 ff.
67 Nor can a right to obtain information on the results of a profiling be derived from the minimum protection rights of Art 22(3) GDPR. See Martini (n 19) 190, 202.
This right does not necessarily include giving individuals the right to view the
profiles created by a system as a result of the processing. Profiles that are created as a
result of processing are indeed personal data. In principle, however, neither the right
to information (Art 15(1) GDPR) nor the right to rectification (Art 16(1) GDPR)
extends to them. Rather, the GDPR is based on the idea that the right to information
does not capture the forum internum of the data controller where his internal
process of forming an opinion, preparing decisions and his business secrets are
concerned (Art 15(4), Recital 63 sentence 5 GDPR). A right to know the processing
results and evaluations obtained by someone as a result of processing personal data
is, seen from the other side of the coin, an obligation to disclose one’s opinion about
others. It would interfere substantially with the (negative) freedom of opinion
guaranteed by fundamental rights and strike at the heart of the conception of
private autonomy underlying our legal system. The GDPR did not take this step.
A subjective right to gain an insight into the formation of a profile should be
granted in legal relationships characterized by an asymmetry of information and
power. These typically include, for example, the vertical relationship between the
citizen and the government, performance profiles recorded by an education service
provider, or employment relationships68 (provided that no overriding confidentiality
or security interests are in conflict).69
68 In a recent decision a German labor court rightly held that the right to access the collected data on an employee’s professional conduct and performance can be restricted on the basis of Art 15(4) GDPR if the employer obtained the information from another employee who might likely suffer disadvantages if the data (and thus his identity) were revealed, LAG Baden-Württemberg Urt v 20.12.2018, ECLI:DE:LAGBW:2018:1220.17SA11.18.0A, para 182 f.
69 Martini (n 19) 202 ff.
70 Martini (n 19) 209 ff.
71 See Martini (n 38) 1022; Martini (n 19) 202 ff.
quality aspects of the software, such as compliance with the principle of non-
discrimination.
A state audit procedure has to examine not only the source code of deterministic
procedures, but also standardized training processes76 and statistical models of
machine-learning algorithms. A special focus should, for example, be applied to
test data and to whether the software correctly integrates a non-discriminatory data
base. At the same time, consistent measures to protect the trade secrets of audited
software systems have to be a key element of an appropriate audit system.77
It is, however, neither reasonable nor necessary to apply a regime of state
supervision to every single software application – just as a bicycle, unlike a car,
needs no permission to participate in road traffic. A permission requirement for
marketing or applying certain software products (as in pharmaceutical law) is
possible, but needs to be strictly limited to dangerous use-case scenarios. Only
algorithmic procedures that are sensitive to fundamental rights should thus be
subject to ex-ante supervisory and standardization procedures.
In the case of private providers, an ex-ante evaluation should only be carried out on
software applications that typically involve special risks of discrimination (e.g.,
automated evaluation of job candidates) or have a lasting effect on the life plans
of individuals. In addition, software applications whose errors can lead to sustained
risks to life or limb (e.g., autonomous vehicles, care robots or medical analysis
systems), and sensitive forms of human‒machine collaboration (e.g., exoskeletons
or cobots) could be subject to a prior permission process. The same would apply to
the use of new technologies that allow a particularly high degree of personal
evaluation, especially facial recognition and sentiment analysis.
If the public administration applies methods of algorithmic decision-making (for
example, to allocate places at universities, to undertake tax assessments or to support
decisions of the judiciary), an ex-ante control in the public sector should be
mandatory and extend to a much wider scope.
For certain (sensitive) software applications the EU could complementarily
consider setting up a register. A registration requirement would give supervisory
authorities an overview of specifically risky algorithmic practices and could thus
help to improve their supervisory capabilities in individual cases.
78 Council Directives 2000/43/EC, 2000/78/EC, 2004/113/EC and 2006/54/EC.
79 For contracts between private individuals that are not the subject of labor law, the anti-discrimination directives of the EU furthermore only apply to gender and race discrimination. In this respect, the scope of the German General Equal Treatment Act goes beyond the EU’s legislation. See Art 1 and Art 3 para 1 of Council Directive 2004/113/EC for gender discrimination as well as Art 3 para 1h) of Council Directive 2000/43/EC for racial or ethnic discrimination. See also the broader implementation of these directives in Ss 2, 19 para 1 German General Equal Treatment Act.
80 Section 19(1) of the German General Equal Treatment Act.
81 An alternative way could be to extend the scope of the German General Equal Treatment Act to certain new constellations (e.g. consumer contracts concluded on the basis of a scoring algorithm).
(b) onus of proof Since the individual is regularly denied insight into the
opponent’s decision-making processes and documents when algorithms are used,
it will be difficult for them to demonstrate unlawful discrimination, even under the
shifted onus of proof in European anti-discrimination law (e.g. Art 8 of Council
Directive 2000/43/EC). Under the current anti-discrimination law, a plaintiff has to
prove at least “facts from which it may be presumed that there has been direct or
indirect discrimination.” Even this can be very difficult if the plaintiff has no chance
to obtain other data to compare it to their own. The EU legislator should clarify the
provisions regarding the burden of proof in anti-discrimination law by adding that
black-box evaluations are sufficient evidence for algorithm-based procedures.
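The black-box evaluation envisaged here can be illustrated with a simple paired test: probe the opaque system with profiles that are identical except for a protected attribute and record the outcome gap. A minimal sketch, in which the `score` function, the field names and the profiles are purely hypothetical stand-ins for a provider's system:

```python
# Hedged sketch of a black-box paired test (audit-study style). A systematic
# outcome gap between otherwise identical profiles can serve as prima facie
# evidence of discrimination without any access to the model's internals.

def paired_test(score, profiles, attribute, values):
    """Mean outcome gap between two variants of each profile."""
    gaps = []
    for p in profiles:
        a = dict(p, **{attribute: values[0]})
        b = dict(p, **{attribute: values[1]})
        gaps.append(score(a) - score(b))
    return sum(gaps) / len(gaps)

# Toy stand-in for the provider's opaque scoring model (assumption):
def score(applicant):
    base = applicant["income"] / 1000
    return base - (5 if applicant["gender"] == "f" else 0)

profiles = [{"income": 40000}, {"income": 60000}]
mean_gap = paired_test(score, profiles, "gender", ("m", "f"))
print(mean_gap)  # 5.0 – a positive gap means worse scores for variant "f"
```

Because the plaintiff only needs the system's outputs, such tests fit the evidentiary role the text assigns to black-box evaluations.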
In German law, the burden-of-proof reversal privilege of Section 22 AGG does not
cover injured parties in the important area of bulk business in civil-law transactions,
as the structure of the Act shows. The legislator should extend the scope of
application of the provision to these transactions. As an expression of the principle
of equality of arms, the legislator should simultaneously impose higher standards of
exculpatory evidence (Entlastungsbeweis) under Section 21(2) Sentence 2 AGG for
algorithm-based decisions than for human decisions.
84 See in detail: Martini (n 19) 243 ff.
For this purpose, audit algorithms that analyse the decision results of other
algorithms can operate as important testing tools. They follow the guiding principle,
“You shall know them by their fruits.” Audit algorithms can systematically examine
the decisions of an adaptive system or of proprietary software for abnormalities,
bringing evidence of unlawful behavior to light. By applying the same statistical
methods, audit algorithms can detect which factors are particularly significant in
the decision-making of the system examined. Artificial intelligence thus provides
the instruments to check and balance itself.
As algorithms cannot precisely define what unlawfulness is, neither the basic
algorithm nor the audit algorithm can prevent discriminatory behavior. All they can
do is collect evidence for further investigation.
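A minimal sketch of such an audit algorithm, operating only on a system's logged decision results: it compares selection rates between groups and flags abnormalities for further investigation. The "four-fifths" threshold and the data are illustrative assumptions borrowed from employment-testing practice, not a standard adopted in this chapter:

```python
# Sketch of an audit algorithm: examine logged decisions "by their fruits",
# i.e. compare group selection rates, and flag – not prove – discrimination.

def selection_rate(decisions, group):
    hits = [d["approved"] for d in decisions if d["group"] == group]
    return sum(hits) / len(hits)

def disparate_impact(decisions, g1, g2):
    r1, r2 = selection_rate(decisions, g1), selection_rate(decisions, g2)
    return min(r1, r2) / max(r1, r2)

# Illustrative decision log: group "a" approved 8/10, group "b" 4/10.
decisions = (
    [{"group": "a", "approved": True}] * 8
    + [{"group": "a", "approved": False}] * 2
    + [{"group": "b", "approved": True}] * 4
    + [{"group": "b", "approved": False}] * 6
)
ratio = disparate_impact(decisions, "a", "b")
print(ratio, ratio < 0.8)  # 0.5 True – flagged for further investigation
```

Consistent with the text, a flag like this is only evidence for further investigation, not a finding of unlawfulness.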
The scope of administrative supervision includes the mathematical-statistical
validity of the conclusions drawn by a software-based system – such as scoring or
profiling. The probability values on which it is based and the conclusions it draws
from them should be checked to see whether the assumptions underlying the
decision model are methodologically correct and consistent with the values
of law and society. Only criteria that are verifiably relevant to the decision may be
included in the decision model. They must justify a well-founded presumption that
there is a relevant link between an input variable and a desired result. The more the
information relates to the private sphere of an individual or even exposes intimate
details, the more relevant the information must be to the subject matter of the
decision. For certain evaluation contexts, the legal system should formulate prohibitions
of use; for example, price differentiation algorithms should – in principle – not be
allowed to take into account the state of health of the person concerned.
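In very simplified form, the validity check described above could be implemented as a filter: a criterion enters the decision model only if it is not on a prohibition list and shows a demonstrable statistical link to the outcome. The variables, toy data and the correlation threshold below are invented for illustration:

```python
# Minimal sketch of a criterion validity check: admit a criterion only if it
# is permitted and verifiably relevant to the decision outcome.

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

PROHIBITED = {"health_status"}  # e.g., barred from price differentiation

def admissible_criteria(data, outcome, threshold=0.3):
    """Keep only permitted criteria with a demonstrable link to the outcome."""
    return [
        name for name, values in data.items()
        if name not in PROHIBITED
        and abs(pearson(values, outcome)) >= threshold
    ]

outcome = [1, 0, 1, 1, 0, 1]                  # past decision results
data = {
    "payment_history": [1, 0, 1, 1, 0, 1],    # strongly linked: kept
    "health_status":   [1, 0, 1, 1, 0, 1],    # linked but prohibited
    "shoe_size":       [1, 1, 1, 0, 0, 0],    # no demonstrable link
}
print(admissible_criteria(data, outcome))  # ['payment_history']
```

A real audit would use proper significance testing on large samples; the point here is only the two-stage filter of prohibition plus verifiable relevance.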
give this new support unit a framework as an authority that monitors markets and
products to establish supervisory mechanisms for certain particularly dangerous
software applications. In Germany, the Physikalisch-Technische Bundesanstalt
(PTB) and the Bundesamt für Sicherheit in der Informationstechnik (BSI) offer
an institutional set-up that could serve as a blueprint.
be in place. The documentation should pay special attention to the modelling of the
software application as well as its decisions. Learning steps should also be logged if
applicable.
Art 30 GDPR already requires a record of processing activities. But its obligations are
limited to elementary data, in particular the name of the processor, the purposes of
the processing, and so on.88 The record under Art 30 GDPR thus lags behind
reasonable requirements for active logging of program sequences. Art 5(2) and
Art 24(1)(1) GDPR likewise do not formulate logging of the processing steps of algorithm-
based systems as a mandatory duty – at least not sufficiently clearly.89 The European
Union legislator should establish such a logging duty and define its scope precisely.
However, a comprehensive log and its evaluation, especially with decentralized or
adaptive systems, can be extremely costly and can quickly become a disproportionate
burden for the service provider. Therefore, the scope of the obligation should
depend on the risks in respect of personality and other fundamental rights, and
should include hardship clauses.
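What such a logging duty might require of a provider can be sketched as a tamper-evident decision log. The field names and the hash chain are design assumptions for illustration, not requirements laid down in Art 30 GDPR:

```python
# Sketch of risk-proportionate decision logging: each algorithm-based
# decision is recorded with enough context to reconstruct it later, and a
# hash chain makes after-the-fact tampering detectable.
import hashlib
import json
from datetime import datetime, timezone

def log_decision(log, model_version, inputs, output):
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,   # which (learning) state decided
        "inputs": inputs,
        "output": output,
        # link to the previous entry's hash; None for the first record
        "prev": hashlib.sha256(
            json.dumps(log[-1], sort_keys=True).encode()
        ).hexdigest() if log else None,
    }
    log.append(entry)
    return entry

log = []
log_decision(log, "credit-model-1.3", {"income": 40000}, {"score": 612})
log_decision(log, "credit-model-1.3", {"income": 60000}, {"score": 688})
print(len(log), log[1]["prev"] is not None)  # 2 True
```

Because comprehensive logging of adaptive or decentralized systems can be costly, as the text notes, the depth of such records would in practice be scaled to the risks the application poses.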
3.3.4.1 Liability
(a) burden of proof (reverse onus clause) In the absence of insight into
the decision-making process, consumers can hardly prove – or even identify –
infringements, causalities and fault when a service provider or data controller uses
algorithms. This structural asymmetry has similarities to medical malpractice and
producer’s liability. Just as in these cases,90 the legislator should put a reverse onus
88 Hartung, “Art 30 DSGVO” in Kühling and Buchner (eds), Datenschutz-Grundverordnung, Bundesdatenschutzgesetz (2nd edn, CH Beck 2018) margin nos 16 ff.; Martini (n 44) margin nos 5 ff.
89 See in detail Martini (n 19) 260 ff.
90 In several constellations under German and European law, the law provides for a shift of the burden of proof in contrast to the ordinary procedural principle of production of evidence. A medical practitioner must prove that a gross error in treatment was not the cause of damage to the patient, provided the patient has first proven the treatment error (section 630h(5) Sentence 1 BGB). According to section 1(1) of the German Product Liability Act (Produkthaftungsgesetz – ProdHaftG), the person who has put the product on the market must compensate for damage to life, health or property belonging to another person unless they can prove that certain statutory exemptions (section 1(2) ProdHaftG) exist, exceptionally excluding their liability. The claimant only has to prove the damage, the product defect and the causality of the defect for the damage beforehand.
(b) strict liability? Software applications do not pose a general threat in the
same way as the use of motor vehicles, the performance of surgery or the keeping of
animals. But in particularly sensitive fields of application (such as digitized medical
applications and nursing robots), a similarly strict liability for damage caused by
automated processes is reasonable to compensate for injuries to important legal
interests such as life and limb. In such cases, compulsory insurance could
complement the liability scheme. Providers who profit from software applications
should have to answer for their faults and risks – even if the faults are due to
emergent (unpredictable) system behavior.91
Some even argue for a liability of the intelligent system itself.92 As a legal entity,
such an “electronic person” would be the algorithmic equivalent of a corporation as
a legal entity. It could provide considerable savings on transaction costs for eco-
nomic players and might guarantee seamless liability. Whether a legal personality is
needed, however, is another matter: mechanical systems do not (yet) possess the
freedom to make their own decisions. They are based on the programming of
natural persons and are (so far) put into use by other natural persons, to whom their
behavior can be attributed. From today’s perspective, it is therefore not necessary to
construct a separate legal entity for this purpose.
Competitors have their own economic incentive to prevent the use of unlawful
algorithms by other market players. The law should thus extend opportunities to
issue a formal warning concerning the use of discriminatory or otherwise infringing
software applications.93
In the same breath, it should prevent the misuse of the right to issue a warning,
establishing a system of checks and balances. The right to issue warnings should be
limited in scope. Competition law could, for example, limit the number of eligible
competitors, establish a qualitative lower threshold to exclude minor infringements
from claims and regulate the compensation costs for warnings (e.g., by capping
them to a maximum amount and excluding contingency fees) in order to decrease
incentives for resourceful law firms to misuse the instrument.
incorporated any form of legal action in which immediate rights are granted to a person by
court decision, when the respective person has not deliberately participated in the court
proceedings. But Germany has newly implemented a model law suit for consumers’ associ-
ations as a preliminary step to further individual actions (Section 606 Code of Civil Procedure
[ZPO]; Musterfeststellungsklage (“model proceeding”); see the legislative bill of the Federal
Government, BT-Drs 19/2439. This action combines the benefits of a class action and a
consumers’ association action. In a model case, a collective action can clarify legal issues in
one procedure. The individuals can subsequently benefit from the results of the model case for
their individual (damage) claim. The need for a form of representative action became
politically relevant with the Volkswagen scandal, see Stadler, “Musterfeststellungsklagen im
deutschen Verbraucherrecht” (2018) 33 Verbraucher und Recht (VuR) 83, 83. Section 606 ZPO
follows the example of the only existing form of a collective action in Germany initiated with
the Capital Markets Model Case Act (Gesetz über Musterverfahren in kapitalmarktrechtlichen
Streitigkeiten – KapMuG) in 2005. The scope of the KapMuG covers damage claims in the
field of capital markets. Section 606 ZPO addresses further economic sectors to strengthen
consumer rights. Meanwhile, the European legislature aims higher with a proposal for a
directive on representative actions for the protection of the collective interests of consumers
(Proposal for a Directive of the European Parliament and the Council on representative
actions for the protection of the collective interests of consumers, and repealing Directive
2009/22/EC, 11/04/2018, COM(2018) 184 final) facilitating damage claims brought by qualified
entities. The Commission’s proposal goes beyond the German form of model proceedings.
However, its scope is restricted to certain sectors such as financial services, energy, telecommu-
nications, health and the environment (13). It would be advisable to extend its scope to
infringements of personality rights by algorithmic applications.
95 Another regulatory option to facilitate litigation could be an intervention right for consumers’ associations in civil proceedings that allows them to bring an action for an injunction against the algorithm-based application in question.
competence clearly reach beyond a civil-law, inter partes valid injunction judgement.
An additional preventive competence can therefore only be justified objectively if an
inquisitorial principle is applied in the respective judicial proceedings (see for
example Section 86(1) Administrative Court Code [Verwaltungsgerichtsordnung]).
A right to publicize such judgements may also be appropriate. The winning party
may then have the judgement published at the expense of the unsuccessful party if it
has a legitimate interest. The pillory effect emanating from these proceedings is
deliberately intended, according to the will of the legislator, to create additional
incentives to refrain from unlawful conduct as a preventive measure.
The hybrid approach used with the German Corporate Governance Codex98 –
set out in Section 161 German Companies Act (Aktiengesetz – AktG) – can serve as a
possible paradigm for a regulatory model.99 The Codex is not a statute. Rather, it
brings together diverse experience in a private panel of experts from the business
world.100 Section 161 AktG obliges those companies subject to stock exchange
trading to declare whether they have followed its recommendations. If a provider
does not comply with the recommendations, it must state its reasons. The regulation
mechanism of the Codex is based on the principle of “comply or explain.” It follows
the basic idea “let the market decide.”
In line with this concept, the legislator should establish an obligation for providers
of software applications that are particularly sensitive to fundamental rights to
commit themselves to an “Algorithmic Responsibility Codex.” A government com-
mission (consisting of elected deputies representing all sectors, including IT experts,
data protection advocates and consumer associations) would work out the content of the
Codex. Just as listed companies have to disclose their information for potential
investors to ensure transparency of investments, service providers (that offer
algorithm-assisted software applications posing a threat to personality and other
fundamental rights)101 should publicly comply with the rules of conduct for the
98 See Regierungskommission Deutscher Corporate Governance, Deutscher Corporate Governance-Kodex, <www.dcgk.de/de/kodex/aktuelle-fassung/praeambel.html> (11.03.2019).
99 On the economic function of corporate governance codes see v Werder, “Ökonomische Grundfragen der Corporate Governance” in Hommelhoff, Hopt, and v Werder (eds), Handbuch Corporate Governance (2nd edn, CH Beck 2009) 3. On the German code see Lutter, “Deutscher Corporate Governance Kodex” in Hommelhoff et al. (eds) 123; for the British Corporate Governance Code see Financial Reporting Council, “The UK Corporate Governance Code” (2016) <www.frc.org.uk/getattachment/ca7e94c4-b9a9-49e2-a824-ad76a322873c/UK-Corporate-Governance-Code-April-2016.pdf>. A brief overview of international codes is given by Wymeersch, “Corporate Governance Regeln in ausgewählten Rechtssystemen” in Hommelhoff et al. (eds) 137; on the historical development of those codes see Hopt, “Die internationalen und europarechtlichen Rahmenbedingungen der Corporate Governance” in Hommelhoff et al. (eds) 39.
100 Its impact and meaningfulness are not free of criticism. The criticism extends in particular to constitutional concerns over the influence of private parties in state legislation, which grants private parties a higher binding power than the constitution possibly permits. At worst, the Corporate Governance Codex gives private individuals the opportunity to impose declaration obligations on other legal entities. The Codex is also suspected of being more of a fig leaf of regulation than an effective instrument for improving corporate culture. See e.g. Habersack, “Staatliche und halbstaatliche Eingriffe in die Unternehmensführung (Gutachten E)” in Deutscher Juristentag (ed), Verhandlungen des 69. Deutschen Juristentages (CH Beck 2012) E 57 f.
101 In order to achieve its normative mission in the digital world, the application of the Declaration of Conformity (ideally incorporated in European law) should not depend on the requirement of a branch in a certain country, but should – just like the other obligations to which service providers are subject – follow the lex loci solutionis, where an offer is made to citizens of the European Union. EU law applies irrespective of whether the supplier is located in the European Union (see also Art 3(2) GDPR).
3.4 conclusion
We increasingly fail to understand how algorithms work. Conversely, algorithms are
becoming better and better at learning how we work. In the digital age – not
different from previous ages – the state is expected to protect the individual’s
autonomy and informational self-determination from impairments. This obligation
involves establishing an efficient audit system capable of handling the diverse and
growing use of (machine-learning) algorithms and ensuring the embedding of
society’s basic ethical values into automated systems. The GDPR has already cautiously raised
its regulatory index finger. However, it only provides effective answers for fully
automated decisions (Art 22 GDPR). Further regulatory steps should follow. The
whole spectrum of algorithmic processes that assist human decisions and shape our
daily lives cries out for tailored solutions.
Reasonable regulatory instruments are (inter alia): an algorithm audit entity with
inspection rights; cooperation obligations for the operators as well as the duty to
inform about the logic and scope of an algorithm-based procedure (not only in cases
of automated individual decision-making as in Art 22 GDPR); the obligation to
publish comprehensive impact assessments (not only with regard to data protection)
and install risk-management systems (for algorithmic systems involving special
dangers for the rights of third parties); and an extension of the scope of application
of (European) anti-discrimination legislation. An Algorithmic Responsibility Codex,
following, praeter propter, the regulatory concept of the UK and German Corporate
Governance Codes, would be a useful addition to this regulatory bundle.
As isolated national solutions cannot suffice to tackle transnational malpractices
executed by algorithms, the regulatory challenges should be solved on the highest
normative level possible – based on the common values of the European human
rights tradition. Apart from the efficiency of a harmonized regulation on EU level,
the regulatory competence to control algorithm-based procedures is in any case no
longer predominantly in the hands of the nation states. It is the European Union
that is competent in this field, especially regarding data protection law, according to
Art 16 para 2 Treaty on the Functioning of the European Union (TFEU). The
regulatory competence of the national legislature (with regard to the proposed
measures) is mainly limited to procedural rights, in particular the structuring of
rights of consumer associations, powers of warning notices in competition law and
the allocation of the burden of proof in civil-law suits.
Regulation is not a goal in itself. Rather, it is necessary to build confidence among
users in the new digital offering: only when the commitment of algorithmic systems
follows clear rules can trust be established. Trust building is a central task of a legal
system promoting welfare – just as state regulation has in the past contained the
dangers posed by cars or pharmaceuticals in order to ensure their suitability and
reliability for mass consumption.
Yet, regulation should not simply be exhausted in “German angst” and algorith-
mic necromancy. Regulatory ambitions notwithstanding, the legislature must be
careful not to overreact to the digital progression of society by obstructing the
potential for innovation offered by modern software applications. It should in
particular not burden innovative start-ups with a set of regulatory instruments
that leaves no adequate scope for development. The intensity of regulation
should correspond to companies’ profit prospects, and to the size and level of risk they
pose. Establishing a graded regulatory system based on a (sector- and/or application-
specific) diagnosis of how sensitive software applications are to fundamental rights
will be a challenge worth accepting. What is needed in this process is a healthy
balance between the risk of suffocating innovation and the foundations of a digital
humanism. In the tradition of the Enlightenment era, the categorical imperative
should point the way ahead for the digital world. Technology should always serve
the people – not the other way around.
4
Diana Sancho
introduction
Machine-learning algorithms are used to profile individuals and to make decisions
based on those profiles. The European Union is a pioneer in the regulation of automated
decision-making. The regime for solely automated decision-making under Article
22 of the General Data Protection Regulation (GDPR), including the interpretative
guidance of the Article 29 Working Party (WP29, replaced by the European Data
Protection Board under the GDPR), has become more substantial (i.e., less formal-
istic) than was the case under Article 15 of the Data Protection Directive. This has
been achieved by: endorsing a non-strict concept of ‘solely’ automated decisions;
explicitly recognising the enhanced protection required for vulnerable adults and
children; linking the data subject’s right to an explanation to the right to challenge
automated decisions; and validating the ‘general prohibition’ approach to Article 22
(1). These positive developments enhance legal certainty and ensure higher levels of
protection for individuals. They represent a step towards the development of a more
mature and sophisticated regime for automated decision-making that is committed
to helping individuals retain adequate levels of autonomy and control, whilst
meeting the technology and innovation demands of the data-driven society.
Automated Decision-Making under Article 22 GDPR 137
person.2 Profiling practices based on them are said to optimise the allocation of
resources by allowing private and public parties to personalise their products and
make more efficient choices.3 However, they can also be used to exploit consumers’
vulnerabilities and influence their attitudes and choices, which may result in unfair
discrimination, financial loss and loss of reputation.4
This chapter examines the legal mechanisms available in data protection law to
safeguard individuals from decisions which result from automated processing and
profiling. It considers, in particular, how the regime for automated decision-making
under the General Data Protection Regulation balances the interests of consumers
and their fundamental right to data protection against the demands of the data-
driven industry, such as the development of new products and services based on
artificial intelligence and machine-learning technologies.5 It will thus focus on
Article 22 GDPR and related provisions, and take a commercial perspective.6
The chapter has the following sections. Section 4.2 examines profiling and
automated decision-making and assesses the operation of Article 22 on procedural
grounds. Section 4.3 evaluates the concept of automated decision-making referred to
in Article 22(1) and demonstrates how the WP29 has helped this concept to become
more substantial (i.e., less formalistic). Section 4.4 analyses the so-called right to
human intervention and whether the legitimate interests of the controllers have any
with Personal Data’, Queen Mary University Legal Studies Research Paper 247, 2016 <https://
ssrn.com/abstract=2865811>.
2 See Burrell, ‘How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms’ (2016) Big Data & Society (BD&S) 1 ff.
3 See Surblyte, ‘Data as a Digital Resource’, Max Planck Institute for Innovation & Competition Research Paper No 16-12, 2016 <https://ssrn.com/abstract=2849303>; Information Commissioner’s Office (ICO), ‘Big Data, Artificial Intelligence, Machine Learning and Data Protection’ 15 ff. <https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf>; Lohsse, Schulze, and Staudenmayer (eds), Trading Data in the Digital Economy: Legal Concepts and Tools (2017) 13 ff.
4 See O’Neil, Weapons of Math Destruction (2016) 10 ff.; Zarsky, ‘Mine Your Own Business!: Making the Case for the Implications of the Data Mining of Personal Information in the Forum of Public Opinion’ (2003) Yale Journal of Law and Technology (YJLT) 19; Federal Trade Commission (FTC), Big Data: A Tool for Inclusion or Exclusion? (2016) <www.ftc.gov/system/files/documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt.pdf> 3 ff.; Centre for Information Policy and Leadership (CIPL), ‘Comments on the Article 29 Data Protection Working Party’s Guidelines on Automated Individual Decision-Making and Profiling’ (2017) <www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_comments_to_wp29_guidelines_on_automated_individual_decision-making_and_profiling.pdf> 1 ff. Also Navas, Inteligencia Artificial. Tecnología y Derecho (2017) 63 ff.
5 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation), OJ 2016 L 119/1. On the interests at stake, see ICO (n 3) 94 ff.; also CIPL (n 4) 1 ff.
6 The applicability of data protection law to private parties is not explicitly referred to in Article 16 TFEU, the legal basis for the GDPR, yet it is accepted that secondary EU law has extended the application of data protection rights and obligations to private parties: see Surblyte (n 3) 15 ff.; and Kokott and Sobotta, ‘The Distinction between Privacy and Data Protection in the Jurisprudence of the CJEU and the ECtHR’ (2013) 3 International Data Privacy Law (IDPL) 226 ff.
138 Diana Sancho
role to play as a basis for processing under Article 22. Section 4.5 examines the interplay between Article 22 and the information rights under Articles 13(2)(f), 14(2)(g) and 15(1)(h). A conclusion follows in Section 4.6.
7 Article 4(2) GDPR.
8 WP29, Opinion 4/2007 on the concept of personal data, WP136, 4 ff.
9 See Hildebrandt, ‘Defining Profiling: A New Type of Knowledge’ in Hildebrandt and Gutwirth (eds), Profiling the European Citizen: Cross-Disciplinary Perspectives (Springer Netherlands 2008) 17 ff.
10 On Article 3 GDPR, see Svantesson, ‘The Extraterritoriality of EU Data Privacy Law – Its Theoretical Justifications and Its Practical Effects on US Businesses’ (2014) Stanford Journal of International Law (SJIL) 55 ff.; Alsenoy and Koekkoek, ‘Internet and Jurisdiction after Google Spain’ (2015) 5 International Data Privacy Law (IDPL) 105 ff.; Sancho, ‘The Concept of Establishment and Data Protection Law: Rethinking Establishment’ (2017) 42 European Law Review (EL Rev) 491 ff.
including privacy by design and default requirements (i.e., the adoption of technical
and organisational measures to implement data protection obligations). Finally, a
system of supervisory authorities, redress mechanisms and liability rules enforces compliance.
Several classifications have been proposed to explain the usual development
stages of automatic processing and profiling.11 Although the language they use to
describe the different phases of processing varies, they all tend to identify the
following three stages of processing: collection, analysis and application.12
At the collection stage, the user (i.e., the controller) gathers personal data from a
variety of sources, not merely from the data subjects.13 Massive amounts of personal
data are collected from internet resources, mobile devices and apps, through ambient intelligence technologies embedded in everyday objects (e.g., furniture, vehicles and clothes) and from the human body itself (e.g., biometric data).14
The value of data is often unknown at collection and can only be realised after the data is (re)processed over and over again for different purposes.15 In the analytical stage, potent computational frameworks are used to store, combine and analyse large quantities of data in order to generate new information. Data mining increasingly relies on machine-learning algorithms to profile individuals.16 These differ from traditional algorithms in that they feed on vast amounts of data and can adapt their own operating rules.17
11 For a general classification on the lifecycle of personal data processing, see OECD, ‘Exploring the Economics of Personal Data: A Survey of Methodologies for Measuring Monetary Value’, OECD Digital Economy Papers No 220, 2013 <http://dx.doi.org/10.1787/5k486qtxldmq-en> 11 ff.; also FTC (n 4) 3 ff. Specifically on profiling, Hildebrandt, ‘The Dawn of a Critical Transparency Right for the Profiling Era’, in Bus et al. (eds), Digital Enlightenment Yearbook (2012) 44 ff.; also Kamarinou, Millard, and Singh (n 1) 8 ff.
12 Interestingly, almost none of the available classifications explicitly consider the expiration/destruction of data as the final stage of processing; see Moerel and Prins, ‘Privacy for the Homo Digitalis: Proposal for a New Regulatory Framework for Data Protection in the Light of Big Data and the Internet of Things’ (2016) <https://ssrn.com/abstract=2784123> 12 ff.
13 See Rubinstein, ‘Big Data: The End of Privacy or a New Beginning?’ (2013) 3 International Data Privacy Law (IDPL) 74 ff.
14 Ibid. See also Rouvroy, Privacy, Data Protection, and the Unprecedented Challenges of Ambient Intelligence. Studies in Ethics, Law and Technology (Berkeley Electronic Press 2008) <https://ssrn.com/abstract=1013984> 1 ff.; Tene and Polonetsky, ‘Big Data for All: Privacy and User Control in the Age of Analytics’ (2013) Northwestern Journal of Technology and Intellectual Property (NJTIP) 255 ff.
15 See Mayer-Schonberger and Padova, ‘Regime Change? Enabling Big Data through Europe’s New Data Protection Regulation’ (2016) Columbia Science and Technology Law Review (Colum Sci & Tech L Rev). See also Custers and Ursic, ‘Big Data and Data Reuse: A Taxonomy of Data Reuse for Balancing Big Data Benefits and Personal Data’ (2016) 6 International Data Privacy Law (IDPL) 13 ff.
16 See Anrig, Browne, and Gasson, ‘The Role of Algorithms in Profiling’ in Hildebrandt and Gutwirth (eds), Profiling the European Citizen: Cross-Disciplinary Perspectives (2008) 66 ff.
17 See Burrell, BD&S (2016) 6 ff.; O’Neil (n 4) 76 ff.; and Singh, Walden, Crowcroft, and Bacon, ‘Responsibility & Machine Learning: Part of a Process’ (2016) <https://ssrn.com/abstract=2860048>.
4.3.1 Classification
Different types of decisions derived from automated processing, including profiling,
can be distinguished. The nature of the agent making the decision represents an
obvious first classification criterion27 distinguishing human-based decisions from
machine-based decisions. Under this classification, automated decisions would
typically equate to machine-based decisions. An alternative approach to the same
criterion, however, would also consider the degree of human involvement in the
automated decision-making process.
This approach differs from the previous one, in that the nature of the agent involved is not conclusive as regards the ‘automated’ character of the decision. Since
most automated decisions happen to be machine based, it could be argued that this
approach is of little practical relevance. However, in an increasingly sophisticated
24 As Blume observes, ‘Communicativity does not seem to be the strength of the GDPR’; see ‘The Myths Pertaining to the Proposed General Data Protection Regulation’ (2014) 4 International Data Privacy Law (IDPL) 273 ff.
25 Moerel and Prins (n 12) 22 ff.; also Kamarinou, Millard, and Singh (n 1).
26 Article 9(1)(a) states, ‘the right not to be subject to a decision significantly affecting him or her based solely on an automated processing of data without having his or her views taken into consideration’; see the modernised Convention for the Protection of Individuals with Regard to the Processing of Personal Data, as it will be amended by its Protocol CETS No [223], at <https://rm.coe.int/16808ade9d>. On the other hand, Article 9(1)(c) refers to the right to obtain ‘knowledge of the reasoning underlying data processing where the results of such processing are applied to him or her’.
27 For a typology of types of profiling, see Hildebrandt (n 9) 25 ff.
4.3.2 Analysis
4.3.2.1 Actor
Article 22(1) states that the relevant decision has to be ‘based solely on automated
processing, including profiling’. Two interpretations of automated decisions under
Article 22(1) are possible. First, a strict interpretation excludes the application of this provision if the automated decision-making process has involved any form of human participation.30 This interpretation focuses on the nature of the agent making the decision, in line with the first criterion above. By contrast, the notion of automated decision-making referred to in Article 22(1) can also be defined by reference to the degree of human autonomy involved (or the lack of it). For the purpose of the definition of solely automated decisions under Article 22(1), this would imply that human involvement in the decision-making process does not mechanically exclude the application of this provision. Under this second interpretation, the key question is not whether a
28 Ibid 20 ff.
29 For example, individuals can challenge the legality of a Union act if they demonstrate that they are ‘individually’ and ‘directly’ concerned by it [under Article 263 TFEU and the Plaumann test, European Court of Justice (ECJ) 15.7.1963 case 25/62 (Plaumann/Commission), ECLI:EU:C:1963:17]; see, on the EU regime for judicial review, Hartley, The Foundations of European Union Law (7th edn, OUP 2010) 370 ff.
30 On the strict approach, for example, see Savin, ‘Profiling and Automated Decision Making in the Present and New EU Data Protection Frameworks’, Paper Presented at 7th International Conference Computers, Privacy, and Data Protection, Brussels, Belgium, 2014, 1 ff.; also Hildebrandt (n 9) 51 ff.
31 Expressed as the possibility for a person to ‘actively exercise [. . .] real influence on the outcome’; see Bygrave, ‘Minding the Machine: Article 15 of the EC Data Protection Directive and Automated Profiling’ (2001) Computer Law & Security Report (CL&SR) 9 ff.
32 See Mendoza and Bygrave, ‘The Right Not to Be Subject to Automated Decisions Based on Profiling’, University of Oslo Faculty of Law Legal Studies Research Paper Series (2017) 7 ff.; also Rouvroy, ‘Des données sans personne: le fétichisme de la donnée à caractère personnel à l’épreuve de l’idéologie des Big Data’ (2014) <https://works.bepress.com/antoinette_rouvroy/55/> 12 ff.
33 WP29, ‘Guidelines on Automated individual decision-making and Profiling for the purpose of Regulation 2016/679’, WP251rev.01; the Guidelines were adopted on 6 February 2018 (they revise a previous draft version which was adopted on 3 October 2017).
34 Ibid 21 ff.
4.3.2.2 Recipient
The regime for solely automated decisions under Article 22 GDPR applies to
individual decisions (the provision’s title reads, ‘Automated individual decision-
making, including profiling’). Moreover, the protection granted in Article 22 operates
regardless of whether the data subject plays an active role in requesting the decision
(e.g., the data subject applies for a loan) or whether a decision is made about them
(e.g., the data subject is excluded from an internal promotion within an organisation). Article 22(1) also stipulates that automated decision-making targets decisions on the ‘data subject’ rather than the natural person.35 The explicit reference to the data subject in paragraph (1) implies that Article 22 is intended to apply to a decision resulting from the processing of personal data of an identified or identifiable person.
This creates uncertainty as to whether the regime for solely automated decision-
making under Article 22 applies to individual decisions on data subjects based on the
processing of anonymised data.36 The Guidelines do not explicitly address this
point.37
The WP29 has confirmed that children’s personal data are not completely
excluded from automated decision-making under Article 22(1). The WP29 does not
consider that Recital 71 constitutes an absolute prohibition on solely automated
decision-making in relation to children.38 This is an important clarification which
reconciles the complete ban in Recital 71 with the silence of the main text of the
GDPR.39 The WP29 has taken the view that controllers should not rely on the
derogations in Article 22(2) to justify solely automated decision-making in relation to
children (contractual necessity, imposed by law or based on the data subject’s explicit
consent), unless it is ‘necessary’ for them to do so, ‘for example to protect [children’s]
35 This, however, was the intention of the Commission in its 2012 Proposal; see Vermeulen, ‘Regulating Profiling in the European Data Protection Regulation: An Interim Insight into the Drafting of Article 20’ (2013) Centre for Law, Science and Technology Studies <https://ssrn.com/abstract=2382787> 8 ff.
36 As discussed in Kamarinou, Millard, and Singh (n 1); also Savin (n 30) 9 ff.
37 The Guidelines recommend that controllers be able to apply anonymisation and pseudonymisation techniques in the context of profiling; see WP29, ‘Guidelines, WP251rev.01’, 11 ff. and 32 ff.
38 ‘[R]ecital 71 says that solely automated decision-making, including profiling, with legal or similarly significant effects should not apply to children. Given that this wording is not reflected in the Article [22] itself, WP29 does not consider that this represents an absolute prohibition’, ibid 28 ff.
39 Veale and Edwards, ‘Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling’ (2018) Computer Law & Security Review 398, 403 ff.
welfare’.40 Although this language may require further clarification (by the European
Data Protection Board or the ECJ in the context of a dispute),41 the references to
Recitals 71 and 38 and the view taken on Article 22 clearly suggest that the WP29 is
advocating the introduction of a restrictive system of solely automated decision-making in relation to children.42 This is further confirmed by the WP29’s statement that controllers processing children’s data under Article 22 must provide suitable safeguards, as is required both under Article 22(2)(b) and under Article 22(2)(a) and (c).43
4.3.2.3 Effects
Automated decisions under Article 22(1) are required to have ‘legal effects’ on or
‘similarly significantly affect’ the recipient. Since decisions producing ‘legal effects’
on data subjects impact on their legal rights or legal status,44 they are more easily
objectified: for example, decisions granting or denying social benefits guaranteed by
law or decisions on immigration status when entering the country.45 However, in the
absence of objective standards, the meaning of the phrase ‘similarly significantly
affects him or her’ remains contextual and subjective; typical examples include
automatic refusal of credit applications and automatic e-recruitment practices, as
reported in Recital 71. The WP29 has stated that the effects of the processing must be
‘sufficiently great or important to be worthy of attention’.46 It has also provided some
guidance on which decisions may have the potential for this. According to WP29,
these are decisions that ‘significantly affect the circumstances, behaviour or choices
of the individuals concerned, have a prolonged or permanent impact on the data
subject, or, at its most extreme, lead to the exclusion or discrimination of
individuals’.47
Targeted advertising does not ordinarily produce decisions which ‘similarly significantly’ affect individuals (e.g., banners automatically adjusting their content to the user’s browsing preferences, personalised recommendations and updates on available products).48 Some scholars, however, prefer not to exclude the application of Article 22(1) to targeted advertising practices that systematically
40 Ibid 28 ff.
41 For example, how the requirement ‘necessary’ for the controller is to be interpreted, or whether there are any other valid examples apart from welfare cases.
42 Industry representatives, however, advocate for a more flexible approach; see Centre for Information Policy and Leadership (CIPL), ‘GDPR Implementation in Respect of Children’s Data and Consent’ (2018) 23 ff., available at <www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_white_paper_-_gdpr_implementation_in_respect_of_childrens_data_and_consent.pdf>.
43 Ibid 28 ff.
44 See Bygrave (n 31) 7 ff.
45 WP29, ‘Guidelines, WP251rev.01’, 21 ff.
46 Ibid.
47 Ibid.
48 Mendoza and Bygrave (n 32) 20 ff.; Bygrave (n 31) 9 ff.; Savin (n 30) 4 ff.
49 Mendoza and Bygrave (n 32) 12 ff.; also O’Neil (n 4) 164 ff., discussing examples in the insurance sector.
50 Such as the intrusiveness of the profiling process; the expectations and wishes of the individuals; the way the advert is delivered; and particular vulnerabilities of data subjects (WP29, ‘Guidelines, WP251rev.01’, 22 ff.).
51 Ibid.
52 These include the choice and behaviours the controllers seek to influence, the way in which these might affect the child, and the child’s increased vulnerability to this form of advertising: ICO, ‘Consultation: Children and the GDPR guidance’ (2018) 5 ff., available at <https://ico.org.uk/media/about-the-ico/consultations/2172913/children-and-the-gdpr-consultation-guidance-20171221.pdf>.
53 ICO (n 3) 21 ff. (para 37); Moerel and Prins (n 12) on ‘pay how you drive’ 25 ff.
54 For example, O’Neil (n 4) 200 ff.; Zarsky (n 4) 19–20 ff.; Mantelero, ‘Personal Data for Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension of Data Protection’ (2016) Computer Law & Security Review (CL&SR) 238 ff.; Mantelero and Vaciago, ‘Data Protection in a Big Data Society: Ideas for a Future Regulation’ (2015) Digital Investigation (DI) 107 ff.; Baruh and Popescu, ‘Big Data Analytics and the Limits of Privacy Self-Management’ (2017) New Media and Society (NMS) 590 ff.; Hildebrandt (n 9) 52 ff.
4.4.1 Prohibition
Article 22(1) can be interpreted as a prohibition.58 Paragraph (1) is worded negatively
as it refers to the right of the data subject ‘not to be subject to . . .’. This corresponds
to a negative obligation for the controller (not to subject data subjects to solely
automated decisions). As a prohibition, Article 22(1) bans solely automated decision-
making categorically, unless one of the derogations in paragraph (2) applies (i.e.,
data subject’s explicit consent, where the decision is necessary for entering into or
performing a contract or is authorised by law). Under this approach, the law sets a
standard whereby the interests of data subjects not to be subject to automated
decision-making override the interests of controllers in engaging with it. The
resulting regime is both rigid and strict: rigid, because the legal standard is fixed
and allows no room for balancing competing interests (i.e., the ground of processing
based on the legitimate interests of the controller plays no role); and strict, because
the chosen legal standard ensures a high level of protection to individuals by default
55 See the example on page 22, which refers to the situation of an individual who is deprived of credit opportunities because of the behaviour of customers living in the same geographical area as him or her (WP251rev.01, 22 ff.).
56 For example, Mendoza and Bygrave (n 32) 7 ff.
57 Ibid 16 ff.
58 Discussing this, Wachter, Mittelstadt, and Floridi, ‘Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law (IDPL) 94 ff.; also Mendoza and Bygrave (n 32) 9 ff.
4.4.2 Right
Article 22(1) can also be interpreted as granting data subjects the right not to be
subject to automated decision-making. Under this interpretation, the interests of
controllers and data subjects are on an equal footing unless the data subject objects
to automated decision-making. If the latter enters an objection, the right not to be
subject to solely automated decision-making prevails. Compared to Section 4.4.1
(i.e., Article 22(1) as a prohibition), this interpretation is also rigid but less strict. It is
rigid because no competing interests are to be balanced against each other (i.e., the
law tolerates solely automated decisions based on the legitimate interests of control-
lers, unless the data subject lodges an objection). If the data subject objects, solely
automated decision-making is prohibited. It is less strict, however, because the
protection relies entirely on the data subject, who has to actively exercise the right
not to be subject to solely automated decision-making. Overall, this interpretation is
more beneficial to controllers than the previous one.
Under this approach, the right to human intervention may operate in one of two ways. Before any decision is formulated, Article 22(1) can be relied upon pre-emptively to avoid solely automated decision-making. In this case, the right to human intervention would reach the decision-making process ex ante, as in Section
4.4.1. On the other hand, if the data subject objects to a solely automated decision
already taken, the right to human intervention would apply ex post as a safeguard for
fair processing.
4.4.3 Derogations
Article 22(2) on automated decision-making admits one interpretation only. According to this provision, controllers’ interests in carrying out solely automated decision-making based on the explicit consent of the data subject (Article 22(2)(c)), on contractual necessity (Article 22(2)(a)) or on legal authorisation (Article 22(2)(b)) prevail over the data subjects’ right not to be subject to solely automated decision-making. The rule in Article 22(2) is most beneficial to private controllers. Although data protection authorities may have interpreted the ground ‘contractual necessity’ narrowly,59
59 The WP29 has clarified that this ground has to be construed narrowly and ‘does not cover situations where the processing is not genuinely necessary for the performance of a contract,
this ground does not require the data subject to provide consent to the processing.
Turning to consent, the GDPR requires it to be ‘explicit’. The WP29 has stated that
an obvious way to comply with this is to obtain written statements signed by the data
subject.60 The WP29 has also clarified that, in the digital context, this requirement
can be satisfied by the data subject by filling in an electronic form, sending an email,
uploading a scanned document (that carries the signature of the data subject) or
using an electronic signature.61
It is noteworthy that the rule in Article 22(2), although striking the balance in favour of controllers (who can engage in solely automated decision-making under certain conditions), is formulated in terms just as rigid as the rules in Sections 4.4.1 and 4.4.2 (i.e., those that interpret Article 22(1) as a prohibition and as a right, respectively). Under Article 22(2), the legislator sets a fixed standard according to which, if the controller demonstrates explicit consent or contractual necessity (or the decision is authorised by law), the processing is lawful. As regards the right to human intervention, it materialises here in Article 22(3) GDPR as a safeguard and operates ex post only.
but rather unilaterally imposed on the data subject by the controller’, Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46, WP217, 2014, 16 ff.
60 WP29, ‘Guidelines on Consent under Regulation 2016/679’, WP259, 19 ff.
61 Ibid.
62 WP29, ‘Guidelines, WP251rev.01’ 19 ff.
63 Ibid 15 ff.
64 See CIPL, ‘Comments on the Article 29’ 9 ff.
65 Ibid.
66 Under Article 13(1), at the time controllers obtain the data from the data subject; under Article 14(3)(a), if they have not obtained the data from the data subject, within a month, at the time of the first communication to the data subject or when the data are first disclosed to another recipient. At any time under the right of access in Article 15. For a study on the effectiveness of controllers’ responses to data access requests, see L’Hoiry and Norris, ‘The Honest Data Protection Officer’s Guide to Enable Citizens to Exercise Their Subject Access Rights: Lessons From a Ten-Country European Study’ (2015) 5 International Data Privacy Law (IDPL) 190 ff.
67 See Wachter, Mittelstadt, and Floridi (n 58) 83 ff.
73 See, for example, industry representatives, CIPL, ‘Comments on the Article 29’ 13 ff.; WP29, ‘Guidelines, WP251rev.01’ 24 ff.
74 See Wachter, Mittelstadt, and Floridi (n 58) 78, 89‒90 ff.; they base their interpretation on the non-binding nature of Recital 71 (which refers to the right to obtain an explanation of the decision reached), and a systemic analysis of Article 22 (which does not refer to such a right in paragraph 3) and Articles 13‒15. Cf Goodman and Flaxman, ‘European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”’ (2016) <https://arxiv.org/abs/1606.08813>.
75 Ibid.
76 See Selbst and Powles, ‘Meaningful Information and the Right to Explanation’ (2017) 7 International Data Privacy Law (IDPL) 233 ff., discussing ‘determinism’ in machine learning (239 ff.).
77 Malgieri and Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation’ (2017) 7 International Data Privacy Law (IDPL) 243 ff.
78 Ibid 258 ff.
79 Ibid.
80 See ICO (n 3) 95 ff.; also Kroll, Huey, Barocas, Felten, Reidenberg, Robinson, and Yu, ‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633 ff.
81 See WP29, ‘Guidelines, WP251rev.01’, 27 ff.; the revised Guidelines were adopted on 6 February 2018, whilst the draft version was adopted on 3 October 2017; noticeably, the draft version barely elaborated on Articles 13(2)(f), 14(2)(g) and 15(1)(h).
82 Ibid 25 and 27 ff.
83 Ibid 23.
84 Ibid 25 and 27.
85 Article 35(3)(a) reads, ‘A data protection impact assessment referred to in paragraph 1 shall in particular be required in the case of: a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person’.
Rules that set common standards of protection are ordinarily classified as general
rules, whereas rules that do not fall within this category may operate as special
provisions (if they override general rules) or qualified rules (if they offer additional
safeguards to those in the general framework). This distinction is important because
special rules displace, in principle, the otherwise applicable general rules, whereas
qualified rules apply in a cumulative manner. For instance, the Brussels Ibis
Regulation and the Rome I Regulation offer some well-known examples of special
provisions for the protection of consumers in international disputes, which displace
the otherwise applicable general provisions for non-consumers.86 Under the GDPR,
however, there is no evidence that the regulator has intended to deliver protection
strictly relying upon the interplay between special and general provisions. For
example, even assuming that the protection afforded to the categories of data referred to in Article 9 is meant to be special, this has not prevented the WP29 from supporting the cumulative application of the common grounds for processing to the special categories of data; according to the WP29, this interpretation is tenable where it ensures a higher level of protection to individuals, on a case-by-case basis.87
Article 22 on solely automated decision-making is often referred to as a qualified
provision.88 Certainly, it is difficult to categorise the protection afforded by Article
22 as special. Nothing in the GDPR suggests that this is the intention of the
legislator. Moreover, there is no such thing as a ‘general’ regime for automated
decisions outside Article 22. The GDPR does not specifically regulate automated
decisions falling outside Article 22.89 Like any other processing activity on personal
data, automated decision-making not meeting the requirements in Article 22(1) will
have to comply with the principles and rules of the GDPR.90
86 Regulation EU 1215/2012 (Brussels Ibis Regulation, OJ 2012 L 351/1) adopts a special regime seeking to protect consumers in cross-border disputes (Articles 17‒19); this regime displaces the general rules in Articles 4 and 7 for disputes between non-consumers. Also, Regulation EC 593/2008 on the law applicable to contractual obligations (Rome I Regulation, OJ 2008 L 177/6) introduces a special rule on the applicable law to consumer contracts in Article 6; this rule states the applicability of the law of the country where the consumer has his habitual residence (displacing the general rules in Articles 3 and 4, which point to the law freely chosen by the parties or the law of the vendor). In practice, however, the operation of the special rules for consumers may not always be consistent; see Rühl, ‘The Protection of Weaker Parties in the Private International Law of the European Union: A Portrait of Inconsistency and Conceptual Truancy’ (2014) 10 Journal of Private International Law (JPIL) 335 ff.
87 See WP29, WP217 (2014) 15 ff, which reads, ‘in conclusion, the Working Party considers that an analysis has to be made on a case-by-case basis whether Article 8 DPD in itself provides for stricter and sufficient conditions, or whether a cumulative application of both Article 8 and 7 DPD is required to ensure full protection of data subjects’.
88 See, for example, Mendoza and Bygrave (n 32) 11 ff.; also ICO (n 3) 21 ff (para 35).
89 These can be decisions: solely automated with trivial effects on the data subject; non-solely automated with significant effects on the data subject; or non-solely automated with trivial effects.
90 The GDPR sets the general framework for the processing of personal data; see ‘Explanatory Memorandum of the Commission’s Proposal for a Regulation on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation)’, COM(2012) 11 final, 2012/0011 (COD) 1 ff.
This, however, has not prevented the WP29 from blurring the boundaries between these two types of automated decision-making processes (i.e. within and outside Article 22): by requiring controllers to comply with risk management duties under Article 35(3)(a);91 and by recommending the application of notification rights, under Articles 13(2)(f) and 14(2)(g), to automated decision-making outside Article 22.92 To conclude, therefore, although these are positive proposals which help provide higher levels of protection to individuals, more coordinated efforts with regard to the development of these categories would provide greater clarity for the automated decision-making regime.
91 This results from the wording of Article 35(3)(a), which refers to ‘decisions’ (rather than to ‘solely’ automated decisions). WP29, ‘Guidelines, WP251rev.01’ 29 ff.
92 As a matter of good practice, ibid 25 ff.
4.6 conclusion
This chapter illustrates the benefits of the joint intervention of the EU legislator and the WP29 ‒ currently, the European Data Protection Board ‒ in protecting individuals in a data-driven society. Together these two actors have contributed to modernising the regime for solely automated decision-making under Article 22 GDPR. The WP29 interpretative guidance on automated decision-making and profiling shows a determined commitment to making solely automated decision-making more substantial (i.e., less formalistic). This is achieved by: implementing an interpretation of the term ‘solely’ which does not exclude nominal human involvement (i.e., involvement lacking the ability to influence or change the automated output); explicitly acknowledging the need to enhance protection of vulnerable adults and children under the ‘similarly significant effects’ test and the safeguards in Article 22(2)(b), (3) and (4); and linking the data subject’s right to meaningful information to the right to challenge a decision. The WP29 has also confirmed the strict and rigid nature of Article 22, meaning that solely automated decision-making is limited to the data subject’s explicit consent, contractual necessity, legal authorisation and the specific requirements for specially protected data under paragraph (4). Outside these categories, the general prohibition in paragraph (1) makes solely automated decision-making unlawful.
These developments represent progress towards the introduction of a sustained and more advanced regime for solely automated decision-making. Compared to Article 15 DPD, they improve legal certainty and provide data subjects with higher levels of protection in solely automated decision-making processes. However, it has to be noted that there is nothing intrinsically revolutionary about them. Although it is clear that they provide new and more articulated mechanisms to address data subjects’ needs for enhanced protection, they do so without altering the underlying regulatory paradigm, which they inherit from the Data Protection Directive. After all, solely automated decision-making remains limited to specific types of decisions and grounds for processing, and requires the adoption of safeguards.
The main question this raises is whether the higher standards of protection in
Article 22 GDPR, including controllers’ new transparency and accountability duties,
will allow data subjects to maintain adequate levels of autonomy and control in the
era of machine-learning algorithms and big data. This will have to be assessed
against the practice of solely automated decisions as it develops under the GDPR.
If the revised regime proves incapable of empowering individuals effectively, whilst
allowing the technological and innovative drive of the data-driven society, a more
ambitious regulatory intervention will be required.
5
Robot Machines and Civil Liability
Susana Navas
introduction
The legal consideration of a robot machine as a ‘product’ has led to the application
of civil liability rules for producers. Nevertheless, some aspects of the relevant European regulation suggest that special attention should be devoted to reviewing this field in relation to robotics. Types of defect, the meanings of the term ‘producer’, the
consumer expectation test and non-pecuniary damages are some of the aspects that
could give rise to future debate. The inadequacy of the current Directive 85/374/
EEC for regulating damages caused by robots, particularly those with self-learning
capability, is highlighted by the document ‘Follow up to the EU Parliament
Resolution of 16 February 2017 on Civil Law Rules on Robotics’. Other relevant
documents are the Report on “Liability for AI and other emerging digital technolo-
gies” prepared by the Expert Group on Liability and New Technologies, the “Report
on the safety and liability implications of Artificial Intelligence, the Internet of
Things and Robotics” [COM(2020) 64 final, 19.2.2020] and the White Paper “On
Artificial Intelligence – A European approach to excellence and trust” [COM(2020)
65 final, 19.2.2020].
1 As is known, the term ‘robot’ was created by Josef Čapek, who was born in the Czech Republic. In 1920, Josef used it in conversation with his brother Karel, who made the term known in the play R.U.R. (Rossum’s Universal Robots). ‘Robot’ came from the Czech word robota, meaning ‘worker slave’. In addition, the word ‘robotist’ was created by Isaac Asimov in 1941, referring to a person studying or building robots (Asimov, I, Robot, Gnome Press 1950). Regarding the origin of the term ‘robot’, see Horáková and Kelemen, ‘The Robot Story: Why Robots Were Born and How They Grew Up’ in Husbands, Holland, and Wheeler (eds), The Mechanical Mind in History (MIT Press 2008) 307.
Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:30:27, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.006
would act or at least seem to act autonomously and interact with human beings.2 However, robots are something more than this or, at least from a technological viewpoint, much more than the collective imagination takes them to be. Thus, depending on what is understood by the word ‘robot’ ‒ and how a robot is represented ‒ particular rules will regulate robots. Therefore, from the legal perspective, not all cases relating to robots should be treated in the same manner.
2 It is very common automatically to assign physical features like those of a person or an animal to a robot machine (Richards and Smart, ‘How Should the Law Think about Robots?’ in Calo, Froomkin, and Kerr (eds), Robot Law (Edward Elgar 2016) 6).
3 Calo, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 Cal L Rev 513; Palmerini and Bertolini, ‘Liability and Risk Management in Robotics’ in Schulze and Staudenmayer (eds), Digital Revolution: Challenges for Contract Law in Practice (Nomos Verlag 2016) 235.
4 Artificial intelligence entities, known as electronic or autonomous agents, have raised interesting legal questions concerning the conclusion of contracts by electronic means. I will not deal with this topic in this chapter, but instead would refer the reader to my work: Navas and Camacho (eds), Mercado digital. Reglas y principios jurídicos (Tirant Lo Blanch 2016) 99. Also see Loos, ‘Machine-to-Machine Contracting in the Age of the Internet of Things’ in Schulze, Staudenmayer, and Lohsse (eds), Contracts for the Supply of Digital Content. Regulatory Challenges and Gaps (Nomos Verlag 2017) 59‒83.
5 Funkhouser, ‘Paving the Road Ahead: Autonomous Vehicles, Products Liability and the Need for a New Approach’ (2013) 1 Utah L Rev 437‒462.
6 European Commission, ‘Statement on Artificial Intelligence, Robotics and Autonomous Systems’, European Group on Ethics in Science and New Technologies (March 2018), available at <http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf>. Date of access: April 2020; Karnow, ‘The Application of Traditional Tort Theory to Embodied Machine Intelligence’ in Calo, Froomkin, and Kerr (n 2) 55.
7 Stone and Veloso, ‘A Survey of Multiagent and Multirobot Systems’ in Balch and Parker (eds), Robot Teams: From Diversity to Polymorphism (Taylor & Francis 2002) 37.
Robot Machines and Civil Liability 159
8 Millar and Kerr, ‘Delegation, Relinquishment and Responsibility: The Prospect of Expert Robots’ in Calo, Froomkin, and Kerr (n 2) 102; Brynjolfsson and McAfee, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (WW Norton & Company Ltd 2016) 24‒27, 50, 65, 92‒93, 192, 207, 255; Ford, The Rise of the Robots: Technology and the Threat of Mass Unemployment (Oneworld 2015) 102‒106, 108, 153‒155; Balkin, ‘The Path of Robotics Law’ (2015) 6 Cal L Rev Circ 45.
9 Sironi, FinTech Innovation: From Robo-Advisors to Goal Based Investing and Gamification (Wiley 2016) 21.
10 The common core is the analysis of massive data (knowledge-based AI) and the obtaining of smart data in order to suggest solutions or diagnoses given the purpose or purposes for which these data are handled (Mayer-Schönberger and Cukier, Big data. La revolución de los datos masivos (Turner Madrid 2013)).
11 Perritt Jr and Sprague, ‘Drones’ (2015) 7(3) Vand J Ent & Tech L 673; Perritt Jr and Sprague, ‘Law Abiding Drones’ (2015) 16 Colum Sci & Tech L Rev 385; Ford (n 8) 122, 173.
12 Brynjolfsson and McAfee (n 8) 14‒15, 19, 55, 80, 200, 206‒207, 219; Ford (n 8) 96, 175‒186; Rifkin, La sociedad de coste marginal cero (Paidós 2014) 285‒286.
13 Ebers, ‘La utilización de agentes electrónicos inteligentes en el tráfico jurídico: ¿necesitamos reglas especiales en el Derecho de la responsabilidad civil?’ (2016) 5 InDret, <www.indret.com>. Date of access: April 2020.
14 Navas, ‘El internet de las cosas’ in Navas and Camacho (n 4) 32.
15 Stone and Veloso (n 7) 37; Navas, ‘Agente electrónico e inteligencia ambiental’ in Navas and Camacho (n 4) 91.
achieve specific purposes. To plan also means to have regard to the perceived
information in order to select actions or determine situations or behaviour that
should take place in the future.
In addition, the choice between different behaviours, and thus the planning of
future actions, should be made as quickly as possible to enable the system to
respond, for instance, in milliseconds to any external circumstance. Lastly, the
system must act, that is, perform the foreseen plan, for which the machine usually
has an electronic system different from the traditional mechanical and hydraulic
system that was previously employed. Actions and behaviours modify and transform
the environment in which the machine is located.16
16 Calo (n 3) 513; Funkhouser, ‘Paving the Road Ahead: Autonomous Vehicles, Products Liability and the Need for a New Approach’ (2013) 1 Utah L Rev 437‒462.
17 Karnow (n 6) 55.
18 We can find great artificial intelligence systems in the field of music, where algorithms can compose pieces emulating the style of Mozart or Chopin, or computational programs capable of painting and drawing better than many artists and with a level of creativity even higher than that of a human (<www.robotart.org>; Schlackman, ‘The Next Rembrandt: Who Holds the Copyright in Computer Generated Art’, Art Law Journal (22 April 2016), available at <https://alj.orangenius.com/the-next-rembrandt-who-holds-the-copyright-in-computer-generated-art/>. Date of access: April 2020), or designing buildings that astonish many famous architects, or producing journalistic reports that would perplex many journalists, or programs that propose judgments and write decisions for the greater delight of judges and tribunals. Some more examples are described by Carr, Atrapados. Cómo las máquinas se apoderan de nuestras vidas (Taurus 2014) 15.
19 Funkhouser (n 16) 437‒462.
20 Nanotechnology or nanorobotics is an emerging technology that already has relevant applications in the domains of medicine, electronics and the building industry. Nevertheless, nanorobots have many more future applications such as in nutrition or oral hygiene (Ford (n 8) 235‒245; Serena, La nanotecnología (CSIC Madrid 2010) 95).
21 Feil-Seifer and Matarić, ‘Defining Socially Assistive Robotics’ (2005) Proceedings of the 2005 IEEE 9th International Conference on Rehabilitation Robotics, 28 June ‒ 1 July, Chicago; Levy, Amor + Sexo con Robots (Contextos Paidós 2007) 133; Turkle, The Second Self: Computers and the Human Spirit (Simon & Schuster 1984).
22 Funkhouser (n 16) 437‒462.
23 See Camacho, ‘La subjetividad ciborg’ in Navas (ed), Inteligencia artificial. Tecnología. Derecho (Tirant Lo Blanch 2017) 231‒257; Navas and Camacho, El ciborg humano. Aspectos jurídicos (Comares Granada 2018); Aguilar, Ontología Cyborg. El cuerpo en la nueva sociedad tecnológica (Gedisa 2008) 13; Hughes, Citizen Cyborg: Why Democratic Societies Must Respond to the Redesigned Human of the Future (Basic Books 2004) 3; Ramachandran, ‘Against the Right to Bodily Integrity: Of Cyborgs and Human Rights’ (2009) 1(87) Denver U L Rev 17‒20; Clark, Natural-Born Cyborgs. Minds, Technologies and the Future of Human Intelligence (Oxford University Press 2003) 13; Zylinska, The Cyborg Experiments. The Extensions of the Body in the Media Age (Continuum 2002) 15.
24 Donati et al., ‘Long-Term Training with a Brain-Machine Interface-Based Gait Protocol Induces Partial Neurological Recovery in Paraplegic Patients’ <www.nature.com/scientificreports>. Date of access: April 2020.
25 Robert, ‘Impresoras 3D y 4D’ in Navas (ed), Inteligencia artificial. Tecnología. Derecho (Tirant Lo Blanch 2017) 197‒230. Concerning views on 4D printing at the Self-Assembly Lab of the Massachusetts Institute of Technology (MIT), www.selfassemblylab.net, see Tibbits, Self-Assembly Lab: Experiments in Programming Matter (Routledge 2017) 29. In the field of medicine, see Mitchell, Bio Printing: Techniques and Risks for Regenerative Medicine (Elsevier 2017) 3; Kalaskar (ed), 3D Printing in Medicine (Elsevier 2017) 43. In the domain of architecture and engineering, see Casini, Smart Buildings: Advanced Materials and Nanotechnology to Improve Energy (Elsevier 2016) 95; European Commission, ‘Identifying Current and Future Application Areas, Existing Industrial Value Chains and Missing Competences in the EU, in the Area of Additive Manufacturing (3D-printing)’ (DOI 10.2826/72202), Executive Agency for Small and Medium-Sized Enterprises (Brussels 2016) 34.
robot’.26 In this case, changes can be made to the system by third parties without
compromising its performance of tasks.
Robot machines and virtual robots can be either closed or open robots, although
the former are frequently closed robots (e.g., robots for industry), whereas the latter
are usually open robots (e.g., Dr Watson, Deep Blue27 or Google AlphaGo).28
hand, we must take into account that robots have been employed in the real world,
interacting with people as assistant robots, nurse robots, drones or, in general, autonomous means of transportation. Thus, as well as the robot machine producer’s
liability, there is the robot owner’s liability and the designer-engineer’s liability. In
studying these topics, it is important to deal with robot machines and virtual robots
separately. Since virtual robots are computer programs, the regulations related to
computer programs should be applied to them. Robot machines can be regarded as
a ‘movable good’, one of the different parts of which could be a computer program
(e.g. drones or driverless cars). Notwithstanding this, when a robot is part of a movable or immovable good, it can be seen, in the traditional classification of goods, as an ‘immovable good’ by destination or by incorporation, depending on the particular case (e.g., surgical arms30 or automotive-industry arms).
30 Sankhla, ‘Robotic Surgery and Law in USA ‒ A Critique’ <http://ssrn.com/abstract=2425046>. Date of access: April 2020.
31 Commission staff working document, ‘Liability for emerging digital technologies’, SWD(2018) 137 final.
32 OJ L 157/24, 9.6.2006. At the time of writing, the abovementioned directive is being reviewed [Artificial Intelligence for Europe, SWD(2018) 137 final].
33 Smith, ‘Lawyers and Engineers Should Speak the Same Robot Language’ in Calo, Froomkin, and Kerr (n 2) 78; Leenes and Lucivero, ‘Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behavior by Design’ (2014) 6(2) Law, Innovation and Technology 193‒220.
In the international arena, there are the well-known ISO standards that, in the field of industrial robots, are particularly taken into account by the EU and the Member States. ISO 10218-1 and 10218-2 have been complemented by ISO/TS 15066:2016.34 Other relevant ISO standards are ISO 26262, concerning the functional safety of road vehicles, and ISO/IEC 15288, in relation to systems and software engineering.
software. In relation to therapeutic or assistant robots (such as the well-known Robot
Pepper) that accompany minors during medical treatment, help disabled people
with daily activities or assist elderly people in their homes, it is foreseeable that the
human has physical contact with the robot or that their home should have certain
dimensions or other specific requirements. Certain security and safety standards
must therefore be established, as well as mechanisms that, in certain situations,
could automatically switch off the robotic system to prevent damage being caused.
The design should therefore emphasize the ability of the robot to comply with
certain legal and even social requirements.35 The document ‘Follow up to the EU
Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics’ recom-
mends that this type of robot (an assistant or collaborative robot) should be given
particular consideration and mentions their possible future regulation. For this
reason, specialized technical committees have been set up, such as the ISO/TC
299 Robotics Committee, which is exclusively dedicated to the design of rules
relating to robotics. In this regard ISO 13482:2014 should be taken into consideration.
Additionally, the context in which the robot performs its autonomous activity can
require it to respect certain legal rules that can, like technical norms, affect its
activity through the design of the artificial intelligence system embedded within it.
This is the case with driverless cars, which must pay particular attention to traffic and
safety rules as well as those concerning liability.36 Nowadays, researchers work with
algorithms that allow intelligent agents to recognize norms and respect them,
adapting to the uncertain and always changing context in which they interact.37
Because, in these cases, we are dealing with assistant rather than industrial robots,
from a legal point of view the producer must take other rules into account,
particularly Directive 2001/95/EC of the European Parliament and of the Council of 3 December 2001 on general product safety,38 and Council Directive 85/374/EEC of
25 July 1985 on the approximation of the laws, regulations and administrative
provisions of the Member States concerning liability for defective products.39 The
latter directive will be discussed in Section 5.4.
34 <www.iso.org/obp/ui/#iso:std:iso:ts:15066:ed-1:v1:en>. Date of access: April 2020.
35 Wynsberghe, ‘Designing Robots for Care: Care Centered Value-Sensitive Design’ (2013) 19 Sci Eng Ethics 407‒433; Leenes and Lucivero (n 33) 193‒220.
36 Castells, ‘Vehículos autónomos y semiautónomos’ in Navas (n 25) 101‒121.
37 Criado, Argente, Noriega, and Botti, ‘Reasoning about Norms under Uncertainty in Dynamic Environments’ (2014) 5 International Journal of Approximate Reasoning 2049‒2070; Navas, ‘Derecho e Inteligencia artificial desde el diseño. Aproximaciones’ in Navas (n 25) 23‒72.
38 OJ L 11/4, 15.1.2002.
39 OJ L 210, 7.8.1985.
Other interesting cases are rules concerning respect for, or adaptation to, the
environment through, for example, channelling or intelligent infrastructures that
take advantage of nanotechnology and 4D printing.40
Close to the domain of robotics are brain‒computer interfaces, which consist of
artificial systems that interact with the nervous system through neurophysiological
signals and are used, for instance, by people with disabilities during the execution of
certain motor activities.41 Cyborgs are one field in which these interfaces could have
full application.
It is important to bear in mind that national legal systems impose a duty to inform, so that a person gives informed consent to the implantation of the artificial system in question.
In short, if a robot or an autonomous artefact is to be put on the market, legal rules
can determine not just its corporeal structure but also its capabilities, through the
design of the artificial intelligence system itself. For this purpose, it is useful for
sensors allowing information to be received from the environment to be incorpor-
ated so that the robot is able to adapt to changing circumstances.
40 The fact that the environment is relevant for the development of robotic capabilities has been highlighted in the study of the iCub robot, in which real situations have been recreated: Ribes, Cerquides, Demiris, and López de Mántaras, ‘Active Learning of Object and Body Models with Time Constraints on a Humanoid Robot’ (2016) IEEE Transactions on Autonomous Mental Development <www.iiia.csic.es/~mantaras/TAMD.pdf>. Date of access: April 2020.
41 Camacho (n 23) 231‒257.
42 In the USA, see Balkin (n 8) 45.
43 Hubbard, ‘Sophisticated Robots: Balancing Liability, Regulation and Innovation’ (2015) 66 Fla L Rev 1803, 1862‒1863; Richards and Smart (n 2) 6.
44 Kelley, Schaerer, Gomez, and Nicolescu, ‘Liability in Robotics: An International Perspective on Robots as Animals’ (2010) 24(13) Advanced Robotics 1861‒1871, DOI: 10.1163/016918610X527194.
same way as animals in some national legal systems such as those of Germany,
Switzerland or Austria. Where the robot machine is used by a supplier of services,
could one treat their liability for damages as vicarious liability in the same way as a
principal is liable for damage caused by assistants?45 In my view, this option suggests
that robot machines and employees have the same legal status, which is doubtful.
The fact that they perform similar jobs does not mean that they deserve equal legal
consideration.
While I do not believe a specific rule is needed to regulate liability in the case of
owning a robot, policy makers should amend civil codes to regulate civil liability for the
possession of potentially dangerous goods, including robots or smart artefacts.46
Whether this is considered on the basis of fault (with a possible presumption iuris
tantum of lack of diligence, as in cases concerning the responsibility of parents or
guardians for the acts of minors under their charge) or of strict liability (as in cases of
animals or the handling of potentially dangerous machines), obtaining insurance with
a minimum level of cover for the damage caused by the robot should be compulsory.
I do not agree with the idea suggested by some scholars that, although third parties
should be compensated by the owner, responsibility should be assigned to the machine
itself.47 In such a case, the machine would be deemed to be a child, that is, a human
person, or, at least, legal personality would be assigned to it. This is not yet the case,
although it could become the case in the future through rulings by national policy
makers.48 In my opinion, if treating a robot as a ‘holder of rights and duties’ makes any sense, it is as ‘the subject’ to which the action causing damage is ‘attributed’, whilst ‘the subject’ to be held ‘liable’ is the human. Thus, it would be a (new) case of civil liability for someone else’s act.
45 Palmerini and Bertolini (n 3) 241.
46 Spindler, ‘Roboter, Automation, künstliche Intelligenz, selbst-steuernde Kfz ‒ Braucht das Recht neue Haftungskategorien?’ (2015) 12 CR 775.
47 Beck, ‘Grundlegende Fragen zum rechtlichen Umgang mit der Robotik’ (2009) 6 JR 229‒230; Ebers (n 13) 8; Kersten, ‘Menschen und Maschinen’ (2015) 1 JZ 1‒8.
48 Loos (n 4) 59‒83.
49 2015/2103 INL.
50 27.01.2017, A8‒0005/2017.
Resolution of 16 February 2017 on Civil Law Rules on Robotics. These two docu-
ments also focus on the need to regulate the civil liability for damage caused by robots.
The European Parliament’s resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088(INI)),51 and the Report on ‘Liability for AI and other emerging digital technologies’ prepared by the Expert Group on Liability and New Technologies, in which the need for a review of liability rules is highlighted, should also be taken into consideration.
Although compensation for damages caused by defects in robots and other intelli-
gent machines can be awarded according to national producer liability legislation,
classical issues regarding the application of this legislation to such ‘products’ will
arise when it comes to future reviews of this legislation.52 In fact, the inadequacy of
the current Directive 85/374/EEC for regulating damages caused by robots, particularly
those with self-learning capacity, is highlighted by the 'Follow-up' document
mentioned above.53 Some topics for a possible future review of EU legislation on
producer liability are presented below.
51 P8_TA-PROV(2019)0081.
52 Howells and Willet, '3D Printing: The Limits of Contract and Challenges for Tort' in Schulze and Staudenmayer (eds), Digital Revolution: Challenges for Contract Law in Practice (Nomos Verlag 2016) 67; Solé, El concepto de defecto del producto en la responsabilidad civil del fabricante (Tirant lo Blanch 1997) 563; Salvador and Ramos, 'Defectos de productos' in Salvador and Gómez, Tratado de responsabilidad civil del fabricante (Thomson Civitas Cizur Menor 2008) 135.
53 At the time of writing, the above-mentioned directive is being reviewed (Artificial Intelligence for Europe, SWD(2018) 137 final).
54 Fairgrieve et al. in Machnikowski (ed), European Product Liability (Intersentia Cambridge 2016) 47.
168 Susana Navas
Given that robots are becoming increasingly sophisticated, the ‘state of scientific
and technical knowledge existing at the time when he put the product (the robot)
into circulation’ is especially relevant for assessing the producer’s defence against
liability (Art 7(e) Directive 85/374/EEC). Software upgrades and updates call into
question the application of the so-called "development risks" exception.
55 As is well known, the criterion used by the Directive to define the 'defectiveness' of a product is not a subjective criterion but an objective and normative one (Wuyts, 'The Product Liability Directive – More than Two Decades of Defective Products in Europe' (2014) 5(1) JETL 12).
56 In contrast to the position in the USA (see § 2 Restatement Third of Torts: Product Liability), the Directive does not distinguish between types of defect. However, in practice, courts in the Member States differentiate between manufacturing defects, design defects and instruction defects (Fairgrieve et al. in Machnikowski (ed), European Product Liability 53).
57 Spindler (n 47) 769; Castells (n 36) 115–121.
58 Hubbard (n 43) 1821–1823; Ebers, 'Autonomes Fahren: Produkt- und Produzentenhaftung' in Oppermann and Stender-Vorwachs (eds), Autonomes Fahren (CH Beck 2017) 111–112.
59 Calo, 'Robotics & the Law: Liability for Personal Robots' <http://ftp.documation.com/references/ABA10a/PDfs/2_1.pdf>. Date of access: April 2020.
the regulation of robots and the Report on Liability for Artificial Intelligence clearly
opt for the introduction of liability irrespective of fault on the part of the producer of
the robot in all cases concerning defects. In addition, the proposal states that the
owner of a robot should take out compulsory insurance for damage caused to another,
and requires the creation of a compensation fund that covers all damage that cannot
be covered by that insurance.60
60 'Follow-up to the EU Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics'.
61 Balkin (n 8) 45. This scholar raises the question, but he does not propose a concrete solution. For the same view, see Beck (n 48) 227.
62 Hubbard (n 43) 1821–1823.
63 Calo, 'Open Robotics' <http://ssrn.com/abstract=1706293>. Date of access: April 2020; Cooper (n 26) 166–167.
64 Salvador, 'Causalidad y responsabilidad (versión actualizada)' (2002) InDret 3 <www.indret.com>. Date of access: April 2020; Luna, 'Causalidad y su prueba. Prueba del defecto y del daño' in Salvador and Gómez (eds), Tratado de responsabilidad civil del fabricante (Thomson Aranzadi Cizur Menor 2008) 471–476; Ruda, 'La responsabilidad por cuota de mercado a juicio' (2003) InDret 3 <www.indret.com>. Date of access: April 2020. All these authors quote widely from the North American bibliography, which is the source of this approach.
65 COM(1999) 396 final.
66 The Third Report concerning Directive 85/374/EEC, of 2006, does not refer to these suggestions [COM(2006) 496 final].
67 Bräutigam and Klindt, 'Industrie 4.0, das Internet der Dinge und das Recht' (2015) NJW 1137–1143; Grünwald and Nüssing, 'Machine to Machine (M2M)-Kommunikation. Regulatorische Fragen bei der Kommunikation im Internet der Dinge' (2015) MMR 378–383.
significant change in the current rules concerning the responsibility of the manu-
facturer. On the basis of expert systems, defects of any kind that appear can be fully
identified and corrected almost immediately, at least if the system is able to repair
itself, or the defective mechanism can be stopped, which can prevent or minimize
damage. The knowledge of the defect that is immediately acquired by the liable
person allows them to take urgent measures in this regard (for example, modifying
the software or warning the user about the possible risk of damage and the best steps
to take to avoid it). It is worth mentioning that questions raised about the responsi-
bility of the producer after the product is put into circulation, in relation to
identifying a defect that can cause damage, must be answered according to the
general rules of civil liability under domestic law.68
68 Machnikowski (ed), European Product Liability (Intersentia 2016).
69 Regarding the meaning of this in relation to producers' civil liability, see Solé (n 53) 97–102; Salvador and Ramos (n 53) 146–152.
70 According to § 2(b): 'A product is defective (. . .) in design when the foreseeable risks of harm posed by the product could have been reduced or avoided by the adoption of a reasonable alternative design by the seller or other distributor, or predecessor in the commercial chain of distribution, and the omission of the alternative design renders the product not reasonably safe . . .' (§ 2 Rest. Third. Torts: Products Liability: Categories of Product Defect). This is suggested by Hubbard (n 43) 1854–1855.
71 Salvador and Ramos (n 53) 182–184.
72 Navas (n 14) 58.
73 Gómez, 'Daño moral' (1999) 1 InDret <www.indret.com>. Date of access: April 2020; Gómez, 'Ámbito de protección de la responsabilidad de producto' in Salvador and Gómez (eds), Tratado de responsabilidad civil del fabricante (Thomson Aranzadi Cizur Menor, 2008) 662, footnote 9.
74 This is highlighted by Marín, Daños por productos: estado de la cuestión (Tecnos 2001) 152; Alcover, La responsabilidad civil del fabricante. Derecho comunitario y adaptación al Derecho español (Civitas 1990) 80; Martín and Solé, 'El daño moral' in Cámara (ed), Derecho Privado Europeo (Colex 2003) 859–860.
75 See a comparative overview of non-pecuniary damages in Horton (ed), Damages for Non-pecuniary Loss in a Comparative Perspective (Springer 2001) 279.
76 Magnus, 'La reforma del derecho alemán de daños' (2003) 4 InDret <www.indret.com>. Date of access: April 2020.
77 <http://www.gesetze-im-internet.de/bundesrecht/prodhaftg/gesamt.pdf>. Date of access: April 2020.
be amended to make sure that non-pecuniary damage falls within its scope of
protection.78 In fact, the Resolution of the European Parliament to the Commission
of February 2017 on standards in relation to robotics warns that the rules on civil
liability should cover all possible damage caused by a robot, given, as has already
been indicated, that not all cases involving a robot fall within the scope of the
directive’s current wording.
5.5 Conclusions
The internet of things, as well as robots and other intelligent machines, presents a
challenge to civil liability norms, giving rise to the need for an articulated system that
can respond to the new situations that could occur. It should not be forgotten that
permanent communication between intelligent machines, or systems that are
capable of repairing themselves, or expert robots that make decisions at critical
moments, can drastically reduce the number of accidents, with a consequent
decrease in fatalities and in bodily injuries with long-term consequences.
This may have a major economic impact, not just in the field of health.79 The
impact will be of particular importance in the insurance sector.80
Permanent communication between intelligent machines can allow machines
themselves to adapt constantly to new technical and scientific advances or to adapt
to their environment on the basis of the existing knowledge in a specific domain or
for a specific technique (e.g., the materials with which pipes are produced, in
relation to pipelines or other pieces of infrastructure). This will inevitably, and
sooner rather than later, affect the rules on the civil liability of the producer and
owner of a robot or intelligent machine. Robotics, then, can give a great opportunity
to review and finally amend different aspects of the producer liability rules that,
since 1999, have been left outside the political agenda of the Community’s public
bodies.81 In any case, future "personalized" information based on customer preferences,
needs and capabilities, derived from the analysis of the massive data stored by the
manufacturer, could make it possible to "personalize" liability, avoiding a
one-size-fits-all rule.
78 For more arguments and literature concerning this approach see: Navas, 'Daño moral y producto defectuoso. Estado de la cuestión legal y jurisprudencial en España' (2016) 13 Revista Crítica de Derecho privado (Uruguay) 525–573.
79 In the case of autonomous vehicles, it is estimated that they can reduce fatalities by 90%. This, in turn, can mean savings of billions of euros per year in medical care (Rifkin (n 12) 285–287). In fact, robotics is a current matter on the agenda of the World Economic Forum <www.weforum.org/es/agenda/archive/artificial-intelligence-and-robotics/>. Date of access: April 2020.
80 In relation to driverless cars, see the considerations of some important insurance companies in <www.driverless-technologies-insurance.com>. Date of access: April 2020.
81 At the time of writing, the abovementioned directive is being reviewed (Artificial Intelligence for Europe, SWD(2018) 137 final).
6
Extra-Contractual Liability for Wrongs Committed by Autonomous Systems
Ruth Janal*
Introduction
As robots and intangible autonomous systems increasingly interact with humans, we
wonder who should be held accountable when things go wrong. This chapter
examines the extra-contractual liability of users, keepers and operators for wrongs
committed by autonomous systems. It explores how the concept of ‘wrong’ can be
defined with respect to autonomous systems and what standard of care can reason-
ably be expected of them. The chapter also looks at existing accountability rules for
things and people in various legal orders and explains how they can be applied to
autonomous systems. From there, various approaches to a new liability regime are
explored. Neither product liability nor the granting of a legal persona to robots is an
adequate response to the current challenges. Rather, both the keeper and the
operator of the autonomous system should be held strictly liable for any wrong
committed, opening up the possibility of privileges being granted to the operators of
machine-learning systems that learn from data provided by the system’s users.
* The author would like to thank Rebecca Sieber for her research assistance and for proofreading
this chapter.
which terms to search for in a search engine and whether to conclude a contract
with a customer.
The hope is that a more prominent use of autonomous systems will cause the
incidence of damage events and loss to fall,1 as artificial intelligence will outperform
humans in cognitive tasks. While this vision may or may not come true, those times
certainly have not yet arrived. Autonomous cars may crash,2 vacuum robots may eat
hair,3 and a whole spectrum of previously unheard-of categories of damage has
arisen: search engines that suggest defamatory search terms,4 advertising
networks that display adverts for high-paying jobs to men rather than to
women5 and image-recognition technology that categorizes persons of colour as
gorillas.6
Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:30:19, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.007
176 Ruth Janal
legal persona is to separate the ownership from the management of an entity and
limit shareholder liability. To that effect, the legal entity is provided with its own
assets. However, currently it does not seem economically efficient to endow robots
with their own assets, at least if those assets are supposed to be sufficient to cover
potential losses caused by the system. The same purpose can be more easily
achieved simply by making insurance a requirement.10 Even if the idea of legal
personality for robots gains traction over time, it is difficult to imagine each and
every kind of autonomous system endowed with its own assets, such as bank
accounts for vacuum robots, intelligent irrigation systems and internet search
engines.11
Extra-Contractual Liability for Wrongs Committed by Autonomous Systems 177
An example illustrates the various players. Let us assume that an autonomous car
is involved in a car accident, and the victim is seeking compensation. The victim
might sue the car manufacturer who produced the car and put the product into
circulation (Volkswagen). Another option is to make a claim against the operator of
the autonomous system that has continuously collected data from all the cars
equipped with the autonomous system and integrated this data in the updates which
it regularly sends to the cars (Waymo, the Google autonomous car company). And
obviously, a claim might also be made against the keeper/owner of the car or the
driver/passenger of the car.
Any of the four parties may bear responsibility for the accident by virtue of their
own wrongdoing: the manufacturer may have installed a defective sensor; the
operator may have installed an update which was not thoroughly tested; the keeper
may have ignored a notice to update the system; and the driver/passenger may have
ignored system warnings or other obvious signs that a sensor was dirty and thus not
operating properly. Apart from liability for any wrongdoing committed, some of
these parties might also be liable under strict liability rules.
This chapter takes a closer look at all but one of the parties named above,
addressing the liability of the user, the keeper and the operator of an autonomous
system. Product liability is considered by Susana Navas in Chapter 5. This chapter
looks at extra-contractual liability and is not concerned with contract law.
It is also the case that many losses are covered by insurance and, depending
on the applicable rules, any person who suffered a loss may have a direct
claim against the tortfeasor’s insurer. Again, this is not the subject of the present
chapter.
6.2.1.1 England
Under English law, a person will be liable under the tort of negligence if they are
under a duty of care towards the eventual victim, if they have breached said duty and
if the breach has resulted in damage, established on the balance of probabilities.
However, damages that could not reasonably have been foreseen are considered to
be too remote and will not be compensated. Damages claims regularly turn on the
question of whether a duty of care existed towards a particular person or groups of
persons. There are accepted categories of a duty of care in case law: direct bodily
harm, product liability, legal malpractice, and so on. Further categories may be
developed by the courts, which will take an incremental approach and consider
three – admittedly vague – elements: proximity, foreseeability, and whether it is fair,
just and reasonable to impose a duty.13 Existence of a duty of care will be particularly
scrutinized in cases of pure economic loss.
A breach of duty occurs when a party fails to live up to the standard that a
reasonable person in their position is expected to meet, with specific
standards applying to professionals and to lay persons. In particular, children are only
expected to meet the standard of a reasonable child of the same age.14 Mental
impairment, however, is not an accepted defence.15
6.2.1.2 France
Under Articles 1240 and 1241 of the French Civil Code, liability arises for any damage
caused by faute. The concept of faute is best described as behaviour which does not
meet the standard of a just and cautious person or a good professional. Minor age
12 Zweigert and Kötz, Einführung in die Rechtsvergleichung auf dem Gebiete des Privatrechts (3rd edn, Mohr Siebeck Verlag 1996) 650.
13 Caparo Industries PLC v Dickman [1990] United Kingdom House of Lords (UKHL) 2.
14 Jackson v Murray and Another [2015] United Kingdom Supreme Court (UKSC) 5.
15 Van Dam, European Tort Law (2nd edn, Oxford University Press 2013) 276 with reference to exceptions.
and mental impairment are not accepted defences. While pure economic loss is an
accepted head of damage, losses will only be compensated if the damage is direct,
certain and legitimate, which gives the courts some discretion to exclude rather
remote damages. It should also be noted that the significance of liability for negli-
gent acts in French law has dwindled in light of the wide-ranging liability imposed
for the acts of things and employees.
6.2.1.3 Germany
Under German law, a negligent act will generally only give rise to liability if one of
the rights named in § 823(1) BGB was infringed, namely life, health, property,
freedom, personality and commercial enterprise (leaving aside some other grounds
for negligence, such as breach of statutory duty, § 823(2) BGB). As § 823(1) does not
list a party’s wealth as a protected right, a negligent causation of purely economic
loss generally does not give rise to compensation. In cases of indirect losses, the
alleged tortfeasor will be held liable if they were under a duty of care to prevent the
damage by monitoring and controlling a particular source of damage, such as
hazardous objects or activities (Verkehrssicherungspflicht). The victim must prove
causation to the satisfaction of the court and, similar to English law, liability for
damages will be denied where the loss could not reasonably have been foreseen by
the tortfeasor. Neither minors nor mentally impaired persons are held liable if they
lack the appropriate comprehension of why their actions are wrong.16
2017 saw the introduction of new rules to the German Straßenverkehrsgesetz
(StVG; Road Traffic Act), adapting the law for the emerging functions of highly
automated driving. Under §§ 1a and 1b StVG, it is legal to operate a car with highly
or fully automated driving functions as defined under the law. Drivers may switch
these cars into automated mode and turn their attention away from traffic, provided
that they remain sufficiently alert to immediately regain control whenever the
system asks them to do so or whenever it becomes obvious that the prerequisites
for the use of automated driving functions are no longer present. These duties of
care are generally in line with the above-mentioned safety duties for hazardous
items. However, the law is so vague that it fails to contribute to legal certainty.17
Furthermore, it is questionable whether current systems are able to recall drivers’
attention in time, given that humans need around 30 to 40 seconds to fully get ‘back
in the loop’18 ‒ i.e. to assess the vehicle’s situation and to respond accordingly. For
16 §§ 827, 828 BGB.
17 Schirmer, 'Augen auf beim automatisierten Fahren! Die StVG-Novelle ist ein Montagsstück' (2017) Neue Zeitschrift für Verkehrsrecht (NZV) 255.
18 Merat, Jamson, Lai, Daly, and Carsten, 'Transition to Manual: Driver Behaviour when Resuming Control from a Highly Automated Vehicle' in Merat and de Waard (eds), Transportation Research Part F: Traffic Psychology and Behaviour, Volume 27, Part B (Elsevier 2014) 274 et seq.
the time being, any driver who averts their attention from traffic should therefore be
considered to have acted negligently.
6.2.2.1 France
French law is certainly quick to assign the damage caused by a thing to the thing’s
keeper (gardien). Art 1242(1) of the Civil Code declares that ‘A person is liable not
only for the damage which he caused by his own act, but also for that which is
caused . . . by things which he has in his keeping’. Originally, this sentence was only
intended and understood to be an introductory note to the liability rules in Art 1242
(2) and following (which provide for strict liability of the keepers of animals and
buildings).20 In the nineteenth century, when industrialization led to a rapid
increase in accidents and victims were often unable to prove faute on the part of
the owners of machines, the French Cour de Cassation started to use Art 1242(1) as
the foundation of a strict liability regime.21 Over time, Art 1242(1) has come to be
understood as a general rule providing for strict liability of the keeper of a good.22
Strict liability arises whenever there is an intervention d’une chose, meaning that
the respective thing must somehow be involved in the creation of damage. It is
irrelevant whether that involvement is physical or merely psychological, whether the
19 Jourdain, Les principes de la responsabilité civile (9th edn, Dalloz 2014) 96; van Dam (n 15) 299.
20 Boyer, Roland, and Starck, Obligations. 1. Responsabilité délictuelle (5th edn, Elitec 1996) 201; Ferid and Sonnenberger, Das französische Zivilrecht (2nd part, 2nd edn, Verlag Recht und Wirtschaft GmbH 1986) chap 20, n 301; Jourdain (n 19) 85 et seq.; Zweigert and Kötz (n 12) 663 et seq.
21 Van Dam (n 15) 60.
22 Chambre des requêtes de la Cour de cassation (Cass. Req.) 19.1.1914, D. 1914, 1, 303; Cour de Cassation, Chambres réunies (Cass. Ch. Réun.) 13.2.1930, DP 1930, 1, 57 note Ripert = S. 1930, 1, 121 note Esmein – arrêt Jand'heur; Albrecht, Die deliktische Haftung für fremdes Verhalten im französischen und deutschen Recht (Mohr Siebeck 2013) 20.
harm is caused directly or indirectly, and whether the good is dangerous or generally
considered to be innocuous.23 Even if a person uses the object as an instrument to
create harm, that does not necessarily militate against the keeper’s responsibility.24
Instead, the French courts consider the particular role that the good has played in
the causation of damage. If the thing was in an orderly state at its orderly place, it
will be considered to have played a passive role (rôle purement passif). The thing will
then not be considered a major factor in the causation of harm and its keeper will
not be held liable for damages. On the other hand, if the thing was moving and thus
came into contact with the person harmed or the goods damaged, an active role in
the causation of damage will be presumed. It would then be up to the keeper to
exonerate himself by demonstrating contributory negligence of the victim25 or a case
of force majeure (events or effects that cannot be reasonably foreseen or
controlled).26
Special rules exist for specific items. Art 1243 provides a specific strict liability rule
for the keepers of animals, but case law does not distinguish between animals and
other things.27 The situation is different for cars, with the liability of the keeper of a
car subject to the so-called Loi Badinter.28 Compared to the strict liability regime
under Art 1242(1), the Loi Badinter restricts the defences available to the keeper. In
case of personal damage, the defence of contributory negligence can only be raised
under very limited circumstances.29 The keeper is moreover barred from raising the
defence of force majeure.30
This encompassing liability regime obviously warrants a closer look at the defin-
ition of gardien (keeper). Any person who possesses usage, control and supervision of
the good (usage, direction et contrôle) is considered its keeper, regardless of whether
the power of disposal is due to law or fact.31 For example, if the item is stolen, the
thief will be considered its new keeper. The former keeper will no longer be liable
for any loss incurred by the good, even if they did not keep the object in safe custody
23 Cass. Ch. Réun., DP 1930, 1, 57 note Ripert = S. 1930, 1, 121 note Esmein – arrêt Jand'heur; Jourdain (n 19) 88.
24 Boyer, Roland, and Starck (n 20) 223 et seq.
25 Cour de cassation chambre civile (Cass. Civ.) 6.4.1987, D. 1988, 32 note Chabas; Assemblée plénière de la Cour de cassation (Cass. Ass. Plén.) 14.4.2006, Bull. 2006, N 6, 12 = D. 2006, 1577; Jourdain (n 19) 96 et seq.
26 Cour de cassation chambre civile (Cass. Civ.) 2.7.1946, D. 1946, 392; Jourdain (n 19) 96 et seq.
27 Van Dam (n 15) 67.
28 Loi n. 85-677 du 5.7.1985.
29 Art 3 Loi Badinter; Cour de cassation chambre civile (Cass. Civ.) 20.7.1987, J.C.P. 1987, IV, 358–360; Cour de cassation chambre civile (Cass. Civ.) 8.11.1993, Bull. II no 316; Quézel-Ambrunaz, 'Fault, Damage and the Equivalence Principle in French Law' (2012) 3 Journal of European Tort Law (JETL) 21, 29.
30 Art 2 Loi Badinter.
31 Cour de Cassation, Chambres réunies (Cass. Ch. Réun.) 2.12.1941, Bull. civ. N. 292, 523 – arrêt Franck; Jourdain (n 19) 90 et seq.
and negligently facilitated the theft.32 The keeper will be liable for any damage that
has arisen irrespective of fault, mental sanity33 or a specific age. According to a
famous decision by the Cour de Cassation, even a small child falling off a swing with
a stick in his hand will be liable if the stick accidentally injures another child’s eye.34
Looking at these principles, one might be misled into thinking that the liability
for autonomous systems embedded in a physical object does not pose any problems
under French law. It seems a given that the object’s keeper is liable for any damage
caused by the object, unless the person harmed is principally responsible for the
damage or there is a case of force majeure. Quite surprisingly, however, several
French authors argue that the keeper should not be held liable for a robotic object
due to the keeper’s lack of control if the object is steered autonomously.35 It is also
important to note that software which is stored on a data carrier is not physical
enough to be considered a chose (thing).36 Arguably therefore, strict liability for
things under French law does not cover autonomous systems.
6.2.2.2 Germany
At first glance, the liability for things in German law follows a very different path
from the French law.
(a) Strict Liability for Motor Vehicles and Luxury Animals
Strict liability of the keeper of an item is only imposed upon the keepers of motor vehicles
(§ 7(1) StVG) and ‘luxury’ animals that do not serve an economic purpose for their
keeper (§ 833(1) BGB). Similar to French law, the keeper is considered to be the
person who benefits from the use of the good and who is able to control the object as
a source of risk.37 Contrary to French law, however, the keeper’s liability does not
end with the motor vehicle being stolen or misappropriated. Rather, keepers will
only be exonerated under § 7(3) StVG if they have not negligently facilitated the
misappropriation.38 The abstract risk of harm posed by motor vehicles and animals
alike provides justification for the keeper’s strict liability. As a consequence, the
32 Cass. Ch. Réun., Bull. civ. N. 292, 523 – arrêt Franck; Jourdain (n 19) 91.
33 Art 414 (3) C.Civ.
34 Assemblée plénière de la Cour de cassation (Cass. Ass. Plén.) 9.5.1984, Bull. 1984, ass. plén. n = D. 1984, 525 note Chabas – arrêt Derguini.
35 Mendoza-Caminade, D. 2016, 445, 447; Bonnet, La Responsabilité du fait de l'intelligence artificielle (Master de Droit privé général thesis, Université Paris 2 Panthéon-Assas 2015) 19 et seq.; Lagasse (2015) 12 CREOGN 2.
36 Cour d'appel de Paris, Pôle 5 (CA Paris Pôle 5) 9.4.2014 note Loiseau, CCE. N 6. 2014, 54 (regarding Google AdWords).
37 For vehicles see Bundesgerichtshof (BGH) 22.3.1983, Neue Juristische Wochenschrift (NJW) 1983, 1493; BGH, 26.11.1996, NJW 1997, 660; for animals see BGH, 19.1.1988, NJW-RR 1988, 656; Spindler in Beck'scher Online-Kommentar BGB (44th edn, 2017) § 833 n 1 et seq.; Wagner in Münchener Kommentar zum Bürgerlichen Recht (7th edn, CH Beck 2017) § 833 n 2.
38 For animal theft see Wilts, Beiheft Versicherungsrecht, Karlsruher Forum 1965, 1020.
Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:30:19, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.007
Extra-Contractual Liability for Wrongs Committed by Autonomous Systems 183
keeper will only be held liable if that specific risk has contributed to the damage
(i.e., the loss can be attributed to the unpredictability of animal behaviour).39 In this
context, it is irrelevant whether a vehicle was steered by a human or an autonomous
system. Contributory negligence by the person harmed is a valid defence, as is force
majeure.40
There is no consensus on the required minimum age for the keeper’s liability.
While some authors argue that liability should depend upon the individual cognitive
ability of the minor (§ 828 BGB),41 others look to the capacity to contract
following §§ 104 et seq. BGB.42 When a parent entrusts a motor vehicle or animal to
their child, the parent is generally considered to be the keeper.43
39 Bundesgerichtshof (BGH) 6.7.1976, Neue Juristische Wochenschrift (NJW) 1976, 2130, 2131; BGH, 6.3.1990, Neue Juristische Wochenschrift – Rechtsprechungs-Report (NJW-RR) 1990, 791.
40 § 254 BGB, §§ 9, 17(2), 7(2) StVG.
41 Hofmann, ‘Minderjährigkeit und Halterhaftung’ (1964) Neue Juristische Wochenschrift (NJW) 228 (232 et seq.); Deutsch, ‘Die Haftung des Tierhalters’ (1987) Juristische Schulung (JuS) 678; Wagner (n 37) § 833 n 40; Staudinger in Schulze et al. (ed), BGB, 2017, § 833 n. 6.
42 Canaris, ‘Geschäfts- und Verschuldensfähigkeit bei Haftung aus culpa in contrahendo, Gefährdung und Aufopferung’ (1964) Neue Juristische Wochenschrift (NJW) 1990 et seq.; Spindler (n 37) § 833 n. 14; Teichmann in Jauernig, Kommentar zum BGB (16th edn, C.H.Beck, 2015) § 833 n. 3.
43 Bundesgerichtshof (BGH), 6.3.1990, Neue Juristische Wochenschrift – Rechtsprechungs-Report (NJW-RR) 1990, 790; Wagner (n 37) § 833 n. 40.
44 Bundesgerichtshof (BGH) 8.2.1972, Neue Juristische Wochenschrift (NJW) 1972, 726; Wagner (n 37) § 823 n. 406.
45 Förster, in Beck’scher Online-Kommentar BGB (44th edn, 2017) § 823 n. 442 et seq.; Wagner (n 37) § 823 n. 599 et seq.
184 Ruth Janal
6.2.2.3 England
English law takes a very restrictive approach to the keeper’s liability. According to
the Animals Act 1971 (s 2(1)), the keeper of an animal may be held strictly liable, but
only if the animal is of a dangerous species (defined as a species not normally kept in
the British Isles and capable of causing serious damage if it is roaming free).
Apart from that, strict liability for tangible items is unheard of for private
individuals. Even the keeper of a motor vehicle will not be held liable for
damages caused by the car.49 Admittedly, the challenges posed by industrialization
did lead to the notorious precedent of Rylands v Fletcher, which held the keeper of
land strictly liable for hazardous substances stored on the ground.50 However,
subsequent decisions have watered down the rule in Rylands v Fletcher with the
result that it has become irrelevant.51
Occasionally, it is possible to identify tendencies in English case law to
compensate for the lack of strict liability rules.52 In Roberts v Ramsbottom, the High Court
held a driver liable for negligence after he rear-ended another car, even though the
driver’s steering ability was impaired due to a slight stroke. Neill J argued that since
the driver had kept his hands on the wheel, he was able to maintain some control,
albeit imperfect.53 The driver was then held liable as his driving was below the
required standard. Roberts v Ramsbottom came very close to imposing strict liability
on the driver of a car. However, the subsequent decision in Mansfield v Weetabix
(which was based on similar facts) emphasized that a driver will not be liable under
the tort of negligence if he is unaware of his illness and consequently fails to notice
the accidents caused by his actions. Leggatt LJ in the Court of Appeal convincingly
argued that a more stringent standard for the driver’s duty of care would amount to
nothing less than strict liability.54
Interestingly enough, the approach to strict liability for motor vehicles has
changed in light of automated driving. Under the Automated and Electric Vehicles
46 Bundesgerichtshof (BGH) 12.6.1990, NJW-RR 1991, 24 et seq.; Wagner (n 37) § 823 n. 689561.
47 Bundesgerichtshof (BGH) 12.3.1968, NJW 1968, 1183; BGH, 25.9.1990, NJW 1991, 502; Oberlandesgericht Frankfurt (OLG Frankfurt) 30.5.2006, Straßenverkehrsrecht (SVR) 2006, 340.
48 Oberlandesgericht Düsseldorf (OLG Düsseldorf) 23.07.1974, NJW 1975, 171; Oberlandesgericht Hamm (OLG Hamm) 27.03.1984, NJW 1985, 332 et seq.; Oberlandesgericht Karlsruhe (OLG Karlsruhe) 04.10.1990, Versicherungsrecht (VersR) 1992, 114; Oberlandesgericht Zweibrücken (OLG Zweibrücken) 10.04.2002 – 1 U 135/01 (juris).
49 Zweigert and Kötz (n 12) 672.
50 Rylands v Fletcher [1868] United Kingdom House of Lords (UKHL) 1.
51 Tofaris, Rylands v Fletcher Restricted Further [2013] CLJ 11 (14) with further references.
52 Zweigert and Kötz (n 12) 672.
53 Roberts v Ramsbottom [1980] 1 All ER 7 = 1 WLR 823.
54 Mansfield v Weetabix [1998] 1 WLR 1263.
Act 2018, the Secretary of State is to keep a list of automated vehicles which, under
defined circumstances, are capable of safely driving themselves.55 The Road Traffic
Act (s 143) makes it an offence to drive a car without insurance against third-party risks,
and under the new Act, insurers will be held liable for the damage caused by an
automated vehicle.56 In the absence of insurance, which may be the case for vehicles
owned by public bodies, the owner of the vehicle is liable for third-party damages.57
Note that insurance policies may exclude liability when the insured person has made
software alterations or has failed to install safety-critical software updates.58 The victim
would then have to sue the insured party for negligence.
In light of English law’s narrow approach to strict liability, any liability for the acts of
autonomous systems other than automated vehicles will need to be based upon the tort
of negligence. Some common-law scholars have argued that parallels can be drawn
between autonomous systems and animals.59 But courts are unlikely to follow that
suggestion in the near future, seeing that the strict liability for animals is based on
statute60 and that Parliament has acted to introduce liability only for automated vehicles.
55 Automated and Electric Vehicles Act 2018, s 1.
56 Automated and Electric Vehicles Act 2018, s 2(1).
57 Automated and Electric Vehicles Act 2018, s 2(2).
58 Automated and Electric Vehicles Act 2018, s 4.
59 Kelley, Schaerer, Gomez, and Nicolescu, ‘Liability in Robotics: An International Perspective on Robots as Animals’ (2010) 24 Advanced Robotics, 1864 et seq.; sceptical Asaro, ‘The Liability Problem for Autonomous Artificial Agents’, Association for the Advancement of Artificial Intelligence (2015) <http://peterasaro.org/writing/Asaro,%20Ethics%20Auto%20Agents,%20AAAI.pdf>.
60 For the historic writ of scienter cf. Chapman, ‘Liability for Animals that cause Personal Injury: Historical Origins & Strict Liability under the Animals Act 1971’ <http://1chancerylane.com/barristers/matthew-chapman-qc/matthew-chapman-publications>.
61 Assemblée Plénière de la Cour de cassation (Cass. Ass. Plén.) 19.05.1988, D.S. 1988, 513; Jourdain (n 19) 109.
62 Cour de cassation, chambre criminelle (Cass. crim.) 23.11.1923, GP 1928, 2, 900; Paris 8.7.1954, GP 1954, 2, 280; Cass. crim. 16.2.1965, GP 1965, 2, 24; Cass. crim. 23.11.1928, GP 1928, 2, 900; Cass. crim. 5.11.1953, GP 1953, 2, 383; Cass. crim. 18.6.1979, DS 1980 IR 36 (Larroumet); Cass. Ass. Plén. 19.5.1988, D.S. 1988, 513 (Larroumet); Jourdain (n 19) 111.
63 Bundesgerichtshof (BGH) 12.04.1951, BGHZ 1, 388 (390); BGH 14.02.1989, NJW-RR 1989, 723 (725).
64 Larenz and Canaris, Lehrbuch des Schuldrechts II/2 § 79 III 2 d, 480 https://doi.org/10.17104/9783406731181-419; Medicus and Lorenz, Schuldrecht II (17th edn, CH Beck 2014) n 1347; Wagner (n 37) § 831 n. 27.
65 See further Wagner (n 37) § 831 n. 1 et seq.; Zweigert and Kötz (n 12) 634 et seq.
The courts pursue two strategies to hold principals accountable beyond § 831 BGB.
First of all, they seek to extend the sphere of contractual liability to the pre-
contractual phase (culpa in contrahendo) and to third parties (by means of a legal
instrument called Vertrag mit Schutzwirkung zugunsten Dritter, or contracts with
protective effect to the benefit of third parties).66 As a consequence, § 278 BGB will
apply and the principal will be held vicariously liable for the acts of their agents. The
courts’ second measure is to extend the scope of § 823(1) BGB by introducing strict
duties of care for businesses that employ agents. Principals are held to stringent
duties of care in operational management and the production process. Among other
things, this has given rise to a relatively strict liability in the area of product liability,
replacing the application of § 831(1) BGB.67
Lastly, parties who seek to delegate their safety duties to independent third parties
are bound by case law to diligently choose the third party and to undertake spot tests
on the third party. Failure to do so will again lead to liability under § 823(1) BGB.68
66 Zweigert and Kötz (n 12) 637 et seq.
67 Diederichsen, ‘Wohin treibt die Produzentenhaftung?’ (1978) Neue Juristische Wochenschrift (NJW) 1287; Wagner (n 37) § 823 n. 778.
68 Bundesgerichtshof (BGH) 26.9.2006, Neue Juristische Wochenschrift (NJW) 2006, 3629; BGH 30.9.1986, Neue Juristische Wochenschrift – Rechtsprechungs-Report (NJW-RR) 1987, 147; BGH 2.10.1984, NJW 1985, 271; BGH 12.3.2002, NJW-RR 02, 1057; BGH 12.6.2007, NJW 2007, 2550; Wagner (n 37) § 823 n. 464 et seq.
69 Various Claimants v Catholic Child Welfare Society [2012] United Kingdom Supreme Court (UKSC) 56; Cox v Ministry of Justice [2016] UKSC 10, n. 15 et seq.; Bermingham and Brennan, Tort Law (5th edn, OUP 2016) 240.
to vicarious liability.70 It seems that the UK Supreme Court has given both the
qualifying relationship and the close connection requirement a broader
interpretation in recent years, thus extending the principal’s liability.71
In addition to vicarious liability, liability for the conduct of an independent third
party (in particular independent contractors) may arise from the tort of negligence.
While the law of negligence is generally fault based, a person may be required to
procure the careful performance of work delegated to others in case of a ‘non-
delegable duty’. The wording notwithstanding, it is generally accepted that a non-
delegable duty may in fact be delegated, but doing so will give rise to a strict liability
on the part of the person delegating the task. The case law regarding non-delegable
duties is not particularly coherent,72 but it is possible to identify two broad categories
of non-delegable duties: first, where an independent contractor is commissioned to
perform a task which is inherently hazardous;73 and second, where there is an
antecedent relationship between the principal and the victim under which the
principal is under a duty to protect and care for the victim.74
liability on parents for the acts of their children in Art 1242(4) of the Civil Code.
Parents will be held liable irrespective of their diligence in supervising their child76
and even irrespective of faute on the part of the child.77 The courts have developed a
similar liability rule for institutions charged with organizing, controlling and
directing other persons’ conduct and based this rule on Art 1242 (1).78
76 Cour de Cassation, 2e Chambre civile (Cass. 2e Civ.) 19.2.1997, D. 1997, 265 note Jourdain.
77 Assemblée plénière de la Cour de Cassation (Cass. Ass. Plén.) 13.12.2002, D. 2003, 231 note Jourdain.
78 Assemblée plénière de la Cour de Cassation (Cass. Ass. Plén.) 29.3.1991, D. 1991, 324 note Larroumet; Cour de Cassation, 2e Chambre civile (Cass. 2e Civ.) 22.5.1995, D. 1996, 453 note Le Bars/Buhler; Cass. 2e Civ. 12.12.2001, ETL 2002, 201.
79 Carmarthenshire County Council v Lewis (1956) Appeal Cases (AC) 549.
80 Woodland v Essex CC [2013] UKSC 66 [41].
81 Donaldson v McNiven [1952] 2 All England Law Reports (All ER) 691 (Court of Appeal) 692.
82 Bundesgerichtshof (BGH), 24.3.2009, Neue Juristische Wochenschrift (NJW) 2009, 1954; BGH, 15.11.2012, NJW 13, 1441.
(a) four examples Determining a ‘wrong’ becomes much more difficult when
we look at intangible autonomous systems. Let us consider four examples:
(1) Consider autocomplete functions in search engines that suggest terms
which convey a false impression of a person. A famous example is the
case of the wife of the former President of Germany, Mrs Bettina Wulff.
In the year 2015, when the letters ‘bet’ were entered into the search form
at google.de, the search engine would suggest the search terms ‘Bettina
Wulff Escort Service’ and similar.83 Should this be considered a false
statement of fact and thus a wrong?84 Or should it be regarded as a
statement that significant numbers of people who started out by typing
‘bet’ ended up searching for ‘Bettina Wulff Escort Service’ – which
would be true?
83 Tota, ‘Dreiundvierzig Wortkombinationen weniger’, Frankfurter Allgemeine Zeitung (FAZ) 16 January 2015 <www.faz.net/aktuell/feuilleton/google-entfernt-ergaenzungen-bei-suche-nach-bettina-wulff-13373712.html>.
84 For a discussion of case law see Karapapa and Borghi, ‘Search Engine Liability for Autocomplete Suggestions: Personality, Privacy and the Power of the Algorithm’ (2015) 23 International Journal of Law and Information Technology 275 et seq.
(2) In some legal orders, such as in German law, derogatory terms and
insults give rise to civil liability. If that is the case, does image-recognition
software that labels a woman of colour ‘gorilla’85 commit a ‘wrong’?
(3) While it is well known that roughly 10 per cent of pneumonia patients
die from the disease, it is not easy to determine the risk in individual
cases. Since autonomous systems are often used for risk assessment,
suppose a system is developed to predict the probability of death for
patients with pneumonia so that high-risk patients are admitted to
hospital while low-risk patients are treated as outpatients.86 Suppose
further that in an individual instance, the algorithm does not suggest
inpatient admission and the patient dies. Should the patient’s relatives
be entitled to damages, even though a doctor could not have given a
clear recommendation in the individual case?
(4) An employer uses social media to advertise jobs in STEM fields.
Without any direction by the employer, the ads are shown on social
media to more men than women.87 Should female applicants be
entitled to compensation under equal opportunity laws, such as Art 18
of Directive 2006/54/EC88? Does it matter whether the cause for this
imbalance can be discovered? Should the argument that targeting
women is more expensive than targeting men be considered a valid
defence?
There are no easy answers to these questions. Some of the system results described
above may be due to user and/or programming decisions. In scenario (4), such a
decision might be to show the ads to as many people as possible for a given price,
irrespective of gender. In scenario (1), a significant programming decision was to
include searches based upon autocomplete suggestions when counting the absolute
number of searches. Whenever someone entered the letters ‘bet’, searching for the
German word for bed (Bett), they would have stumbled upon the scandalous
content suggested by the autocomplete function. Their typical curiosity regarding
the scandalous content would have contributed to the popularity of the search term,
thus creating a snowball effect in the autocomplete suggestions. Other system results
may be based upon insufficient data, which helps explain why image-recognition
85 Cf. Simonite (n 6).
86 Caruana et al., ‘Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission’ (2015) Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 1721 <http://people.dbmi.columbia.edu/noemie/papers/15kdd.pdf>.
87 Lambrecht and Tucker, ‘Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads’, 15 October 2016 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2852260>.
88 Directive 2006/54/EC of the European Parliament and of the Council of 5 July 2006 on the implementation of the principle of equal opportunities and equal treatment of men and women in matters of employment and occupation (recast), OJ 2006 L 204/23.
software often fails to yield satisfying results for minorities (scenario (2)).89 Also, data
may be incomplete, which is why a neural network trained to discover the
mortality risk of pneumonia patients classified asthma patients as low-risk patients
(scenario (3)). Interestingly enough, the data fed to the system supported this result.
However, the data did not reveal that patients with a history of asthma who develop
pneumonia are usually admitted directly to intensive care units and it is for this
reason that they rarely die of pneumonia.90
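The feedback loop described for scenario (1) can be sketched in a few lines. This is a hypothetical illustration, not Google's actual ranking logic: the search terms, counts, and the rule that a click on a suggestion is logged as a fresh search are all invented assumptions.

```python
# Sketch of the autocomplete snowball effect: if searches that merely follow a
# suggestion are themselves counted, the suggestion reinforces its own rank.
# Terms and counts are invented for illustration.
from collections import Counter

counts = Counter({"bett kaufen": 100, "bettina wulff escort service": 60})

def top_suggestions(prefix: str, k: int = 2) -> list:
    """Return the k most frequent recorded queries starting with the prefix."""
    cand = [(n, q) for q, n in counts.items() if q.startswith(prefix)]
    return [q for n, q in sorted(cand, reverse=True)[:k]]

# Simulate 50 users who type 'bet' intending 'Bett' (bed), see both
# suggestions, and click the scandalous one out of curiosity; each click
# is logged as another search for that term.
for _ in range(50):
    shown = top_suggestions("bet")
    clicked = next((q for q in shown if "escort" in q), shown[0])
    counts[clicked] += 1

# The initially less popular term now tops the suggestion list.
print(top_suggestions("bet")[0])  # bettina wulff escort service
```

After the loop, the scandalous term has overtaken the innocuous one purely through curiosity clicks, without anyone independently searching for it.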
(b) human standards and beyond So what standard of behaviour can reason-
ably be expected from an autonomous system? Surely, the behaviour of a reasonable
human should be the minimum standard to expect when an autonomous system is
allowed to ‘run free’. But as autonomous systems become more sophisticated and
outperform humans in specific tasks, the bar should be raised, and one might expect
at least an average performance level from an autonomous system – or should it be
expected to be state of the art? However, due to the often opaque nature of
software, particularly machine-learning software, it would be difficult to define
either a state-of-the-art or an average performance. A system may perform well
in 95 per cent of all instances, yet fail entirely where certain minorities
are concerned. It is therefore not easy to define an average performance. Also,
irrespective of whether one applies an average standard or requires state of the art,
what is the relevant point in time? I suggest we require an average performance from
autonomous systems at the time the harm was done, as this would draw a parallel
with humans who are also expected to adopt evolving safety standards and are
judged by the standard of reasonable peers. The specific problem of machine
learning from user data will be considered in Section C.IV.4.
(c) lack of transparency Finally, the reasons why an autonomous system
yields a specific result may remain obscure – which is certainly an issue
for liability rules grounded in the principle of causation. As Caruana et al. note, ‘In
machine learning often a trade-off must be made between accuracy and intelligi-
bility.’91 When even experts fail to understand why an autonomous system makes a
specific recommendation (such as whether a pneumonia patient should be admitted
to hospital), how can a court decide whether this decision was correct?
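The pneumonia example illustrates why this is so hard: the model's conclusion follows faithfully from the data, yet the data hide the decisive fact. A minimal sketch, with entirely invented figures and a deliberately naive frequency-based 'model' (Caruana et al. used a neural network), makes the confound visible:

```python
# Sketch of the asthma confound from scenario (3): in the historical data,
# asthma patients rarely died of pneumonia because they were routinely
# admitted to intensive care, but that treatment is not recorded as a
# feature. A model fitted to such data learns 'asthma => low risk'.
# All figures are invented for illustration.

# (has_asthma, died) pairs from a hypothetical historical cohort
records = ([(True, False)] * 48 + [(True, True)] * 2
           + [(False, False)] * 85 + [(False, True)] * 15)

def observed_mortality(asthma: bool) -> float:
    """Naive risk estimate: historical death rate within the group."""
    outcomes = [died for has_asthma, died in records if has_asthma == asthma]
    return sum(outcomes) / len(outcomes)

# The naive model rates asthma patients as *lower* risk (4% vs 15%) and would
# send them home – the ICU treatment that produced those numbers is invisible.
print(observed_mortality(True), observed_mortality(False))  # 0.04 0.15
```

A court reviewing the recommendation would face the same problem as the model's developers: the output is 'correct' on the recorded data, and the error only becomes apparent once the unrecorded confounder is known.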
I do not claim to have the definitive answers to these questions, but I certainly
believe that we need interdisciplinary research to address them. For the time being,
at least when a robot directly harms a person or property or when an autonomous
89 Cf. Buolamwini and Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification’ (2018) 81 Proceedings of Machine Learning Research 1–15 <http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf>.
90 Caruana et al. (n 86).
91 Caruana et al. (n 86).
system does not manage to reach the human standard, this should be considered a
‘wrong’ committed by the system.
the user’s input is intentional, a damages claim can normally be based on
intentional torts. However, users should also be held under a duty of care not to
lead machine-learning systems astray by providing them with training data that
mirrors negligent behaviour.
things under their control – although admittedly to quite varying degrees. Finally, for
autonomous systems that are learning from data provided by their users, the keeper is in a
crucial position to steer the learning process by deciding who is allowed to use the system.
6.3.3.3 Counterarguments
Four arguments militate against the keeper’s liability: (i) the development process of
artificial intelligence, (ii) the non-dangerous nature of AI, (iii) a perceived lack of
98 See the proposal for a ‘Robot Liability Matrix’ set out by Zornoza, Moreno, Guzmán, Rodriguez, and Sánchez-Hermosilla, ‘Robots Liability: A Use Case and a Potential Solution’ in Dekoulis (ed) Robotics – Legal, Ethical and Socioeconomic Impacts (InTech 2017) 70 et seq. <http://dx.doi.org/10.5772/intechopen.69888>.
control on the part of the keeper and (iv) an adjacent liability of the producer or
operator.
99 Cf. Zech (n 11) 195.
100 See for example Abbott, ‘The Reasonable Computer: Disrupting the Paradigm of Tort Liability’ (2017) 86 George Washington Law Review 1, 118 et seq, 121 et seq.; Asaro, ‘The Liability Problem for Autonomous Artificial Agents’, Association for the Advancement of Artificial Intelligence, 2015 <http://peterasaro.org/writing/Asaro,%20Ethics%20Auto%20Agents,%20AAAI.pdf>.
101 Weber, ‘Liability in the Internet of Things’ (2017) 6 Journal of European Consumer and Market Law (EuCML) 208.
(c) lack of control French authors have argued against the keeper’s liability
for robots due to the keeper’s inability to control an object steered by an autonomous
system.103 It is important to note that this argument was made in the context of Art
1242(1) of the Civil Code, where persons will be liable for the damage caused by an
object if they are the gardien, meaning they possess usage, control and supervision of
the good. I would question the argument that a completely autonomous system
evades human control,104 as the keeper of such a system will make the general
decision on whether or not to put the system into operation and on who may or may
not use the system. More importantly, if we look at the broader principle of liability
based upon a sphere of action, we find that parties are often held liable for actions
which they cannot entirely control or for situations where their attempt to exercise
control has failed, such as the actions of employees or animals. This is the very
essence of the principle of eius damnum, cuius commodum.
102 Weber (n 101) 208.
103 Bonnet (n 35) 19 et seq.; Lagasse (2015) 12 CREOGN 2.
104 Cf. Petit, ‘Law and Regulation of Artificial Intelligence and Robots – Conceptual Framework and Normative Implications’, 9 March 2017 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2931339>.
particularly if one takes into account the victim’s perspective.105 For the injured
party, it may be very difficult to determine whether the accident is a consequence of
a human operating error, a malfunction of the autonomous system or a malfunction
of the mechanical parts of the car.106 Also, the developer of the software or the
operator of the system may be difficult to identify, may have their place of business
abroad or may be insolvent.107 Finally, it may be necessary to provide an incentive
for keepers to keep the system updated and ensure its proper use.108 In light of the
keepers’ decision to delegate tasks to the autonomous system, it does not seem
appropriate to exonerate them and instead ask the victim to pursue claims against
the producer or operator.
105 Cf. Borghetti, ‘L’accident généré par l’intelligence artificielle autonome’, La semaine juridique (December 2017) 27.
106 Günther and Böglmüller, ‘Künstliche Intelligenz und Roboter in der Arbeitswelt’ (2017) Betriebs-Berater (BB) 53, 54 et seq.
107 Günther and Böglmüller (n 106).
108 Galasso and Luo, Punishing Robots: Issues in the Economics of Tort Liability and Innovation in Artificial Intelligence, Economics of Artificial Intelligence (University of Chicago Press 2018) 6 <www.nber.org/chapters/c14035.pdf>.
109 For safety duties under current German law cf. Spindler, ‘Zukunft der Digitalisierung – Datenwirtschaft in der Unternehmenspraxis’ (2018) Der Betrieb (DB) 41, 48.
110 Lohmann, ‘Roboter als Wundertüten – eine zivilrechtliche Haftungsanalyse’ (2017) Aktuelle Juristische Praxis (AJP) 152, 159.
lenient standards (such as spot checks) would be insufficient to comply with the
principle eius damnum, cuius commodum.
Second, the cognitive abilities of humans do not align with those of autonomous systems.
Machine learning is based on the comparison of patterns that are not self-
explanatory to the human mind. Humans cannot always comprehend why an
algorithm shows ads for STEM jobs to more men than women111 or how algorithms
manage to make reliable assumptions regarding the sexual orientation of a person in
a portrait.112 Even if an autonomous system does not employ machine-learning
techniques, most consumers do not possess the requisite knowledge to monitor
the workings of the software – and even experts experience difficulties if the software
is not open source. As a consequence, any duty to monitor the systems would likely
be limited to obvious malfunctions and error messages. Such limited measures are
unable to prevent the autonomous system from causing harm in unexpected ways.
Third, the courts would have to painstakingly establish the duty of care in each
individual case, leading to uncertainty and a lack of legal clarity. A strict liability rule
therefore seems more efficient113 and will alert the keeper to the necessity of insuring
the corresponding risk.
liability extends to all risks posed by the object ‒ both the specific autonomy risk and
the risk of mechanical defects. This will make it easier for victims to claim damages,
as they do not have to prove the exact cause of malfunction. Thus, in case of a
malfunctioning intelligent irrigation system, the neighbour whose garden has been
flooded would only have to prove the system’s overall malfunction, not the specific
cause of the defect (operating error or defective sensor). However, if the reason for
the keeper’s liability is the delegation of control to the autonomous system, then the
keeper should only bear this specific autonomy risk. I propose the following happy
medium: while the keeper only bears the risk associated with the autonomous
system, any defect is presumed to be caused by a malfunction of the control system.
Keepers may exonerate themselves by proving that the damage was due
to a physical malfunction that could not have been prevented. Obviously, this
distinction need not be made by legal orders such as the French, which employ a
general strict liability regime for things.
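The proposed allocation of proof can be stated as a simple decision rule. The sketch below is my schematic reading of the proposal, not statutory language; the claim attributes and their names are invented for illustration.

```python
# Schematic encoding of the proposed burden of proof: the victim proves an
# overall malfunction; a malfunction of the autonomous control system is then
# presumed, and the keeper may exonerate herself by proving an unpreventable
# physical (mechanical) defect. Attribute names are illustrative only.
from dataclasses import dataclass

@dataclass
class Claim:
    malfunction_shown: bool        # victim proves the system malfunctioned overall
    physical_defect_proved: bool   # keeper proves a physical (mechanical) cause ...
    defect_preventable: bool       # ... and whether it could have been prevented

def keeper_liable(c: Claim) -> bool:
    if not c.malfunction_shown:
        return False   # no malfunction shown, no presumption arises
    if c.physical_defect_proved and not c.defect_preventable:
        return False   # exoneration: unpreventable mechanical defect
    return True        # presumption of an autonomy-risk malfunction stands

# Flooded-garden example: overall malfunction shown, specific cause unexplained
print(keeper_liable(Claim(True, False, False)))  # True
```

On this reading, the neighbour in the irrigation example recovers without proving whether an operating error or a defective sensor caused the flood, while the keeper escapes liability only by affirmatively proving the unpreventable physical defect.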
An operating error may be deemed to exist whenever the autonomous system has
not carried out an action that could reasonably be expected from it under the
circumstances. Whether the error stems from incorrect programming, from training data that is not representative of real-world conditions, or from an unforeseen effect of machine learning should be irrelevant. Reasonable expectations will
initially be modelled on the capabilities of humans, and should set the minimum
level of performance. Since autonomous systems are expected to outperform
humans over time, the reasonable expectation could then be modified to an average
system at the time the damage occurred (see Section 6.3.1.2(b)).
Finally, whether the software controlling an object is incorporated in the object
itself or there is a control mechanism operated from somewhere in the cloud should
also be irrelevant.
Extra-Contractual Liability for Wrongs Committed by Autonomous Systems 201
control for lack of expertise. It is also possible that third parties exert control over the
autonomous system, such as the operator of the system (Section 6.3.4.2) or hackers
manipulating it. Such an inability to technically control the autonomous system is
an inherent risk of the use of the system and should not exonerate the keeper.114 This
mirrors the prevalent position regarding employees and animals (see Sections 6.2.2
and 6.2.3). In any case, the keeper will always be able to control the object by cutting
off the power supply or confining the object physically.
(c) Mental Capacity Threshold  A final point for consideration with respect to
the keeper’s liability is the mental capacity required to qualify as the keeper of an
autonomous system. As noted earlier, under French law even a small child will be
regarded as the keeper of an object that has caused harm, whereas German scholars
debate how to ascertain the minimum age for the keeper’s liability. The rise of smart
toys for children and care robots for the elderly shows that this is also a critical topic for the law of autonomous systems. The matter is clearly linked with the question of insurability, that is, whether a child or a person with mental impairment can weigh the risks associated with the autonomous system and insure the risk accordingly. If this is not the case, the person who provided the robot to the child or person concerned should be considered its keeper.
114 Gless and Janal, ‘Hochautomatisiertes und autonomes Autofahren – Risiko und rechtliche Verantwortung’ (2016) Juristische Rundschau (JR) 561.
115 Various Claimants v Catholic Child Welfare Society [2012] UKSC 56; regarding the responsabilité du fait d’autrui: Ferid and Sonnenberger (n 20) chap 2, n 226; regarding § 831 I BGB: Bundesgerichtshof (BGH) 26.01.1995, Neue Juristische Wochenschrift – Rechtsprechungs-Report (NJW-RR) 1995, 659 et seq.
116 Regarding § 7 I StVG cf. Bundesgerichtshof (BGH) 28.04.1954, NJW 1954, 1198; Deutsch, ‘Gefährdungshaftung – Tatbestand und Schutzbereich’ (1981) Juristische Schulung (JuS) 317, 323 et seq.; Walter in beck-online.Grosskommentar zum Zivilrecht, 1.11.2017, § 7 StVG n. 78; regarding § 833 S. 1 BGB: Spickhoff in beck-online.Grosskommentar zum Zivilrecht, 1.11.2017, § 833 n. 89; regarding the responsabilité du fait des choses: Ferid and Sonnenberger (n 20) chap 2, n 328.
117 Cf. Chandra, ‘Liability Issues in Relation to Autonomous AI Systems’ 29 September 2017 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3052154>.
118 Art 2 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products, OJ 1985 L 210/29 et seq.
119 Weber, ‘Liability in the Internet of Things’ (2017) 6 Journal of European Consumer and Market Law (EuCML) 207, 210.
120 Art 4, 6 Product Liability Directive.
was put into circulation.121 This may limit the liability for machine-learning systems
as well as autonomous systems that produce unexpected actions when they are
subsequently fed with obsolete or incomplete data or when they interact with other
autonomous systems.122 Finally, the EU Product Liability Directive only provides for
damages in the form of death, personal injury and damage to consumer property
other than the product itself. Product liability rules modelled on the directive may
therefore prove inadequate to address the problems posed by autonomous systems.
The European Commission has launched an evaluation of the Product Liability
Directive that will, among other topics, look into this matter.123
121 Art 7(e) Product Liability Directive; cf. also Beck (n 7) 474; Lohmann, AJP 2017, 158.
122 Beck (n 7) 474.
123 <https://ec.europa.eu/growth/single-market/goods/free-movement-sectors/liability-defective-products_en>.
124 Duffy v Google [2015] Supreme Court of Australia (SASC) 170, n. 284.
125 Dr Yeung, Sau Shing Albert v Google [2014] Hong Kong Court of First Instance (HKCFI) 1404, n. 103.
6.3.4.4 Privileges
The Google autocomplete case shows that, arguably, privileges should be granted to
the operators of machine-learning systems that are fed by the system’s users.
Common law courts have highlighted that the innocent dissemination defence
may be available to the operator of a search engine.128 The German Federal Court
held that while Google is liable for an infringement of personality rights if defamatory search terms are suggested, courts must undertake a process of balancing rights,
taking into account the rights of the harmed individual, the protection of free speech
and the benefit derived from the suggestion of search terms. As a consequence, the
German Federal Court found that Google was only liable for damages after it had
been notified of the defamatory search terms and declined to act.129 The court
therefore introduced a principle similar to the ISP privileges under the EU
E-Commerce Directive130 or the US Digital Millennium Copyright Act.131 In Italy,
a court held that the caching privilege of the Italian implementation of the
E-Commerce Directive applied directly, thus exonerating the company running
the search engine.132
In my view, such a principle seems adequate in some instances, such as when
weighing personality rights and freedom of speech. Such privileges may not be
appropriate in other instances, for example, when autonomous car systems are fed
steering data from all drivers using the particular system without oversight. One must
also be careful not to assume that an algorithm is ‘neutral’ when processing user
behaviour. The functionality behind the autonomous system is often kept as a trade secret by the operator, and operators pursue their own optimization
goals (such as viewer engagement). An autocomplete function, for example, both predicts what users would have searched for and draws their attention to previously unthought-of searches.133 Autocomplete suggestions based on the latter will lead to a snowball effect which perpetuates defamatory content. In the same vein, it has been shown that the YouTube autoplay function tends to promote radical content.134 Thus, the granting of liability privileges needs careful consideration.
126 Duffy v Google [2015] Supreme Court of Australia (SASC) 170, n. 375.
127 Cour de cassation, première chambre civile (Cass. 1re Civ.), 19.06.2013, Arrêt n 625, <https://www.courdecassation.fr>.
128 Dr Yeung, Sau Shing Albert v Google [2014] HKCFI 1404, n. 120 et seq.; Duffy v Google [2015] SASC 170, n. 386.
129 Bundesgerichtshof (BGH) 14.5.2013 (2013) Neue Juristische Wochenschrift (NJW) 2350.
130 Art 12 et seq. Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market, OJ 2000 L 178/1.
131 17 US Code § 512 – Limitations on liability relating to material online.
132 X c. Google, Tribunale Ordinario di Milano, 25.3.2013, N RG 2012/68306.
6.5 Conclusion
The transfer of cognitive activities from man to machine poses new challenges for
liability law. This chapter has explained why current product liability rules may not
be adequate to cover losses resulting from the acts of an autonomous system and
why, even with reforms, such rules might remain insufficient. The chapter focused
on three players that could also be held liable for damages caused by an autonomous
system:
(1) Any user of such a system may be held liable in negligence if they failed to supervise the system despite having reason to believe that the system was not fully autonomous in the particular circumstances.
(2) The keeper of the system (the party that instructs, controls and benefits
from the use of the system) should be held strictly liable for the damage
caused.
(3) The ‘operator’, i.e. the party that is responsible for running the autonomous system, making sure the system is provided with the necessary data, overseeing and tweaking the machine-learning process and installing required updates in gadgets. In parallel with product liability, I propose that the operator should be held strictly liable for the autonomous system as well, but that privileges for machine learning based on user data should be explored.
133 Karapapa and Borghi, ‘Search Engine Liability for Autocomplete Suggestions: Personality, Privacy and the Power of the Algorithm’ (2015) 23 International Journal of Law and Information Technology 264.
134 Tufekci, ‘YouTube, the Great Radicalizer’, 10 March 2018 <www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html>.
135 Siebtes Buch Sozialgesetzbuch – Gesetzliche Unfallversicherung.
136 Art L461–1 Code de la sécurité sociale.
137 Art L1142–1 Code de la santé publique.
138 See, for bionic prosthetics, Bertolini and Palmerini, ‘Regulating Robotics: A Challenge for Europe’ in Directorate General for Internal Policies (ed), Upcoming Issues of EU Law, v. 24.9.2014, 144 et seq. <www.europarl.europa.eu>.
7
Control of Algorithms in Financial Markets
Gerald Spindler
Introduction
High-frequency trading has become important on financial markets and is one of
the first areas in algorithmic trading to be intensely regulated. This chapter reviews
the EU approach to regulation of algorithmic trading, which can be taken as a
blueprint for other regulations on algorithms by focusing on organizational requirements such as pre- and post-trade controls and real-time monitoring.
Control of Algorithms in Financial Markets 209
1 Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments and amending Directive 2002/92/EC and Directive 2011/61/EU (recast), OJ of 12.6.2014, L 173/349.
2 ESMA, ‘Guidelines: Systems and controls in an automated trading environment for trading platforms, investment firms and competent authorities’, 24 February 2012, ESMA/2012/122 (EN).
3 Commission delegated Regulation (EU) 2017/589 of 19 July 2016 supplementing Directive 2014/65/EU of the European Parliament and of the Council with regard to regulatory technical standards specifying the organizational requirements of investment firms engaged in algorithmic trading, OJ 31.3.2017 L 87/417.
4 Technical Committee of the International Organization of Securities Commissions, CR 02/11, July 2011, available at <www.iosco.org/library/pubdocs/pdf/IOSCOPD354.pdf>.
5 For a thorough overview see Peter Gomber, Björn Arndt, Marco Lutat and Tim Uhle, ‘High Frequency Trading’ (2011) <http://ssrn.com/abstract=1858626>.
High-frequency trading also carries risks. In what became known as the Flash
Crash of 6 May 2010, a wrongly coded algorithm led to a crash. Within minutes of
the sell program initiating the sale of a large block of E-mini contracts valued at
US$4.1 billion, other algorithms reacted similarly, leading to a rapid decline of the
E-minis.6
Thus, whilst high-frequency trading is just another manifestation of systemic risk in financial markets, it can result in a total crash. This problem can only be addressed if markets are monitored and trading can be interrupted. Monitoring takes place on the basis of a system of indicators and warning signals (known as ‘circuit breakers’).
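The indicator-and-interruption logic described here can be sketched as a simple price-based circuit breaker. This is an illustrative toy model, not a description of any actual exchange's implementation; the threshold, window size and class name are assumptions.

```python
from collections import deque

class CircuitBreaker:
    """Toy circuit breaker: halts trading when the price falls by more
    than `max_drop` from its recent peak within a sliding tick window."""

    def __init__(self, max_drop: float = 0.05, window: int = 100):
        self.max_drop = max_drop            # e.g. a 5 % decline triggers a halt
        self.prices = deque(maxlen=window)  # recent prices only
        self.halted = False

    def on_tick(self, price: float) -> bool:
        """Record a new price; return True if trading must be interrupted."""
        self.prices.append(price)
        peak = max(self.prices)
        if peak > 0 and (peak - price) / peak > self.max_drop:
            self.halted = True              # warning signal fired: stop trading
        return self.halted

    def reset(self) -> None:
        """Resume trading after the market operator intervenes."""
        self.halted = False
        self.prices.clear()
```

Real venues combine several such indicators (price bands, volatility interruptions, volume spikes); the single-indicator version above only illustrates the monitor-and-interrupt principle.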
the computer on which the algorithms are running and the systems which match
the incoming order, including a minimum speed of 10 GB per second.9 Every
market participant who fulfils these criteria has to apply for a permit, even if they are
based outside Germany.10 As a consequence, every trader with a high-frequency
algorithm has to comply with the requirements laid down in the German Securities
Trading Act (Wertpapierhandelsgesetz) as well as the banking law (Kreditwesenge-
setz). Moreover,11 traders are subject to solvency supervision.
The Act is not limited to high-frequency trading but also encompasses the more
generic ‘algorithmic trading’ (§ 80 para 2 s 1 Securities Trading Act), which
refers to a computer program that automatically defines parameters for orders such
as price, time for buying or selling, or quantity of an order.12 The requirements
established by the German Securities Trading Act are thus applicable to all kinds of
algorithm-based trading, whether it is on the trader’s own account or for clients,
whether on stock markets or over the counter. However – and in contrast to high-
frequency trading – only market participants based in Germany are covered, not
foreign market participants.13
Based on the European Securities and Markets Authority (ESMA) 2012 guidelines,14 the German Supervisory Authority issued a circular15 in 2013 specifying the
requirements for algorithmic trading. According to Sec 80 para 2:
(2) An investment services enterprise must additionally comply with the provisions
stipulated in this subsection if it conducts trading in financial instruments in such a
way that a computer algorithm automatically determines individual parameters of
orders, unless the system involved is used only for the purpose of routing orders to
one or more trading venues or for the confirmation of orders (algorithmic trading).
Parameters of orders within the meaning of sentence 1 include, in particular,
decisions on whether to initiate the order, on the timing, price or quantity of the
order, or on how to manage the order after its submission with limited or no human
intervention. An investment services enterprise that conducts algorithmic trading
must have in place effective systems and risk controls to ensure that
1. its trading systems are resilient, have sufficient capacity and are subject to
appropriate trading thresholds and limits;
2. the routing of erroneous orders or the functioning of the system in a way that
may create or contribute to a disorderly market are prevented;
3. its trading systems cannot be used for any purpose that is contrary to European
or national rules against market abuse or to the rules of the trading venue to
which it is connected.
An investment services enterprise that conducts algorithmic trading must also have
in place effective business continuity arrangements to deal with unforeseen failures
of its trading systems and must ensure that its systems are fully tested and properly
monitored.
Thus, algorithmic traders must implement an appropriately resourced risk-
management system that follows the prescribed three-step order control system,
depending on the complexity of the algorithms they have implemented.16
According to Sec 80 (3) of the German Securities Trading Act an algorithmic trader
must document how they comply with these management requirements and keep the
relevant records for at least five years. Supervisory authorities may inspect those
records. Furthermore, high-frequency algorithmic traders are required to record every
order, including cancelled orders, executed orders, and market prices on exchanges
and trading platforms (Sec 80 (3) sent. 2 German Securities Trading Act). Thus, every
modification of any computer algorithm used for trading purposes must also be
documented. The trader has to provide evidence of changes of algorithms; if strategies
for algorithms are changed, or algorithms are used in new markets or platforms, the
German Supervisory Authority classifies them as ‘new products’ that require a complete risk assessment according to the provisions on risk management.17
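The record-keeping duty just described — logging every order, including cancelled ones, together with market prices, and retaining the records for at least five years — can be sketched as an append-only journal. The class and field names below are illustrative assumptions, not terminology from the Act.

```python
import datetime as dt
from dataclasses import dataclass, field

RETENTION_YEARS = 5  # minimum retention under Sec 80 (3) Securities Trading Act

@dataclass
class OrderRecord:
    order_id: str
    algo_id: str       # identifies the algorithm (and version) that acted
    status: str        # 'executed' or 'cancelled' -- both must be kept
    market_price: float
    timestamp: dt.datetime = field(
        default_factory=lambda: dt.datetime.now(dt.timezone.utc))

class OrderJournal:
    """Toy append-only journal; a real system would persist to
    tamper-evident storage inspectable by the supervisory authority."""

    def __init__(self):
        self._records: list[OrderRecord] = []

    def record(self, rec: OrderRecord) -> None:
        self._records.append(rec)  # append-only: no update or delete API

    def purgeable(self, now: dt.datetime) -> list[OrderRecord]:
        """Records older than the statutory retention period."""
        cutoff = now - dt.timedelta(days=365 * RETENTION_YEARS)
        return [r for r in self._records if r.timestamp < cutoff]
```

Because every record carries an `algo_id`, a change of algorithm or strategy shows up in the journal as a new identifier — mirroring the 'new product' classification described above.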
Employees of the trader must be able to understand and control the algorithm in a timely manner. The German Supervisory Authority goes beyond the ESMA requirements18 to
demand that the trader’s operators can be reached at any time by operators of the
market exchange or platform.19
Moreover, the algorithm has to provide adequate limits for trade. The German
Supervisory Authority has laid down very detailed requirements in this area: the limits
for contracting parties, the issuer and market prices have to be set before each transaction. Liquidity must be ensured at all times and monitored in real time.20
The trader and in particular the risk managers must be able to intervene directly and
independently of their trading departments.21 Algorithms must be designed in such a
way that every order and transaction can be identified and linked to the particular
algorithm that has executed the order.
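A minimal sketch of such pre-trade controls — per-counterparty limits and a price band checked before each transaction, with every decision traceable to the originating algorithm — might look as follows. All names and thresholds are hypothetical.

```python
class PreTradeControl:
    """Toy pre-trade check run before an order leaves the firm:
    per-counterparty exposure limits and a reference-price band.
    Each order carries the id of the algorithm that generated it."""

    def __init__(self, counterparty_limits: dict[str, float],
                 max_deviation: float = 0.02):
        self.limits = counterparty_limits    # max open exposure per counterparty
        self.exposure: dict[str, float] = {}
        self.max_deviation = max_deviation   # allowed deviation from reference

    def check(self, counterparty: str, notional: float,
              price: float, reference_price: float, algo_id: str) -> dict:
        new_exposure = self.exposure.get(counterparty, 0.0) + notional
        within_limit = new_exposure <= self.limits.get(counterparty, 0.0)
        within_band = abs(price - reference_price) / reference_price <= self.max_deviation
        accepted = within_limit and within_band
        if accepted:
            self.exposure[counterparty] = new_exposure
        # every decision is linked to the algorithm that produced the order
        return {"accepted": accepted, "algo_id": algo_id,
                "within_limit": within_limit, "within_band": within_band}
```

The `algo_id` field in the result is the hook that allows supervisors to link each order and transaction back to the particular algorithm, as the circular requires.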
16 Circular (n 9) No 36, 39.
17 Circular (n 9) No 26.
18 Cf. ESMA Guidelines (n 2) No 2.2 g.
19 Circular (n 9) No 23.
20 Circular (n 9) No 39.
21 Circular (n 9) No 41.
Algorithmic traders must ensure that their algorithms are not misused for the
purposes of market manipulation and that they comply with market-specific rules. In
order to comply with these provisions traders therefore have to implement systems
that monitor the behaviour of algorithms, including automated warning systems.22
The entire board of directors of the investment firm is obliged to assess market manipulation risks and define a strategy, or at least to explain why the firm is not exposed to such risks.23 Systems must be designed in such a way that they allow for real-time
monitoring, which means that controls have to take place within a reasonable time
span.24 Operators of monitoring systems have to be independent of those who are
operating the algorithm trading system.
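The real-time monitoring requirement — automated warnings, operated independently of the trading desk — can be illustrated with a toy sliding-window message-rate monitor. The threshold, window length and class design are assumptions, not figures from the circular.

```python
from collections import deque

class AlgoMonitor:
    """Toy real-time monitor, run by staff independent of the trading
    desk: raises an automated warning when an algorithm's message rate
    exceeds a threshold within a one-second sliding window."""

    def __init__(self, max_msgs_per_sec: int = 50):
        self.max_rate = max_msgs_per_sec
        self.events: deque[float] = deque()   # timestamps of recent messages
        self.warnings: list[str] = []

    def on_message(self, algo_id: str, now: float) -> None:
        self.events.append(now)
        while self.events and self.events[0] < now - 1.0:  # drop stale events
            self.events.popleft()
        if len(self.events) > self.max_rate:
            self.warnings.append(f"{algo_id}: {len(self.events)} msgs/s at t={now:.2f}")
```

A production system would of course watch many behavioural indicators, not just message rates; the point is only that the monitor, not the trading algorithm, decides when a warning fires.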
Traders must provide emergency systems that are able to cope effectively with
unforeseen difficulties in their main systems. These emergency systems must be kept
up to date and provide risk-appropriate actions in emergency cases.
Furthermore, traders must safeguard the continuous operation of their systems.
All systems have to be checked thoroughly under stress conditions before going
live.25 Moreover, algorithmic traders must assess the risks resulting from trade on
individual trading platforms and check their risk management.26
The third cornerstone of high-frequency trading regulation is the transparency
obligations codified in Sec 16 (2) No 3 of the German Stock Exchange Act (Börsen-
gesetz). Market participants have to flag the fact that algorithms are being used for
trading, but only those with direct access to the platform are obliged to do so. If
market participants allow their clients direct access to the platform and those clients use algorithms, they must ensure that those clients cooperate in flagging the algorithms in use.27
7.5.1 MiFID II
MiFID II more or less parallels the German approach by introducing a permit duty
for high-frequency trading, but not for algorithmic trading in general, for which it
22 Circular (n 9) No 55.
23 Circular (n 9) No 60.
24 Circular (n 9) No 71; see also Kindermann and Coridaß (n 10) 183.
25 Circular (n 9) No 15.
26 Circular (n 9) No 4.4.
27 Hessisches Ministerium für Wirtschaft, Energie, Verkehr und Landesentwicklung, ‘Guidelines for adherence to the requirement of the labelling of trading algorithms’ (§ 16 sub-para 2 no 3 Stock Exchange Act (Börsengesetz), § 33 sub-para 1a Securities Trading Act (Wertpapierhandelsgesetz), § 72a Exchange Rules for the Frankfurter Wertpapierbörse (Börsenordnung für die Frankfurter Wertpapierbörse), § 17a Exchange Rules for Eurex Deutschland and Eurex Zurich (Börsenordnung für die Eurex Deutschland und die Eurex Zürich)) as of 22 September 2014, No 7, available at <https://wirtschaft.hessen.de/sites/default/files/media/hmwvl/guidelines_to_the_adherence_to_the_requirement_of_the_labelling_of_trading_algorithms_14-09-22-neu.pdf>.
provides a set of specific requirements. The parallels between the two sets of
regulations are quite obvious, given that the German legislator wanted to adopt
the European proposals at an early stage (even though MiFID II was only adopted in
2014). The evolution of discussion of algorithmic and high-frequency trading can be
seen in the definitions adopted by MiFID II. Art 4 (39) states that:
‘algorithmic trading’ means trading in financial instruments where a computer
algorithm automatically determines individual parameters of orders such as
whether to initiate the order, the timing, price or quantity of the order or how to
manage the order after its submission, with limited or no human intervention, and
does not include any system that is only used for the purpose of routing orders to
one or more trading venues or for the processing of orders involving no determination of any trading parameters or for the confirmation of orders or the post-trade
processing of executed transactions.
Thus, MiFID II follows grosso modo the approach taken by both the German act
and ESMA28 in excluding algorithms that only forward orders (or route them). The
algorithmic trading covered by MiFID has to be related to trading in a narrow sense,
acting on the market platform. Interestingly (and in contrast to Art 22 GDPR),
MiFID also covers systems that still allow for human decisions (based, however,
on algorithms). Moreover, MiFID does not distinguish between traditional software
and machine-learning software.
Regarding high-frequency trading, MiFID II also follows the lead of the German
act and ESMA by defining high-frequency trading as (Art 4 (40)):
. . .an algorithmic trading technique characterised by:
(a) infrastructure intended to minimise network and other types of latencies,
including at least one of the following facilities for algorithmic order entry:
co-location, proximity hosting or high-speed direct electronic access
(b) system-determination of order initiation, generation, routing or execution
without human intervention for individual trades or orders; and
(c) high message intraday rates which constitute orders, quotes or cancellations.
Thus, the minimization of latencies, in particular around hosting or high-speed
electronic access, is decisive. Moreover (and in contrast to the more generic term
‘algorithmic trading’), ‘high-frequency trading’ is restricted to fully automated
trading without any human intervention.
The basic requirements for algorithmic trading are laid down in Art 17 MiFID II,
which, however, leaves the bulk of specifications to ESMA and then to the Commission as a delegated act.29 Thus, Art 17 requires an investment firm in general to
‘have in place effective systems and risk controls suitable to the business it operates
to ensure that its trading systems are resilient and have sufficient capacity, are subject
28 ESMA (n 2).
29 See Section 7.5.2.
to appropriate trading thresholds and limits and prevent the sending of erroneous
orders or the systems otherwise functioning in a way that may create or contribute to
a disorderly market’. Market manipulation is also banned. Art 17 emphasizes the
capacity of investment firms to cope with unexpected events and failures of the
algorithms. The supervisory authorities are explicitly entitled, according to Art 17 (2),
to obtain a description ‘of the nature of its algorithmic trading strategies, details of
the trading parameters or limits to which the system is subject, the key compliance
and risk controls that it has in place to ensure the conditions laid down in paragraph
1 are satisfied and details of the testing of its systems’. Hence, investment firms cannot invoke trade secrets or intellectual property rights in the algorithms used as grounds for withholding such descriptions.
Furthermore, Art 17 (3) requires an investment firm that engages in algorithmic trading to pursue a market-making strategy to ‘take into account the liquidity, scale
and nature of the specific market and the characteristics of the instrument traded
when complying with its obligations as per lit a-c’. Special attention is paid to
conformance with the framework of the trading venue.
Somewhat surprisingly, neither Art 17 nor the rest of MiFID II contains specific
provisions on high-frequency trading as opposed to generic ones for algorithmic
trading – even though the Recitals (No 61 and subsequent) explicitly mention the
specific risks of high-frequency trading. Recital 62 alone requires that ‘in order to
ensure orderly and fair trading conditions, it is essential to require trading venues to
provide such co-location services on a non-discriminatory, fair and transparent basis’.
However, these principles are not reflected in the provisions of Article 17 of MiFID
II (or anywhere else). Recital 64, which emphasizes the need for robust measures ‘in
place to ensure that algorithmic trading or high-frequency algorithmic trading
techniques do not create a disorderly market and cannot be used for abusive
purposes’, does not distinguish between different types of algorithmic trading. The
same is true of the requirement for tests and resilient systems including ‘circuit
breakers . . . on trading venues to temporarily halt trading or constrain it if there are
sudden unexpected price movements’.
Enforcement and supervision are enhanced by the requirement to flag all orders
generated by algorithmic trading (Recital 67), enabling supervisory authorities to relate potentially market-distorting events more precisely to the particular algorithms involved.
Finally, Art 48 (6) MiFID II requires trading venues and platforms to provide for
controls on algorithmic trading, including circuit breaker facilities, in order to avoid ‘flash crashes’.30 In particular, regulated markets have to provide testing environments for algorithmic traders. Concerning the control of algorithms, Art 48 MiFID
II requires market operators to ‘manage any disorderly trading conditions which do
30 See also ESMA, ‘Automated Trading Guidelines, ESMA peer review among National Competent Authorities’, 18 March 2015, ESMA/2015/592.
arise from such algorithmic trading systems, including systems to limit the ratio of
unexecuted orders to transactions that may be entered into the system by a member
or participant, to be able to slow down the flow of orders if there is a risk of its system
capacity being reached and to limit and enforce the minimum tick size that may be
executed on the market’. Hence, market operators have to ensure that they are able
to take algorithms out of the market, notwithstanding the ‘kill functionalities’ which
are in the hands of the investment firms. This is also stressed by Recital 157.
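The venue-side controls named in Art 48 — capping the ratio of unexecuted orders to transactions per participant, and the operator's ability to pull an algorithm from the market — can be sketched as follows. The class design and the default 10:1 ratio are illustrative assumptions.

```python
class VenueControls:
    """Toy venue-side controls in the spirit of Art 48 (6) MiFID II:
    an order-to-trade ratio cap per participant, plus an operator-side
    kill switch that removes a participant's algorithm from the market."""

    def __init__(self, max_order_trade_ratio: float = 10.0):
        self.max_ratio = max_order_trade_ratio
        self.orders: dict[str, int] = {}
        self.trades: dict[str, int] = {}
        self.suspended: set[str] = set()

    def on_order(self, participant: str) -> bool:
        """Return False (reject) if the participant is suspended or the
        new order would breach the order-to-trade ratio."""
        if participant in self.suspended:
            return False
        orders = self.orders.get(participant, 0) + 1
        trades = max(self.trades.get(participant, 0), 1)  # avoid division by zero
        if orders / trades > self.max_ratio:
            return False
        self.orders[participant] = orders
        return True

    def on_trade(self, participant: str) -> None:
        self.trades[participant] = self.trades.get(participant, 0) + 1

    def kill(self, participant: str) -> None:
        """Operator-side kill: take the algorithm out of the market,
        independently of the firm's own kill functionality."""
        self.suspended.add(participant)
```

The separation between `on_order` (automatic throttling) and `kill` (operator intervention) mirrors the distinction drawn in the text between venue controls and the firms' own kill functionalities.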
Further, Art 2 of Regulation 2017/589 obliges the investment firm to employ compliance staff who have a general knowledge of the algorithms, are in continuous contact with those who operate the algorithms and have detailed technical knowledge. Closely related to the description of compliance staff – and self-evident – are the requirements that technical staff should understand the algorithm and be able to manage, monitor and test it (Art 3 (1)). Whereas Art 4 obviously allows for the outsourcing of software and hardware by stating that the investment firm remains fully responsible, it is unclear whether the investment firm can also outsource the staff who manage and control the algorithms in place. ESMA had already set out detailed provisions for the governance of algorithms, starting with the development and/or purchase of software (including outsourcing) and its subsequent maintenance and control.35
31 See n 3.
32 ESMA (n 2).
33 Such as penetration tests, simulation of cyber-attacks, identification of users of the system, etc.
34 See Notification of the German Supervisory Authority of 18 December 2017, Gz: BA 54-FR 2210-2017/0010.
Regulation 2017/589 also structures the deployment of an algorithm, requiring the
system to be tested in accordance with its specific market – and also in case of
‘substantial updates’ (Art 5 (1)). For algorithms which execute orders, specific
obligations are set out in Art 5 (2–5): the senior management of the investment firm
must designate a person to be responsible for the deployment or update (Art 5 (2)). In
particular, Art 5 (4) requires that the algorithm:
(a) does not behave in an unintended manner;
(b) complies with the investment firm’s obligations under this Regulation;
(c) complies with the rules and systems of the trading venues accessed by the
investment firm;
(d) does not contribute to disorderly trading conditions, continues to work effectively
in stressed market conditions and, where necessary under those conditions, allows
for the switching off of the algorithmic trading system or trading algorithm.
With regard to artificial intelligence or machine learning, Art 5 (4) (a) could raise new questions, as these algorithms and their behaviour are not completely predictable. However, it is unlikely that the Commission really wanted to ban semi-autonomous electronic agents from markets as long as their general behaviour can be predicted.
The Commission specifies the necessary testing further in Art 6 and Art 7, again following the ESMA guidelines, which required testing in a live environment before going online:[36] the investment firm must check the algorithm in respect of its conformance with the requirements of the market venue, in particular the interaction with market venue software and the processing of data flows. Moreover, tests have to be undertaken 'in an environment that is separated from its production environment and that is used specifically for the testing and development of algorithmic trading systems and trading algorithms' (Art 7 (1)).
As well as design and testing, in Art 8 the Commission obliges investment firms to
set limits on
(a) the number of financial instruments being traded;
(b) the price, value and numbers of orders;
(c) the strategy positions and
(d) the number of trading venues to which orders are sent.
[35] ESMA (n 2) No 2.2. a.
[36] ESMA (n 2) No 2.2. d.
Thus the Commission continues the approaches already chosen by ESMA. Moreover, the algorithm is not allowed to change these parameters; thus, Art 8 sets limits to semi-autonomous systems as well.
The testing is not restricted to the initial deployment. An important part of the
duty to validate the algorithms annually is the required stress test (Art 10), in
particular the resilience of the system in case of increased order flows or market
stresses. The Commission requires that these stress tests should encompass
(a) running high messaging volume tests using the highest number of messages
received and sent by the investment firm during the previous six months,
multiplied by two;
(b) running high trade volume tests, using the highest volume of trading reached
by the investment firm during the previous six months, multiplied by two.
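The Art 10 stress-test baselines are simple arithmetic over the previous six months' activity. A sketch of that computation (function and key names are our own):

```python
def stress_test_targets(message_counts, trade_volumes):
    """Compute the stress-test load described in Art 10 Regulation 2017/589:
    twice the highest message count and twice the highest trading volume
    observed by the firm over the previous six months.

    The inputs are the per-period series (e.g. daily figures) for that window."""
    return {
        "messages": 2 * max(message_counts),  # (a) high messaging volume test
        "volume": 2 * max(trade_volumes),     # (b) high trade volume test
    }
```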
Another important element which was previously not specifically required is the ‘kill
functionality’ in Art 12 (1), which allows the investment firm to immediately cancel
unexecuted orders in emergency cases. Moreover, Art 12 (3) requires that the
investment firm can identify every trading algorithm and trader related to the
emergency case. The importance of this is illustrated by the additional requirement
that the compliance staff must be in constant contact with those who can ‘kill’ the
algorithm (Art 2 (2) of Regulation 2017/589).
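Art 12 couples two duties: immediate cancellation of all unexecuted orders, and the ability to identify the trading algorithm and trader behind each of them (Art 12 (3)). A minimal sketch of such a 'kill functionality'; the order bookkeeping shown here is invented for illustration.

```python
class KillSwitch:
    """Illustrative Art 12-style kill functionality: cancel every unexecuted
    order and return an audit trail identifying the algorithm and trader
    responsible for each one."""

    def __init__(self):
        self.open_orders = {}  # order_id -> (algo_id, trader_id)

    def place(self, order_id, algo_id, trader_id):
        self.open_orders[order_id] = (algo_id, trader_id)

    def fill(self, order_id):
        # An executed order is no longer 'unexecuted' and drops out of scope.
        self.open_orders.pop(order_id, None)

    def kill(self):
        # Cancel all unexecuted orders at once; the returned mapping is the
        # Art 12 (3) identification of each algorithm and trader involved.
        cancelled = dict(self.open_orders)
        self.open_orders.clear()
        return cancelled
```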
Like the first approaches taken by the German Supervisory Authority and ESMA, the Regulation lays stress on automated surveillance systems to detect market manipulation and obliges investment firms to constantly monitor all trading activities (Art 13). ESMA had already demanded that traders should be able to automatically block orders that do not match fixed prices and quantities.[37] In particular, the investment firm must review its surveillance system each year and adapt it to changes in the regulations (Art 13 (6)). The Commission Regulation even prescribes detailed conditions for the system concerning time granularity, and capacity to document and analyze order and transaction data ex post in a low-latency trading environment (Art 13 (7)).
Like ESMA, the Commission Regulation is also concerned about continuation of business in cases of disruption caused by incidents. Thus Art 14 explicitly requires 'business continuity arrangements' which should take into account different 'possible adverse scenarios relating to the operation of the algorithmic trading systems, including the unavailability of systems, staff, work space, external suppliers or data centres or loss or alteration of critical data and documents'. Like ESMA, the Commission even requires investment firms (among other organizational procedures, such as shutting down the running algorithm) to provide for '(c) procedures for relocating the trading system to a back-up site and operating the trading system from that site, where having such a site is appropriate to the nature, scale and complexity of the algorithmic trading activities of the investment firm'.

[37] ESMA (n 2) No 4.2.
Applying to all investment firms – and not only algorithmic traders – are the provisions for pre-trade control on order entry in Art 15 Regulation 2017/589. The Regulation requires investment firms to implement price collars with automatic blocking of mismatching orders, maximum order values and maximum message limits – thus obviously seeking to ban any market manipulation attempts. Moreover, investment firms must control the number of times an algorithm has been used, disabling it after a certain number of executions; it can then only be re-enabled by a human decision of the competent officer (Art 15 (3)). Investment firms must also set market and credit limits that are based, among other criteria, on 'the length of time the investment firm has been engaged in algorithmic trading' (Art 15 (4)).
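The Art 15 pre-trade controls combine several mechanical checks: a price collar blocking mismatching orders, a maximum order value, and an execution counter that disables the algorithm until a human officer re-enables it. A sketch under those requirements; all parameter names and thresholds are illustrative.

```python
class PreTradeControls:
    """Illustrative Art 15-style pre-trade checks on order entry."""

    def __init__(self, collar_low, collar_high, max_order_value, max_executions):
        self.collar = (collar_low, collar_high)
        self.max_order_value = max_order_value
        self.max_executions = max_executions
        self.executions = 0
        self.enabled = True

    def check_order(self, price, quantity):
        if not self.enabled:
            return False
        if not (self.collar[0] <= price <= self.collar[1]):
            return False          # price collar: block mismatching orders
        if price * quantity > self.max_order_value:
            return False          # maximum order value exceeded
        self.executions += 1
        if self.executions >= self.max_executions:
            self.enabled = False  # automatic disable after N executions
        return True

    def reenable(self, officer_confirmed: bool):
        # Re-enabling requires an explicit human decision (cf. Art 15 (3)).
        if officer_confirmed:
            self.executions = 0
            self.enabled = True
```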
One subject that has been intensively debated is now codified in Art 16 (1)
Regulation 2017/589, which requires an investment firm to monitor in real time
‘all algorithmic trading activity that takes place under its trading code, including that
of its clients, for signs of disorderly trading, including trading across markets, asset
classes, or products, in cases where the firm or its clients engage in such activities’.
This real-time monitoring task is assigned to the risk management department of the
investment firm and must be carried out independently of the trading staff (Art 16
(2)). The monitoring staff should be accessible to other market participants and
supervisory authorities. Moreover, Art 16 (5) requires real-time alerts for unexpected trading activities undertaken by means of an algorithm within 5 seconds of the relevant event. The investment firm is then obliged to take action, and in particular to withdraw the order. However, the 'kill functionality' is not mentioned in Art 16 (5).
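The 5-second deadline of Art 16 (5) is a latency requirement on the monitoring system itself. The sketch below illustrates that constraint; the anomaly rule (a simple order-count threshold) and all names are placeholders of our own, since the Regulation does not prescribe how 'unexpected' activity is detected.

```python
import time

class RealTimeMonitor:
    """Illustrative Art 16 (5)-style real-time alerting: an alert for
    unexpected trading activity must be raised within 5 seconds of the
    relevant event."""

    ALERT_DEADLINE = 5.0  # seconds, per Art 16 (5)

    def __init__(self, max_orders_per_interval=1_000):
        self.max_orders = max_orders_per_interval
        self.alerts = []  # (latency_seconds, order_count)

    def observe(self, event_time, order_count):
        if order_count > self.max_orders:           # placeholder anomaly rule
            latency = time.monotonic() - event_time
            if latency > self.ALERT_DEADLINE:
                raise RuntimeError("alert raised after the 5-second deadline")
            self.alerts.append((latency, order_count))
```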
During the downsizing of algorithmic trading the investment firm must also
control post-trade market and credit risk limits, including, in case of alerts, the
shutdown of an algorithm (Art 17 (1)). However, as Recital 59 of MiFID II already
clarifies, the mere use of algorithms in the post-trade phase does not constitute
relevant algorithmic trading.
8
Creativity of Algorithms and Copyright Law
Susana Navas

introduction
The possible emulation of human creativity by various models of artificial intelligence systems is discussed in this chapter. In some instances, the degree of originality of creations using algorithms may surprise even human beings themselves. For this reason, copyright protection of 'works' created by autonomous systems is proposed, which would take account of both the fundamental contributions of computer science researchers and the investment in human and economic resources that give rise to these 'works'.
8.1 creativity
[1] Boden, 'Computer Models of Creativity' (2009) 30(3) AI Mag 23.
[2] 'Probably the new thoughts that originate in mind are not completely new, because they have their seeds in representations that already are in the mind. To put it differently, the germ of our culture, all our knowledge and our experience, is behind each creative idea. The greater the knowledge and the experience, the greater the possibility of finding an unthinkable relation that leads to a creative idea. If we understand creativity like the result of establishing new relations between pieces of knowledge that we already have, then the more previous knowledge one has the more capacity to be creative' Boden, Artificial Intelligence and Natural Man (2nd edn, Basic Books 1987) 75.
perseverance, and hours of practice, study and tests, in which the so-called slow brain[3] can process ideas that arise as progress is made in research, or in a new artistic, architectural, gastronomical or musical style, to give some examples.
According to renowned scholar Margaret A Boden, three types of creativity arise successively.[4] The first, 'combinational creativity', consists of a new combination of familiar ideas through the association of ideas that were not previously related, or through analogous reasoning. These two mechanisms may result in the creation of complex conceptual structures, and therefore could be called creative. It could be said that this class of creativity is a natural property of the human mind, and functions through associations, images, symbols and analogies that vary according to the society and culture in which the person grows and is formed. Whatever the influence, this type of creativity is the easiest for human beings to use. In this sense, everyone, whether disabled or not (although not someone who is very seriously disabled), possesses at least a minimum level of creativity. This is a basic sort of creativity ‒ or creativity in its pure state (natural creativity)[5] ‒ which does not mean that its results must always and in all cases be protected by the law. It is a more limited and poorer type of creativity than those described below, since much of the information on which the ideas are based or the analogies are made comes from the context or from the tacit knowledge acquired in the medium in which the person lives, and not from in-depth knowledge of one or more matters or areas of knowledge. In many cases, the result of this creative combination does not pass beyond the stage of mere occurrence, and no creation worthy of legal protection is formed.[6] Many poetic images do not extend beyond this level of creativity.
The second model is 'exploratory creativity', which consists of exploring a style of thought or a conceptual space belonging to the person defining it, using a set of productive ideas ('generative ideas') that may be explicit but may also be totally or partially implicit.[7] In this type of creativity, the limits of the conceptual scheme are explored, and small changes or alterations that do not necessarily modify its basic initial rules are even introduced. The result of this exploration, insofar as it is sufficiently original, may be protected by law. This is a creativity that could be classified as 'professional' as opposed to 'natural'.
The third model, according to Boden, is 'transformational creativity'; in this model, the conceptual space for the style of the thought itself is transformed when one or more of the elements defining it are altered, resulting in new ideas that could not possibly have been generated before. These ideas are not only valuable and new, but also surprising, shocking, counterintuitive and a break with the status quo or with some of the ideas commonly accepted by the social, artistic, legal or economic sector in which the person works.[8] It takes years, therefore, for these ideas to be recognised and studied and for people (including other experts) to become accustomed to this new form of thinking in the area involved. This is the only type of 'professional' creativity to provide ideas that are different from previous ones, not only to their authors but to anyone else. It is different from the other two types of creativity, which generate ideas (or artefacts) that are mostly new for their creator but not for humanity, since either the idea already exists, or another person has had the same idea or created the same artefact without the two creators knowing each other.[9] It is this difference that provides a 'creative height' worthy of legal protection.

[3] Kahneman, Pensar rápido, pensar despacio (Barcelona 2013) 48‒50.
[4] Boden (n 1).
[5] Boden, 'Creativity and Artificial Intelligence' (1998) 103(1) Artif Intell 347‒356.
[6] Navas, 'Creation and Witticism in the User-Generated Online Digital Content' (2015‒2016) 36 Actas de Derecho Industrial 403‒415.
[7] Boden (n 1); Collins and Evans, Rethinking Expertise (The University of Chicago Press 2007).

Creativity of Algorithms and Copyright Law 223
‘Transformational creativity’ can only be a product of the mind, of the effort of
this person and no other. The personal imprint of the creator is fundamental. On
the other hand, in the case of ‘combinational’ and ‘exploratory’ creativity, the idea
(or the new artefact) may be created by another person, meaning the persona of the
creator is fungible, so that his or her imprint is neither determinant nor fundamental
for the ‘creative’ result. This does not prevent the appreciation that the author’s
imprint is stronger in the second type of creativity than in the first.
[8] Boden (n 1).
[9] Boden differentiates between psychological creativity ('P-creativity') and historical creativity ('H-creativity'). In the former, the creativity takes the person who produced the idea as the reference, even if other people already had the same idea previously. In the latter, as well as being P-creative, the idea is H-creative in the sense that nobody has had this idea before (see Boden (n 1)).
[10] However, it is thought that Kurzweil is very close to doing this: How to Create a Mind. The Secret of Human Thought Revealed (Penguin Books 2013).
[11] Schorlemmer, Confalonieri, and Plaza, 'The Yoneda Path to Buddhist Monk Blend' <www.iiia.csic.es/es/publications/yoneda-path-buddhist-monk-blend>. Date of access: April 2020; Benítez, Escudero, Kanaan, and Masip, Inteligencia artificial avanzada (UOC Barcelona 2013) 10.
imitate the way the human brain functions[12] have been developed, consisting of a large number of very simple components that work together. A fundamental feature of this type of network is its ability to learn and improve its behaviour through training and experience. Computer algorithms that combine ideas giving rise to improbable ‒ but not impossible ‒ ideas owe much to progress in both fuzzy computational logic and neuronal connections. Of the two working methods normally used by computer scientists, the bottom-up method, which focuses on solutions, seems more popular than the top-down method of concentrating on the problems.[13] There are artificial intelligence models based on the association of ideas, others that handle analogies in both fixed and flexible structures, and models that centre on induction, which is crucial for artistic and scientific creativity, taking into account case-based knowledge and reasoning, as well as theoretical models that suggest new questions and new approaches to answering these questions (explanation-based learning).[14]
The other two types of creativity are easier, in that they use a set of rules that can be specified sufficiently well for them to be converted into binary code, that is, translated into an algorithm in computer language that, by transforming the rules of the conceptual framework, could lead to results that are comparable or even superior to those of the most competent professionals. The music of Mozart is usually quoted as an example of 'exploratory creativity'; in exploring the inherent possibilities of the musical genres of his epoch, Mozart generally introduced relatively superficial changes that did not involve a fundamental transformation. Another case is AARON, the program created by Harold Cohen, which has created drawings and paintings that are exhibited in the world's leading art galleries.[15]

The use of genetic algorithms is fundamental to transformational creativity, meaning that the rules of the conceptual space or scheme of thought change themselves.[16] Thus, the random and sudden changes in the algorithm rules are similar to the mutations or crossings that occur in biology, giving rise to 'surprises' and a constant and automatic evolution of the computer program, the result of which is highly creative. This type of creativity requires the human being to possess not only a profound knowledge of their area but also a great deal of knowledge of
[12] Barrow, 'Connectionism and Neural Networks' in Boden (ed), Artificial Intelligence (2nd edn, Oxford University Press 1996) 135‒155.
[13] Galanter, 'What Is Generative Art? A Complexity Theory As a Context for Art Theory', ga2003_paper.pdf. Date of access: April 2020.
[14] Boden, 'Creativity' in Boden (ed) Artificial Intelligence (2nd edn, Oxford University Press 1996) 272‒277.
[15] Boden (n 1). Ramalho, 'Will Robots Rule the (Artistic) World? A Proposed Model for the Legal Status of Creation by Artificial Intelligence Systems' (13 June 2007). Available at SSRN: <https://ssrn.com/abstract=2987757>. Date of access: April 2020.
[16] Boden (n 1); Karnow, 'The Application of Traditional Tort Theory to Embodied Machine Intelligence' in Calo, Froomkin, and Kerr (eds) Robot Law (Edward Elgar 2016) 56‒58; Boden (n 14) 286‒289.
The art thus created is known as 'generative art'.[25] It features randomness in its composition, evolution and constant change in a complex or even chaotic environment created exclusively by the software.[26] Two examples of this type of art are, in the visual arts, the AARON program and, in music, the EMI program used by David Cope.
When the program produces results (‘works’) that cannot even be imagined by the
person who commissioned the development of the program or who used a program
that had already been created, this is usually called ‘evolutionary art’. Examples of
this type of art are the works of Karl Sims and William Latham. Karl Sims uses a
computer program that produces graphical images (12 at a time) that are radically
different from those produced randomly without favouring one style over another.
This remains the decision of the computer itself. This is ‘transformational creativity’
using a genetic algorithm. William Latham also uses a genetic algorithm to produce
sculptures that he is unable to imagine himself.
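The mechanism described here ‒ random mutation of the generating rules plus repeated selection of the most interesting variants ‒ can be illustrated with a toy genetic algorithm. The sketch below is a deliberately simplified stand-in: real evolutionary-art systems such as Sims' evolve image-generating programs and let a human pick among the offspring, whereas here the genome is just a list of numbers and a fitness function stands in for that selection; all names are our own.

```python
import random

def evolve(fitness, genome_len=8, population=12, generations=50, seed=0):
    """Toy genetic algorithm in the spirit of evolutionary art:
    each generation, random mutations ('surprises') alter the rules,
    and the best-scoring variants are kept as parents."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: population // 4]  # selection of the fittest quarter
        pop = [
            # mutation: small random perturbation of a parent's 'rules'
            [g + rng.gauss(0, 0.1) for g in rng.choice(parents)]
            for _ in range(population)
        ]
    return max(pop, key=fitness)

# Example: 'select' for genomes whose values sum as high as possible.
best = evolve(fitness=sum)
```

Replacing `fitness` with interactive human choice (as Sims and Latham do) turns the same loop into an engine whose outputs its own author cannot foresee.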
However, if the program is designed to interact with the medium and, in particular, to take external human behaviour into account, the result is 'interactive art'.[27] Here, the audience may influence the behaviour of the software up to a certain point, but this does not always occur. Indeed, the software may interpret this external factor in a way that differs from the audience's intention and gives rise to unusual and surprising artistic results. This type of art is similar to multi-media work but does not fully correspond to it.[28] In 2007, an art gallery in Washington, DC used a computer program written by Ernest Edmonds to interact with works by Mark Rothko, Clyfford Still and Kenneth Noland to commemorate the 50th anniversary of the 'color field' painters.
[25] Following the classification given by Boden (n 1). A wider taxonomy of generative art can be found in Boden and Edmonds, 'What Is Generative Art?' (2009) 20(12) Digital Creativity 21‒46.
[26] A wide definition of generative art is offered by Galanter: 'Generative art refers to any art practice where the artist cedes control to a system that operates with a degree of relative autonomy, and contributes to or results in a completed work of art. Systems may include natural language instructions, biological or chemical processes, computer programs, machines, self-organizing materials, mathematical operations, and other procedural inventions' (Galanter n 22).
[27] Boden (n 1).
[28] Esteve, La Obra Multimedia en la Legislación Española (Aranzadi Cizur Menor 1997) 29‒35.
[29] Yu, 'The Machine Author: What Level of Copyright Protection Is Appropriate for Fully Independent Computer-Generated Works?' (2017) 165 U Pa L Rev 1241; Yanisky-Ravid and Velez-Hernandez, 'Copyrightability of Artworks Produced by Creative Robots and the Concept of Originality: The Formality-Objective Model', available at SSRN: <https://ssrn.com/abstract=2943778>. Date of access: April 2020.
[30] <www.wipo.int/treaties/es/text.jsp?file_id=283698>. Date of access: April 2020. Referred to as the 'Berne Convention' from now on.
[31] Perry and Margoni, 'From Music Tracks to Google Maps: Who Owns Computer-Generated Works?' Paper 27, Law Publications (2010) <http://ir.lib.uwo.ca/lawpub/27>. Date of access: April 2020; Yanisky-Ravid and Velez-Hernandez (n 29).
[32] Rahmatian, Copyright and Creativity. The Making of Property Rights in Creative Works (Edward Elgar 2011).
[33] Marco, 'La formación del concepto de derecho de autor y la originalidad de su objeto' in Macías and Hernández (eds), El derecho de autor y las nuevas tecnologías. Reflexiones sobre la reciente reforma de la Ley de Propiedad Intelectual (La Ley 2008); Rahmatian (n 32).
[34] Marco (n 33); Yanisky-Ravid and Velez-Hernandez (n 29).
[35] Margoni, 'The Harmonisation of EU Copyright Law: The Originality Standard', available at SSRN: <https://ssrn.com/abstract=2802327>. Date of access: April 2020.
of their author and, as a result of this, legal protection. This minimum effort must be beyond the ordinary, the routine or the obvious. Thus, the chronological or alphabetical ordering, or the putting together of other people's works without any coherence, would not give rise to a 'new work' and, consequently, would lack protection. In this sense, the salient factor determining whether the originality requirement is met is the creation process, and its result is less important. Indeed, compared to the 'classic' (traditional) model still present in copyright legislation in Europe and the United States, in which the author creates from nothing, from their own inspiration and alone,[36] progress in artificial intelligence, the new technologies and the Internet provide a much more dynamic model in which the author can hold a dialogue with the public about their work, interacting with them and with their colleagues. The author's model forged from the network of networks therefore puts the accent not so much on the author as on the process of creating the work. Technology allows the work to be in permanent evolution: the creative process does not end, but is always actively improving, transforming or perfecting the work.[37]
In the classic approach described, the work resulting from an algorithm with
learning capacity that can evolve and generate original works unimaginable to the
human being who wrote the algorithm could not be considered a work protectable
by copyright. The ‘intelligent’ imprint of the algorithm is not comparable with the
creative effort of a physical person, however minimal this may be. However, if the
emphasis is placed on the creative process itself rather than on the result, it can be
seen that in certain types of algorithms, above all the genetic ones, the process of
creating the work is similar to the creation process that only a few human creators
can carry out. This is the case with transformational creativity, a fundamental
element in evolutionary art. In our opinion, similar considerations could apply with
regard to exploratory creativity. It is these two types of human creativity that are the
simplest for artificial agents to imitate, in so far as it is possible to emulate the
functioning of the human brain when working with a scheme of predefined rules.
On the other hand, it is most difficult to emulate the working of the brain in natural
creativity because of the sheer quantity of nuances, ambiguities, generalisations and
non-professional tacit knowledge that are involved. This natural creativity is within
the reach of any physical person who, with a minimum of creative effort, can
produce a work deserving legal protection under intellectual property legislation.
When the work is created by an algorithm,[38] the creation process is very similar to
[36] Grimmelmann, 'Copyright for Literate Robots' (2016) 101 Iowa L Rev 657: 'Copyright's ideal of romantic readership involves humans writing for other humans . . . Copyright ignores robots. . .'.
[37] For a proposal for a new model of copyright based on new technologies and the Internet, see Navas, 'Dominio público, diseminación online de las obras del ingenio y cesiones "creative commons" (Necesidad de un nuevo modelo de propiedad intelectual)' (2011‒2012) 32 Actas de Derecho Industrial 239‒262.
[38] The ideas or principles involved in the algorithm, the computational logic and the programming language are not protected by copyright. Only the expression of the computer program is protected, as made clear by Recital 11 and Art 1.2 of Directive 2009/24/EC of the European Parliament and Council of 23 April 2009 on the legal protection of computer programs (codified version), OJ L 111, 5.5.2009, 16‒22.

the one the human brain would use in the case of transformational and exploratory creativity, which might argue for copyright protection for a result produced by an algorithm. The personal imprint of the author as a physical person is emulated almost perfectly by the algorithm, or is even superior to what they might achieve.[39] On the other hand, works in which the process of creation is based on natural creativity, a field in which artificial intelligence is still far from emulating the human brain, would remain outside the copyright protection regime.

Therefore, a result of the three types of creativity described above will be protectable by copyright if it is made by a physical person and there is a minimum of creative effort, while a work in which the creative process replicates almost identically the creative process of a human will only be protected by copyright if it is the product of the 'imagination' of an algorithm. This can occur more often in transformational creativity and evolutionary art than in exploratory and merely generative art and, to a much lesser extent, where the creativity can be classified as 'natural', in purely combinatory processes. In these cases, the algorithms faithfully follow instructions, having very little, if any, learning capacity, and acting mechanically without introducing changes. Where the work created by the algorithm can be protected, the originality must have a component of novelty that is not required for works of human creativity. In fact, the issue of whether or not works created by algorithms have legal protection brings into question whether the minimum creative effort criterion for the protection of works of human intellect should be revised, the threshold raised and a creative height required that seems to have disappeared (the objective approach).[40] As part of this creative height, the element of novelty must still be taken into consideration. Machines can certainly contribute to improved self-observation and self-knowledge for human beings, allowing them to see the intellectual potential that is all too frequently wasted.

A challenging question must therefore be answered. Under the current copyright model, the term 'work' can only apply to the work of a physical person, not to that of a machine or an animal, even if they are 'creative', so 'work' may not be appropriate for objects created by an algorithm. The use of other terms, such as 'result', could be the subject of an independent concept in intellectual property legislation, requiring definition or differentiation.
39
In fact, the popularisation of culture and art has, through the use of technology and publication
on the Internet, reached levels that are almost unimaginable, with mere popular occurrences
being considered as brilliant ideas and as works that make artificial agents with creative capacity
appear much more intelligent than perhaps they are and, above all, appear more (even much
more) intelligent than many humans. At least the cognitive biases of people will not appear
here (for more on this perspective, see Navas, ‘Creation and Witticism in the User-Generated
Online Digital Content’ (2015‒2016) 36 Actas de Derecho Industrial 403‒415).
40 Yanisky-Ravid and Velez-Hernandez (n 29).
Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:31:17, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.009
230 Susana Navas
41 As an example, in Spain, Art 5.1 Ley de propiedad intelectual (BOE 22 April 1996) states that
the author may only be a ‘natural person’; § 2(2) of the Urheberrechtsgesetz in Germany <www
.gesetze-im-internet.de/urhg/inhalts_bersicht.html> (Date of access: April 2020) considers that
only works that consist of ‘persönliche geistige Schöpfungen’ can be considered objects of
protection; Art L 111-1 of the French Code de la propriété intellectuelle <www.legifrance
.gouv.fr/affichCode.do?cidTexte=LEGITEXT000006069414> (Date of access: April 2020)
alludes to ‘ouvrages de l’esprit’, which implies that these are created by man. In the same
sense, see the wording of s 9 UK Copyright, Designs and Patents Act (1988) <www.legislation
.gov.uk/ukpga/1988/2> (Date of access: April 2020). Likewise, see s 2(1) Ireland Copyright
and Related Rights Act (2000) <www.irishstatutebook.ie/eli/2000/act/28/enacted/en/html>.
Date of access: April 2020.
42 Neuberger, ‘Computer Ownership Is not Monkey Business: Wikimedia and Slater Fight over
Selfie Photographs’ (2014) 20(5) IP Litigator 33; Ricketson, ‘The Need for Human Authorship –
Australian Developments: Telstra Corp Ltd v Phone Directories Co Pty Ltd (Case Comment)’
(2012) 34(1) EIPR 54: ‘the need for author to be human is a longstanding assumption’;
McCutcheon, ‘Curing the Authorless Void: Protecting Computer-generated Works Following
ICETV and Phone Directories’ (2013) 37 Melbourne University Law Review 46; Ramalho
(n 15).
43 Yanisky-Ravid and Velez-Hernandez (n 29); Hertzmann, ‘Can Computers Create Art?’, available at arXiv:1801.04486v6 [cs.AI], 8 May 2018. Date of access: April 2020.
44 Art L 113-2 Code de la propriété intellectuelle; Art 8 Ley de propiedad intelectual; Art 7 Legge di
protezione del diritto d’autore e di altri diritti conessi al suo esercizio 23 April 1941 <www
.interlex.it/testi/l41_633.htm#6>. Date of access: April 2020; Art 19 Código de derechos de autor
y derechos conexos in Portugal <www.wipo.int/wipolex/es/text.jsp?file_id=198457>. Date of
access: April 2020.
45 Art 2.1 Directive 2009/24/EC of the European Parliament and Council, 23 April 2009, on the
legal protection of computer programs. This is specifically admitted in the LPI for Spain,
Art 97.
46 Art 2.3 Directive 2009/24/EC of the European Parliament and Council, 23 April 2009, on the
legal protection of computer programs; Art L 113-10 Code de la propriété intellectuelle; § 69b
Urheberrechtsgesetz (Germany).
Creativity of Algorithms and Copyright Law 231
Act)47 under which the employer of the person who carries out the work is
considered to be the author, which thus differentiates between the ‘author in fact’
and the ‘author in law’. The author in law is the owner of the rights to exploitation
and exercises them in relation to the work produced by the author in fact.48
The same situation occurs in the case of the production of audio-visual works, for
which rights are transmitted by the director of the work to its producer (Art 15.2
Berne Convention).49 Similarly, the rights to anonymous and pseudonymous works
may be exercised by a legal person (Art 15 Berne Convention),50 as can the so-called
right of dissemination as suggested by Ana Ramalho.51
Thus, whether by legal fiction or by presumption, copyright recognises exceptions
to the rule that only a physical person, and only an author in fact, can own the rights
to a work.
From this, we may start from the premise that only works created by an algorithm
in a process that emulates the creative process of a human brain can be protected,
which, according to computer scientists, applies especially to cases of exploratory
and transformational creativity. Here the legal fiction is established that the
‘author in law’ is the individual or organisation that commissioned the algorithm
in question, or that used an algorithm created previously for other purposes but
which ended up producing the ‘original’ work. Such an organisation or individual
will own both the moral rights and the rights to economic exploitation (the same
rights as would be held in any other case in which the author in fact was an
individual). The author in fact would be the ‘robot machine’.52
There are already legal systems – all of them within the common-law legal
tradition – that have admitted such an interpretation: the Copyright, Designs and
Patents Act (1988) in the UK, section 9 paragraph 3; the New Zealand Copyright Act
(1994), paragraphs 2 and 5;53 the Ireland Copyright and Related Rights Act
(2000), Part I, section 2 and Chapter 2, paragraph 21; and the South Africa Copyright
Act (1978), No 98.54
These laws define computer-generated works as works ‘generated by a computer
in circumstances such that there is no human author’, where ‘the person by
whom the arrangements necessary for the creation of the work are undertaken’ is
47 The text can be consulted at <www.copyright.gov/title17/title17.pdf>. Date of access: April
2020.
48 Lee, ‘Digital Originality’ (2012) 14(4) Vanderbilt J Ent and Tech Law 919.
49 Arts 88‒89 Ley de propiedad intelectual (Spain); § 89 Urheberrechtsgesetz (Germany).
50 Art L 113-6 Code de la propriété intellectuelle (France), art 6 Ley de propiedad intelectual
(Spain); § 10 Urheberrechtsgesetz (Germany).
51 Ramalho (n 15).
52 Samuelson, ‘Allocating Ownership Rights in Computer-Generated Works’ (1985) 47 U Pitt L R
1185, 1224; Yu (n 29).
53 <http://legislation.govt.nz/act/public/1994/0143/105.0/DLM345634.html>. Date of access:
April 2020.
54 <www.nlsa.ac.za/downloads/Copyright%20Act.pdf>. Date of access: April 2020.
considered to be the author owning the rights in the work that was created entirely
by the computer program.55 However, it is not clear that these standards admit,
without further ado, the protection of works created autonomously without human
involvement. The expression ‘arrangements necessary for’ does not necessarily mean
that they are contemplated by the rule. The relationship between these ‘arrange-
ments’ and the final result is not easily understood, nor is it clear whether these
arrangements must be made by a human or whether it is sufficient that they are
made by an expert system. That is, it is not clear whether there must be a person
guiding the ‘arrangements’ in the creative process; this would not match the
definition given of ‘computer-generated works’.56
If it is not admitted that the author in fact is an expert system, given the premise
on which authors’ rights are based, as mentioned earlier, there is always the
possibility that the result of the ‘spontaneity’ of an algorithm without any human
presence passes to the public domain.57 Yet another possibility could be some right
sui generis to ensure compensation for anyone who invests human and economic
resources in creating an expert system or intelligent agent, so that a user who
acquires the system to create a work must pay.58 In fact, there could be a regulation
system similar to that for databases,59 independently of whether there are authors’
rights in the result of the creativity of the algorithm or any of its parts.
The payment of this compensation for the economic and human investment
made could be carried out electronically using automated systems, to avoid discour-
aging investment in creating, in innovating or in technological progress in general.60
61 Directives are the legislative technique used by the EU legislator in the harmonisation of copyright to which we refer.
62 From the day on which legal personality of intelligent robots is admitted (Chopra and White,
‘Artificial agents – Personhood in law and philosophy’, <www.sci.brooklyn.cuny.edu/~scho
pra/agentlawsub.pdf> (Date of access: April 2020); Wettig and Zehendner, ‘A Legal Analysis of
Human and Electronic Agents’ (2004) 12 Artif Intell Law 111‒135), there will be no legal or
theoretical problem in attributing to them the condition of authors, not only ‘in fact’ but also
‘in law’, exercising the rights through other legally designated subjects who may be individuals
or organisations (the person who commissioned the program and/or invested economic
resources in its preparation or even an unconnected third party).
63 Follow up to the EU Parliament Resolution of 16 February 2017 on Civil Law Rules on
Robotics, 2015/2103 INL.
64 Ramalho (n 15).
It must not be forgotten that technology and binary code entered the world of
authors some time ago. Their presence is emphasised further in the regulation of
digital rights management65 and technological protection measures,66 which have
had to become ‘intelligent’ rights and measures, incorporating ideas and legal
concepts from law, computer science and, especially, artificial intelligence. We can
thus speak, at least among legal scholars and the creators of algorithms, of
‘computational copyright law’.67 Perhaps it is time, as proposed by Gervais, to think of a new
Berne Convention.68
For now, a work made by an algorithm, ‘The Portrait of Edmond de Belamy’, has for
the first time been auctioned by Christie’s in New York (23‒25 October 2018).69
65 These could consist of algorithms that autonomously determine whether or not what is used is
legal, as well as using computational logic to represent the standards relating to copyright.
66 The leading technology at the time ‒ the blockchain ‒ must be taken into account (Navas,
‘User-Generated Online Digital Content as a Test for the EU Legislation on Contracts for the
Supply of Digital Content’ in Schulze, Staudemeyer, and Lohsse (eds) Contracts for the Supply
of Digital Content: Regulatory Challenges and Gaps (Nomos Verlag 2017) 229‒255).
67 Schafer, Komuves, Niebla, and Diver (n 55).
68 Gervais, (Re)structuring Copyright. A Comprehensive Path to International Copyright Reform
(Edward Elgar Publishing 2017).
69 <www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-
9332-1.aspx>. Date of access: April 2020.
9
“Wake Neutrality” of Artificial Intelligence Devices
Brian Subirana, Renwick Bivings, and Sanjay Sarma
Introduction
This chapter introduces the notion of “wake neutrality” of artificial intelligence
devices and reviews its implications for wake-word approaches in open conversational
commerce (OCC) devices such as Amazon’s Alexa, Google Home and Apple’s Siri.
Examples illustrate how neutrality requirements such as explainability, auditability,
quality, configurability, institutionalization, and non-discrimination may impact the
various layers of a complete artificial intelligence architecture stack. The legal
programming implications of these requirements for algorithmic law enforcement
are also analysed. The chapter concludes with a discussion of the possible role of
standards bodies in setting a neutral, secure and open legal programming voice name
system (VNS) for human-to-AI interactions to include an “emotional firewall.”
I don’t need a girlfriend. My conversational device gives me everything I need and more.
(MIT student, summer 2017, two weeks after the first conversations with Amazon’s Alexa)
1 By “standard” we mean that from the users’ point of view the way to engage (the dial pad in the
case of a phone call) is the same and is unrelated to the infrastructure choices made by the
different parties involved (make of phone, network provider). On the web it is also standard
since the different browsers work the same way and, again, the functionality is mostly unrelated
to the type of computer you are using or the ISP provider you have.
Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:32:39, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.010
236 Brian Subirana, Renwick Bivings, and Sanjay Sarma
may even be impossible. For example, as of mid-2019 Siri still wouldn’t “talk” to Spotify.
You asked it, “Play Bruce Springsteen by Spotify” and it politely responded, “I can
only talk to Apple Music” ‒ where you don’t have an account. Not only is there a
lack of interoperability but, even if you are routed, the service is inconsistent
across devices. For example, Google Home and Amazon devices don’t understand the same
flavor of English and won’t take you to the same service even if you ask for the
same thing.
To distinguish the two behaviors, making explicit the difference between the
phone-network and AI wake examples above, we introduce the notion of
“Wake Neutrality Markets” in the following definition:
We say that a market has “Wake Neutrality” if there are standard ways to activate
services that don’t favor a particular supplier. This includes:
1. Product Wake Neutrality: The same products can be consumed independently of
the market operator chosen.
2. Naming Wake Neutrality: Operator switching costs are not a function of the
number of products consumed. In particular, products have the same names
regardless of the market operator chosen.
3. Intelligence Wake Neutrality: Operators don’t use intelligence derived from wake
requests to give an unfair advantage to a particular product supplier.
4. Net Wake Neutrality: Market operators cannot lower the quality of service of a
given supplier to favor another one.
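To make the definition concrete, the four conditions can be sketched as checks over a toy market model; all class and function names below are illustrative inventions, not drawn from the chapter or from any real standard:

```python
from dataclasses import dataclass

@dataclass
class Operator:
    """Toy model of a market operator (e.g. a voice-assistant platform)."""
    catalog: frozenset   # products reachable through this operator
    names: dict          # product -> wake name used on this operator
    qos: dict            # supplier -> quality-of-service level

def product_wake_neutral(ops):
    # 1. The same products can be consumed regardless of operator.
    return all(op.catalog == ops[0].catalog for op in ops)

def naming_wake_neutral(ops):
    # 2. Products keep the same names across operators, so switching costs
    #    do not grow with the number of products consumed.
    return all(op.names == ops[0].names for op in ops)

def net_wake_neutral(ops):
    # 4. No operator lowers one supplier's quality of service to favor another.
    return all(len(set(op.qos.values())) <= 1 for op in ops)
```

Condition 3, intelligence wake neutrality, concerns how operators use wake-request data internally; it is not observable from this kind of external market state, which is precisely why it is the hardest of the four to audit.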
To prevent market dominance and ensure Wake Neutrality, regulation can help.
For example, EU regulators announced an antitrust investigation into Apple in
connection with the music and Spotify example mentioned above.2 Apple’s response
was to “comply” by enabling Spotify on Siri in September 2019. However, iOS 13,
released that month, introduced “voice control,” a proprietary way to interact with
some features of Apple devices using voice. This means Apple can infer, among
other things, what you listen to on any music platform and, more broadly, your mood
based on the tone of your voice as you switch applications or respond to specific
messages.3 EU regulators’ initiatives may have made the market more product wake
neutral at the expense of intelligence wake neutrality.
2 Toplensky, “Brussels poised to probe Apple over Spotify’s fees complaint. EU to launch formal
competition inquiry as music streaming battle escalates” The Financial Times (5 May 2019).
3 Recordings are stored as stated in: <https://support.apple.com/en-us/HT210657>. Apple gives
you an id which is different than your personal id within Apple so that advertising requests are
not sent to you based on your voice profile.
4 Borden and Armstrong, “Tiny Sensors, Huge Consequences: Unregulated Inferences from Big
Data Create Ethical and Legal Dilemmas for Businesses and Consumers” (2016) 12(3) SciTech
Lawyer 28‒30.
5 Conrad and Branting, “Introduction to the Special Issue on Legal Text Analytics” (2018) 26(2)
Artificial Intelligence and Law 99‒102 <https://doi.org/10.1007/s10506–018-9227-z>. Springer
Netherlands.
6 Peppet, “Regulating the Internet of Things: First Steps Toward Managing Discrimination,
Privacy, Security and Consent” (2014) 93(1) Texas Law Review 85–178.
7 Adib and Katabi, “See Through Walls with WiFi!” (2013) 43.4 ACM.
high-speed cameras,8 or even determine whether a person has been diligent about
car maintenance using a mobile phone’s built-in microphone.9 Recent research on
AI and medicine suggests that these devices may soon even be able to predict certain
medical conditions and even anticipate suicide attempts better than humans can.10
Should they be allowed to do so? Under what conditions? The unexplainability of
such algorithms prevents formal scrutiny by smart law-enforcement agents and poses
many as yet unresolved security11 and legal issues.12
We feel that the lack of standard interoperability in conversational commerce
may lower the adoption rate of simple voice interactions and eventually affect
the well-being of the industry more broadly. Behind standardization choices, much
is at stake in terms of how our future societies will evolve, as various government
agencies have agreed.13
8 Davis, Rubinstein, Wadhwa, Mysore, Durand, and Freeman, “The Visual Microphone:
Passive Recovery of Sound from Video” (2014) 33(4) ACM Trans Graph.
9 Siegel, Bhattacharyya, Kumar, and Sarma, “Air Filter Particulate Loading Detection Using
Smartphone Audio and Optimized Ensemble Classification” (2017) 66 Engineering Applica-
tions of Artificial Intelligence 104‒112.
10 Loh, “Medicine and the Rise of the Robots: A Qualitative Review of Recent Advances of
Artificial Intelligence in Health” (2018) BMJ Leader.
11 Brundage, Avin, Clark, Toner, Eckersley, Garfinkel, and Anderson, “The Malicious Use of
Artificial Intelligence: Forecasting, Prevention, and Mitigation” (2018) arXiv preprint
arXiv:1802.07228.
12 Stern, “Introduction: Artificial Intelligence, Technology, and the Law” (2018) 68(supplement 1)
University of Toronto Law Journal 1‒11.
13 Cath, Wachter, Mittelstadt, Taddeo, and Floridi, “Artificial Intelligence and the ‘Good
Society’: The US, EU, and UK Approach” (2018) 24(2) Science and Engineering Ethics
505‒528.
14 Peppet (n 6).
“Wake Neutrality” of Artificial Intelligence Devices 239
could, for example, mean AI modeling humans to the extent of pushing sales by
manipulating customer desires, and perhaps even changing personalities at the
software’s behest. In the short term, this is an unattainable goal, but we certainly
live in an era where the amount of personally identifiable information that can be
recorded is increasing, opening up unprecedented opportunities to design economic
markets and innovation policies.15,16
This chapter also examines how to algorithmically enforce wake neutrality in the
behavior of these new powerful AI technologies, including avoiding bias toward
certain groups of humans and types of behaviors17 and preventing the unintended
emergence of isolated platforms that limit the potential of these technologies.18
Without legally enforceable neutrality rules we cannot ensure that AI devices do
not distort competition beyond what we would consider fair.19,20,21,22 From a legal
point of view, computers can no longer be seen as simple communication tools for
message transmission in commerce. Instead, they are powerful AI legal program-
ming23 agents with human-like personalities that operate in Internet of Things (IoT)
environments, engaging with humans using natural-language open conversational
commerce (OCC) and initiating transactions that generate agreements with third
parties through automated contracts.
15 Milgrom and Steven, “How Artificial Intelligence and Machine Learning Can Impact Market
Design” (2018) National Bureau of Economic Research, No w24282.
16 Agrawal, Joshua, and Avi, “The Economics of Artificial Intelligence” McKinsey Quarterly,
April 2018.
17 Fessler, “Amazon Alexa Is Now Feminist and Is Sorry If That Upsets You” Quartz at Work,
17 January 2018 <https://qz.com/work/1180607/amazons-alexa-is-now-a-feminist-and-shes-sorry-
if-that-upsets-you>.
18 Smith, “Siri Can Finally Control Streaming Apps like Spotify in iOS 12,” 7 June 2018 <https://
bgr.com/2018/06/07/ios-12-features-siri-shortcuts-streaming-apps-spotify>.
19 Khan, “Amazon’s Antitrust Paradox” (2016) 126 Yale Law Journal 710.
20 Frieden, “The Internet of Platforms and Two-Sided Markets: Legal and Regulatory Implications for Competition and Consumers” (October 2017). SSRN: <https://ssrn.com/abstract=
3051766 or http://dx.doi.org/10.2139/ssrn.3051766>.
21 Hovenkamp, “Whatever Did Happen to the Antitrust Movement?” (24 August 2018) Notre Dame
Law Review, forthcoming; U of Penn, Inst for Law & Econ Research Paper No 18-7. Available at
SSRN: <https://ssrn.com/abstract=3097452 or http://dx.doi.org/10.2139/ssrn.3097452>.
22 Parsheera, Ajay, and Avirup, “Competition Issues in India’s Online Economy” No 17/194. 2017.
National Institute of Public Finance and Policy, New Delhi, Working paper No 194.
23 Subirana and Bain, “Legal programming” (2006) 49.9 Communications of the ACM 57‒62.
By 2020, over 50 percent of Americans were expected to be using voice search at
least once a day.24 The growth of conversational commerce devices is unprecedented,
doubling that of mobile phones and expected to reach 50 percent of US homes by 2020. In this
approach, the conversational commerce agents are a simple channel to a traditional
sandbox webpage or mobile application inheriting the legal framework of the
channelled service. Therefore, these interactions are not truly open because they
are mediated by a third party. Legal terms and conditions are established when the
human user configures the system and contracting changes are done with this third-
party sandboxed service (or “garden owner”).
This first closed-garden approach raises important issues in terms of law enforce-
ment since it is unclear how automated legal enforcement is to be performed. For
example, the service owner can extract undesirable personally identifiable infor-
mation (PII) from speech, including gender, race and mood. Serious legal hurdles
are also encountered when generic conversational devices are embedded in public
settings. For example, current proprietary devices, such as Amazon’s Echo, require
users to agree to relevant terms of use when downloading a “skill,” which is an app
that runs on top of the Alexa platform. What happens when such skills are embed-
ded in the cloud of a public, generic-use platform? More generally, how should we
deal with the fact that in a solely voice-based interaction with a conversational
device, there may be no point at which a user agrees to any terms whatsoever? It
seems likely that the current model of having users check a box or otherwise
physically agree to lengthy terms of service will lose applicability in a voice-based
environment. There is also the issue of both user and device authentication. What
does it mean to log in via voice in a public setting? Conversely, how can an
individual know that the device they are talking to is really what it claims to be?
While the effectiveness of biometric identification by voice will probably increase
rapidly in the coming years, users’ need to authenticate devices will still present
problems likely to fall within the purview of the law.
Finally, conversational devices also present unique security threat models that
pose new legal questions. What happens if a malicious actor uses a particular
individual’s recorded voice to authenticate themselves improperly? More subtly, a
malicious actor may intercept interactions between two machines regarding a
certain user’s sensitive information, which poses the question of exactly who is liable
when devices are in public spaces. Current voice recognition technologies also
utilize certain strategies in parsing verbal terms that are subject to change. A user
request might be parsed improperly, leading to unintended results and damages. In
a voice-only space, providing secondary authorization to certain requests could
prove burdensome, but the lack of such a process would clearly raise legal issues
24 Jeffs (2018), “OK Google, Siri, Alexa, Cortana; Can You Tell Me Some Stats on Voice
Search?” Branded3 <www.branded3.com/blog/google-voice-search-stats-growth-trends>
Accessed June 2019.
as well. Even if these issues are solved, closed-garden solutions fragment the market
and prevent a single-user experience.
25 Subirana, Taylor, Cantwell, Jacobs, Hunt, Warner, Stine, Graman, Stine, and Sarma, “Time
to Talk: The Future of Brands Is Conversational,” MIT Auto-ID Laboratory Memo, January
2018. Available at <www.researchgate.net/publication/328733947_Time_to_talk_The_Future_
for_Brands_is_Conversational>. 10.13140/RG.2.2.10490.75208.
26 Helbing, “Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From
Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies” Towards
Digital Enlightenment (Springer 2018) 47‒72.
27 Crosby et al., “Blockchain Technology: Beyond Bitcoin” (2016) 2 Applied Innovation 6‒10.
28 De León and Avi, “The Impact of Digital Innovation and Blockchain on the Music Industry”,
Inter-American Development Bank (2017).
Table 9.1 Six legal requirements to achieve and enforce wake neutrality
9.2.1.1 Configurability
Open conversational commerce, in the foreseeable future, will depend on imperfect
speech-to-text and speech-to-personality inferences, which make neutrality more
intricate than in simple web browsing or phone dialing. While speech-
31 Hinton et al., “Deep Neural Networks for Acoustic Modeling in Speech Recognition: The
Shared Views of Four Research Groups” (2012) 29(6) IEEE Signal Processing Magazine 82‒97.
32 Shen, Hung, and Lee, “Robust Entropy-Based Endpoint Detection for Speech Recognition in
Noisy Environments” (1998) 98 ICSLP paper 0232.
33 Asay, “Consumer Information Privacy and the Problem(s) of Third-Party Disclosures” (2013)
11(5) Northwestern Journal of Technology and Intellectual Property 358.
34 Hu, “Big Data Blacklisting” (2015) 67(5) Florida Law Review 1735‒1810.
35 Taddeo and Floridi, “Regulate Artificial Intelligence to Avert Cyber Arms Race” (2018)
556(7701) Nature 296‒298.
36 Paez and La Marca, “The Internet of Things: Emerging Legal Issues for Businesses” (2016)
43(1) Northern Kentucky Law Review 29‒72.
37 Webb, Pazzani, and Billsus, “Machine Learning for User Modeling” (2001) 11(1) User Modeling
and User-Adapted Interaction 19‒29.
38 Witten et al., Data Mining: Practical Machine Learning Tools and Techniques (Morgan
Kaufmann 2016).
39 Jansen et al., “Defining a Session on Web Search Engines” (2007) 58(6) Journal of the
American Society for Information Science & Technology 862‒871. EBSCOhost, doi:10.1002/
asi.20564.
40 Murray, Pouget, and Silva, “Reflections of Depression in Acoustic Measures of the Patient’s
Speech” (2001) 66(1) Journal of Affective Disorders 59‒69.
similarly diagnose the mental states of users? Neutrality here will probably need to
include some mechanism for decoupling the emotive information contained in
speech from the content of the speech itself, which will present issues from a
semantic interpretation perspective. On the other hand, well-intentioned services
may use these powerful health inferences to provide valuable alerts to users and
early preventive treatment options that could result in great cost savings and
significant health improvements over time.
It is evident, therefore, that voice as a widespread medium for interactions with
devices connected to the Internet impacts neutrality in several dimensions such as
accuracy, personally identifiable information (PII), machine-learning effort, cross-
device identification, and semantic interpretation.
The above discussion implies that in order to effect neutrality, there must be a
way to configure the speech-to-text and speech-to-personality algorithms so that users
can decide which PII is shared and, most importantly, whether they want some form
of feedback to be able to interrupt the sending of data to the wrong service in case
there are inaccuracies in any of the conversions.
Thus, a requirement for wake neutrality is that of configurability. Open
systems must have a way to set options so that neutrality is tailored to various modes
based on the particular privacy preferences of the user and service. Some examples
could be:
Speech incognito mode: The handling service only receives the translated text. No voice information is passed along.
Native speech mode: The handling service receives the full speech via an encrypted point-to-point connection.
Emotional mode: Speech is converted to text plus basic sentiment-analysis information.
Although privacy and security are perhaps the most important things to configure,
there are many other aspects of voice devices that may benefit from some form of
configuration standardization. For example: how is sound level defined? Are there
reserved words to turn on IoT devices such as a light bulb? Are there industry-specific
commands like “shopping list” or “checkout”?
9.2.1.2 Institutionalization
Who should establish the configuration options to be implemented? For an
open approach to flourish, we feel an international standards body is needed to
facilitate setting the standards ensuring wake neutrality. There are many options,
including adoption by existing bodies such as GS1, the W3C, or the IETF. Whatever
the choice, an organization must take charge of setting the standards to be
implemented. In the foreseeable future, AI may continue to progress, creating new
options and challenges for such an institution.
246 Brian Subirana, Renwick Bivings, and Sanjay Sarma
9.2.1.3 Non-discrimination
Perhaps the most important legal right to preserve is the development of a
non-discriminatory market for wake services. This means that the service offered is not
biased in any way and that no particular supplier or customer receives preferential
treatment. In addition, in the case of conversational commerce, provisions need to be
made for special cases such as users who are mute, blind, or deaf.
41 Hacker, “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law” (2018) 55 Common Market Law Review 1143‒1186.
be registered to prevent malicious phishing practices. With voice, one can also
prevent close-sounding names from being registered. However, there is no clear way to set
phonetic boundaries, and some errors seem unavoidable, especially in noisy
environments. Establishing such boundaries may prove increasingly difficult to manage as
AI algorithms develop personalized user voice profiles.
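A registry might screen candidate wake names against existing ones with a similarity threshold. The sketch below uses plain string similarity as a stand-in for a real phonetic comparison (which would operate on phoneme sequences from a grapheme-to-phoneme model); the threshold and names are purely illustrative:

```python
import difflib

def too_close(candidate: str, registered: list[str], threshold: float = 0.8) -> bool:
    """Reject a new wake name if it is too similar to an existing one.

    A production registry would compare phoneme sequences; spelling
    similarity is used here only to illustrate where the boundary-setting
    problem arises.
    """
    return any(
        difflib.SequenceMatcher(None, candidate.lower(), name.lower()).ratio() >= threshold
        for name in registered
    )

registry = ["alexa", "cortana"]
print(too_close("alexxa", registry))  # True: close-sounding, would be refused
print(too_close("jeeves", registry))  # False: distinct enough
```

Wherever the threshold is set, some near-boundary names will be misclassified, which is exactly the residual-error problem the text describes.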
42 Wu, “Network Neutrality, Broadband Discrimination” (2003) 2 Journal on Telecommunications & High Technology Law 141.
43 Coase, “The Federal Communications Commission” (1959) 2 The Journal of Law & Economics 1‒40.
44 See the Communications Act of 1934, 47 USC § 151 et seq. and Coase (n 43).
45 Nichols, “Redefining Common Carrier: The FCC’s Attempt at Deregulation by Redefinition” (1987) 3 Duke Law Journal 501‒520.
46 Levi, “Not with a Bang but a Whimper: Broadcast License Renewal and the Telecommunications Act of 1996” (1996) 29 Connecticut Law Review 243.
47 Holmes, “Common Carriers and the Common Law” (1879) 13(4) American Law Review 609‒631.
48 Gioia, “FCC Jurisdiction over ISPs in Protocol-Specific Bandwidth Throttling” (2009) 15(2) Michigan Telecommunications and Technology Law Review 517‒542.
49 Brown, “Revisiting the Telecommunications Act of 1996” (2018) 51(1) PS: Political Science & Politics 129‒132. doi:10.1017/S1049096517002001.
50 Klarman, From Jim Crow to Civil Rights: The Supreme Court and the Struggle for Racial Equality (Oxford University Press 2004).
51 Parks and Haskins, Rosa Parks: My Story (Dial Books 1992).
52 Civil Rights Act of 1964, Title VII, Equal Employment Opportunities (1964).
and similar establishments.”53 One of the key differences between the Act of
1964 and previous civil rights legislation was that after its passing, the Supreme
Court ruled in the landmark case of Heart of Atlanta Motel v United States that the
law applied not only to the public sector but also to the private sector, on the
grounds that Congress has the power to regulate commerce between the States.54
The passing of civil rights legislation put in place a set of standards by which to
judge future developments, allowing us to pass some judgment on how fair, or how
net neutral, subsequent developments in conversational IoT are. The idea that ISPs
should not be able to throttle speeds to certain users over others is broadly based on
the notion of fairness and equal access to common goods, as well as legal recourse in
cases in which such expectations are not met. Net neutrality is a way of positively
promoting the fundamentally democratic and decentralized nature of the Internet.
While the term “net neutrality” was first coined by Professor Tim Wu at the
beginning of the twenty-first century,55 many of the fundamental ideas associated with
it were already being debated in the 1800s, with some legal scholars asking whether
telegrams sent and received by two individuals in the same state, but routed through
another state, would be designated “interstate commerce” (as seen in the Civil Rights
Act of 1964, this designation can be crucial for federal regulation).56 More recent ideas
relating to antitrust and monopoly law57 are being developed as part of efforts to afford
consumers more access to ideas and creative works.58 There has been extensive debate
as to when, where, and how to apply net neutrality, and in certain cases this debate has
led to actual changes in FCC policy.59 In one prominent complaint filed with the
FCC against Comcast, the company was alleged to have been throttling its high-speed
Internet service for users of the file-sharing software BitTorrent.60
There is, however, still no truly agreed upon definition of net neutrality.61
Narrowly defined as it applies to Internet access, a working definition might
53 Berg, “Equal Employment Opportunity under the Civil Rights Act of 1964” (1964) 31 Brooklyn Law Review 62.
54 McClain, “Involuntary Servitude, Public Accommodations Laws, and the Legacy of Heart of Atlanta Motel, Inc v United States” (2011).
55 Wu, “Network Neutrality, Broadband Discrimination” (2003) 2 Journal on Telecommunications & High Technology Law 141.
56 Harris, “Is a Telegram which Originates and Terminates at Points within the Same State but which Passes in Transit Outside of that State an Interstate Transaction?” (1916‒1917) 4(1) Virginia Law Review 35‒52.
57 Schwartz, “Antitrust and the FCC: The Problem of Network Dominance” (1959) 107(6) University of Pennsylvania Law Review 753‒795.
58 Lessig, The Future of Ideas: The Fate of the Commons in a Connected World (Vintage 2002).
59 Browni, “Broadband Privacy within Network Neutrality: The FCC’s Application & Expansion of the CPN Rules” (2017) 11(1) University of St Thomas Journal of Law and Public Policy (Minnesota) 45‒62.
60 Reicher, “Redefining Net Neutrality after Comcast v FCC” (2011) 26(1) Berkeley Technology Law Journal 733‒764.
61 Krämer, Wiewiorra, and Weinhardt, “Net Neutrality: A Progress Report” (2013) 37(9) Telecommunications Policy 794‒813.
look like this:
Next we outline reasonable boundaries for the spirit of net neutrality, which we
analyze in terms of its application to conversational commerce and IoT.62 Net
Neutrality in Spirit (NNiS) is a set of loosely defined
conventions that expand upon narrowly defined net neutrality via the concepts
that underpin the legislation outlined above, namely the Telecommunications
Acts of 1934 and 1996, common carrier designations, and the Civil Rights Act of
1964. The real thrust of net neutrality is its application in spirit.63 An example of
NNiS includes so-called open Internet initiatives that go beyond the original
notion of net neutrality in an effort to promote open standards, transparency,
lack of censorship, and low barriers to entry.64 Many of their core proponents
regard these initiatives as an attempt to decentralize the power inherent in
technology and data, and as similar to open-source software, at least in their
core mission.65
In general, NNiS might apply to any good or service that has become so ubiquitous
or necessary to daily life that access is viewed as nearly or wholly a common good.66
For example, Title IX legislation has sought to rectify gender discrimination at
federally funded schools,67 while the Americans with Disabilities Act was passed
to prohibit discrimination based on disability, requiring businesses and organizations
to provide accommodations enabling individuals to participate in regular employ-
ment and education.68 Similarly, many take the view that ISPs should not be
liable for the presence of illegal content online, although actual legal opinions have
62 Kim, “Securing the Internet of Things via Locally Centralized, Globally Distributed Authentication and Authorization,” PhD Thesis, University of California at Berkeley 2017.
63 Klein, “Data Caps: Creating Artificial Scarcity as a Way around Network Neutrality” (2014) 31(1) Santa Clara High Technology Law Journal 139‒162.
64 Meinrath and Pickard, “Transcending Net Neutrality: Ten Steps toward an Open Internet” (2008) 12(6) Education Week Commentary 1‒12.
65 Thierer, “Are ‘Dumb Pipe’ Mandates Smart Public Policy? Vertical Integration, Net Neutrality, and the Network Layers Model” in Lenard and May (eds), Net Neutrality or Net Neutering: Should Broadband Internet Services Be Regulated (Springer US 2006) 73‒108.
66 Hartmann, “A Right to Free Internet: On Internet Access and Social Rights” (2013) 13(2) Journal of High Technology Law 297‒429.
67 Heckman, “Women & (and) Athletics: A Twenty Year Retrospective on Title IX” (1992) 9(1) University of Miami Entertainment and Sports Law Review 1‒64.
68 Acemoglu and Angrist, “Consequences of Employment Protection? The Case of the Americans with Disabilities Act” (2001) 109(5) Journal of Political Economy 915‒957.
differed depending on the region.69 In the recent case Packingham v the State of
North Carolina, the Supreme Court ruled in favor of the right of a convicted sex
offender to use Facebook to make innocuous posts, even though the court also
upheld North Carolina’s right to prohibit registered sex offenders from using social
media to make any attempt to contact minors.70 Here, the case hinged on whether
prohibiting access to ubiquitous social media like Facebook infringed on funda-
mental rights to free speech and access to public spaces, and at least in this case, the
Supreme Court took this view.
In the narrow definition introduced above, net neutrality may not strictly apply
to Wake Neutrality in conversational commerce and AI in general. Even if it were
to be applied, the architecture may be set up in such a way that businesses wish to
incentivize more consumer use, choosing to reward data creators instead of
seeking to throttle heavy data users.71 Broadly defined and in spirit, however,
net neutrality could present a set of expected norms within this space, the
breaching of which might trespass on what is considered fair and right in the
public mind. Users of a unified IoT ecosystem may, for instance, expect that
information provided after requests is provided truthfully and equally to all parties,
even if the presentation of information, such as search results, is ultimately
protected under the First Amendment rights of the company.72 Companies could
theoretically benefit from creating customized IoT experiences that lead different
users to believe different things, but customers in such a scenario might expect
that, unless customization is explicitly requested, information should be provided
in equal, straightforward ways.73
This section has presented six requirements for wake word neutrality, which lead
us to suggest the following definition of net wake neutrality. In this definition, we
have used “wake” rather than “wake word” in order to include other forms of
recalling agents that may be based on sign language or IoT sensing. Even though
our focus is on conversational commerce using speech, the discussion above would
be very similar for other forms of AI software agent awakening (vision, EEG,
presence based). For the sake of simplicity, we will continue to center our descrip-
tions on the voice-use case but with the understanding that our ambitions are non-
discriminatory in the broadest sense possible.
69 Kleinschmidt, “An International Comparison of ISP’s Liabilities for Unlawful Third Party Content” (2010) 18(4) International Journal of Law and Information Technology 332‒355.
70 See Packingham v North Carolina (2017), accessed June 2019, <www.supremecourt.gov/opinions/16pdf/15-1194_08l1.pdf>.
71 Ganti, Ye, and Lei, “Mobile Crowdsensing: Current State and Future Challenges” (2011) 49(11) IEEE Communications Magazine 32‒39.
72 Volokh and Falk, “Google: First Amendment Protection for Search Engine Search Results” (2012) 8(4) Journal of Law, Economics & Policy 883‒900.
73 Hall, “Standing the Test of Time: Likelihood of Confusion in Multi Time Machine v Amazon” (2016) 31 Berkeley Technology Law Journal Annual Review 815‒850.
74 Subirana and Bain, Legal Programming: Designing Legally Compliant RFID and Software Agent Architectures for Retail Processes and Beyond (Springer-Science 2005).
75 Wright and De Filippi, “Decentralized Blockchain Technology and the Rise of Lex Cryptographia” (10 March 2015). Available at SSRN: <https://ssrn.com/abstract=2580664>.
76 Mik, “Smart Contracts: Terminology, Technical Limitations and Real World Complexity” (2017) 9(2) Law, Innovation and Technology 269‒300.
77 Eskandari, Clark, Barrera, and Stobert, “A First Look at the Usability of Bitcoin Key Management” (2018) arXiv preprint arXiv:1802.04351.
78 Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System” (2008) <https://bitcoin.org/bitcoin.pdf>.
79 Yli-Huumo, Ko, Choi, Park, and Smolander, “Where Is Current Research on Blockchain Technology? – A Systematic Review” (2016) 11(10) PLoS ONE e0163477 <https://doi.org/10.1371/journal.pone.0163477>.
reality. Blockchain could therefore form a major part of a backbone architecture for
the formulation of voice net neutrality in conversational AI/IoT networks, which
could impact the perceived fairness, configurability, and explainability of future
voice-based IoT devices.
One possible way of automating contracts is to associate each conversational
commerce device with a validation server operated by a standards body or, by
default, its parent according to the VNS hierarchy. Such a validation server could
have a default wake-word policy (or issue its own) and report it in the form of a URL
and a public hash key. Each validation server would issue its own tokens and register
wake-word requests on its own ledger or a blockchain of choice, as set by the policy
in operation. These ledgers could then be algorithmically audited for legal compli-
ance. The validation servers could issue their own tokens and store them as
validation server hash keys at given intervals. The tokens could then be incorporated
in the legal programming of algorithmic law-enforcement agents. These agents
could signal exceptions and issue fines. Token depletion and PII removal could
be recorded on the same blockchains.
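A minimal sketch of such a validation server, with a hash-chained ledger of wake-word requests that an auditor can check for tampering, might look as follows. The class, field names, and policy URL are illustrative assumptions, not part of the VNS proposal:

```python
import hashlib
import json
import secrets
import time

class ValidationServer:
    """Toy validation server: issues tokens and keeps a hash-chained
    ledger of wake-word requests, as sketched in the text."""

    def __init__(self, policy_url: str):
        self.policy_url = policy_url   # published wake-word policy
        self.ledger: list[dict] = []
        self.prev_hash = "0" * 64      # genesis link

    def issue_token(self) -> str:
        return secrets.token_hex(16)

    def register_wake(self, device_id: str, wake_word: str, token: str) -> dict:
        entry = {"device": device_id, "wake": wake_word, "token": token,
                 "ts": time.time(), "prev": self.prev_hash}
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.prev_hash = entry["hash"]
        self.ledger.append(entry)
        return entry

    def audit(self) -> bool:
        """Algorithmic compliance check: verify the chain is untampered."""
        prev = "0" * 64
        for e in self.ledger:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or e["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

vs = ValidationServer("https://example.org/wake-policy")
vs.register_wake("kitchen-speaker", "computer", vs.issue_token())
print(vs.audit())  # True for an untampered ledger
```

Any edit to a registered entry breaks the chain, so an auditor only needs the ledger itself to detect non-compliance.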
Note that such ledgers could include the associated conversational commerce
devices and could be run either by the devices themselves or by participating in
other public blockchains such as Bitcoin. These registries could selectively include
the original sound files and some description of the processed output; this
information would be paired with the destination agent signature and any additional
user information such as IoT data, PIN verification or DUO security authorization.
The architecture could combine these different signatures to enable algorithmic
law enforcement of AI agent allocation by checking pair integrity. While
records and extra signature options are not so relevant for a wake standard, they
may become essential for automated contracting. If the DNS association implies
that miners are not anonymous, and perhaps even that they are trustworthy, the
participating blockchains could incorporate novel mining approaches where
mining rewards are based on time spent talking to the device, proof of talk, or
proof of charity (certainly more environmentally friendly than Bitcoin's resource-intensive
proof of work).
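The pair-integrity check described above can be sketched with a keyed signature over the paired fields. The shared key and record layout are illustrative only; a deployed system would presumably use per-agent public-key signatures rather than a single shared secret:

```python
import hashlib
import hmac

AGENT_KEY = b"demo-shared-key"  # illustrative; a real system would use PKI

def make_record(audio_digest: str, parsed_output: str, extra_auth: str = "") -> dict:
    """Pair the processed output with a destination-agent signature so the
    allocation of a request to an agent can later be verified."""
    payload = f"{audio_digest}|{parsed_output}|{extra_auth}".encode()
    return {"audio": audio_digest, "parsed": parsed_output, "auth": extra_auth,
            "sig": hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()}

def check_pair(record: dict) -> bool:
    """Verify 'pair integrity': the parsed output still matches its signature."""
    payload = f"{record['audio']}|{record['parsed']}|{record['auth']}".encode()
    expected = hmac.new(AGENT_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["sig"], expected)

rec = make_record(hashlib.sha256(b"raw audio").hexdigest(), "buy milk", "PIN:ok")
print(check_pair(rec))      # True
rec["parsed"] = "buy beer"  # a tampered allocation...
print(check_pair(rec))      # ...fails the integrity check: False
```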
Each device could store in its server’s blockchain a smart contract policy that
specifies the algorithmic law-enforcement guarantees that are in place and how the
six requirements of wake neutrality are met. The policy could include procedures
for users to report exceptions and for the involvement of the courts. Most import-
antly, it should describe how speech recordings are used and the various ways of
interacting with the device available to (a) the user, (b) third-party standards bodies
and (c) government law-enforcement services. A challenge service could be
included for algorithmic legal enforcement by standards bodies and government
agencies. The service would take speech recordings and provide access to the parsed
output. There could also be an algorithmic auditing policy for standards bodies and
government agencies.
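Such a per-device policy could be represented, in outline, as a structured object whose fields mirror the disclosures described above. Every field name and value here is a placeholder for illustration, not a proposed schema:

```python
from dataclasses import dataclass, field

@dataclass
class WakeNeutralityPolicy:
    """Sketch of the per-device smart-contract policy described in the text."""
    device_id: str
    requirements: dict = field(default_factory=dict)  # requirement -> how it is met
    recordings_use: str = ""                          # how speech recordings are used
    interfaces: dict = field(default_factory=dict)    # user / standards body / law enforcement

    def challenge(self, speech_digest: str) -> dict:
        """Challenge service for auditors: given a recording's digest, return
        the parsed output they are entitled to inspect (stubbed here)."""
        return {"speech": speech_digest, "parsed_output": "<retrieved from ledger>"}

policy = WakeNeutralityPolicy(
    device_id="kitchen-speaker",
    requirements={"configurability": "user-selectable speech modes",
                  "non-discrimination": "no supplier receives preferential routing"},
    recordings_use="deleted after parsing unless the user opts in",
    interfaces={"user": "voice and app", "standards_body": "audit API",
                "law_enforcement": "warrant-gated challenge service"},
)
print(policy.challenge("ab12"))
```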
80 Jacobowitz and Ortiz, “Happy Birthday Siri! Dialing in Legal Ethics for Artificial Intelligence, Smart Phones, and Real Time Lawyers” (2018) Texas A&M University Journal of Property Law, University of Miami Legal Studies Research Paper No 18-2. Available at SSRN: <https://ssrn.com/abstract=3097985>.
81 Subirana and Bain (n 74).
82 Sartor and Cevenini, “Agents in Cyberlaw” Proceedings of the Workshop on the Law of Electronic Agents (LEA02), 2002.
83 Thus, for example, the importance in e-commerce of including any contracting conditions in web pages, either directly on the “Accept” page or via a visible and easily accessible link.
84 Karnow, “Liability for Distributed Artificial Intelligences” (1996) Berkeley Technology Law Journal 147‒204.
85 Ibid.
Table 9.2 presents a sample of legal and technical issues for electronic contracting.
We suggest that if these technical contracting processes can be completed and
modeled so that they become universal for the majority of conversational commerce
contracting, we may be able to create a legal architecture that can be applied to the
technical processes of AI agent contracting. This legal modeling in turn would
enable software developers to legalize their technical models – thus creating a
framework for compliant contracting-agent engineering in open conversational
commerce.

Agent action: Agent determines a need to purchase a specific item.
Legal issue: Would the user agree to this action?
Technical measure: Ledger registration of original agent programming/parameters (trigger events, contract conditions).

Agent action: Agent searches the network for various stores selling relevant products in agreement with user wake.
Legal issues: Advertising or offer; information requirements (consumer contracting issue).
Technical measures: Identification of data messages as advertisements or offers; forwarding of obligatory information to users’ traceability ledger.

Agent action: Agent negotiates with store(s) for the quantity, price and other terms of sale.
Legal issues: Identification of parties – agent user identified as consumer; capacity of agent to negotiate; good faith and withdrawal from negotiation.
Technical measures: Registration of negotiation steps (assistance to determine true intent); session control and processes for system failures; well-defined negotiation protocols.

Agent action: Agent concludes purchase agreement.
Legal issues: Capacity and consent, and impact of a mistake on wake; incorporation of all terms.
Technical measures: Registry certification of agent’s authority to conclude contracts; process for retrieving and storing terms; process for error correction and confirmation; process for acknowledgment of receipt.

Agent action: Agent provides delivery and payment details.
Legal issues: Identification of parties and use of PII for possible anonymized payment.
Technical measures: Digital signatures for payments (e.g. SET protocol); reference to user for PIN.

Further work, however, is needed in both legal and technical domains
in relation to agent-based contracting in public open conversational commerce
settings that are enhanced with AI agents. Specific areas requiring attention include
the expression of contractual preferences through speech associated with automated
negotiation (including the ability of computing languages to capture and express
personal contracting preferences and the degree of granularity that may be achieved),
enhancing the legal validity of agent-based digital voice signatures, and attribution
regimes for contracts that are not directly supervised by the human user of the agent.
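How granular spoken contracting preferences could become once captured in machine-readable form can be illustrated with a small sketch. The fields and thresholds below are invented for the example and are not drawn from any standard:

```python
from dataclasses import dataclass

@dataclass
class ContractingPreferences:
    """Illustrative machine-readable contracting preferences for a voice agent."""
    max_price: float                          # hard ceiling the agent may not exceed
    preferred_brands: tuple = ()
    allow_substitutes: bool = False
    require_confirmation_above: float = 25.0  # escalate to the human user

def within_mandate(prefs: ContractingPreferences, price: float, brand: str) -> str:
    """Decide whether the agent can conclude the purchase on its own."""
    if price > prefs.max_price:
        return "reject"
    if prefs.preferred_brands and brand not in prefs.preferred_brands \
            and not prefs.allow_substitutes:
        return "reject"
    if price > prefs.require_confirmation_above:
        return "ask_user"
    return "conclude"

prefs = ContractingPreferences(max_price=50.0, preferred_brands=("Acme",))
print(within_mandate(prefs, 12.0, "Acme"))  # conclude
print(within_mandate(prefs, 40.0, "Acme"))  # ask_user
print(within_mandate(prefs, 60.0, "Acme"))  # reject
```

Even this toy mandate shows the attribution question raised above: the "ask_user" path marks the boundary between contracts the agent concludes alone and those the human must ratify.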
86 The four layers are inspired by the MIT CBMM model of the brain: <https://cbmm.mit.edu/research/modules>.
87 Kraepelin, “Manic Depressive Insanity and Paranoia” (1921) 53(4) The Journal of Nervous and Mental Disease 350.
88 Alpert, Pouget, and Silva, “Reflections of Depression in Acoustic Measures of the Patient’s Speech” (2001) 66(1) Journal of Affective Disorders 59‒69.
89 Li et al., “Smart Community: An Internet of Things Application” (2011) 49(11) IEEE Communications Magazine 68‒75.
90 Rubinstein, “Regulating Privacy by Design” (2011) 26(3) Berkeley Technology Law Journal 1409‒1456.
91 Lenard and Rubin, “Big Data, Privacy and the Familiar Solutions” (2015) 11(1) Journal of Law, Economics & Policy 1‒32.
92 Paez and La Marca, “The Internet of Things: Emerging Legal Issues for Businesses” (2016) 43(1) Northern Kentucky Law Review 29‒72.
93 Subirana et al., “The MIT Voice Name System (VNS)” MIT Auto-ID Laboratory Memo (2019).
94 Gobl and Ailbhe, “The Role of Voice Quality in Communicating Emotion, Mood and Attitude” (2003) 40(1) Speech Communication 189‒212.
95 Gobl and Ailbhe (n 94).
96 Xue, An, and Fucci, “Effects of Race and Sex on Acoustic Features of Voice Analysis” (2000) 91(3) Perceptual and Motor Skills 951‒958.
97 McComb et al., “Elephants can Determine Ethnicity, Gender, and Age from Acoustic Cues in Human Voices” (2014) 111(14) Proceedings of the National Academy of Sciences 5433‒5438.
The rise of conversational commerce may lead to new issues of potential
discrimination. A ubiquitous sensor-based IoT infrastructure, coupled with powerful
big-data-crunching algorithms, might produce unexpected inferences about individual
consumers, leading to unintended, yet nevertheless discriminatory, decisions.98
Should a business have the right to deny entry to an individual who is deemed to
have a cold based on their recorded voice patterns? Can an insurance adjuster deny
a claim based on voice-based transactional histories showing a pattern of lies, even if
the claimant has indeed lied? The potential for discriminatory blacklisting practices,
albeit unintentional, that designate certain individuals or groups as guilty before the
fact will likely only increase as big data-based inferences become increasingly
powerful.99
98 Peppet (n 16).
99 Hu (n 34).
100 Zyskind and Nathan, “Decentralizing Privacy: Using Blockchain to Protect Personal Data” IEEE Security and Privacy Workshops (SPW), 2015.
101 Kosba et al., “Hawk: The Blockchain Model of Cryptography and Privacy-Preserving Smart Contracts” IEEE Symposium on Security and Privacy (SP), 2016.
102 Fosch Villaronga, Kieseberg, and Li, “Humans Forget, Machines Remember: Artificial Intelligence and the Right to be Forgotten” (2018) 34(2) Computer Law & Security Review 304‒313.
103 Subirana, Bagiati, and Sarma, “On the Forgetting of College Academics: At Ebbinghaus Speed?” Center for Brains, Minds and Machines (CBMM Memo No 068), 2017 <http://hdl.handle.net/1721.1/110349>.
104 Cano-Córdoba, Sarma, and Subirana, “Theory of Intelligence with Forgetting: Mathematical Theorems Explaining Human Universal Forgetting using ‘Forgetting Neural Networks’” Center for Brains, Minds and Machines (CBMM Memo No 071), 2017 <https://dspace.mit.edu/handle/1721.1/113608>.
105 Petit, “Artificial Intelligence and Automated Law Enforcement: A Review Paper” (21 March 2018). Available at SSRN: <https://ssrn.com/abstract=3145133> or <http://dx.doi.org/10.2139/ssrn.3145133>.
106 Khan et al., “Future Internet: The Internet of Things Architecture, Possible Applications and Key Challenges” 10th International Conference on Frontiers of Information Technology (FIT), IEEE, 2012.
107 Haucap and Heimeshoff, “Google, Facebook, Amazon, eBay: Is the Internet Driving Competition or Market Monopolization?” (2014) 11(1‒2) International Economics and Economic Policy 49‒61.
108 Etzioni and Etzioni, “Incorporating Ethics into Artificial Intelligence” (2017) 21(4) The Journal of Ethics 403‒418.
109 Mytis-Gkometh, Drosatos, Efraimidis, and Kaldoudi, “Notarization of Knowledge Retrieval from Biomedical Repositories Using Blockchain Technology” in Maglaveras, Chouvarda, and de Carvalho (eds), Precision Medicine Powered by Health and Connected Health, IFMBE Proceedings, vol 66 (Springer 2018) 69‒73.
110 Kester, “Demystifying the Internet of Things: Industry Impact, Standardization Problems, and Legal Considerations” (2016) 8(1) Elon Law Review 205‒228.
111 Bajarin, “The Voice-First User Interface Has Gone Mainstream” Recode, 7 June 2016, accessed 14 July 2017.
112 Brown, “The Fitbit Fault Line: Two Proposals to Protect Health and Fitness Data at Work” (2016) 16(1) Yale Journal of Health Policy, Law and Ethics 1‒50.
the past,113,114 but the uniquely intimate nature of voice-based data presents new
challenges requiring innovative solutions.
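One family of such solutions is a per-session privacy setting under the user's direct control, analogous to a browser's private-browsing option. A minimal sketch follows; the inference categories and profile structure are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class SessionPrivacy:
    """Per-session inference switches for a voice device."""
    infer_identity: bool = False      # recognize the speaker across devices
    infer_demographics: bool = False  # age / sex / accent inferences
    infer_preferences: bool = False   # consult the stored taste profile

def recommend(query: str, privacy: SessionPrivacy, profile: dict) -> str:
    """Only consult the stored profile when the session allows it."""
    if privacy.infer_preferences and query in profile:
        return profile[query]
    return "generic answer"

profile = {"restaurants": "the quiet vegan place you liked"}
print(recommend("restaurants", SessionPrivacy(), profile))                        # generic answer
print(recommend("restaurants", SessionPrivacy(infer_preferences=True), profile))  # personalized
```

The design point is that the default session shares nothing inferred; personalization is something the user switches on, not off.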
In the extreme case, the offer of private browsing options, or incognito mode (or a
Tor browser), as popularized by the Google Chrome browser, has been an
interesting solution to the customer desire for different levels of privacy in
different browsing sessions.115 Users sometimes want the convenience and efficiency
that browser-based cookies, which track and store customer-specific information,
can provide – forms filled in automatically, passwords remembered – but on other
occasions they may want to remain completely anonymous, even if that means
re-entering login information manually each time. They do not want to be presented
with a trade-off decision, but instead want the ability to choose what type of session
they begin each time, and “incognito mode” allows for this. A similar customer
desire may present itself when
interacting conversationally with IoT devices.116 As previously discussed, the
inherent traits of an individual, such as biological sex, age or ethnicity117,118,119 may be
inferable through the voice alone by use of sophisticated algorithms. Furthermore,
voice-based interactions with certain platforms may be stored in the cloud, meaning
that an individual interacting with a new device for the first time might be recog-
nized by acoustic patterns unique to their voice.120 Individuals may wish to turn off
the ability of devices to utilize these inferences at their pleasure.121 One person may
wish to have their preferences automatically inferred when asking about local
restaurants, while another may want these settings turned off when requesting
updates on the latest news. A privacy-conscious user may wish to have this option
always off, even if that leads to poor accuracy in recommended options and other
core features of IoT devices. Overall, many or most customers may ultimately
choose not to opt out or turn off such settings, choosing convenience over privacy,122
but a lack of any ability to conversationally browse in a way similar to browser-based
113 Bailey, "Seduction by Technology: Why Consumers Opt out of Privacy by Buying into the Internet of Things" (2016) 94(5) Texas Law Review 1023‒1054.
114 Bajarin (n 111).
115 Said et al., "Forensic Analysis of Private Browsing Artifacts", International Conference on Innovations in Information Technology (IIT), IEEE, 2011.
116 Apthorpe, Reisman, and Feamster, "A Smart Home Is No Castle: Privacy Vulnerabilities of Encrypted IoT Traffic", arXiv preprint arXiv:1705.06805 (2017).
117 Gobl and Ailbhe (n 94).
118 Xue, An, and Fucci (n 96).
119 McComb et al. (n 97).
120 Rozeha et al., "Security System Using Biometric Technology: Design and Implementation of Voice Recognition System (VRS)", International Conference on Computer and Communication Engineering, ICCCE, 2008.
121 Tene and Polonetsky, "Big Data for All: Privacy and User Control in the Age of Analytics" (2012) 11 Northwestern Journal of Technology and Intellectual Property xxvii.
122 Bailey (n 113).
266 Brian Subirana, Renwick Bivings, and Sanjay Sarma
incognito modes may present a challenge on the road to achieving voice net
neutrality.
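The per-session choice described above can be pictured as a small piece of device logic. The following is a minimal sketch under stated assumptions, not any vendor's actual API: a hypothetical `VoiceSession` object whose `incognito` flag gates both speaker identification and the personalisation built on top of it, just as an incognito window gates cookies.

```python
from dataclasses import dataclass

@dataclass
class VoiceSession:
    """One conversational session with a hypothetical voice-enabled IoT device.

    incognito=True disables acoustic fingerprinting and trait inference for
    this session only, mirroring a browser's per-session incognito mode.
    """
    incognito: bool = False

    def identify_speaker(self, audio: bytes) -> "str | None":
        if self.incognito:
            return None                      # no voice-profile lookup at all
        return self._match_voice_profile(audio)

    def recommend(self, audio: bytes, options: "list[str]") -> "list[str]":
        speaker = self.identify_speaker(audio)
        if speaker is None:
            return options                   # generic, non-personalised results
        return self._rank_for(speaker, options)

    # Placeholders standing in for real speaker-ID and ranking models:
    def _match_voice_profile(self, audio: bytes) -> str:
        return "user-42"

    def _rank_for(self, speaker: str, options: "list[str]") -> "list[str]":
        return sorted(options)

# A privacy-conscious user can start every session incognito:
private = VoiceSession(incognito=True)
assert private.identify_speaker(b"fake waveform") is None
```

The point is architectural rather than algorithmic: the privacy choice is made per session, not once per account, so convenience and anonymity can coexist across interactions.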
In the context of conversational commerce, disputes between companies and
consumers may arise due to the impracticality of creating and communicating
voice-based terms of service. Can users really be expected to fully agree to
transaction-specific terms via a voice prompt when many users even now fail to
adequately read text-based online terms of service?123 Blockchain technology may
offer a novel method by which consumers can privately enforce their rights with
regard to any such disputes without the need to seek recourse from the relevant
company or a third-party entity.124 Although still largely in the conceptual stage,
novel approaches like this are needed to bridge the trust gap between voice-based
conversational commerce and more traditional browser-based transactions. One way to address this issue is to develop AI-based algorithms that extract selective information and prove125 that it is correct in a legally binding way, without ever disclosing PII that might compromise human rights.
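The interactive proof systems cited in n 125 underpin modern zero-knowledge protocols. A heavily simplified way to picture such selective disclosure is a per-field hash commitment, sketched below; this is an illustrative stand-in, not a real zero-knowledge proof, and the record fields are invented. A party publishes commitments to every field of a record, then later opens only the one field a counterparty needs to verify.

```python
import hashlib
import os

def commit_record(record: "dict[str, str]"):
    """Commit to each field separately so single fields can be opened later."""
    salts = {k: os.urandom(16) for k in record}
    commitments = {k: hashlib.sha256(salts[k] + record[k].encode()).digest()
                   for k in record}
    return commitments, salts  # commitments are public; salts stay private

def open_field(record, salts, key):
    """Disclose exactly one field, plus the salt needed to check it."""
    return key, record[key], salts[key]

def verify(commitments, key, value, salt) -> bool:
    return commitments[key] == hashlib.sha256(salt + value.encode()).digest()

record = {"age_over_18": "yes", "name": "Alice", "address": "Elm Street 1"}
commitments, salts = commit_record(record)

# The consumer discloses only the claim the counterparty needs:
key, value, salt = open_field(record, salts, "age_over_18")
assert verify(commitments, key, value, salt)  # the claim checks out
# 'name' and 'address' are never revealed.
```

A real deployment would replace the bare commitments with an actual zero-knowledge proof system, but the legal intuition is the same: one verified fact leaves the household, the rest of the record does not.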
123 Obar and Oeldorf-Hirsch, "The Biggest Lie on the Internet: Ignoring the Privacy Policies and Terms of Service Policies of Social Networking Services" (2018) Information, Communication & Society https://doi.org/10.1080/1369118X.2018.1486870.
124 Koulu, "Blockchains and Online Dispute Resolution: Smart Contracts As an Alternative to Enforcement" (2016) 13(1) SCRIPTed: A Journal of Law, Technology and Society 40‒69.
125 Goldwasser, Micali, and Rackoff, "The Knowledge Complexity of Interactive Proof Systems" (1989) 18(1) SIAM Journal on Computing 186‒208.
heighten the persuasiveness of machine voices, increasing the need for such capabilities to be configured at the behest of the user.
The biggest challenges are privacy and security considerations, together with the unexplainability, unreliability and power requirements of current neural-network speech-recognition systems. A proposal we are exploring is an independent third party implementing a token-activated, distributed "cognitive emotional firewall" that acts like a human-within-the-machine, observing the system holistically. A legal programming approach to the VNS may facilitate legal enforcement of wake neutrality through partial algorithmic enforcement. A simple form of emotional firewall, offering a weak incognito-type mode, is a third-party service that translates speech into text without revealing anything else from the voice signal (such as gender, accent or mood).
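Functionally, the weak incognito mode just described reduces to a narrow interface: audio in, words out, everything paralinguistic dropped at the boundary. A sketch of that contract, with the speech-to-text backend left as a pluggable placeholder rather than any real engine:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class FirewalledTranscript:
    """The only thing allowed past the firewall: the words themselves."""
    text: str

def emotional_firewall(audio: bytes,
                       transcribe: "Callable[[bytes], str]") -> FirewalledTranscript:
    """Third-party boundary: run speech-to-text, then return only the text,
    so that gender, accent or mood can no longer be inferred downstream."""
    return FirewalledTranscript(text=transcribe(audio))

# With a stub transcriber standing in for a real speech-to-text backend:
result = emotional_firewall(b"fake waveform", lambda a: "play the news")
assert result.text == "play the news"
```

The design point is that the raw waveform never crosses the boundary: downstream services receive a value type that, by construction, carries no acoustic features to mine.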
While this chapter has focused mainly on wake neutrality in conversational
commerce within smart speakers and IoT systems, the more general problem goes
to the heart of how human beings hope to interact with machines,129 the solution to
which will surely continue to be a hotly debated topic for the foreseeable future.
129 Subirana, Sarma, Rice, and Cottrill, "Can Your Supply Chain Hear Me Now?" MIT Sloan Management Review, Frontiers Blog, 7 May 2018. Available at: <https://sloanreview.mit.edu/article/can-your-supply-chain-hear-me-now>.
10
Legal Framework for Commercialisation of Digital Data
Björn Steinrötter*
introduction
Nowadays everything revolves around digital data. Data are, however, difficult to capture in legal terms due to their great variety. They may be valuable goods or completely useless. They may be regarded as syntactic or semantic. Yet it is the particularly sensitive data protected by data protection law that are highly valuable and interesting for data-trading, big-data and artificial-intelligence applications in the European data market. The European legislator appears to favour both a high level of protection of personal data, including the principle of ‘data minimisation’, and a free flow of data. The GDPR itself includes some free-flow elements, while legislation on the trading and usage of non-personal data in particular is currently under discussion. The European legislator thus faces key challenges regarding the (partly) conflicting objectives reflected in data protection law and data economic law. This contribution assesses the current state of legal discussions and legislative initiatives at the European level.
Key Words: data protection, data producer’s right, access rights, data portability, free flow of data, digital single market strategy, data ownership, data holder, GDPR, privacy
* I would like to thank Dr Marc Stauch, MA (Oxon) for thorough proofreading and valuable,
wise comments. All remaining mistakes and shortcomings are, of course, my own.
Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:36:27, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.011
intelligence’,1 including machine learning).2 At the same time, the quality of the
data to be processed3 is crucial, not only for the processing speed but also for
the accuracy of results. Even the smartest algorithm does not deliver usable results if the underlying data (structure) quality is poor. Against this background it is
unsurprising that phrases such as ‘data is the new oil’ or the ‘new gold’ of the digital
economy are commonplace, notwithstanding the fact that there is comparatively
little information value in such statements taken in isolation.4
It is certainly true that digital data, very often machine generated and in raw form,
are precious assets in this day and age. The law, in turn, must respond to this
development. When it comes to areas of law that concern data specifically, a kind of
dual standard is apparent in respect of data protection and data economic law. This
chapter will show that when it comes to commercialisation, in particular trading and
movement (= free flow) of data as a factual prerequisite for algorithm-based applica-
tions, the interplay of these two tracks is not completely harmonious.5
Data protection law is well known in European legal systems. This was already
true prior to the directly applicable General Data Protection Regulation (GDPR),6
as most of the European states and the EU itself7 have a strong tradition of data
protection.8 Hitherto this area of law could be said to have been an ‘only child’. Data
protection law now seems set to acquire a ‘legal sibling’ in the shape of data
economic law,9 which covers such issues as ‘data ownership’, ‘data producer rights’
1 Cf. regarding the link between big data and artificial intelligence Fink, ‘Big Data and Artificial Intelligence’ (2017) 9 Zeitschrift für Geistiges Eigentum/Intellectual Property Journal (ZGE/IPJ) 288.
2 Expert Opinion of the German Association for the Protection of Intellectual Property (GRUR) on the European Commission Communication ‘Building a European Data Economy’, 3 April 2017 (hereafter cited as GRUR; available at <https://tinyurl.com/llpygqh>) 5: ‘. . . algorithms often do the major part of the work.’
3 For instance: are they already sorted or still raw?
4 Without algorithms the masses of data would not be very helpful. If data are the oil of the economy, algorithms are the engines; see Pleier, ‘Big Data und Digitalisierung: Warum Algorithmen so entscheidend sind’ [https://tinyurl.com/yd4wr3xo]; cf. (to get a first impression and in general) the several contributions in Harvard Business Manager 4/2014, Big Data; instructive regarding big data: Sugimoto, Ekbia, and Mattioli (eds), Big Data Is Not a Monolith (MIT Press 2016).
5 See Becker, ‘Reconciliating Data Privacy and Trade in Data – A Right to Data-Avoiding Products’ (2017) 9 ZGE/IPJ 371.
6 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119, 4.5.2016, p. 1.
7 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ 1995 L 281, p. 31.
8 Cf. Becker, ‘Rights in Data – Industry 4.0 and the IP Rights of the Future’ (2017) 9 ZGE/IPJ 253, 258.
9 Cf. also Berger, ‘Property Rights to Personal Data? – An Exploration of Commercial Data Law’ (2017) 9 ZGE/IPJ 340, using the term Commercial Data Law including private law issues as well as the data protection law and subsequently determining ‘a dichotomy in legal terms between private commercial data law and public data protection law’.
Legal Framework for Commercialisation of Digital Data 271
10 Cf. to the development of the discussion regarding an IP right in data: Becker (n 8) 253 ff; see also Fezer, ‘Data Ownership of the People. An Intrinsic Intellectual Property Law Sui Generis Regarding People’s Behaviour-generated Informational Data’ (2017) 9 ZGE/IPJ 356; Spindler, ‘Data and Property Rights’ (2017) 9 ZGE/IPJ 399; Wiebe, ‘A New European Data Producers’ Right for the Digital Economy?’ (2017) 9 ZGE/IPJ 394; in respect of personal data Buchner, ‘Is there a Right to One’s Own Personal Data?’ (2017) 9 ZGE/IPJ 416; Specht, ‘Property Rights Concerning Personal Data’ (2017) 9 ZGE/IPJ 411; constitutive in respect of the recent discussion in Germany and beyond regarding the syntactical level of information: Zech, ‘Information als Schutzgegenstand’, 2012.
11 The term ‘raw data’ describes unsorted data.
12 I.e., automatically generated without the active intervention of a human being.
13 If the data concerned are not of a personal nature, they are not even protected by data protection law.
14 Of course, there is an existing mosaic-like protection that covers data as such in an indirect way. For example, the sui generis right of the Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, OJ 1996 L 77, p. 20 applies under certain conditions. The same holds true for the law of trade secrets, see the Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure, OJ 2016 L 157, 15.6.2016, p. 1, which needed to be transposed into Member State law by June 2018. Competition law instruments might be helpful in some cases, too. Furthermore, national private laws, e.g. tort law, could provide a kind of ‘reflexive’ protection. Christians and Liepin, ‘The Consequences of Digitalization for German Civil Law from the National Legislator's Point of View’ (2017) 9 ZGE/IPJ 331, 336; Becker (n 8) 253 et seq., also emphasising the supposed shortcomings of this patchwork-protection de lege lata; continuative Steinrötter, ‘Vermeintliche Ausschließlichkeitsrechte an binären Codes’, MultiMedia und Recht (MMR) (2017) 731, 733 ff.
digitalisation, big data, algorithms etc. are replete with buzzwords, often without it being clear what those words really mean.
According to ISO/IEC 2382:2015 (IT Vocabulary, 2121272), data are a ‘reinterpretable representation of information in a formalized manner suitable for communication, interpretation, or processing’. The term ‘data’ is thus not congruent with the term ‘information’: data are only a ‘representation’ of the latter, and the ‘formalized manner’ is, in the present context of digital data, the binary coding.15 Hence, data as such concern the syntactic level, whereas information means the semantic, content-related level (although on another view the term ‘information’ could be divided into syntactic, semantic and even structural16 components).17 Both the syntactic and the semantic level potentially come into question as an economic good and a legal object. On this approximation of the concept, data are merely a ‘carrier’ of the information.
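The syntactic/semantic distinction can be made concrete in a few lines of Python: the same four bytes (the syntactic level, the binary coding) carry different information (the semantic level) depending on the interpretation applied to them.

```python
import struct

raw = b"\x00\x00\x80?"  # four bytes: pure syntax, a mere 'carrier'

# The same bytes under two different interpretations (semantics):
as_float = struct.unpack("<f", raw)[0]  # read as a little-endian float32
as_int = struct.unpack("<i", raw)[0]    # read as a little-endian int32

assert as_float == 1.0
assert as_int == 1065353216
```

Nothing in the bytes themselves determines which reading is ‘the’ information; that is supplied by the interpretive context, which is precisely why the legal object ‘data’ need not coincide with the legal object ‘information’.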
21 Cf. Steinrötter, ‘Feuertaufe für die EU-Datenschutz-Grundverordnung – und das Kartellrecht steht Pate’ (2018) 2 Zeitschrift für Europäisches Wirtschafts- und Steuerrecht (EWS) 61 (III.4.).
22 Cf. EAID, Statement of 23/11/2017, p. 2.
23 N 14.
24 Spindler (n 10).
25 Where data are personal, the protection of the data subject regarding privacy aspects (with the consequence of the application of the GDPR) needs to be added. Certainly, personal data could be anonymised and would then be considered as non-personal data.
26 Nowadays such relevant data are regularly machine generated. Of course, the manual collection/curation of data takes more effort and would possibly be just as worthy of protection as machine-generated data, if at all.
27 Lock-in effects can be described as conditions in which a strong market participant (here: having a monopoly-like position because of the factual access to data plus having the technical means to protect the data from access by third parties, which leads to factual exclusivity) is capable of making it at least very difficult for its contractual partners to switch to another supplier/provider.
Unsurprisingly, given the economic and social importance of the issue, the matter
has attracted the regulatory interest of the European Union. Indeed, a national
approach would not be convincing, as data transactions typically do not have
national borders. For every transborder data flow, conflict-of-law rules would deter-
mine which country’s national law regime applied.28 This would further increase
complexity.29 An uncoordinated approach risks the creation of a fragmented system
that is the opposite of what is needed in the internal market.30 Therefore, a
European approach seems indicated.
28 Steinrötter (n 14) 731, 735.
29 Of course, if a European Directive came into play, there would still be a need to find the applicable national law. However, from a practical point of view, this issue is softened. If an EU Regulation came into force, that problem would be resolved to a large extent. It is questionable whether the EU would have the competencies to introduce data ownership or similar legal concepts (cf. Art 345 TFEU).
30 COM(2017) 9 final 11.
31 See also the Industry Package, consisting of: Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘European Cloud Initiative – Building a competitive data and knowledge economy in Europe’, COM(2016) 178 final; Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘ICT Standardisation Priorities for the Digital Single Market’, COM(2016) 176 final; Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘Digitising European Industry – Reaping the full benefits of a Digital Single Market’, COM(2016) 180 final; Commission Staff Working Document, ‘Advancing the Internet of Things in Europe’, SWD(2016) 110 final.
32 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘A Digital Single Market Strategy for Europe’, COM(2015) 192 final; see already about one year earlier: Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘Towards a thriving data-driven economy’, COM(2014) 442 final.
33 Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, ‘Building a European Data Economy’, COM(2017) 9 final; Commission Staff Working Document on the free flow of data and emerging issues of the European data economy, SWD(2017) 2 final.
34 SWD(2017) 2 final.
35 COM(2017) 9 final 10.
45 COM(2017) 9 final 5 et seq.
46 Ibid 6.
47 Ibid 7.
48 Interestingly, as discussed further below, this is explicitly provided for by law regarding – (of all things) – personal data: Art 16(1) TFEU, Art 1(3) GDPR. The GDPR contains several opening clauses, of course, which could be used by Member States to implement data location restrictions.
49 Centrum für europäische Politik (cep), cep Policy Brief No 33 (2017) 3.
50 Regarding the re-use of data held by the public sector see Directive 2003/98/EC of the European Parliament and of the Council of 17 November 2003 on the re-use of public sector information, OJ 2003 L 345, p. 90, revised by Directive 2013/37/EU of the European Parliament and of the Council of 26 June 2013 amending Directive 2003/98/EC on the re-use of public sector information, OJ 2013 L 175, p. 1.
51 COM(2017) 9 final 8 et seq.: ‘[. . .] access and transfer in relation to the raw data [. . .] are therefore central to the emergence of a data economy [. . .]’; see also the several European Data Market Study Reports [http://datalandscape.eu/study-reports]; cf. the ‘Report of the high-level conference Building the European Data Economy’ [https://ec.europa.eu/digital-single-market/en/news/report-high-level-conference-building-european-data-economy].
Another reason is that it seems difficult to quantify the data’s monetary value.52
Companies may also be wary of the severe administrative fines pursuant to Art
83 GDPR in case of infringements of rules on personal data protection. As discussed
further below, this reflects the challenge in confidently demarcating non-personal
(anonymous) data – the focus of the EU’s data commerce initiatives – from
identifiable personal data. This is true also for much machine-generated data,53 where distinguishing whether data are personal or not has become more and more difficult in practice.54
Leaving the last point aside for now, in its proposals the Commission identifies a
number of possible options for addressing the issue of data access. These are
presented and (in parts) assessed below.
access rights
The improvement of access to data is one way of maximising the value of data in society.56 Possible access rights address the (factual) data holder (e.g., manufacturers or service providers).
Whatever the case may be, it is important that access rights are designed with a
sense of proportion,57 as the incentive for data generation could be reduced if
generated data became more or less freely available.
all data analysis would be called ‘research’.60 Here, in contrast with existing open
data initiatives within the EU, the institutions or persons involved decide on their
own to make data available; they are normally not legally bound to do so.
Apart from these higher-level aims, it would be worth considering data access in
return for remuneration61 (full or partial; perhaps after anonymisation).62 The
development of FRAND63 terms such as are found in competition policies on
standard-essential patents64 seems conceivable.65 When it comes to standards
resulting from technology under patent, the patent holder is often required to
licence the use of relevant information,66 and this could serve as a model to a
certain extent, notwithstanding the fact that it is difficult to implement licensing
terms which meet the requirements of being fair, reasonable and non-discriminatory.67 In certain cases it might also be possible to draw upon the ‘essential facility
doctrine’68 from competition law (giving companies access to other companies’
infrastructural facilities where they are essential for participation in a downstream
market).69 Whether access should be free of charge or chargeable could also depend
on the respective sector or the data-producing costs of the parties involved. At the
same time, competition law approaches are certainly incapable of addressing all
cases70 where data are withheld at the expense of the public interest.71 Accordingly,
60 Zech (n 15) 317, 326.
61 COM(2017) 9 final 13.
62 A ‘potential benefit’ is seen insofar by GRUR 3.
63 Fair, reasonable and non-discriminatory.
64 European Court of Justice (ECJ), 16.7.2015, case 170/13 (Huawei/ZTE), ECLI:EU:C:2015:477; cf. most recently High Court of Justice, Chancery Division, Patents Court, [2017] EWHC 3083 (Pat), Case No HP-2014-000005; from legal scholarly literature see Colangelo and Torti, ‘Filling Huawei’s Gaps: The Recent German Case Law on Standard Essential Patents’ (2017) European Competition Law Review (ECLR) 538; Cross and Strath, Computer and Telecommunications Law Review (CTLR) 2017, 112; Henningsen, ‘Injunctions for Standard Essential Patents under FRAND Commitment: A Balanced, Royalty-Oriented Approach’ (2016) International Review of Intellectual Property and Competition Law (IIC) 438.
65 COM(2017) 9 final 13; SWD(2017) 2 final 37. The payment of a reasonable and proportionate fee is due in the motor vehicle sector, too (Art 7(1) Regulation No 715/2007).
66 Standard essential patents (SEP); SWD(2017) 2 final 38.
67 Cf. Mariniello, ‘Fair, Reasonable and Non-discriminatory (FRAND) Terms: A Challenge for Competition Authorities’ (2011) 7 Journal of Competition Law & Economics 523.
68 See the obligations to licence the use of commercially-held information provided by the cases: ECJ, 6.4.1995, joined cases 241/91 and 242/91 (RTE and ITP/Commission), ECLI:EU:C:1995:98; ECJ, 12.2.2004, case 218/01 (Henkel KGaA), ECLI:EU:C:2004:88; European General Court, 17.9.2007, case 201/04 (Microsoft Corp/Commission), ECLI:EU:T:2007:289; ECJ, Huawei/ZTE (n 64).
69 Cf. Section 19(2) of the German Act against Restraints of Competition (GWB).
70 It is not even clear if data could be an essential facility; see the detailed review of whether EU competition law applies in principle to the data economy: Drexl, ‘Designing Competitive Markets for Industrial Data – Between Propertisation and Access’ (2017) 8 Journal of Intellectual Property, Information Technology and Electronic Commerce Law (JIPITEC) 257.
71 GRUR 3; Max Planck Institute for Innovation and Competition, Position Statement of 26 April 2017 on the European Commission’s ‘Public consultation on Building the European Data Economy’ (hereafter cited as ‘MPI’) 12; Spindler (n 10) 399, 404; Zech (n 15) 317, 328; see also Podszun, ‘Competition and Data’ (2017) 9 ZGE/IPJ 406.
while further access rights seem worth considering, competition law could serve, to a certain extent, as a template.72
More generally, an obligation to grant access to data in certain specific contexts is certainly well known in European law (for example, Art 6–9 Regulation 715/2007/EC,73 Art 35–36 Directive 2015/2366/EU,74 Art 27, 30 Regulation 2006/1907/EC,75 Art 30, 32 Directive 2009/72/EC,76 Recital 11 Directive 2010/40/EU77 and Art 9 Regulation 2019/1150/EU).78 These instruments address the importance of data sharing in the public interest in the widest sense in respect of such matters as access to vehicle repair and maintenance information, access to payment systems, maintaining public safety in relation to dangerous chemicals, etc.
72 GRUR 3.
73 Regulation (EC) No 715/2007 of the European Parliament and of the Council of 20 June 2007 on type approval of motor vehicles with respect to emissions from light passenger and commercial vehicles (Euro 5 and Euro 6) and on access to vehicle repair and maintenance information, OJ 2007 L 171, p. 1; amended by Regulation (EU) No 459/2012 of 29 May 2012 amending Regulation (EC) No 715/2007 of the European Parliament and of the Council and Commission Regulation (EC) No 692/2008 as regards emissions from light passenger and commercial vehicles (Euro 6), OJ 2012 L 142, p. 16; cf. also Art 12(2) Regulation (EU) 2015/758 of the European Parliament and of the Council of 29 April 2015 concerning type-approval requirements for the deployment of the eCall in-vehicle system based on the 112 service and amending Directive 2007/46/EC, OJ 2015 L 123, p. 77.
74 Directive (EU) 2015/2366 of the European Parliament and of the Council of 25 November 2015 on payment services in the internal market, amending Directives 2002/65/EC, 2009/110/EC and 2013/36/EU and Regulation (EU) No 1093/2010, and repealing Directive 2007/64/EC, OJ 2015 L 337, p. 35.
75 Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December 2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), establishing a European Chemicals Agency, amending Directive 1999/45/EC and repealing Council Regulation (EEC) No 793/93 and Commission Regulation (EC) No 1488/94 as well as Council Directive 76/769/EEC and Commission Directives 91/155/EEC, 93/67/EEC, 93/105/EC and 2000/21/EC, OJ 2007 L 136, p. 3; cf. also Commission Implementing Regulation (EU) 2016/9 of 5 January 2016 on joint submission of data and data-sharing in accordance with Regulation (EC) No 1907/2006 of the European Parliament and of the Council concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH), OJ 2016 L 3, p. 41.
76 Directive 2009/72/EC of the European Parliament and of the Council of 13 July 2009 concerning common rules for the internal market in electricity and repealing Directive 2003/54/EC, OJ 2009 L 211, p. 55; cf. Proposal for a Directive of the European Parliament and of the Council on common rules for the internal market in electricity (recast), COM(2016) 864 final/2; cf. also Regulation (EC) No 1099/2008 of the European Parliament and of the Council of 22 October 2008 on energy statistics, OJ 2008 L 304, p. 1.
77 Directive 2010/40/EU of the European Parliament and of the Council of 7 July 2010 on the framework for the deployment of Intelligent Transport Systems in the field of road transport and for interfaces with other modes of transport, OJ 2010 L 207, p. 1.
78 Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services, OJ 2019 L 186, p. 57.
As the data market is heterogeneous, an overarching data access regime would not meet the needs of different sectors (connected cars, mechanical engineering, smart grids, smart homes, the medical and health-care sectors, agriculture etc.). If access provisions are to be granted at all, it therefore seems preferable to create sector-specific ones.79
Legal Framework for Commercialisation of Digital Data 281
2018/1807,89 legally grounded in the competence provision of Art 114 TFEU. The
concrete objectives of this act, which in general terms aims for a more competitive
and integrated internal market for data storage and other processing services and
activities, are:
– Improving the mobility of non-personal data across borders in the single market, which is limited today in many Member States by localisation restrictions or legal uncertainty in the market;
– Ensuring that the powers of competent authorities to request and receive access to data for regulatory control purposes, such as for inspection and audit, remain unaffected; and
– Making it easier for professional users of data storage or other processing services to switch service providers and to port data, while not creating an excessive burden on service providers or distorting the market.90
More specifically, pursuant to Art 1, issues to be addressed include ‘data localisation
requirements, the availability of data to competent authorities and the porting of
data for professional users’. The scope is restricted to the processing of electronic
data other than personal data pursuant to Art 4(1) GDPR91 with a specific territorial
link to the EU (Art 2). This should avoid overlap with the GDPR. In case of conflict,
the GDPR prevails (Art 2 para 2 Regulation [EU] 2018/1807). Cloud computing, big-
data applications, artificial intelligence and the internet of things are the most
relevant applications.92
a data set composed of both personal and non-personal data, this Regulation applies to the non-personal data part of the data set. Where personal and non-personal data in a data set are inextricably linked, this Regulation shall not prejudice the application of [GDPR].' It is unclear whether a substantial area of application remains for non-personal data.95
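The mixed-dataset rule of Art 2 para 2 Regulation (EU) 2018/1807 can be pictured with a minimal sketch, assuming a hypothetical connected-car record. The field names and their classification as personal or non-personal are invented for illustration only; whether a field counts as personal data under Art 4(1) GDPR is a legal, not a technical, question.

```python
# Hypothetical sketch: partitioning a mixed data set into the part governed
# by the GDPR (personal data) and the part governed by Regulation (EU)
# 2018/1807 (non-personal data). The field lists below are invented.

PERSONAL_FIELDS = {"driver_name", "vehicle_id"}      # identifiable -> GDPR
NON_PERSONAL_FIELDS = {"engine_temp", "tyre_wear"}   # machine data -> FFD Reg.

def split_record(record: dict) -> tuple[dict, dict]:
    """Split one record into (personal, non_personal) parts."""
    personal = {k: v for k, v in record.items() if k in PERSONAL_FIELDS}
    non_personal = {k: v for k, v in record.items() if k in NON_PERSONAL_FIELDS}
    return personal, non_personal

record = {"driver_name": "A. N. Other", "vehicle_id": "V-123",
          "engine_temp": 92.5, "tyre_wear": 0.31}
personal, non_personal = split_record(record)
print(personal)       # {'driver_name': 'A. N. Other', 'vehicle_id': 'V-123'}
print(non_personal)   # {'engine_temp': 92.5, 'tyre_wear': 0.31}
```

Where the two parts are inextricably linked, no such clean split exists, and the GDPR takes precedence for the whole set.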
data porting
However, what is even more interesting here is the provision dealing with 'porting of data'. As discussed earlier, porting data freely – transferring data smoothly between systems/platforms offered by different providers – is meant to be 'a key facilitator of [informed] user choice [enabling] easy comparisons of the individual components of various data storage or other processing services and effective competition in markets for data storage or other processing services'.96
Art 6 reads as follows:
1. The Commission shall encourage and facilitate the development of self-
regulatory codes of conduct at Union level (‘codes of conduct’), in order to
contribute to a competitive data economy, based on the principles of transpar-
ency and interoperability and taking due account of open standards, covering,
inter alia, the following aspects:
(a) best practices for facilitating the switching of service providers and the
porting of data in a structured, commonly used and machine-readable
format including open standard formats where required or requested by
the service provider receiving the data;
(b) minimum information requirements to ensure that professional users are
provided, before a contract for data processing is concluded, with suffi-
ciently detailed, clear and transparent information regarding the pro-
cesses, technical requirements, timeframes and charges that apply in
case a professional user wants to switch to another service provider or port
data back to its own IT systems;
(c) approaches to certification schemes that facilitate the comparison of data
processing products and services for professional users, taking into account
established national or international norms, to facilitate the comparability
of those products and services. Such approaches may include, inter alia,
quality management, information security management, business con-
tinuity management and environmental management;
(d) communication roadmaps taking a multi-disciplinary approach to raise
awareness of the codes of conduct among relevant stakeholders.
2. The Commission shall ensure that the codes of conduct are developed in
close cooperation with all relevant stakeholders, including associations of
SMEs and start-ups, users and cloud service providers.
95 Sceptical in view of the draft's 'low impact' on the one hand and 'substantial additional costs' on the other: German 'Bundesrat', BR-Drucks. 678/1/17, p. 2.
96 Recital 20 first sentence, Recital 21 first sentence of the proposal.
The aim of this provision is to eliminate private restrictions, such as ‘legal, contract-
ual and technical issues hindering or preventing users of data processing services
from porting their data from one service provider to another or back to their own
information technology (IT) systems, not least upon termination of their contract
with a service provider’ (recital 5).
The Commission has chosen a self-regulation approach as, in its judgement, this
would not disturb the innovation process and rather bears in mind ‘the experience
and expertise of the providers and professional users of data storage or other
processing services’.97
Nonetheless, there are arguably problems with this aspect of the regulation, too. First of all, Art 6 merely encourages the switching of providers and the porting of data, without imposing any obligation and without precise specifications; the provision entails only soft law (codes of conduct). However, this might be precisely the best option at the moment, as it is not clear that the EU legislator has a complete overview of the data-driven market (who can claim to have such an overview in toto?). It is quite a tempting idea to start with soft law, to continue with analysis of the market, and to reserve the creation of 'hard law' for later.98
Then again, it is suboptimal99 to have two different portability provisions100 – Art
6 Regulation (EU) 2018/1807 and Art 20 GDPR.101
The restriction on personal scope – only professional users are captured by Art 6 –
makes the act even more irrelevant in practice. More generally, this aspect of the
regulation is directed at cases where data users make use of third-party provider
systems to manage their data – here, as noted, it may encourage a certain freeing up
of the data (by strengthening the user’s position against the provider); by contrast, it
leaves untouched other cases where organisations – including many larger ones –
entrust their data to their own competent IT specialists. Here, other mechanisms
appear necessary to encourage data sharing by such holders, namely by addressing
commercial factors that currently tell against the granting of third-party access.
97 Recital 21 second sentence of the proposal.
98 See Art 6(3) [and Recital 21 third sentence] of the proposal: 'The Commission shall review the development and effective implementation of such codes of conduct and the effective provision of information by providers no later than two years after the start of application of this Regulation.' See also Art 9(1): 'No later than [5 years after the date mentioned in Article 10(2)], the Commission shall carry out a review of this Regulation and present a report on the main findings to the European Parliament, the Council and the European Economic and Social Committee.'
99 EAID, Statement of 23/11/2017, p. 3.
100 Therefore, DAV Statement 4/2018, January 2018, p. 8 proposes to wait and see how Art 20 GDPR works in practice and then to address the corresponding issue for non-personal data.
101 Section 10.4.2.
promoting transfer to those market participants who would most benefit from using
them.108 It would imply (alongside a set of defensive rights)109 the exclusive right to
utilise data and to license their usage.110 This would potentially cover the whole
data-related value chain, including the copying, curation and analysis of data.111
108 SWD(2017) 2 final 33, 36; cf. Zech (n 17) 51, 77 et seq.
109 It would also be conceivable to create a data producer's right as a set of defensive rights in favour of the factual data holder, assuming the factual assignment is balanced and fair; in this direction arguably Kerber (2016) GRUR Int 989, 998: 'Our negative result in regard to protecting data through an exclusive property right does not imply that the data of data holders should not be protected against a wide array of behaviour that endangers and impedes the holding, use, and trade of these data. [. . .] In that respect, we could also talk about "rights on data" and "ownership" of data, which however would not encompass an exclusive right on these data (as physical property or traditional IPRs). Therefore the possession and use of data can be protected without the necessity of introducing exclusive property.'
110 SWD(2017) 2 final 33.
111 Zech (n 15) 317, 318.
112 Becker (n 8) 253, 256, who emphasises the correspondence between this specific performance and the exclusive allocation of rights in use. However, this aspect is rather coloured by continental, Hegelian droit d'auteur-type arguments for IP. It is less pronounced in Anglo-American law, where the main rationale has always been to reward the effort – whether creative in a higher sense or not – that went into producing a given work.
113 Spindler (n 10) 399, 401, who at the same time clarifies that licence contracts remain necessary either way; Zech (n 17) 51, 77; with respect to personal data: Specht (n 10) 411, 412.
114 MPI 5, 8.
115 GRUR 2.
116 One might think of the areas of autonomous transportation, industrial systems, personal systems, medical fields etc.
117 Nevertheless, in favour of such an approach GRUR 2.
such a right?118 In this regard, there has been little evidence of a market need for such a data right hitherto.119 The fact that presently not enough data are freely shared (to assist big-data applications, etc.), as holders prefer to hoard them for their own use, does not by itself show a market need for a data property right. If the exclusivity were not only factual but also legal, this could further increase the hoarding tendencies.
Another (and arguably the most controversial)120 issue is: who would be the original
holder of the exclusive data right? The Commission defines the ‘data producer’ as
‘the owner or long-term user [. . .] of the device’.121 Neither the assignment to the
‘data producer’ nor the Commission’s definition of the term seem compelling.
Another possibility would be the 'economic beneficiary'. However, the identity of the latter is not always clear. For example, is it the developer (who bears the development costs), the producer (who bears the manufacturing costs) or the user of a device (who bears the maintenance costs)?122 Moreover, the scope of protection (possibly a limitation of
an obligation to share data to a certain extent in order to achieve welfare-enhancing
effects with, for example, scientists performing research)124 need to be carefully
outlined,125 a significant undertaking. Additionally, the right balance and relation
must be struck between the data exclusive right on the one hand and other rights,126
such as those resulting from data protection law, copyright law, patent law, database
law, the law of know-how protection (regarding trade secrets), competition law,
private127 and even criminal law on the other hand. It would seem convincing to
classify the data producer’s right (as considered here) as a supplementary right in
relation to (most of )128 the aforementioned legal fields. However, the question arises
118 Cf. ibid.
119 Becker (n 8) 253, 255 et seq.: '. . . companies control their data via technical means so extensively that legal protection is not a pressing issue for them'; 'for companies with adequate IT-security, exclusive rights only become relevant for outgoing data'; '. . . especially . . . when data is exchanged with business partners; or if public availability of data is necessary, as is the case with most internet services'.
120 Zech (n 15) 317, 324; cf. also Becker (n 8) 253, 255.
121 COM(2017) 9 final 13; sceptical Wiebe (n 10) 394, 395.
122 See also GRUR 2, sceptical as to the possibility of an adequate personal allocation.
123 This would be the approach of Zech (n 15) 317, 318 et seq.
124 SWD(2017) 2 final 35.
125 Easily comprehensible would be a limitation in favour of the data producer to fulfil legal obligations such as monitoring products on the market (product safety and security). The same applies to free use by certain authorities with public welfare functions, for instance; Zech (n 15) 317, 325 et seq.
126 See the overview given by SWD(2017) 2 final 19 et seq.
127 In particular, tort law. In Germany it is debated whether the integrity of data should be directly protected by tort law (section 823(1) German Civil Code (BGB)). Renowned professors have already spoken in favour of such an approach: Spindler in Beck'scher Online-Groß-Kommentar (BeckOGK) zum Bürgerlichen Gesetzbuch, § 823 Rn 182 ff; Wagner in Münchener Kommentar zum Bürgerlichen Gesetzbuch (MüKo BGB), Band VI, 7. Auflage, 2017, § 823 Rn 296.
128 Contract law would be superseded, of course.
There are, however, also some problems with taking a contractual approach.
Contracts only have inter partes effect, which provides less legal certainty regarding
legal transactions and fails to address structural disparities between the potential
parties. If an exclusive use is desired in order to create or maintain data markets, and
‘property rights’ in data are rejected, secrecy – secured by technical means – remains
the only option.139 Drafting contracts that contain such factual exclusivity is rather
complex and therefore costly. Moreover, consumers often possess neither the equip-
ment nor the know-how to deal with these technical matters, nor do they have the
market power to safeguard their interests contractually, for example, regarding
connected cars.140 Rather, the previous de facto holder will ‘use standard contract
terms formulated in its own interest’.141
Furthermore, until a special data contract law regime is established as a standard, it is unclear what type of contract is relevant (e.g., whether it is a contract of sale or for services).142 This is perhaps more of an issue for continental codified systems of law; practical problems can arise regarding the review of terms and conditions. The Directive on certain aspects concerning contracts for the supply of digital content and digital services143 is intended to bridge this gap. However, the directive only applies to B2C contracts and explicitly refrains from stipulating a type of contract.144
One factual problem seems to be that current contractual practice tends to limit onward re-use of data.145 Parties are mostly not entitled to use the data for any purpose other than fulfilling the relevant contract, such as for their own purposes or for transfer to third parties.146
To reduce the imbalance in parties’ bargaining power, while maintaining a
contractual freedom-based approach to data access, certain default rules could be
considered, perhaps coupled with unfairness controls regarding contractual data
clauses and/or a set of standard contract terms.147
provides a certain protection, whereas it deserves attention that some Member States apply its
provisions or its ‘spirit’ also to b2b-constellations, SWD(2017) 2 final 21. In connection with data as
such the Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011
on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the
European Parliament and of the Council and repealing Council Directive 85/577/EEC and
Directive 97/7/EC of the European Parliament and of the Council, OJ 2011 L 304, p. 64 may
become relevant. The same will hold true regarding the final version of the Directive on certain
aspects concerning contracts for the supply of digital content, COM(2015) 634 final.
139 Zech (n 17) 51, 60: 'factual exclusivity – that is secrecy – is difficult to trade'.
140 Cf. Zech (n 17) 51, 60.
141 MPI 7.
142 Specht (n 10) 411, 414.
143 OJ L 136, p. 1.
144 Zech (n 17) 51, 61.
145 SWD(2017) 2 final 16 refers to Clark, 'Legal Study on Ownership and Access to Data' (2016) 79 (available at [https://tinyurl.com/y8w478m6]).
146 SWD(2017) 2 final 16.
147 See GRUR 4, calling for the introduction of an unfairness control in b2b constellations; MPI 7; cf. also Zech (n 15) 317, 327; cf. also Spindler (n 10) 399, 402 et seq.
personal data are a potential commercial asset, namely a tradable good. Indeed, they
may often constitute the most valuable kind of asset (e.g., enabling businesses to
target their customers with advertising based on a detailed knowledge of their
individual interests and assets).154 However, the strict requirements of data protection law have led parts of the legal literature to conclude that at present this potential is not being exploited to the full.155 In the light of existing data protection law, the
value of personal data is actually lower than it would be otherwise, as its use exposes
holders to significant regulatory costs and/or risks. To this extent, data protection and
commercialisation of data may be seen as contradictory objectives. On the one
hand, Art 20 GDPR may assist the movement of data to a certain extent by
safeguarding data portability.156 One of the few real innovations within the new data protection act is Art 20 GDPR, which reads as follows:
1. The data subject shall have the right to receive the personal data concerning
him or her, which he or she has provided to a controller, in a structured,
commonly used and machine-readable format and have the right to transmit
those data to another controller without hindrance from the controller to
which the personal data have been provided, where:
(a) the processing is based on consent pursuant to point (a) of Article 6(1) or
point (a) of Article 9(2) or on a contract pursuant to point (b) of Article
6(1); and
(b) the processing is carried out by automated means.
2. In exercising his or her right to data portability pursuant to paragraph 1, the
data subject shall have the right to have the personal data transmitted directly
from one controller to another, where technically feasible.
3. The exercise of the right referred to in paragraph 1 of this Article shall be
without prejudice to Article 17. That right shall not apply to processing
necessary for the performance of a task carried out in the public interest or
in the exercise of official authority vested in the controller.
4. The right referred to in paragraph 1 shall not adversely affect the rights and
freedoms of others.
In general, portability means the ‘ability to move, copy or transfer something’.157 The
rationale of Art 20 GDPR is to avoid lock-in effects and to improve the process of
switching from one service provider to another.158 It has more of a competition
154 Becker (n 8) 253, 259.
155 Cf. Berger (n 5) Abstract.
156 SWD(2017) 2 final 20.
157 SWD(2017) 2 final 46.
158 Hennemann, 'Datenportabilität' (2017) Privacy in Germany (PinG) 5; cf. also the 'switching mechanisms' of Art 9 of Directive 2005/29/EC, Art 30 Universal Service Directive 2002/22/EC and Art 13 No 2 lit. c, Art 16(4) lit. b of the proposal for a Directive on contracts for the supply of digital content, COM(2015) 634 final.
law159 than a data protection law background,160 although the data subject's right to data protection is indirectly reinforced by the improved data sovereignty that Art 20 GDPR brings.
Looking at the provision in more detail, Art 20 GDPR comprises two components: first, the transmission of personal data from one controller to another (where technically feasible); and second, the receipt of the data from the controller. A difficulty lies in construing the phrase 'personal data [. . .] which he or she has provided to a controller', since this determines which data are eligible to be ported. It is clear that data 'actively and knowingly' provided by the data subject are encompassed.161 The same arguably holds true for data provided automatically as a result of the subject's use of a device or a service.162 In contrast, data created by the controller on the basis of data that were provided by the data subject appear to fall outside the scope of Art 20 GDPR.163
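The line between 'provided' and controller-derived data can be pictured in a minimal sketch, assuming a hypothetical user profile. The field names and their allocation to the 'provided' category are invented for illustration; the legal boundary drawn by Art 20 GDPR is of course not reducible to a field list.

```python
import json

# Hypothetical sketch of an Art 20 GDPR export: only data 'provided by'
# the data subject (actively, or generated through use of the service)
# is ported, in a structured, commonly used, machine-readable format
# (here: JSON). Data derived by the controller (e.g. a profile score)
# stays out of scope. All field names are invented.

PROVIDED_BY_SUBJECT = {"email", "playlists", "location_history"}
DERIVED_BY_CONTROLLER = {"credit_score", "interest_profile"}

def export_portable_data(profile: dict) -> str:
    """Return a JSON export of the portable (subject-provided) fields."""
    portable = {k: v for k, v in profile.items() if k in PROVIDED_BY_SUBJECT}
    return json.dumps(portable, indent=2, sort_keys=True)

profile = {"email": "subject@example.org",
           "playlists": ["jazz", "ambient"],
           "interest_profile": "high-value customer"}  # derived -> excluded
print(export_portable_data(profile))
```

JSON stands in here for any 'structured, commonly used and machine-readable format' in the sense of Art 20(1); the provision itself does not prescribe a concrete format.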
In fact, even before the applicability of the GDPR, the data subject enjoyed a
well-known ‘right of data access’ (under Art 12 Directive 95/46/EC). This right, now
re-enacted in Art 15 GDPR, ‘supports’ the newly created data portability right. Art
15 GDPR reads as follows:
1. The data subject shall have the right to obtain from the controller confirm-
ation as to whether or not personal data concerning him or her are being
processed, and, where that is the case, access to the personal data and the
following information:
(a) the purposes of the processing;
(b) the categories of personal data concerned;
(c) the recipients or categories of recipient to whom the personal data have
been or will be disclosed, in particular recipients in third countries or
international organisations;
(d) where possible, the envisaged period for which the personal data will be
stored, or, if not possible, the criteria used to determine that period;
(e) the existence of the right to request from the controller rectification or
erasure of personal data or restriction of processing of personal data
concerning the data subject or to object to such processing;
(f ) the right to lodge a complaint with a supervisory authority;
(g) where the personal data are not collected from the data subject, any
available information as to their source;
(h) the existence of automated decision-making, including profiling, referred
to in Article 22(1) and (4) and, at least in those cases, meaningful
159 Paal and Pauly (eds), Datenschutz-Grundverordnung Bundesdatenschutzgesetz (2nd edn, Beck 2018) Art 20 para 6.
160 Hennemann (n 158) 5, 6.
161 SWD(2017) 2 final 46.
162 Ibid.
163 Ibid.
information about the logic involved, as well as the significance and the
envisaged consequences of such processing for the data subject.
2. Where personal data are transferred to a third country or to an international
organisation, the data subject shall have the right to be informed of the
appropriate safeguards pursuant to Article 46 relating to the transfer.
3. The controller shall provide a copy of the personal data undergoing process-
ing. For any further copies requested by the data subject, the controller may
charge a reasonable fee based on administrative costs. Where the data subject
makes the request by electronic means, and unless otherwise requested by the
data subject, the information shall be provided in a commonly used
electronic form.
4. The right to obtain a copy referred to in paragraph 3 shall not adversely affect
the rights and freedoms of others.
A seemingly simple solution, to avoid the application of these requirements, would
be for the data controller to anonymise the personal data. More generally, this might
open the door to the application of data economic law. However, the current law
places high demands on the process of anonymising (and de-anonymisation seems
to be possible quite often). Moreover, anonymised data is probably not as valuable
(e.g., for big-data applications) as non-anonymised data.164
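Why the law places such high demands on anonymisation can be hinted at with a minimal sketch: simply hashing an identifier – a common first attempt – generally amounts only to pseudonymisation, since records remain linkable and candidate identities can be re-hashed and compared. The field names below are hypothetical.

```python
import hashlib

# Hypothetical sketch: replacing a name with its SHA-256 hash. Because the
# same input always yields the same hash, records stay linkable, and anyone
# holding a list of candidate names can re-identify subjects by hashing the
# candidates -- which is why this is pseudonymisation, not anonymisation.

def pseudonymise(record: dict, id_field: str = "name") -> dict:
    out = dict(record)
    out[id_field] = hashlib.sha256(record[id_field].encode()).hexdigest()
    return out

r1 = pseudonymise({"name": "Jane Doe", "engine_temp": 92.5})
r2 = pseudonymise({"name": "Jane Doe", "engine_temp": 88.0})
# Linkability survives: both records carry the same pseudonym.
print(r1["name"] == r2["name"])  # True
```

Pseudonymised data remain personal data under the GDPR (cf. Recital 26); genuine anonymisation would have to sever this linkability, which is precisely what tends to destroy the data's analytical value.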
De lege lata it is true that such an evolution of data protection law conflicts with
the free revocability of consent pursuant to Art 7(3) GDPR and other principles of
data protection. Indeed, for this reason the Commission has resolutely opposed such
a development:
As the protection of personal data enjoys the status of a fundamental right in the EU
and processing of personal data is protected by the highest standards of data
protection legislation in the world, in the EU personal data cannot be subject to
any type of ‘ownership’.168
For the discussion in the US, which is not quite suitable for the European debate since the privacy approach is very different, just see Samuelson, 'Privacy as Intellectual Property' (1999) 52 Stanford Law Review 1125.
168 SWD(2017) 2 final 24.
169 See the many disadvantages listed under Section 10.3.2.2 regarding syntactic information. Mutatis mutandis, the list holds true here, too.
170 Cf. Berger (n 9) 340, 345.
171 Berger (n 9) 340, 351 et seq.
172 OJ L 136, p. 1.
173 Berger (n 9) 340.
174 Berger (n 9) 340, 343: 'contract in lieu of ban'.
including in Germany, the age threshold for consenting to have one’s data processed
is in any event lower than that for entering into a contract, rendering ostensible
‘contractual’ consent ineffective from the outset (cf. section 107 of the German Civil
Code (BGB)).
It may also be questioned whether, if data protection law were developed into a
data private (contract) law regime, contract law would provide adequate means of
dealing with personal data trades and movements. One could argue that contract
law would not be able to solve the problem of the imbalance of power between the
data subject and the data industry. As it stands, however, this is not really true.
Modern contract law certainly has the tools to balance the different power levels of
the parties,175 all the more so given that existing data protection law mandatorily
takes precedence and secures the data subject beyond purely contractual mechan-
isms. In other words, the parties cannot waive this minimum protection. This
becomes evident, once again, with respect to the ‘right [of the data subject] to
withdraw his or her consent at any time’ (Art 7(3) GDPR).
10.5 conflicts
Privacy concerns and the free flow of data may obviously conflict with each other. At
first sight, this does not affect non-personal data, such as that concerning non-
human physical phenomena, which remain outside the scope of the GDPR.
However, the economic reality is that datasets and data flows often contain both
personal and non-personal data.176 This phenomenon applies also to machine-
generated data, which are created without direct human intervention but rather
by computers or sensors. Indeed, as we have seen, differentiation between non-
personal and personal data is at the least very difficult,177 and maybe increasingly
impossible.
Where such data allow the identification of natural persons, the GDPR applies,
and with it the potential for very substantial fines.178 This clearly does not allow for
unrestricted trading and processing of personal data.179 It is remarkable (and
somewhat incomprehensible) that the Commission, in its current legislative initia-
tives regarding the EU data economy, has made little attempt to harmonise these
with the GDPR.180 Rather, as with other recent initiatives,181 such as the digital
content and digital services directive (promoting the use of data [personal as well as
175 Berger (n 9) 340, 351.
176 COM(2017) 9 final 9. This reality is also recognised by the European legislator, as Art 2 para 2 Regulation (EU) 2018/1807 illustrates.
177 GRUR 4.
178 Art 83(3), (6) GDPR.
179 GRUR 4: 'Personal data shall be adequate, relevant and limited to what is necessary in relation to the purposes for which it is processed.'
180 Becker (n 8) 253, 258 et seq.
181 See Section 10.3.2.
other data] as a means of payment: Art 3(1)),182 it appears that the EU has prioritised
the issue of ‘data as tradable goods’.
Here, data protection law, including the principle of data minimisation, has
opposite objectives to, for example, big-data applications. Hence, some commen-
tators argue that the data minimisation principle is no longer up to date.183 More-
over, as noted, pursuant to Art 7(3) GDPR, the ‘data subject shall have the right to
withdraw his or her consent at any time’, which would lead – if rigidly interpreted –
to a stoppage of the big-data process.184
10.6 Alternatives
Maybe data are not even the right starting point for regulating the data economy. Maybe disclosure of the methods and techniques used by algorithms is.185 However, the businesses concerned will argue that their algorithms are important trade secrets. In addition, it seems questionable whether the end user would benefit from such disclosure. In the case of artificial intelligence, it is often suggested that it would not even be possible to trace how the results were obtained.186
Data economic law will eventually have to be reconciled with data protection law, as the trick of ‘taking refuge in syntactic information’ is not convincing and data protection law remains the standard measure. In all this there remains one promising starting point: the autonomous decision of the data subject, in other words, informed consent.187 It is true that the ‘concept of informed consent [. . .] has proved insufficient in legal reality.’188 Therefore, the aim must be to optimise its efficiency.
Setting aside data economic law considerations, one of the central legal policy issues in recent years has been the improvement of (digital) ‘data sovereignty’,189 in combination with certain information obligations and/or the enforcement of
182 OJ L 136, p. 1.
183 GRUR 4.
184 Ibid, pointing out that, in addition, the several rights of the persons affected typically disturb big-data applications.
185 Ibid 5; regarding the regulation of algorithms see, e.g., Comandè, ‘Regulating Algorithms’ Regulation? First Ethico-Legal Principles, Problems, and Opportunities of Algorithms’ in Cerquitelli, Quercia, and Pasquale (eds), Transparent Data Mining for Big and Small Data (Springer 2017); Martini, ‘Algorithmen als Herausforderung für die Rechtsordnung’ (2017) Juristenzeitung (JZ) 1017.
186 It is questionable whether this is true. Computer scientists are currently conducting a great deal of research on the interpretability of AI systems (source: personal conversation with Avishek Anand, professor at Leibniz University Hanover).
187 Cf. Sattler, GRUR Newsletter 01/2017, 7 et seq.
188 Becker (n 5) 371.
189 Krüger, ‘Datensouveränität und Digitalisierung’ (2016) Zeitschrift für Rechtspolitik (ZRP) 190; Rosenzweig, ‘International Governance Framework for Cybersecurity’ (2012) 37 Canada-United States Law Journal (Can-US LJ) 405, 421 et seq.
296 Björn Steinrötter
10.7 Conclusions
This contribution has sought to demonstrate how complicated the establishment of a data economic law would be. Much remains unclear.
We face the factual problem of differentiating non-personal data from personal data, which is decisive for the applicable legal regime – especially taking into consideration that data protection law and data economy law have partly conflicting objectives. It seems tempting to establish a data economic law
190 Becker, ‘Ein Recht auf datenerhebungsfreie Produkte’ (2017) JZ 170; Becker (n 5) 371: ‘a right to data-avoiding products’.
191 Becker (n 5) 371, 384 et seq.
192 Becker (n 5) 371, 388 et seq.
193 Forgó, cited in Beer, ‘Europäische Datenschutzgrundverordnung: Rechtsinformatiker plädiert für Datenschutzampel’ <https://tinyurl.com/ycj3xla2>.
194 Note, however, the scepticism with respect to personal goods as trading objects, tracing back to Immanuel Kant, from: Becker (n 5) 371, 375 et seq.; cf. also Specht (n 10) 411, 412.
(apparently) outside the strict GDPR regime. However, this is arguably not feasible. The distinction between personal and non-personal data is in practice very complex and, in some cases, perhaps even impossible, especially since even machine-generated data are in many cases personal.195
In addition, the data in question are ultimately semantic in nature. Even big-data applications aim at the micro-content that the syntactic level ‘carries’; they do not aim at the syntactic level as such. Focusing on the syntactic level might work in theory, but it is not an option for future legislation. Even if it were possible to separate the two levels, the GDPR (and other legal fields such as copyright law) would still thwart regulation at the syntactic level. If the binary code ‘transports’ legally protected meaning, this protection would prevail and spill over onto the syntactic level.
The necessity for legal certainty in the fields of data trading and usage means that, despite these concerns, a data economic law will gradually be established. While data producers’ rights would not be a convincing component of such a law, access rights could be one way forward – along with an adjusted contract law.
Whatever the future holds, proper regulation of the data market remains a pressing legal topic for the foreseeable future. Discussion of the optimal legal framework for addressing the use of digital data will occupy legal scholarship for some time to come.
195 GRUR 4 has already pointed out this aspect.