
algorithms and law

Algorithms permeate our lives in numerous ways, performing tasks that until recently
could only be carried out by humans. Artificial Intelligence (AI) technologies, based on
machine learning algorithms and big-data-powered systems, can perform sophisticated
tasks such as driving cars, analyzing medical data, and evaluating and executing complex
financial transactions – often without active human control or supervision. Algorithms
also play an important role in determining retail pricing, online advertising, loan
qualification, and airport security. In this work, Martin Ebers and Susana Navas bring
together a group of scholars and practitioners from across Europe and the US to analyze
how this shift from human actors to computers presents both practical and conceptual
challenges for legal and regulatory systems. This book should be read by anyone
interested in the intersection between computer science and law, how the law can better
regulate algorithmic design, and the legal ramifications for citizens whose behavior is
increasingly dictated by algorithms.

martin ebers is Associate Professor of IT Law at the University of Tartu, Estonia and
permanent research fellow at the Humboldt University of Berlin. He is co-founder and
president of the Robotics & AI Law Society (RAILS). In addition to research and
teaching, he has been active in the field of legal consulting for many years. His main
areas of expertise and research are IT law, liability and insurance law, and European and
comparative law. In 2016, he published the monograph Rights, Remedies and Sanctions
in EU Private Law. Most recently, he co-edited the book Rechtshandbuch Künstliche
Intelligenz und Robotik (C.H. Beck 2020).
susana navas is Professor of Private Law at the Autonomous University of Barcelona,
Spain. Her main fields of interest are very broad, comprising matters as varied as child
law, copyright law, and European private law. In recent years she has focused on the
study of digital law. Her most recent publications in this field are Inteligencia artificial.
Tecnología. Derecho (Tirant Lo Blanch 2017), El ciborg humano (Comares 2018) and
Nuevos desafíos para el Derecho de autor. Robótica, Inteligencia artificial y Derecho
(Reus 2019). She has been involved in a number of research projects and has been a key
speaker at many conferences and workshops at national and European level. She has
enjoyed research stays at European and North American institutes and universities
and has supervised several doctoral theses that have been published.

Algorithms and Law

Edited by
MARTIN EBERS
Humboldt University of Berlin
University of Tartu

SUSANA NAVAS
Autonomous University of Barcelona

University Printing House, Cambridge cb2 8bs, United Kingdom
One Liberty Plaza, 20th Floor, New York, ny 10006, USA
477 Williamstown Road, Port Melbourne, vic 3207, Australia
314–321, 3rd Floor, Plot 3, Splendor Forum, Jasola District Centre, New Delhi – 110025, India
79 Anson Road, #06–04/06, Singapore 079906

Cambridge University Press is part of the University of Cambridge.


It furthers the University’s mission by disseminating knowledge in the pursuit of
education, learning, and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781108424820
doi: 10.1017/9781108347846
© Cambridge University Press 2020
This publication is in copyright. Subject to statutory exception
and to the provisions of relevant collective licensing agreements,
no reproduction of any part may take place without the written
permission of Cambridge University Press.
First published 2020
A catalogue record for this publication is available from the British Library.
Library of Congress Cataloging-in-Publication Data
names: Ebers, Martin, 1970– author. | Navas, Susana, 1966– author.
title: Algorithms and law / Martin Ebers, Susana Navas.
description: 1. | New York : Cambridge University Press, 2020. | Includes bibliographical
references and index.
identifiers: lccn 2019039616 (print) | lccn 2019039617 (ebook) | isbn 9781108424820 (hardback) |
isbn 9781108347846 (epub)
subjects: lcsh: Law–Data processing. | Information storage and retrieval systems–Law. |
Computer networks–Law and legislation.
classification: lcc k87 .e24 2020 (print) | lcc K87 (ebook) | ddc 343.09/99–dc23
LC record available at https://lccn.loc.gov/2019039616
LC ebook record available at https://lccn.loc.gov/2019039617
isbn 978-1-108-42482-0 Hardback
Cambridge University Press has no responsibility for the persistence or accuracy
of URLs for external or third-party internet websites referred to in this publication
and does not guarantee that any content on such websites is, or will remain,
accurate or appropriate.

Contents

List of Figures and Tables page xi


Notes on Contributors xiii
Preface xvii
Acknowledgments xxi

1 Robotics and Artificial Intelligence: The Present and Future Visions 1


Sami Haddadin and Dennis Knobbe
1.1 Machine Intelligence: History in a Nutshell 1
1.1.1 Back to the Roots 1
1.1.2 The Modern Era of Robotics and AI 9
1.1.3 A Big Step Forward 13
1.2 Key Technologies in Modern Robotics and Artificial Intelligence 16
1.2.1 Trustworthy Artificial Intelligence 16
1.2.2 Safety in Physical Human‒Robot Interaction 17
1.2.3 Robot Mechatronics As AI Embodiment 17
1.2.4 Multimodal Perception and Cognition 18
1.2.5 Navigation and Cognition 19
1.2.6 Modern Control Approaches in Robotics 20
1.2.7 Machine-Learning Algorithms 21
1.2.8 Learning in Intelligent and Networked Machines 24
1.3 Man and Machine in the Age of Machine Intelligence 25
1.3.1 Flying Robots 26
1.3.2 Mobile Ground Robots 27
1.3.3 Tactile Robots 27
1.4 Applications and Challenges of Robotics and AI Technologies 29
1.4.1 From Cleaning Robots to Service Humanoids 29
1.4.2 Production and Logistics 32
1.4.3 Robotic Disaster Relief 33

1.4.4 Multimodal Communication for AI-Enabled Telemedicine 34


1.4.5 The Future of Medicine with Molecular Robots 35
1.5 Conclusion 36

2 Regulating AI and Robotics: Ethical and Legal Challenges 37


Martin Ebers
2.1 Scenario 37
2.1.1 The Use of Algorithms by Businesses and Governments 37
2.1.2 Concepts and Definitions 40
2.1.3 Overview 44
2.2 The Problematic Characteristics of AI Systems from a Legal
Perspective 44
2.2.1 Complexity and Connectivity 44
2.2.2 From Causation to Correlation 45
2.2.3 Autonomy 46
2.2.4 Algorithms As Black Boxes 48
2.3 Fundamental Questions 50
2.3.1 Replacement of Humans by Machines: To What Extent? 50
2.3.2 Brain‒Computer Interfaces and Human Enhancement 52
2.4 Safety and Security Issues 53
2.4.1 Superintelligence As a Safety Risk? 53
2.4.2 Current Safety Risks 54
2.4.3 Security Risks Due to Malicious Use of AI 55
2.5 Accountability, Liability, and Insurance for Autonomous Systems 56
2.5.1 Emerging Questions 56
2.5.2 Overview of Opinions 57
2.5.3 Revising (Product) Liability Law in the European Union 57
2.5.4 A Specific Legal Status for AI and Robots? 60
2.6 Privacy, Data Protection, Data Ownership, and Access to Data 61
2.6.1 The Interplay between Data and Algorithms 61
2.6.2 Privacy, Data Protection, and AI Systems 62
2.6.3 Data Ownership v Data Access Rights 66
2.7 Algorithmic Manipulation and Discrimination of Citizens,
Consumers, and Markets 70
2.7.1 Profiling, Targeting, Nudging, and Manipulation of Citizens
and Consumers 71
2.7.2 Discrimination of Citizens and Consumers 76
2.7.3 Market Manipulation: The Case of Algorithmic Collusion 81
2.8 (International) Initiatives to Regulate AI and Robotics 83
2.8.1 Overview 83
2.8.2 European Union 86
2.8.3 International Organizations 89

2.8.4 Industry Initiatives and Self-Regulation at


International Level 91
2.9 Governance of Algorithms: Regulatory Options 92
2.9.1 Should AI Systems and Robotics be Regulated by
Ethics or Law? 92
2.9.2 General Regulation versus Sector-specific Regulation 93
2.9.3 Guiding Questions For Assessing the Need to Regulate 93
2.9.4 Level of Regulation: Global, International, National,
or Regional? 94
2.9.5 Instruments for Modernizing the Current Legal Framework 95
2.9.6 A Plea for an Innovation-friendly Regulation 97
2.10 Outlook 98

3 Regulating Algorithms: How to Demystify the Alchemy of Code? 100


Mario Martini
3.1 Algorithms As Key to a Digital Cognitive World: Tomorrow’s
Leviathan? 100
3.2 Out of Control? Risk Potentials of AI As Prediction Machines 102
3.2.1 Opacity 102
3.2.2 Unlawful Discrimination As Ethical and Legal Challenge 104
3.2.3 Monopolization of Market Power and Knowledge:
Influencing the Formation of Political Opinion 107
3.3 Regulatory Steps and Proposals for Further Legislative Measures 108
3.3.1 Collective Data Protection As Part of Consumer Protection
in the Digital World 109
3.3.2 Preventive Regulatory Instruments 112
3.3.3 Accompanying Risk Management and Supervision
by Public Authorities 125
3.3.4 Ex-post Protection 128
3.3.5 Self-Regulation: Algorithmic Responsibility Code with
a Declaration of Conformity 132
3.4 Conclusion 134

4 Automated Decision-Making under Article 22 GDPR: Towards


a More Substantial Regime for Solely Automated Decision-Making 136
Diana Sancho
4.1 Algorithms and Decision-Making 136
4.2 Automated Processing, Profiling, and Automated Decision-Making 138
4.2.1 A Dynamic Process 138
4.2.2 The Procedural Design of Article 22 140
4.3 Which Decisions? 141
4.3.1 Classification 141

4.3.2 Analysis 142


4.4 The Right to Human Intervention and Article 22 147
4.4.1 Prohibition 147
4.4.2 Right 148
4.4.3 Derogations 148
4.4.4 The WP29 Guidelines 149
4.5 The Right to an Explanation and Article 22 150
4.6 Conclusion 155

5 Robot Machines and Civil Liability 157


Susana Navas
5.1 Robot Machines and Virtual Robots 157
5.1.1 Broad Notion of a Robot 158
5.1.2 Strict Notion of a Robot 160
5.1.3 European Notion of a Robot 162
5.2 Robots from a Legal Perspective 162
5.2.1 Current Legal Framework 162
5.2.2 Regulation of the Design and Production of
Robot Machines 163
5.3 The Liability of the Owner of a Robot: Some Reflections 165
5.4 The Producer’s Liability for Damage Caused by a Robot
Machine: Review 166
5.4.1 Robot Machines As Products 167
5.4.2 Types of Defects 168
5.4.3 Notion of Producer: The ‘Market Share Liability’ Rule 169
5.4.4 The Consumer Expectations Test 171
5.4.5 Inclusion of Non-pecuniary Damages 172
5.5 Conclusions 173

6 Extra-Contractual Liability for Wrongs Committed


by Autonomous Systems 174
Ruth Janal
6.1 Damage Wrought by Autonomous Systems 174
6.1.1 Robots As Legal Persons 175
6.1.2 The Players Involved in Autonomous Systems 176
6.1.3 Existing Liability Regimes 177
6.2 Traditional Concepts of Liability 178
6.2.1 Fault-Based Liability 178
6.2.2 Liability for Things 180
6.2.3 Liability for Employees and Other Assistants 185
6.2.4 Liability for Minors 188


6.3 Perspective: Liability for Autonomous Systems 190


6.3.1 How to Define ‘Wrong’ in the Context of Autonomous
Systems 190
6.3.2 User of the Autonomous System 193
6.3.3 Keeper of the Autonomous System 194
6.3.4 The Operator’s Liability 202
6.4 No-Fault Compensation Schemes 205
6.5 Conclusion 205

7 Control of Algorithms in Financial Markets: The Example


of High-Frequency Trading 207
Gerald Spindler
7.1 Algorithms and Financial Markets 207
7.2 Control of Algorithms: High-Frequency Trading As a Blueprint
for Regulation? 209
7.3 Risks and Impact of High-Frequency Trading on Markets 209
7.4 The German High-Frequency Trading Act 210
7.5 Regulation on the European Level 213
7.5.1 MiFID II 213
7.5.2 Delegated Act: The Regulation of the European Union 216
7.6 Outlook: High-Frequency Trading As a Blueprint? 219

8 Creativity of Algorithms and Copyright Law 221


Susana Navas
8.1 Creativity 221
8.1.1 Definition: Types of Creativity 221
8.1.2 The Relationship between Creativity and Algorithms 223
8.1.3 Categories of Computational Art 225
8.2 Creation by Algorithms and Copyright 226
8.2.1 A Work Produced by an Algorithm as an
Original ‘Work’ 227
8.2.2 Authorship: Ownership and Exercise of Rights 230
8.3 Conclusion: Challenges for Copyright 232

9 “Wake Neutrality” of Artificial Intelligence Devices 235


Brian Subirana, Renwick Bivings, and Sanjay Sarma
9.1 Wake Neutrality and Artificial Intelligence 235
9.1.1 Product and Name Wake Neutrality of Smart Speakers 236
9.1.2 Intelligence Wake Neutrality of Smart Speakers 237
9.1.3 Wake Neutrality Legal Compliance: Open versus Closed
Approaches 238
9.1.4 A Voice Name System for Wake Neutrality 242


9.2 Six Requirements for Wake Neutrality of AI Devices in OCC 242


9.2.1 Requirements to Achieve Wake Neutrality 243
9.2.2 Requirements to Enforce Wake Neutrality 246
9.3 Net Neutrality and Wake Neutrality 247
9.4 Legal Programming Enablers of Wake Neutrality 252
9.5 Balancing Wake Neutrality with Automated Contracting 255
9.6 Implications of Wake Neutrality for the AI Architecture Stack 259
9.6.1 Wake Neutrality and the Sensor Stream 259
9.6.2 Wake Neutrality and the Cognitive Core 260
9.6.3 Wake Neutrality and the Brain Operating System 263
9.6.4 Wake Neutrality and the Expression Layer 266
9.7 Conclusion and Future Research 267

10 The (Envisaged) Legal Framework for Commercialisation of


Digital Data within the EU: Data Protection Law and Data
Economic Law As a Conflicted Basis for Algorithm-Based
Products and Services 269
Björn Steinrötter
10.1 The Link between Data and Algorithms 269
10.2 Definition of Digital Data 271
10.3 Data Economic Law 272
10.3.1 Brief Description and Rationale 272
10.3.2 The Free Flow of Data Initiative of the European
Commission 274
10.3.3 Non-personal Data Contract Law 287
10.4 Data Protection Law 289
10.4.1 Brief Description and Rationale 289
10.4.2 Personal Data Movement and Trading 289
10.4.3 Personal Data Ownership/Property in Personal Data? 292
10.4.4 Personal Data Contract Law 293
10.5 Conflicts 294
10.6 Alternatives 295
10.7 Conclusions 296

List of Figures and Tables

figures
1.1 Overview of available mobile robotic systems page 28
1.2 Overview of existing and upcoming service-oriented humanoid
systems 31
1.3 Telemedicine case scenario 35

tables
9.1 Six legal requirements to achieve and enforce wake neutrality 243
9.2 Legal risks of AI agent-contracting processes 258

Notes on Contributors

Renwick Bivings is a JD student at the Harvard Law School, USA. He has a BS in


Business Management from the Eller College of Management, University of Ari-
zona. He is a subcite editor of the Harvard Business Law Review, and has worked in
Tokyo for LINE Corporation and interned at the MIT Auto-ID Laboratory,
researching the legal hurdles to the democratization of the Internet of Things.
Martin Ebers is Associate Professor of IT Law at the University of Tartu, Estonia
and permanent Fellow (Privatdozent) at the Law Faculty of the Humboldt Univer-
sity of Berlin, Germany. He has taken part in various research projects for the
European Commission, especially in the field of EU private and consumer law,
and was one of the co-ordinators of the EU Consumer Law Acquis project. He is the
author and editor of ten books and over 80 articles published in national and
international journals. In addition to researching and teaching, he has been active
in the field of legal consulting for many years. His main areas of expertise and
research are IT law, liability and insurance law, and European and comparative law.
In 2016, he published the monograph Rights, Remedies and Sanctions in EU Private
Law. In 2017, he co-founded the Robotics & AI Law Society (RAILS; www.ai-laws
.org), of which he has been president since its foundation.
Sami Haddadin is Director of the Munich School of Robotics and Machine
Intelligence at the Technical University of Munich (TUM), Germany, where he
holds the Chair of Robotics and System Intelligence. Since 2017, he has been co-
founder and deputy chairman of the Robotics & AI Law Society (RAILS; www.ai-
laws.org). His research interests include intelligent robot design, robot learning,
collective intelligence, human‒robot interaction, nonlinear control, real-time plan-
ning, optimal control, human neuro-mechanics and prosthetics, and robot safety.
He holds degrees in electrical engineering, computer science, and technology
management from the Technical University of Munich and the Ludwig Maximilian
University of Munich. He received his PhD summa cum laude from RWTH

Aachen University. He has published more than 130 articles in international journals
and conference proceedings. Awards received include the George Giralt PhD Award (2012),
the RSS Early Career Spotlight (2015) and IEEE/RAS Early Career Award (2015),
the Alfred Krupp Award for Young Professors (2015), the German Future Prize of the
Federal President (2017), and the Leibniz Prize (2019).
Ruth Janal is Professor of Civil Law, Intellectual Property and Commercial Law at
the University of Bayreuth, Germany. She has authored and co-authored several
books on consumer protection, unfair commercial practices, comparative IP law,
and international civil procedure. Her current research focuses on the interplay
between the digital transformation and private law. She has given presentations and
written articles on commercial communication in the digital space, data access in
connected cars, liability of internet intermediaries, data protection in the Internet of
Things, and algorithmic decision-making.
Dennis Knobbe is a PhD student in the Department of Robotics and System
Intelligence at the Technical University of Munich (TUM), Germany. In 2016 he
was awarded an MSc in electrical engineering and information technology, with a
focus on control and systems theory, from the Christian-Albrecht University of Kiel.
His research interests are modeling, analysis and control of complex dynamic
systems, optimal and adaptive control, as well as collective intelligence, systems
biology, and bioinformatics.
Mario Martini holds the Chair of Administrative Science, Constitutional Law,
Administrative Law and European Law at the German University of Administrative
Sciences Speyer and is head of the Transformation of the State in the Digital Age
program at the German Research Institute for Public Administration, a fellow at the
Center for Advanced Internet Studies and a member of the German government’s
Data Ethics Commission. Since 2016, he has directed the Digitization program at the
German Research Institute for Public Administration. Until April 2010, he held a
chair in constitutional and administrative law at the Ludwig Maximilian University in
Munich. Mario Martini habilitated at the Bucerius Law School in 2006 and received
his PhD from the Johannes Gutenberg University, Mainz in 2000. His research
focuses in particular on the internet, data protection, media and telecommunications
law, law and economics, as well as open government and artificial intelligence.
Susana Navas is Professor of Private Law at the Autonomous University of Barce-
lona, Spain. Her main fields of interest are very broad, comprising matters as varied
as child law, copyright law, and European private law. In recent years she has
focused on the study of digital law. She is author or editor of more than 13 books
and over 80 articles, reviews, and chapters for national and international publishing
houses and journals. She has been involved in a range of research projects and has
been a key speaker at many conferences and workshops at national and European
level. She has enjoyed research stays at European and North American institutes and

universities (Institute of Advanced Legal Studies in London, Max-Planck-Institut für


Ausländisches und Internationales Privatrecht in Hamburg, Max-Planck-Institut für
Innovation und Wettbewerb in Munich, Fordham Law School in New York) and
has supervised several published doctoral theses.
Diana Sancho joined the University of Leicester, UK as lecturer in international
commercial law in 2013, following several years as associate professor of private
international law at the Universidad Rey Juan Carlos, Madrid. She holds LLB and
PhD degrees from the Complutense University of Madrid and an LLM from the
London School of Economics, and is a fellow of the UK Higher Education
Academy. She was awarded first prize for her research on international transfers of
personal data by the Spanish Data Protection Agency. Her research interests include
international data privacy law, private international law, and dispute resolution law.
Diana has authored three monographs (on cross-border mobility of companies,
international transfers of personal data, and model contracts for the transfer of
personal data to third countries) and has written multiple journal articles. She has
undertaken research at the Real Colegio Complutense, Harvard University, the
University of Melbourne, the Swiss Institute of Comparative Law, and the Institute
of Advanced Legal Studies, London, and has been a member of a number of
national and international research projects on jurisdiction and choice of law in
commercial contracts and torts and also on international aspects of company law.
Sanjay Sarma is the Vice President for Open Learning at Massachusetts Institute of
Technology, USA, which includes the Office of Digital Learning, the MIT Inte-
grated Learning Initiative and the Abdul Latif Jameel World Education Lab. He is
also the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor of
Mechanical Engineering at MIT. He received his bachelor’s degree from the Indian
Institute of Technology, his master’s degree from Carnegie Mellon University, and
his PhD from the University of California at Berkeley. A co-founder of the Auto-ID
Center at MIT, Sarma developed many of the key technologies behind the EPC
suite of RFID standards now used worldwide. He was the founder and CTO of
OATSystems, which was acquired by Checkpoint Systems in 2008, and he has
worked at Schlumberger Oilfield Services in Aberdeen, UK, and at the Lawrence
Berkeley Laboratories in Berkeley, California. His research interests include sensors,
the Internet of Things, cybersecurity, and RFID. Currently, Sarma serves on the
boards of GS1, EPCglobal, several start-up companies including Hochschild Mining
and Top Flight Technologies, and edX, the not-for-profit company set up by MIT
and Harvard to create and promulgate an open-source platform for the distribution
of free online education worldwide. He also advises several national governments
and global companies. Author of more than 100 academic papers on computational
geometry, sensing, RFID, automation, and CAD, Sarma has received many teach-
ing and research awards, including the MacVicar Fellowship, the Businessweek eBiz
Award, and InformationWeek’s Innovators and Influencers Award.

Gerald Spindler studied law and economics in Frankfurt am Main, Hagen, Geneva,
and Lausanne. He is Professor of Civil Law, Commercial and Economic Law,
Comparative Law, Multimedia and Telecommunications Law at the University of
Göttingen, Germany, where he is occupied, among other topics, with the legal
aspects of e-commerce. He was elected a full tenured member of the German
Academy of Sciences, Göttingen in 2004. He has published over 300 articles in
law reviews, as well as expert legal opinions. He serves as general rapporteur regarding
privacy and personality rights on the internet for the biennial German Law Conference.
He is editor of two of the best-known German law reviews covering the whole
field of cyberspace law and telecommunications law as well as co-editor of inter-
national journals on copyright law. He is also the founder and editor of JIPITEC, an
open access-based journal for intellectual property rights and e-commerce. In
2007 he was commissioned by the EU to review the e-commerce directive and
currently acts as an expert on the data economy for the single market (since 2017).
Björn Steinrötter was recently appointed junior professor of IT law and media law
at the University of Potsdam, Germany. Prior to that he was a postdoctoral
researcher at the Institute for Legal Informatics, Leibniz University Hanover, Ger-
many. His research activities focus on private law, its European and international
implications, IT law, in particular data protection and data economy law, and IP
law. He is a founding and board member of the Robotics and Artificial Intelligence
Law Society (RAILS; www.ai-laws.org).
Brian Subirana is Director of the MIT Auto-ID lab, and teaches at both MIT and
Harvard University, USA. Prof. Subirana's research centers on fundamental
advances at the intersection of the Internet of Things (IoT) and Artificial Intelligence,
focusing on use-inspired applications in industries such as sports, retail,
health, manufacturing, and education. He wants to contribute to a world where
spaces can have their own “brain” with which humans can converse. His Harvard
classes on artificial intelligence and the science of intelligence are the first MIT-run
non-residential online classes ever to offer academic credits. His MIT Sloan class
was the first course ever to offer a recorded lecture on MIT Open Courseware. He
obtained his PhD in computer science at the MIT Artificial Intelligence Laboratory
(now CSAIL) and his MBA at MIT Sloan, and has been affiliated to MIT for over 20
years in various capacities, including visiting professor at the MIT Sloan School of
Management. He has founded three start-ups and earlier in his career he worked at
The Boston Consulting Group. He has over 200 publications, including three
books, one of them on legal programming, and currently is working on publishing
the MIT Voice Name System, a conversational commerce open standard that can
be used in multiple industries such as health, education and retail.

Preface

algorithms and law


Algorithms come in many different shapes and forms, ranging from software systems
(e.g., data-mining programs, medical diagnosis systems, price algorithms, and expert
trading systems) to embodied robots (e.g., self-driving cars, unmanned underwater
vehicles, surgical robots, drones, and personal and social robots) and open-source
machine-learning systems.1 The increased use of these intelligent systems is
changing our lives, society, and economy – while at the same time challenging
the traditional boundaries of law. Algorithms are widely employed to make decisions
which have increasingly far-reaching impacts on individuals and society, potentially
leading to manipulation, biases, censorship, social discrimination, violations of
privacy, property rights, and more.
This has sparked a global debate on how to regulate AI and robotics. Although
many countries and sometimes also international/intergovernmental organizations
have laws, rules, and norms that are relevant to AI and robotics, most of this
legislation was not made with AI and smart robotics in mind. Accordingly, it is
difficult to gauge the extent to which existing legislation adequately regulates the
negative implications of intelligent machines. Since the beginning of 2017, many
governments across the world have begun to develop national strategies for the
promotion, development, and use of AI systems. The European Union, the United
Nations, the OECD, and many other international organizations have also
developed AI strategies, sometimes even with concrete suggestions of how to regu-
late AI and smart robotics in the future.

1 For definitions of the terms “algorithms”, “artificial intelligence”, “robotics”, “machine learning”, etc., used in this volume, see 1.2.1, 1.2.3 and 2.1.2.


In this volume, German and Spanish scholars have collaborated to study the
practical and legal implications that algorithms present for individuals, society, and
political and economic systems – discussing the various policy options for future
regulation and ethical codes.

content of this volume


In Chapter 1, Sami Haddadin and Dennis Knobbe provide a short history of intelli-
gent machines and an overview of the present state of robotics and AI, discussing
current research directions, outlining major technological challenges, and depicting
the future of man and machine that is yet to be built. The authors point out that the
large gap between the algorithmic and physical worlds leaves existing systems still far
from the vision of intelligent and human-friendly robots capable of interacting with
and manipulating our human-centered world. Against this backdrop, Haddadin and
Knobbe look into the emerging discipline of machine intelligence which could
provide a new holistic paradigm to address this issue, in particular by reunifying
perception (sensing), AI (planning), and robotics (acting) with the pervasive roles of
control and machine learning that are crucial if these intelligent systems are to
become reality in our daily lives.
In Chapter 2, Martin Ebers outlines the most urgent ethical and legal issues raised
by the use of self-learning algorithms, providing an overview of several key initiatives
at the international and European levels on AI ethics and regulation. In the author’s
opinion, policy makers should avoid premature, innovation-inhibiting regulation. As
there is no one-size-fits-all solution, the chapter underlines that the need for new
rules should be evaluated for each sector and for every application separately,
considering the respective risks and legal interests involved, in order to find the
right balance between keeping up with the pace of change and protecting people
from the harm posed by AI and robotic systems. At the same time a regulatory
environment needs to be created that avoids over-regulation but allows for innov-
ation and further development.
In Chapter 3, Mario Martini addresses the question “How to Demystify the
Alchemy of Code” by looking at three specific legal issues: the opacity of
machine-learning systems; unlawful discrimination; and monopolization of market
power and knowledge. The author examines existing and potentially adaptable legal
solutions and complements them with further proposals. The chapter designs a
regulatory model in four steps along the time axis: preventive regulation instru-
ments, accompanying risk management, ex post facto protection, and the vision of
an algorithmic responsibility code. According to the author, these elements should
form the legislative blueprint to regulate applications of artificial intelligence.
In Chapter 4, Diana Sancho focuses on one of the most important provisions for
the algorithmic society we have so far, namely Article 22 of the European General
Data Protection Regulation. The author shows that the European Union is a pioneer


in regulating automated (algorithmic) decision-making by setting not only formal but


also substantial standards, endorsing a non-strict concept of “solely” automated
decisions; explicitly recognizing the need for enhanced protection of vulnerable
adults and children; linking the much discussed data subject’s right to an explanation
to the right to challenge automated decisions; and interpreting Article 22(1) as a
“general prohibition”. This development represents, according to Sancho, an import-
ant step towards the development of a more mature and sophisticated regime for
automated decision-making that is committed to helping individuals retain adequate
levels of autonomy and control in decision-making, whilst meeting the technology
and innovation demands of the data-driven society.
Chapters 5 and 6 deal with one of the most important questions raised by
autonomous systems: whether and how traditional concepts and the provisions of
current legal regimes (e.g., regarding negligence or strict liability) can apply in the
context of emerging autonomous systems, or whether we need new rules. Susana
Navas (Chapter 5) and Ruth Janal (Chapter 6) expose the key issues, dealing with
(extra-)contractual liability of users, keepers, and operators for wrongs committed by
autonomous systems. Both authors explore how the concept of “wrong” can be
defined with respect to autonomous systems and what standard of care can reason-
ably be expected of them. Further, the contributions look at existing accountability
rules for things and people in various legal orders and explain how these rules can be
applied to autonomous systems.
In addition, Gerald Spindler analyses in Chapter 7 the control of algorithms in
financial markets, especially in the case of high-frequency trading. High-frequency
trading has become an important factor in financial markets and is one of the first
areas in algorithmic trading to be intensely regulated. Against this background, the
author gives an overview of the EU approach to regulating algorithmic trading and
considers whether this regime (with its pre- and post-trade controls, and real-time
monitoring) could be taken as a blueprint for other regulations on algorithms.
In Chapter 8, Susana Navas deals with the creativity of algorithms and copyright
law. The author discusses the possible emulation of human creativity by various
models of artificial intelligence systems. As the degree of originality of creations
using algorithms may surprise even human beings themselves, the author makes the
case for copyright protection of the “works” created by autonomous systems, espe-
cially taking into account the fundamental contributions of computer science
researchers on the one hand and, on the other, the investment in human and
economic resources that is required to obtain these “works”. The author not only
questions traditional categories in the field of IP rights but also suggests how the
law could approach “computational creativity”.
In Chapter 9, Brian Subirana, Renwick Bivings and Sanjay Sarma focus on voice-
recognition systems and smart speakers in the context of conversational commerce,
and especially on the regulatory options for standardizing the initial steps of the
human-to-machine interaction. According to the authors, voice is complicated to


regulate because it is ambiguous; it is neither race nor gender neutral because it


reveals significant amounts of information about the person through its tone, choice
of words and semantic constructs. Given the design choices for these new powerful
AI technologies, the chapter examines how to algorithmically enforce neutrality in
the behavior of such technologies. It concludes with a discussion of possible
standards to establish an “emotional firewall”.
In the book’s final Chapter 10, Björn Steinrötter analyses the legal framework of
(training) data. The chapter highlights that the European Union is facing consider-
able challenges in this regard, because it wants to promote both a high level of data
protection (GDPR) and at the same time a free flow of data (data economic law). In
light of these considerations, the author assesses the status quo of legislation (initia-
tives) and legal discussions at the European level.

Acknowledgments

The editors are grateful to the Autonomous University of Barcelona (Spain) for
providing access to relevant materials during the preparation of this book.
This book was supported by the Estonian Research Council’s grant no. PRG124
and by the Research Project “Machine learning and AI powered public service
delivery”, RITA1/02-96-04, funded by the Estonian Government.

1

Robotics and Artificial Intelligence

The Present and Future Visions

Sami Haddadin and Dennis Knobbe

introduction
The rise of artificial intelligence is mainly associated with software-based robotic
systems such as mobile robots, unmanned aerial vehicles, and increasingly, semi-
autonomous cars. However, the large gap between the algorithmic and physical
worlds leaves existing systems still far from the vision of intelligent and human-
friendly robots capable of interacting with and manipulating our human-centered
world. The emerging discipline of machine intelligence (MI), unifying robotics and
artificial intelligence, aims for trustworthy, embodiment-aware artificial intelligence
that is conscious both of itself and its surroundings, adapting its systems to the
interactive body it is controlling. The integration of AI and robotics with control,
perception and machine-learning systems is crucial if these truly autonomous
intelligent systems are to become a reality in our daily lives. Following a review of
the history of machine intelligence dating back to its origins in the twelfth century,
this chapter discusses the current state of robotics and AI, reviews key systems and
modern research directions, outlines remaining challenges and envisages a future of
man and machine that is yet to be built.

1.1 machine intelligence: history in a nutshell

1.1.1 Back to the Roots


The basic vision of robotics and AI can be traced back to twelfth-century Europe.1
Literature from this period mentions a mystical creature called the golem, which
had a human-like shape but was significantly stronger than a normal human. The
1 Wöll, “Der Golem: Kommt der erste künstliche Mensch und Roboter aus Prag?” in Nekula, Koschmal, and Rogall (eds), Deutsche und Tschechen: Geschichte - Kultur - Politik (Beck 2001) 233–245.


golem was described as a harmless creature used by its creator as a servant. In the
legend of the golem of Prague, first written down at the beginning of the nineteenth
century, Rabbi Löw created the golem to relieve him of heavy physical work and to
serve humans in general.2 The real-world realization of this idea had a long way to go.
Some of the earliest scientific writings relating to machine intelligence date back
to the fifteenth century, the period of the Renaissance. Leonardo da Vinci
(1452‒1519), the universal savant of his time,3 decisively influenced both art and
science with a variety of inventions, including, for example, a mechanical jumper,
hydraulic pumps, musical instruments, and many more. However, the two inven-
tions that stand out from a robotics point of view were Leonardo’s autonomous flying
machine and his mechanical knight, also known as Leonardo’s robot.4 The latter is a
mechanism integrated into a knight’s armor, which could be operated via rope pulls
and deflection pulleys, enabling it to perform various human-like movements ‒
clearly first steps in robotics. Wilhelm Schickard (1592‒1635)5 developed and built
the first known working mechanical calculator. It was a gear-based multiplication
machine that was also used for some of Kepler’s lunar orbit calculations.
Sir Isaac Newton (1642‒1726), one of the world’s greatest physicists, is best known
for laying the foundations of classical physics by formulating the three laws of
motion.6 He was also an outstanding mathematician, astronomer and theologian.
In the field of mathematics, he developed a widely used technique for solving
optimization problems (nowadays called Newton’s method) and founded the field
of infinitesimal calculus. Gottfried Wilhelm Leibniz (1646‒1716) worked in parallel
with Newton on this topic but conceived the ideas of differential and integral
calculus independently of Newton.7 Leibniz, who is known for various other
contributions to science, is often referred to as one of the first computer scientists
due to his research on the binary number system. Slightly later, Pierre Jaquet-Droz
(1721‒1790) built amazing mechanical inventions such as The Writer, The Musi-
cian and The Draughtsman.8 The Draughtsman, for example, is a mechanical doll
that draws with a quill pen and real ink on paper. The input device was a cam disk
that essentially functions as a programmable memory defining the picture to be
drawn. With three different cam disks, The Draughtsman was able to draw four
different artworks. In addition to these fascinating machines, Jaquet-Droz and his

2 Grün and Müller, Der hohe Rabbi Löw und sein Sagenkreis (Verlag von Jakob B Brandeis 1885).
3 Grewenig and Otto, Leonardo da Vinci: Künstler, Erfinder, Wissenschaftler (Historisches Museum der Pfalz 1995).
4 Moran, “The da Vinci Robot” (2006) 20(12) Journal of Endourology 986–990.
5 Nilsson, The Quest for Artificial Intelligence (Cambridge University Press 2009).
6 Westfall, Never at Rest. A Biography of Isaac Newton (Cambridge University Press 1984).
7 Nilsson (n 5).
8 Soriano, Battaïni, and Bordeau, Mechanische Spielfiguren aus vergangenen Zeiten (Sauret 1985).


business partner Jean-Frédéric Leschot later started to build prosthetic limbs for
amputees.
Another memorable figure in the history of machine intelligence is Augusta Ada
Byron King (1815‒1852).9 The Countess of Lovelace is known to be one of the first to
recognize the full potential of a computing machine. She wrote the first computer
program in history, which was designed to be used for the theoretical analytical
engine proposed by Charles Babbage. The programming language Ada was named
after her. These fundamental technological advances in the areas of mechanics,
electronics, communications and computation paved the way for the introduction
of the first usable computing machines and control systems, which began around
1868. The first automatic motion machines were systematically analyzed, docu-
mented, reconstructed, and taught via collections of mechanisms.
A mechanism can be defined as an automaton that transforms continuous,
typically linear, movements into complex spatial motions. Ludwig Burmester
(1840‒1927) was a mathematician, engineer and inventor, and the first person to
develop a theory for the analysis and synthesis of motion machines.10 Later in this
period, Czech writer and dramatist Karel Čapek (1890‒1938) first used the word
“robot” in his science-fiction work. The word “robot” is derived from robota, which
originally meant serfdom, but is now used in Czech for “hard work.” Through his
1920 play R.U.R. (Rossum's Universal Robots), Čapek spread his definition of robot to
a wider audience.11 In this play, the robots were manufactured to industry standards
from synthetic organic materials and used as workers in industry to relieve people
from heavy and hard work.
We now come to the pre-eminent philosopher and mathematician Norbert
Wiener (1894‒1964). From his original research field of stochastic and mathematical
noise processes, he and his colleagues Arturo Rosenblueth, Julian Bigelow and
others founded the discipline of cybernetics in the 1940s.12 Cybernetics combines
the analysis of self-regulatory processes with information theory to produce new
concepts, which can be said to be the precursors of modern control engineering,
thus building significant aspects of the theoretical foundations of robotics and AI.
Wiener developed a new and deeper understanding of the notion of feedback,
which has significantly influenced a broad spectrum of natural science disciplines.
Alan Turing (1912‒1954) worked in parallel with Wiener in the field of theoretical
computer science and artificial intelligence.13 Most people interested in artificial
intelligence today are familiar with his name through the Turing test. This test was

9 Nilsson (n 5).
10 Koetsier, “Ludwig Burmester (1840–1927)” in Ceccarelli (ed), Distinguished Figures in Mechanism and Machine Science, History of Mechanism and Machine Science, vol 7 (Springer 2009) 43–64.
11 Nilsson (n 5).
12 Ibid.
13 Ibid.


devised to determine whether a computer or, more generally, a machine could


think like a human. His groundbreaking mathematical model of an automatic
calculating machine that can solve complex calculations is today known as a Turing
machine. The Turing machine models the process of calculating in such a way that
its mode of operation can be easily analyzed mathematically, making the terms
“algorithm” and “computability” mathematically manageable for the first time.
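
To make the idea concrete, the following minimal sketch simulates a one-tape Turing machine in a few lines of Python. It is an editorial illustration rather than anything taken from the original text: the transition table maps (state, symbol) to (new state, symbol to write, head movement), and the example table that inverts a string of bits is purely hypothetical.

# Minimal Turing machine simulator (illustrative sketch; states and rules are hypothetical).
def run_turing_machine(transitions, tape, state="q0", head=0, halt_state="halt"):
    cells = dict(enumerate(tape))             # sparse tape; unknown cells read as blank "_"
    while state != halt_state:
        symbol = cells.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        cells[head] = write                   # write the new symbol under the head
        head += move                          # move the head left (-1) or right (+1)
    return "".join(cells[i] for i in sorted(cells))

flip_bits = {
    ("q0", "0"): ("q0", "1", +1),             # read 0 -> write 1, move right
    ("q0", "1"): ("q0", "0", +1),             # read 1 -> write 0, move right
    ("q0", "_"): ("halt", "_", 0),            # blank cell -> halt
}
print(run_turing_machine(flip_bits, "0110"))  # prints "1001_"
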
A similarly renowned researcher and colleague of Turing was John von Neumann
(1903‒1957).14 He developed the von Neumann computer architecture, which still
forms the basis of the operation of most computers today. As well as collaborating
with Turing on AI research, he also worked on other mathematical topics like linear
programming and sorting programs. Von Neumann’s concept of self-reproducing
machines, developed in 1940, testifies to his outstanding capabilities.15 The aim of
this concept was to describe an abstract machine, which, when in operation,
replicates itself. To achieve this goal von Neumann also developed the concept of
cellular automata. According to von Neumann, a cellular automaton is a collection
of states in a two-dimensional grid of cells, which forms a certain pattern. A cell
represents one of twenty-nine possible states, which can change over time. The
change of state of a cell is determined by the states of the neighboring cells from the
previous time step as input. The theory of cellular automata defined the elementary
building blocks responsible for the concept of self-replicating machines. With these
building blocks, von Neumann created the universal constructor, which is a par-
ticular pattern of different cell states. This pattern contains three different sub-units:
an information carrier for storing its own construction plan, a construction arm,
which builds itself up in the free grid according to the construction plan, and a
copying machine for copying the construction plan. This made it possible for von
Neumann to develop a self-replicating machine within the concept of cellular
automata.
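
The update rule described above can be illustrated with a deliberately simplified sketch. The Python fragment below implements a two-state cellular automaton (Conway's Game of Life) rather than von Neumann's twenty-nine-state construction; it is an editorial illustration only, showing how each cell's next state is computed from its neighbors' states in the previous time step, and the grid and “blinker” pattern are hypothetical examples.

# Two-state 2D cellular automaton (Game of Life) -- a simplified illustration of the
# neighbor-based update principle, not von Neumann's 29-state universal constructor.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    def live_neighbors(r, c):
        # Count live cells in the 8-cell neighborhood (grid edges treated as dead).
        return sum(grid[rr][cc]
                   for rr in range(max(0, r - 1), min(rows, r + 2))
                   for cc in range(max(0, c - 1), min(cols, c + 2))
                   if (rr, cc) != (r, c))
    return [[1 if (grid[r][c] and live_neighbors(r, c) in (2, 3))
                  or (not grid[r][c] and live_neighbors(r, c) == 3) else 0
             for c in range(cols)]
            for r in range(rows)]

blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
print(step(blinker))  # [[0, 1, 0], [0, 1, 0], [0, 1, 0]] -- the pattern oscillates
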
A famous mathematician and inventor who also worked in the field of digital
computing is Claude Elwood Shannon (1916‒2001). His groundbreaking ideas on
logical circuit design for digital computers and information theory had an enormous
impact on the research community of his time, and continue to do so today. In 1948,
with his book A Mathematical Theory of Communication,16 he laid important
foundations for today’s high-speed telecommunications and data processing by
mathematically tackling the problem of data transmission via a lossy communication
channel. He developed a coding algorithm that made it possible to restore the
originally transmitted information from previously coded lossy data. In a further

14 Ibid.
15 Von Neumann and Burks, “Theory of Self-Reproducing Automata” (1966) 5(1) IEEE Transactions on Neural Networks 3.
16 Shannon, “A Mathematical Theory of Communication” (1948) 27(3) Bell System Technical Journal 379‒423.


publication,17 he developed a complete theory of channel capacity, which defined


the maximum data rate that can be transmitted without loss over a specific type of
communication channel. In 1949, he published the formal basics of cryptography, thus
establishing it as a scientific discipline.18
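
The channel-capacity result referred to here is commonly stated, for a band-limited channel disturbed by additive white Gaussian noise, as the Shannon–Hartley formula; it is added here as an editorial illustration and is not quoted from the original text:

C = B \log_2 \left( 1 + \frac{S}{N} \right)

where C is the maximum data rate that can be transmitted with arbitrarily low error (in bits per second), B is the channel bandwidth in hertz, and S/N is the signal-to-noise ratio.
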
At the beginning of 1941, the engineer and computer scientist Konrad Zuse (1910‒
1995) made headlines with the world’s first functional programmable digital com-
puter, the Z3, built in cooperation with Helmut Schreyer.19 Zuse also demonstrated
that machines can assemble themselves on a variable scale, long before the idea of
robotic assembly systems had been conceived.20 Based on John von Neumann’s
ideas and proofs that it is theoretically possible to build a machine that can
reproduce itself, Zuse published his implementation ideas for such a machine in
the journal Unternehmensforschung under the title “Gedanken zur Automation und
zum Problem der technischen Keimzelle” (“Thoughts on Automation and the
Problem of the Technical Germ Cell”).21 In the 1970s he designed the assembly
robot SRS72 in his own construction workshop as a functional demonstration of this
idea. The SRS72 machine could automatically assemble prefabricated manually
supplied parts by positioning two work pieces and connecting them with screws.
This prototype machine was the starting point for a complete self-reproducing
system. According to Zuse, an entire automated workshop is required to perform
all the complex manufacturing and assembly steps necessary to obtain a self-
producing system.22
Independently of Zuse, the physicist Richard Phillips Feynman (1918‒1988) also
studied von Neumann’s ideas. His own research area was quantum field theory, and
he was awarded the Nobel Prize in 1965 for his work on quantum electrodynamics.
Today, however, he is also regarded as a visionary of self-reproducing machine
technology. His famous lecture, “There’s Plenty of Room at the Bottom,” on the
future opportunities for designing miniaturized machines that could build smaller
reproductions of themselves was delivered in 1959 at the annual meeting of the
American Institute of Physics at the California Institute of Technology and pub-
lished the following year in the journal Engineering and Science.23 Feynman’s
speech is frequently referenced in today’s technical literature in the fields of

17 Shannon, “Communication in the Presence of Noise” (1949) 86 Proceedings of the IRE 10–21. 10.1109/JRPROC.
18 Shannon, “Communication Theory of Secrecy Systems” (1949) 28(4) Bell System Technical Journal 656‒715.
19 Bauer et al., Die Rechenmaschinen von Konrad Zuse (Springer 2013).
20 Eibisch, “Eine Maschine baut eine Maschine baut eine Maschine. . .” (2011) 1 Kultur und Technik 48‒51.
21 Zuse, “Gedanken zur Automation und zum Problem der technischen Keimzelle” (1956) 1(1) Unternehmensforschung 160‒165.
22 Ibid.
23 Feynman, “There’s Plenty of Room at the Bottom,” talk given on 29 December 1959 (1960) 23(22) Science and Engineering 1–13.


micro- and nanotechnology, which speaks for the high regard in which his early
vision is held in expert circles.
Very few people had the knowledge and skills to program complex early comput-
ing machines like the Z3 computer. Unlike today’s programming languages that use
digital sequence code, these machines were programmed with the help of strip-
shaped data carriers made of paper, plastic or a metal-plastic laminate, which store
the information or the code lines in the punched hole patterns. One person who
mastered and shaped this type of programming was American computer scientist
Grace Hopper (1906‒1992).24 She did not work with the Z3, but on the Mark I and II
computers she designed the first compiler called A-0. A compiler is a program that
translates human readable programming code into machine-readable code. She also
invented the first machine-independent programming language, which led to high-
level languages as we know them today.
Returning to robotics in literature, a short story that still exerts a powerful influ-
ence on real-world implementation of modern robotics and AI systems as we know
them today is Isaac Asimov’s (1920‒1992) science-fiction story “Runaround,” pub-
lished in 1942, which contained his famous “Three Laws of Robotics”:25
One, a robot may not injure a human being, or, through inaction, allow a human
being to come to harm. [. . .] Two, a robot must obey the orders given it by human
beings except where such orders would conflict with the First Law. [. . .] And three,
a robot must protect its own existence as long as such protection does not conflict
with the First or Second Laws.

Asimov’s early ideas, including his vision of human‒robot coexistence, paved the
way for the concept of safety in robotics. Asimov’s Three Laws, formulated as basic
guidance for limiting the behavior of autonomous robots in human environments,
are enshrined, for example, in the Principles of Robotics of the UK’s Engineering
and Physical Sciences Research Council (EPSRC)/Art and Humanities Research
Council (AHRC), published in 2011.26 These principles lay down five ethical
doctrines for developers, designers and end users of robots, together with seven
high-level statements for real-world applications.
Shortly before the vast technological advancements in the second half of the
twentieth century began, the first rudimentary telerobotic system was developed in
1945 by Raymond Goertz at the Argonne National Laboratory.27 It was designed to
control, from a shelter, a robot that could safely handle radioactive material. From
the 1950s on, the first complex electronics were developed, further optimized
and miniaturized, and modern concepts of mechanics were created. The first

24 Beyer, Grace Hopper and the Invention of the Information Age (BookBaby 2015).
25 Asimov, Astounding Science Fiction, chapter “Runaround” (Street & Smith 1942).
26 Prescott and Szollosy, “Ethical Principles of Robotics” (2017) 29(2) Connection Science 119‒123.
27 Goertz and Thompson, “Electronically Controlled Manipulator” (1954) 12 Nucleonics (US) 46‒47.


mechatronic machines, such as fully automated electric washing machines28 or the first industrial robots,29 were invented, and the concept of AI was further developed.
Through the mathematical work of Jacques S Denavit (1930‒2013), Richard Harten-
berg (1907‒1997) and Rudolf August Beyer (1892‒1960), one of the most important
methods of calculating the direct kinematics of robots was developed around the
year 1955.30 This matrix calculus, known today as the Denavit‒Hartenberg convention, systematically describes the geometric relationship between a robot’s joint settings and the resulting position and orientation of its end effector in space, as sketched below. In the same year, John McCarthy (1927‒2011),
an American cognitive computer scientist and inventor of the famous programming
language Lisp, introduced the term “artificial intelligence.”31 He also organized the
famous Dartmouth Conference in the summer of 1956, which is considered the
birth of AI as a research field.
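To make the Denavit‒Hartenberg idea mentioned above concrete, the following minimal Python sketch builds the standard per-joint homogeneous transform from the four DH parameters and chains two such transforms for a purely hypothetical planar two-link arm; the numerical values are invented for illustration and are not taken from the works cited in this chapter.

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Homogeneous transform for one joint/link using the classic
        Denavit-Hartenberg parameters (angles in radians)."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([
            [ct, -st * ca,  st * sa, a * ct],
            [st,  ct * ca, -ct * sa, a * st],
            [0.0,      sa,       ca,      d],
            [0.0,     0.0,      0.0,    1.0],
        ])

    # Forward kinematics of a hypothetical planar 2-link arm: chaining the
    # per-joint transforms yields the end-effector pose in the base frame.
    T = dh_transform(np.pi / 4, 0.0, 0.3, 0.0) @ dh_transform(np.pi / 6, 0.0, 0.2, 0.0)
    print(T[:3, 3])  # end-effector position in the base frame
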
Marvin Lee Minsky (1927‒2016) was an American mathematician and cognitive
scientist as well as a colleague of McCarthy in the same AI working group at
Massachusetts Institute of Technology (MIT).32 He is known for the invention of
head-mounted graphical displays and for his work in artificial neural networks.
Together with Seymour Papert, he wrote the book Perceptrons, which is still
required reading for the analysis of artificial neural networks. He introduced several
famous AI models and developed SNARC, the first neural network simulator. The
late 1950s can also be seen as an important opening stage in the modern theory of
optimization and optimal control. The field of optimal control deals with the
process of calculating appropriate control laws for a given system in order to meet
certain desired optimality criteria. In this context, at the end of the 1950s the
mathematicians Lev Semyonovich Pontryagin (1908‒1988) and Richard E Bellman
(1920‒1984) published a series of new fundamental optimization methods, such as
Pontryagin’s maximum principle,33 Bang-Bang control,34 the Hamilton‒Jacobi‒
Bellman equation or the Bellman equation for dynamic programming,35 which
changed the entire field of mathematical optimization and control. These advances
continue to this day to have a major influence on various practical areas from
engineering to economics.

28 Milecki, “45 Years of Mechatronics–History and Future” in Szewczyk, Zieliński, and Kaliczyńska (eds), Progress in Automation, Robotics and Measuring Techniques (Springer 2015).
29 Nilsson (n 5).
30 Denavit and Hartenberg, “A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices” Trans. of the ASME (1955) 22 Journal of Applied Mechanics 215‒221.
31 Nilsson (n 5).
32 Ibid.
33 Boltyanskii, Gamkrelidze, and Pontryagin, “Towards a Theory of Optimal Processes” (in Russian) (1956) 110(1) Reports Acad Sci USSR 1–10.
34 Pontryagin et al., Mathematical Theory of Optimal Processes (in Russian) 1961.
35 Bellman, Dynamic Programming, vol 295 (Rand Corp Santa Monica CA 1956); Bellman, Dynamic Programming (Princeton University Press 1957).


In 1957 the first autonomous underwater vehicle, the Self-Propelled Underwater Research Vehicle (SPURV), was invented at the Applied Physics Laboratory at the University of Washington by Stan Murphy, Bob Van Wagenen, Wayne Nodland,
and Terry Ewart;36 this system was used to measure the physical properties of the sea.
A few years later, in 1960, electrical engineer and mathematician Rudolf Emil
Kalman (1930‒2016) developed the Kalman filter in cooperation with Richard
S Bucy and Ruslan L Stratonovich.37 This mathematical algorithm is capable of
predicting system behavior based on a dynamic model and suppressing additive
noise at the same time. In the context of this algorithm Kalman introduced two new
system analysis concepts: system observability and controllability.38 The concept of
observability analyzes how well the internal states of a system can be calculated by
measuring its output. Controllability measures how an input signal changes the
internal states of a system. These system analysis methods are crucial for the design
of a Kalman filter, but also provide very important system information for the design
of stable control loops in robots, process machines or driver assistance systems in
cars. The Kalman filter itself is still one of the most important signal-processing tools
in modern robotics, but is also used in various other disciplines such as AI, naviga-
tion, communications and macroeconomics.
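As a purely illustrative aside (not drawn from Kalman’s papers cited above), the following minimal Python sketch shows the predict‒update cycle of a one-dimensional Kalman filter; the process model, noise levels and measurements are invented.

    import numpy as np

    # Minimal 1-D Kalman filter: constant-value process with noisy measurements.
    # Model and noise values are arbitrary illustrative choices.
    x_est, p_est = 0.0, 1.0      # initial state estimate and its variance
    q, r = 1e-4, 0.5 ** 2        # process and measurement noise variances

    rng = np.random.default_rng(0)
    true_value = 1.0
    measurements = true_value + 0.5 * rng.standard_normal(50)

    for z in measurements:
        # Predict: the model says the state stays constant, uncertainty grows by q.
        x_pred, p_pred = x_est, p_est + q
        # Update: blend prediction and measurement according to their uncertainties.
        k = p_pred / (p_pred + r)          # Kalman gain
        x_est = x_pred + k * (z - x_pred)
        p_est = (1.0 - k) * p_pred

    print(round(x_est, 3))  # close to 1.0 despite the noisy measurements
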
The basic theories of robotics continued to expand, with developments in hard-
ware and control, such as electric motor and sensor systems. In 1961 Joseph Engel-
berger (1925‒2015), an American entrepreneur, physicist and engineer known as the
father of industrial robots, developed, together with his company, the first industrial
robot, Unimate.39 A few years later, in 1964, a machine-learning algorithm called
support-vector machine (SVM) was invented by mathematicians Vladimir Naumo-
vich Vapnik and Alexey Yakovlevich Chervonenkis (1938‒2014).40 The original
SVM algorithm is a linear classifier for pattern recognition. In 1992 the original
method was extended to a nonlinear classifier by applying the so-called kernel
trick;41 the algorithm’s final stage of development, still used today, was reached in
1995.42

36 Van Wagenen, Murphy, and Nodland, An Unmanned Self-Propelled Research Vehicle for Use at Mid-Ocean Depths (University of Washington 1963); Widditsch, “SPURV-The First Decade” No APL-UW-7215, Washington University Seattle Applied Physics Lab 1973.
37 Kalman, “A New Approach to Linear Filtering and Prediction Problems” Transaction of the ASME (1960) 82(1) Journal of Basic Engineering 35–45.
38 Kalman, “On the General Theory of Control Systems” (1960) Proceedings First International Conference on Automatic Control, Moscow, USSR.
39 Nilsson (n 5).
40 Chervonenkis, Early History of Support Vector Machines. Empirical Inference (Springer 2013); Vapnik and Chervonenkis, Об одном классе алгоритмов обучения распознаванию образов (On a Class of Algorithms of Learning Pattern Recognition) (1964) 25(6) Avtomatika i Telemekhanika.
41 Boser, Guyon, and Vapnik, “A Training Algorithm for Optimal Margin Classifiers” Proceedings of the Fifth Annual Workshop on Computational Learning Theory (ACM 1992) 144–152.
42 Cortes and Vapnik, “Support-Vector Networks” (1995) 20(3) Machine Learning 273‒297.


Back in 1966, the computer program ELIZA was developed and introduced at
MIT’s Artificial Intelligence Laboratory under the direction of Joseph Weizen-
baum.43 ELIZA is a program for natural language processing that uses pattern
matching and substitution methodologies to demonstrate communication between
humans and machines by simulating a coherent conversation. Three years later
American engineer Victor Scheinman (1942‒2016) designed the first successful
electrically operated, computer-controlled manipulator.44 This robotic arm had six
degrees of freedom, and was light, multi-programmable and versatile in its motion
capabilities. Later on, the robot was adapted for industrial uses such as spot welding
for the automotive industries. In the field of machine learning, David E Rumelhart,
Geoffrey E Hinton, and Ronald J Williams introduced the modern version of the
backpropagation algorithm in 1986.45 This method is used in artificial neural
networks to train networks and is a standard tool in this field today.

1.1.2 The Modern Era of Robotics and AI


The modern era of robotics and AI is characterized by ever greater miniaturization
of electronics and mechatronics and an enormous increase in computing power,
developments that have led to more practical robotic systems. The first humanoid
robot to mimic human motion, the WaBot 1, was introduced by a Japanese research
team from Waseda University in 1973.46 WaBot 1 had very basic capabilities to walk,
grab objects and transport them from one place to another. In 1978 Unimation
released a new and more versatile version of the Unimate, called the Programmable
Universal Machine for Assembly (PUMA).47 PUMA has become very popular in
industry and academia and over time has become an archetype for anthropo-
morphic robots. It remains widely used today as a reference example and benchmark
system in academic robotics books and publications worldwide.48
In the 1980s the modern field of reinforcement learning was founded by combin-
ing different approaches from various disciplines. The starting point was the idea of
trial-and-error learning, which was derived from psychological studies on animal
learning dating from the early twentieth century.49 Reinforcement is the expression
43 Nilsson (n 5).
44 Scheinman, “Design of a Computer Manipulator” Stanford AI Memo AIM-92, 1 June 1969.
45 Rumelhart, Hinton, and Williams, “Learning Representations by Back-Propagating Errors” (1986) 323 Nature 533–536.
46 Kato, “Development of WABOT 1” (1973) 2 Biomechanism 173‒214.
47 Beecher, Puma: Programmable Universal Machine for Assembly, Computer Vision and Sensor-Based Robots (Springer 1979).
48 Corke, “Robot Arm Kinematics” in Corke (ed), Robotics, Vision and Control (Springer 2017); Çakan and Botsali, “Inverse Kinematics Analysis of a Puma Robot by using MSC Adams” The VIth International Conference Industrial Engineering and Environmental Protection 2016 193–228.
49 Woodworth, Experimental Psychology (Holt 1938), Department of Psychology Dartmouth College Hanover, New Hampshire 1937; Woodworth, “Experimental Psychology (Rev edn)” (1954) 18(5) Journal of Consulting Psychology 386‒387.


of a certain behavior pattern in connection with an interaction of an animal with its environment. The animal receives different stimuli in temporal correlation with its
behavior, causing certain behavior patterns to persist even after the stimuli
have subsided. From the technical point of view, this process can be described as
an optimization problem with some stochastic features in terms of incomplete
knowledge of the whole system. A further development of the optimal control
framework already mentioned can be used to describe and solve such a system.
One of the first to implement this idea was Witten, with his adaptive optimal control
approach.50
Another important aspect of the rise of the modern theory of reinforcement
learning is temporal-difference (TD) learning, the origins of which lie in animal
learning psychology. It can be seen as either a subclass or an extension of the general
reinforcement learning idea. In contrast to the standard reinforcement approach, in
TD learning the learner’s behavior or strategy is adjusted not only after receiving a
reward, but after each action before receiving it, based on an estimate of an expected
reward with the help of a state value function. The algorithm is thus controlled by
the difference between successive estimates. In 1959 Arthur Samuel implemented
this approach for the first time in his checkers-playing program.51
In 1983, a further development of this reinforcement learning algorithm, the so-
called actor‒critic architecture, was applied to the control problem of pole balan-
cing.52 The year 1989 can be described as the year of full integration of optimal
control methods with online learning. The time difference and optimal control
methods were fully merged in this year with Chris Watkin’s development of the
Q-Learning algorithm.53
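As an illustration of the temporal-difference idea described above, the following minimal Python sketch runs tabular Q-learning on an invented five-state corridor; the environment, rewards and hyperparameters are arbitrary and are not drawn from Watkins’s thesis.

    import numpy as np

    # Tabular Q-learning on a toy 5-state corridor: move left/right, reward at the end.
    # Environment, rewards and hyperparameters are invented for illustration only.
    n_states, n_actions = 5, 2            # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.2     # learning rate, discount, exploration
    rng = np.random.default_rng(1)

    for _ in range(500):                  # episodes
        s = 0
        while s != n_states - 1:
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Temporal-difference update: move Q(s, a) toward reward + discounted estimate.
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next

    print(Q.argmax(axis=1))  # e.g. [1 1 1 1 0]: move right in every non-terminal state
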
In addition to reinforcement learning, the 1980s saw seminal work in robot
manipulator control. Early in the decade John J Craig and Marc Raibert published
a new hybrid control technique for manipulators. Their system made it possible to
simultaneously satisfy the position and force constraints of trajectories, enabling
compliant motions of robot manipulators.54 In the mid-1980s, Neville Hogan
developed impedance control for physical interaction,55 which was an important

50 Witten, “An Adaptive Optimal Controller for Discrete-Time Markov Environments” (1977) 34(4) Information and Control 286‒295.
51 Samuel, “Some Studies in Machine Learning Using the Game of Checkers” (1959) 3(3) IBM Journal of Research and Development 210‒229.
52 Barto, Sutton, and Anderson, “Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems” (1983) 5 IEEE Transactions on Systems, Man, and Cybernetics 834‒846.
53 Watkins, Learning from Delayed Rewards PhD Thesis, King’s College 1989.
54 Raibert and Craig, “Hybrid Position/Force Control of Manipulators” (1981) 103(2) Journal of Dynamic Systems, Measurement, and Control 126‒133.
55 Hogan, “Impedance Control: An Approach to Manipulation: Part I – Theory, Part II – Implementation, Part III – Applications” (1985) 107 Journal of Dynamic Systems, Measurement and Control 1‒24.


step toward enabling the safe human‒robot interactions of today. In 1986,56 Oussama Khatib published his work on real-time obstacle avoidance for manipula-
tors and mobile robots, which was the beginning of time-varying artificial potential
fields for collision avoidance. This concept made real-time robot operations in
dynamic and complex environments possible. A year later Khatib developed a new
operational space framework for unified motion and force control.57 This new
mathematical formulation of robotic manipulators made the modeling and control
of these nonlinear dynamic systems much easier to understand.
With the introduction of its P1 system, Honda entered humanoid research and
development in the early 1990s.58 P1 was 191.5 cm tall, weighed 175 kg and was able
to walk at a speed of up to 2 km/h, with its battery lasting for around 15 minutes.
Further developments in the field of telerobotics led to the success of the Rotex
mission in 1993, in which researchers around Gerd Hirzinger developed the first
Earth-controlled space robot.59
In 1995 Ernst Dickmanns and his team pioneered autonomous driving, conduct-
ing a journey from Munich in Germany to Odense in Denmark and back (approxi-
mately 1,758 km) as part of the PROMETHEUS project. They used a Mercedes-
Benz S-class vehicle converted for autonomous driving. About 95% of this distance
could be covered completely autonomously, a milestone in autonomous driving.60
In the following years, IBM developed the Deep Blue system.61 Deep Blue was an
intelligent computer program designed for playing chess. It is known for being the
first computer system that, with the physical support of a human to execute the
actual moves, won a game of chess against reigning world champion Garry Kasparov
under regular time rules.
Following on from the pioneering work of RC Smith and P Cheeseman in 198662
and the research group of Hugh F Durrant-Whyte in the early 1990s,63 the next steps
toward autonomous propulsion systems were taken at the beginning of the twenty-
first century with the foundations of modern simultaneous localization and mapping
(SLAM) algorithms for vehicle or robot navigation. As part of this development, in
1998 Wolfram Burgard and colleagues published a new software architecture for an

56 Khatib, Real-Time Obstacle Avoidance for Manipulators and Mobile Robots. Autonomous Robot Vehicles (Springer 1986).
57 Khatib, “A Unified Approach for Motion and Force Control of Robot Manipulators: The Operational Space Formulation” (1987) 3(1) IEEE Journal on Robotics and Automation 43‒53.
58 Hirose and Ogawa, “Honda Humanoid Robots Development” (2006) 365(1850) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 11‒19.
59 Hirzinger et al., “Sensor-Based Space Robotics-ROTEX and Its Telerobotic Features” (1993) 9(5) IEEE Transactions on Robotics and Automation 649‒663.
60 Dickmanns, “Computer Vision and Highway Automation” (1999) 31(5–6) Vehicle System Dynamics 325‒343; Dickmanns, “Vehicles Capable of Dynamic Vision” (1997) 97 IJCAI.
61 Nilsson (n 5).
62 Thrun, Burgard, and Fox, Probabilistic Robotics (The MIT Press 2005).
63 Ibid.


autonomous tour-guide robot used in the Deutsches Museum Bonn.64 These innovative algorithms for autonomous navigation provided the capability for the
robot to guide museum visitors quickly and safely through a large crowd. In
2005 Thrun and his Stanford University racing team won the DARPA Grand
Challenge with their Stanley autonomous driving system, showing the capabilities
of SLAM. Their self-driving car completed a 212-kilometer off-road circuit in 6 hours
and 54 minutes.65 Nowadays, SLAM algorithms are implemented in some con-
sumer robot vacuum cleaners like the Roomba system from iRobot.66
In the year 2000 a significant technological step forward in humanoid robots
came with Honda’s introduction of its latest humanoid system, Asimo.67 Asimo had
basic abilities to walk and socially interact with people. In the same year Intuitive
Surgical released the Da Vinci robot-assisted surgical system for usage in teleopera-
tive minimally invasive surgery, based on development work at Stanford Research
Institute.68 To this day, this system and its successors are used in hospitals around the
world in a range of surgical procedures ranging from hysterectomies in gynecology
to general surgery.69 In 2002, the German Aerospace Center (DLR) introduced the
lightweight robot III (LWR III), which marked a technological leap forward in the
field of lightweight robotics.70 Its new design paradigms enabled direct measure-
ments and active damping of joint vibrations, together with almost immediate
detection of collisions with the environment.71 The robot was also able to carry
and manipulate loads up to its own weight.
Around the same time, the Mars Exploration Rover (MER) mission was
launched, showing new possibilities in telerobotics and space robotics.72 The year
2010 was the year that drones became commercially available with the launch by

64 Burgard et al., “The Interactive Museum Tour-Guide Robot” Aaai/iaai. 1998.
65 Thrun et al., “Stanley: The Robot that Won the DARPA Grand Challenge” (2006) 23(9) Journal of Field Robotics 661‒692.
66 Knight, “With a Roomba Capable of Navigation, iRobot Eyes Advanced Home Robots” (2015) MIT Technology Review. https://www.technologyreview.com/2015/09/16/247936/the-roomba-now-sees-and-maps-a-home/. Date of consultation: May 2020.
67 Hirose and Ogawa (n 58).
68 Hockstein et al., “A History of Robots: From Science Fiction to Surgical Robotics” (2007) 1(2) Journal of Robotic Surgery 113‒118.
69 Leung and Vyas, “Robotic Surgery: Applications” (2014) 1(1) American Journal of Robotic Surgery 1–64.
70 Hirzinger et al., “DLR’s Torque-Controlled Light Weight Robot III-Are We Reaching the Technological Limits Now?” (2002) 2 Proceedings 2002 IEEE International Conference on Robotics and Automation (Cat No 02CH37292), Washington, DC 1710‒1716; Albu-Schäffer, Haddadin, Ott, Stemmer, Wimböck, and Hirzinger, “The DLR Lightweight Robot: Design and Control Concepts for Robots in Human Environments” (2007) 34(5) Industrial Robot: An International Journal 376‒385.
71 Haddadin et al., “Collision Detection and Reaction: A Contribution to Safe Physical Human‒Robot Interaction” 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2008, 3356–3363.
72 Squyres, Roving Mars: Spirit, Opportunity, and the Exploration of the Red Planet (Hachette Books 2005).


French company Parrot of its Parrot AR Drone, the first ready-to-fly drone available
on the open market.73
After years of basic research in the field of safe physical human‒robot interaction,
ranging from standardized dummy crash tests to injury analysis of human‒robot
impacts by soft-tissue experiments, in 2011 Sami Haddadin published a comprehen-
sive study of how robots could for the first time meet Asimov’s First Law in everyday
situations.74 The study developed the injury analysis, design paradigms and
collision-handling algorithms to ensure that robots could interact safely with
humans. It laid the foundations for the essential international safety standardization
and regulation of physical human‒robot interaction, paving the way for robotics in
everyday life.
In the same year, a new AI system was introduced by IBM.75 Watson was the first
computer system that could answer questions on the American quiz show Jeopardy!
In 2013, IBM made the Watson API available for software application providers. The
system is frequently used today as an assistive system in medical data analysis, for
example in cancer research.76

1.1.3 A Big Step Forward


The year 2012 saw the revival of deep neural networks (DNNs), also referred to as
deep learning, which are further developments from the standard neural network
approaches.77 The idea of DNN was first introduced in 1965 by Oleksiy Ivakhnenko
and Valentin Lapa.78 However, it took decades and substantial progress in comput-
ing technology before this idea could be used in well-functioning applications. In
2012 this stage was reached by Geoffrey Hinton and his team when their algorithm
won the ImageNet image-recognition competition.79 Other researchers
such as Yoshua Bengio and Yann LeCun also contributed significant papers to
progress in deep learning.80

73 Bristeau et al., “The Navigation and Control Technology inside the ar. drone micro uav” (2011) 44(1) IFAC Proceedings 1477‒1484.
74 Haddadin, Towards Safe Robots: Approaching Asimov’s 1st Law, PhD Thesis, RWTH Aachen 2011; published by Springer 2014.
75 Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not” New York Times (16 February 2011).
76 Somashekhar et al., “Watson for Oncology and Breast Cancer Treatment Recommendations: Agreement with an Expert Multidisciplinary Tumor Board” (2018) 29(2) Annals of Oncology 418‒423.
77 Parloff, “Why Deep Learning Is Suddenly Changing Your Life” (2016) Fortune.
78 Ivakhnenko and Lapa, “Cybernetic Predicting Devices” (1965) CCM Information Corporation.
79 Krizhevsky, Sutskever, and Hinton, “Imagenet Classification with Deep Convolutional Neural Networks” (2012) Advances in Neural Information Processing Systems.
80 LeCun, Bottou, Bengio, and Haffner, “Gradient-Based Learning Applied to Document Recognition” (1998) 86(11) Proceedings of the IEEE 2278‒2324; LeCun, Bengio, and Hinton, “Deep Learning” (2015) 521(7553) Nature 436.


Boston Dynamics, founded by ex-MIT professor Marc Raibert, first made the
news in 2012 with its four-legged robot BigDog.81 BigDog was a dynamically stable
four-legged military robot that could withstand strong physical hits and remain
stable. In 2013 Boston Dynamics unveiled their two-legged humanoid robot, Atlas.82
Its humanoid shape was designed to allow it to work with tools and interact with the
environment. The system has since been further developed and equipped with
increasingly complex acrobatic skills.
In the same year a team from Johns Hopkins University and DLR conducted a
telepresence experiment in which a Da Vinci master console in Baltimore, USA
controlled a DLR lightweight robot in Oberpfaffenhofen, Germany, over 4,000
miles away.83 This marked a milestone in telerobotics by combining telepresence
via standard internet with the slave robot system’s local AI capabilities.
In 2014, a major step forward in certification and standardization of personal
care robot safety requirements was taken with the publication of the ISO 13482
standard, a catalogue of requirements, protective measures and guidelines for the
safe design and use of personal care robots, including mobile servant robots, physical
assistant robots and person-carrier robots, generally earthbound robots for nonmedi-
cal use.84
The next step in software-based AI was demonstrated a year later, in 2015, by
DeepMind’s AlphaGo system.85 AlphaGo’s learning algorithms included a self-
improvement capability through which it could master highly complex board
games, such as Go, chess and shogi, by playing the games with itself.
By 2016, virtual assistants had finally arrived in everyday life.86 In 2011, Apple
started to deliver smartphones with a beta version of their virtual assistant Siri.
Further systems have been launched, including Cortana from Microsoft, Alexa from
Amazon and finally Google Assistant from Google. Virtual assistants in general
are designed to perform tasks given by a user, usually by voice command, and
reflect current state-of-the-art speech-based human‒machine communication
technologies.

81 Playter, Buehler, and Raibert, “BigDog, Unmanned Systems Technology VIII” vol 6230 International Society for Optics and Photonics, 2006.
82 Fukuda, Dario, and Yang, “Humanoid Robotics – History, Current State of the Art, and Challenges” (2017) 13(2) Science Robotics, eaar4043.
83 Bohren, Papazov, Burschka, Krieger, Parusel, Haddadin, Shepherdson, Hager, and Whitcomb, “A Pilot Study in Vision-Based Augmented Telemanipulation for Remote Assembly over High-Latency Networks” (2013) Proceedings of IEEE ICRA 3631‒3638.
84 ISO, ISO 13482:2014: Robots and Robotic Devices ‒ Safety Requirements for Personal Care Robots (International Organization for Standardization, 2014); Jacobs and Virk, “ISO 13482: The New Safety Standard for Personal Care Robots” ISR/Robotik, 41st International Symposium on Robotics 2014.
85 Silver et al., “Mastering the Game of Go without Human Knowledge” (2017) 550 Nature 354‒359.
86 Goksel and Emin Mutlu, “On the Track of Artificial Intelligence: Learning with Intelligent Personal Assistants” (2016) 13(1) Journal of Human Sciences 592‒601.


The next level of underwater robotics and telerobotics was introduced by Khatib
and his research team at Stanford University in 2016. The teleoperated underwater
humanoid robot system OceanOne demonstrated its bimanual manipulation cap-
abilities in an underwater research mission to study the wreck of La Lune, King
Louis XIV’s flagship, off the Mediterranean coast of France in 2016.87 In 2017 Franka
Emika’s human-centered industrial robot system Panda was introduced.88 This next-
generation industrial robot is the first sensitive, networked, cost-effective and adap-
tive tactile robot. It is operated via simple apps on personal devices like tablets or
smartphones. This first mass-produced robot is self-assembled, showing the potential
for versatile manufacturing and marking the first step into the future of self-
replicating machines.89
One year later Skydio launched its Skydio R1 drone, a further step in the direction
of intelligent flying robots. This system has stable flying capability in windy environ-
ments, can follow its user reliably and while following avoids obstacles in its way.90
A new concept in neural networks was also published in 2018.91 First-order
principles networks (FOPnet) use basic physical assumptions to build a physically
informed neural network. With the application of this new concept, it has already
been shown that both the body structure and dynamics of a humanoid can be
learned on the basis of basic kinematic laws as well as the balance of force and
moments acting on this kind of multi-body system. This can be regarded as the first
step toward machines able to learn self-awareness.
The lighthouse initiative Geriatronics from the School of Robotics and Machine
Intelligence at the Technical University of Munich was launched in 2018 with the
aim of developing robot assistants for independent living for the elderly.92 This
initiative is sustainably supported by the Bavarian State Ministry of Economic
Affairs, Energy and Technology and LongLeif GaPa Gemeinnützige GmbH.
In early 2019, Haddadin, Johannsmeier, and Ledezma published a paper in which
they discussed a concept they called Tactile Internet as the next-generation Internet
of Things.93 They propose that 5G communication infrastructures combined with
rich tactile feedback and advanced robotics provide the potential for a meaningful

87 Khatib et al., “Ocean One: A Robotic Avatar for Oceanic Discovery” (2016) 23(4) IEEE Robotics & Automation Magazine 20‒29.
88 Franka Emika GmbH, Franka Emika <https://www.franka.de/>, 17 January 2019.
89 Franka Emika GmbH, “Franka Emika R:Evolution” <https://www.youtube.com/watch?v=_FbhNsRjqdQ>, 04/05/2019.
90 Skydio Inc, Skydio <https://www.skydio.com>, 4 May 2019.
91 Díaz Ledezma and Haddadin, “FOP Networks for Learning Humanoid Body Schema and Dynamics” (2018) 2018 IEEE-RAS 18th International Conference on Humanoid Robots (Humanoids), Beijing, China 1‒9.
92 Technische Universität München MSRM – Munich School of Robotics and Machine Intelligence, “Lighthouse Initiative Geriatronics” <https://www.msrm.tum.de/en/geriatronics/>, 5 May 2019.
93 Haddadin, Johannsmeier, and Díaz Ledezma, “Tactile Robots as a Central Embodiment of the Tactile Internet” (2019) 107(2) Proceedings of the IEEE 471‒487.


and immersive connection to human operators via advanced “smart wearables” and
Mixed Reality devices, effectively making real avatars a reality.

1.2 key technologies in modern robotics and artificial intelligence

This section reviews the progress in key technologies that has paved the way for
robotics and AI technologies to integrate perception, AI and robotics into a trust-
worthy, embodiment-aware artificial intelligence system driving intelligent robots.

1.2.1 Trustworthy Artificial Intelligence


Artificial intelligence (AI) is a superordinate term for the discipline that creates
intelligent algorithms and systems, which can be software-based or actual physical
systems, or combinations of the two. An AI system uses sensors to perceive its
surroundings, may use actuators to interact with it, and collects and analyzes large
amounts of partly unstructured data, processing and interpreting it to uncover latent
knowledge and skills. Using this knowledge, it supports decision-making to reach the
desired objectives of humans, for example, by acting as a software-based advisor or by
adjusting its embodiment with actuators. AI systems are capable of learning from
their previous actions and the corresponding responses, making them self-
optimizing. AI has wide fields of application and great potential to help with the
challenges of, for example, improving medical diagnostics and therapy, finding
ethically acceptable ways to cope with demographic change and reducing the effects
of environmental problems such as climate change or pollution. Other useful
applications are promoting sustainability in everyday life, for example by optimizing
transport and logistics, promoting sustainable agriculture, or reducing strenuous
physical labor in the workplace.
In order for AI to find its way into people’s everyday lives as a useful helper, it is
important that this technology is trustworthy. AI is often used where humans reach
their limits, such as when analyzing and interpreting large amounts of unstructured
data. Trust in this context means that the human can rely on the correctness and
unbiasedness of the resulting information, and is therefore able to make informed
decisions. Among the many examples of the importance of trust in the evaluation of
data by AI are the security of private data, human rights, respect for the rule of law
and the preservation of democratic freedoms. If AI does not consider these aspects,
its output may lead, among other things, to diversity and inclusion issues. In a
nutshell, trustworthy AI has to be human centered and have human values and well-
being at its core. It has to comply with human rights, the rule of law and democratic
freedoms. From a technological point of view, its robustness and reliability need to
be guaranteed, which has significant effects on transparency and explainability.


1.2.2 Safety in Physical Human‒Robot Interaction


Safety in robotics and AI has been and still is a widely researched topic. For a very
long time, it was assumed that safety between humans and robots could only be
ensured by installing protective safety systems on or near the robot, such as a safety
fence for workspace segregation. However, such protective enclosures are very
obstructive in general physical and intuitive human‒robot interactions and real
collaboration. The practical goal is to enable the safe coexistence of humans and
robots in the same workspace, where interactions may occur intentionally and safely.
A variety of potential risks can arise that depend on the dynamically changing system
state and its environment.
The first approach to safe robotics was to quantify mechanical hazards inducing
potential injuries during human‒robot interactions. Dummy crash-test and soft-
tissue collision experiments were performed. Impact scenarios can be simulated
and analyzed using information from impact experiments already carried out in
areas such as injury biomechanics or forensics, combined with suitable mathemat-
ical models. Characteristic force profiles can then be defined for specific parts of the
human body representing targeted physical collisions between a human and a robot.
These force profiles in turn serve as the basis for defining safety limits for robot
velocities so that safe human‒robot interaction is guaranteed.94
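As a purely illustrative back-of-the-envelope sketch of how a force limit can be translated into a velocity limit, the following Python snippet uses a simple undamped mass‒spring collision model; it is not the formula of ISO 13482 or any other published safety standard, and the force, stiffness and mass values are invented.

    import math

    # Illustrative only: a simple undamped mass-spring collision model, NOT the
    # formula of any published safety standard. Numbers are invented for the example.
    f_max = 140.0        # assumed peak contact force limit in newtons
    k = 75_000.0         # assumed contact stiffness of the body region in N/m
    m_reduced = 4.0      # assumed reduced (effective) mass of the robot at contact in kg

    # For an undamped mass-spring contact, the peak force is F = v * sqrt(m * k),
    # so the velocity that just reaches the force limit is:
    v_max = f_max / math.sqrt(m_reduced * k)
    print(f"illustrative velocity limit: {v_max:.2f} m/s")
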
Based on injury analyses from various impact scenarios with robots, international
safety standards for human‒robot interaction were devised, such as the ISO 13482
standard. This is the first non-industrial standard to specify safety requirements for
personal care robots such as mobile servant or physical assistant robots. It defines the
guidelines for safe design and general safety measures for the operation of earth-
bound nonmedical robots in non-industrial applications. However, there are still
many research questions to be solved before complete standardization of robot safety
is achieved.95

1.2.3 Robot Mechatronics As AI Embodiment


The physical parts of a robotic system are an example of an AI embodiment. The
physical body, which is the mechatronic design of such systems, must be specifically
designed for safe physical human‒robot interaction, which requires human-
centered development for optimal security and performance in human-centered
environments. Research in this field has led to new and innovative design paradigms
based on active and/or passive compliance in combination with lightweight design
principles.

94 Haddadin and Croft, “Physical Human‒Robot Interaction” in Siciliano and Khatib (eds) Springer Handbook of Robotics (Springer 2016) 1835–1874.
95 For a deeper insight into this topic, please refer to Haddadin and Croft (n 94).


Lightweight concepts involve the whole system: the moving parts are designed to be as light as possible in order to reduce the relevant collision and injury metrics. Generally, there are
two major approaches, the mechatronic approach and the tendon-based approach.96
In both, the robot structure consists of light and strong materials such as light metal
alloys or composites. In order to optimize power consumption and to meet safety
standards, both motors and moving parts are designed to have low inertia.
The mechatronic approach is based on a highly modular structure. To achieve
this, the majority of the robot’s electronics are integrated into its joints. This
modularity enables the development of highly complex, self-contained robotic
systems that can be controlled efficiently. An important feature of the motors used
in this approach is that they can generate high torque, enabling the system to act and
react fast and dynamically. One characteristic that stands out in the mechatronic
approach is the use of redundant sensors. Normally only motor-position sensors are
used, but with this concept, additional sensors for measuring torque, force or current
are integrated into the system. These additional sensors can be used to increase the
measuring accuracy and/or to provide certain safety features.
In contrast to the mechatronic approach, tendon-based robots use remotely located
motors to reduce weight. The motors are connected to the parts to be moved via a
cable. One disadvantage of this approach is that the motors required to move such a
system are quite large: the weight of the moving parts is reduced but the total weight
of the system remains relatively high. Further information on robot design concepts
and other important classes of robot structures can be found in the literature.97

1.2.4 Multimodal Perception and Cognition


Perception technologies are the artificial sense organs of machines and are indis-
pensable for interacting with the world. The human example shows that to cope
well with dynamically changing environments in daily life it is also important to use
more than one sense at a time. Multimodal perception combines, for example,
tactile with visual perception. Three common types of perception in close physical
human‒robot interaction and general robotics are explained in the following
sections: force/torque sensing, tactile perception and visual perception.98
Taken together, standard proprioceptive position sensing and force/torque meas-
urement provide a sense of touch to sensitively grasp and hold very fragile objects. The

96 Bicchi and Tonietti, “Fast and Soft Arm Tactics: Dealing with the Safety Performance Trade-off in Robot Arms Design and Control” (2004) 11 IEEE International Conference on Robotics and Automation Magazine; Albu-Schäffer et al., “Soft Robotics” (2008) 15(3) IEEE Robotics and Automation Magazine 20‒30.
97 Khatib, “Inertial Properties in Robotic Manipulation: An Object-Level Framework” (1995) 14(1) International Journal of Robotics Research 19‒36; Bicchi and Tonietti (n 96); Haddadin and Croft (n 94).
98 Siciliano and Khatib (eds) Springer Handbook of Robotics (Springer 2016).


most commonly used sensing techniques are strain gauges within a measuring bridge
or implicit deflection-based measurement. This perceptual technique enables force-
regulated manipulations and sensitive haptic interactions with humans.
The tactile perception approach was inspired by the properties of human skin.
Here, the entire robot is enveloped in a tactile skin consisting of many small-
networked sensor elements. In contrast to the previous type of sensing, contacts
occurring in close proximity to each other can be specifically measured by the
sensor skin during the completion of a task. The skin can give the robots significant
sensory capabilities, but also increases complexity and computational cost. Distrib-
uted data processing could help here. If each sensor element was equipped with its
own microcontroller, which prepared the sensor data in such a way that the central
computer only has to process simple high-level signals, the high computing effort for
the main controller could be reduced. Such systems still require a lot of research
work in order to be fully mature and robust.
Visual perception is a quite common non-contact sensor technology, often used
for the autonomous execution of robotic tasks without interaction with humans or
for preparatory activities, such as identifying humans or objects in the environment,
in connection with a human‒robot interaction. One technique in this field, marker-
based visual sensing, is used as a high-resolution tracking system, for example to
navigate drones safely through a room. These systems usually consist of infrared
cameras, which measure the positions of the highly reflective markers in a room
even during very fast movements. The use of such a system is not always practicable
or universally applicable, since markers must always be positioned and calibrated
beforehand. In addition, this principle is often sensitive to interference, for example
from sunlight, or has problems with sensor shading. Another type of visual percep-
tion is the use of inexpensive 3D RGB depth cameras in combination with AI
algorithms for the visual tracking of objects or people or for general navigation in
space during everyday operations. However, from a robustness and performance
point of view, visual perception with 3D RGB depth cameras still needs several years
of research before it can be used reliably in all everyday conditions.

1.2.5 Navigation and Cognition


Research into autonomous navigation has been a high priority for several decades.99
Particularly in the field of mobility and transport or logistics, it promises to finally
give robotic systems such as autonomous vehicles the ability to relieve people of the
mostly strenuous and tiring work at the wheel of vehicles. In order to achieve
autonomous navigation capability in space, an intelligent robotic system needs
robust algorithms for self-localization, route planning and mapping as well as map
interpretation. Self-localization is the ability of a robot to determine its own position
99 Nilsson, “Shakey the Robot” SRI International – Technical Note 323, 1984.


in the reference system. There are several techniques to do this: Global Positioning System (GPS)-based techniques, for example, are quite accurate for outdoor self-localization but are not suitable for indoor applications. For indoor navigation, visual perception-
based techniques combined with inertial sensors are more promising. Once the
robot has its position, it must plan the route to the target position. The first step is to
calculate the distance between the robot’s position and its destination. The next step
is map generation, which in general terms means the analysis of the environment
between the robot’s own position and the destination. The subsequent interpretation
of this generated map is crucial in order to execute the overall task of movement.
Here, the algorithm performs a semantic recognition of the environment, for
example recognizing obstacles on the map as non-movable areas between the robot’s
own position and the target.
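As a minimal illustration of the route-planning and map-interpretation step, the following Python sketch runs a breadth-first search over a small hand-written occupancy grid; the grid and the planner are invented for illustration and do not correspond to any particular robot system.

    from collections import deque

    # Illustrative route planning on a tiny hand-written occupancy grid:
    # 0 = free cell, 1 = obstacle (interpreted as a non-movable area).
    grid = [
        [0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]

    def plan(start, goal):
        """Breadth-first search returning a shortest obstacle-free path."""
        queue, came_from = deque([start]), {start: None}
        while queue:
            cell = queue.popleft()
            if cell == goal:
                break
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) \
                        and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                    came_from[(nr, nc)] = cell
                    queue.append((nr, nc))
        path, cell = [], goal
        while cell is not None:           # walk back from the goal to the start
            path.append(cell)
            cell = came_from.get(cell)
        return list(reversed(path))

    print(plan((0, 0), (3, 3)))  # e.g. [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 3), (3, 3)]
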
A more specific application area is indoor navigation and cartography without a
comprehensive decentralized tracking system. A quite simple and robust method of
solving the navigation problem is the use of line markings on the ground that are
recognized and tracked by the robotic system’s sensors and controls. This is a rather
static method, since the predefined paths ‒ the environment map ‒ are fixed on an
abstract level. Dynamic changes, which can occur frequently when interacting with
humans, are difficult to update online with this approach.
The SLAM algorithm100 is more suitable for use in environments with fast
dynamically changing conditions. This algorithm can simultaneously determine
the robot’s own position and create an online map of the previously unknown
environment using sensing systems such as 3D RGB depth cameras or LIDAR (light detection and ranging) systems.101 The robot performs relative measurements of its
own motion and of features in its environment to obtain the necessary information
for navigation. Both measurements are often noisy due to disturbances, so the
SLAM algorithm now tries to reconstruct a map of the environment from these
noisy measurements and to calculate the distance the robot has covered during the
measurement.102 The biggest issue with using SLAM is that the complexity of
constantly changing dynamic environments leads to a high computing effort, thus
the real-time capability of the overall system cannot always be guaranteed.
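As a toy illustration of the underlying estimation problem (not an implementation of the SLAM algorithms cited above), the following Python sketch jointly reconstructs three one-dimensional robot poses and a single landmark position from invented noisy odometry and range measurements by least squares.

    import numpy as np

    # Toy 1-D graph-style SLAM: a robot makes three noisy forward moves and, from each
    # pose, measures the noisy distance to one landmark. Poses and the landmark are
    # then reconstructed jointly by least squares. All numbers are invented.
    rng = np.random.default_rng(2)
    true_poses = np.array([0.0, 1.0, 2.0, 3.0])
    true_landmark = 4.0

    odometry = np.diff(true_poses) + 0.05 * rng.standard_normal(3)       # x_i - x_{i-1}
    ranges = true_landmark - true_poses + 0.05 * rng.standard_normal(4)  # l - x_i

    # Unknowns: [x1, x2, x3, l]; x0 is fixed at 0 to anchor the map.
    A, b = [], []
    for i, u in enumerate(odometry):          # odometry constraints x_i - x_{i-1} = u
        row = np.zeros(4)
        row[i] = 1.0
        if i > 0:
            row[i - 1] = -1.0
        A.append(row)
        b.append(u)
    for i, z in enumerate(ranges):            # range constraints l - x_i = z
        row = np.zeros(4)
        row[3] = 1.0
        if i > 0:
            row[i - 1] = -1.0
        A.append(row)
        b.append(z)

    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    print(solution.round(2))   # approximately [1, 2, 3, 4]
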

1.2.6 Modern Control Approaches in Robotics


The goal of modern control in robotics is to develop approaches that enable the
robot to act optimally on its own but also to handle potentially physical interactions
100 Thrun, Burgard, and Fox (n 62).
101 Henry, Krainin, Herbst, Ren, and Fox, “RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments” (2012) 31(1) The International Journal of Robotics Research 1–28.
102 For more detailed information on how the SLAM algorithm works, see Thrun, Burgard, and Fox (n 62).


with humans gently and in a human-centered way. A very common approach to control physical interaction is impedance control or compliance control.103 This
approach is based on controlling the connection between force and position on
interaction ports, such that the robot has the ability to interact compliantly with the
environment. For this purpose, the contact behavior between the robotic system and
the object it is to interact with is modeled by a mass-spring-damper system, whereby
the controller can adjust the stiffness and damping of this system. Classical imped-
ance control quickly reaches its limits in dynamic, rapidly changing processes,
which include human‒robot interactions. The impedance control parameters must
be known in advance and are usually set by experiments and calibration. In order to
avoid this limitation, adaptive impedance control (AIC) was developed, whereby
these parameters can also be changed online.104 New approaches combine AIC with
approaches from machine learning to teach the robot certain impedance behaviors
as well as how to deal with disturbances in the system. One example is the combined
use of AIC and artificial neural networks to map complex disturbances that cannot
be modeled analytically.
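As a minimal illustration of the impedance idea, the following Python sketch simulates a one-dimensional axis whose commanded force behaves like a spring‒damper pulling it toward a desired position; the gains, mass and disturbance are invented and the sketch is not the controller of any particular robot.

    import numpy as np

    # Illustrative 1-D impedance control: the commanded force acts like a spring-damper
    # pulling the axis toward a desired position. All numbers are invented.
    m = 2.0                 # effective mass of the controlled axis in kg
    k, d = 400.0, 40.0      # desired stiffness (N/m) and damping (Ns/m)
    x_des = 0.1             # desired position in m

    x, v = 0.0, 0.0
    dt = 0.001
    for step in range(2000):
        f_ext = -5.0 if 0.5 < step * dt < 1.0 else 0.0   # a brief external push
        f_cmd = k * (x_des - x) - d * v                  # impedance control law
        a = (f_cmd + f_ext) / m                          # resulting acceleration
        v += a * dt
        x += v * dt

    print(round(x, 3))  # settles close to the desired position after the disturbance
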

1.2.7 Machine-Learning Algorithms


When one thinks of machine learning, certain keywords like deep learning, neural
networks or pattern recognition immediately come to mind. This section, which
provides a brief overview of the topic of machine learning, aims to shed light on
these and other terms.
Machine learning originated in computer science with the aim of developing
algorithms to efficiently process complex signals and data.105 The main problem in
signal processing remains the handling of uncertainties caused, for example, by
measurement noise or low data density. Another problem is the analysis and
interpretation of extremely high amounts of data, which mostly represent very
complex and highly dynamic systems. One of the central foundations on which machine learning builds to deal with these kinds of problems is stochastic theory. With stochastic theory as the baseline, general machine learning can be
split into (semi-)supervised learning, unsupervised learning and reinforcement
learning. Before applying machine-learning algorithms, the raw data must often
be pre-processed, for example by feature extraction algorithms such as filter algo-
rithms, dimensionality reduction algorithms or other approaches to build up a
“feature space.”

103 Hogan (n 55); Craig and Raibert, “A Systematic Method for Hybrid Position/Force Control of a Manipulator” (1979) IEEE Computer Software Applications Conference 446‒451.
104 Haddadin and Croft (n 94).
105 Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics) (Springer 2006).


The first type of learning concept to be discussed here is supervised or semi-supervised learning, which attempts to train a model with labeled training data (input‒output pairs are known). Semi-supervised learning is the harder variant of this paradigm: only incomplete training data are available, meaning that some sample inputs lack the desired outputs. After sufficiently long
training, the quality and generalization abilities of the model can be tested using a
data set that contains new and slightly different data. This type of machine learning
is mostly used for classification tasks like pattern recognition. Unsupervised learning
uses only input data for the training without any knowledge of the desired outputs.
One goal here is to discover new information such as similar structures in the data
set, known as clustering. The last type comprises reinforcement learning algorithms, which
are based on the principle of goal-directed trial-and-error learning, where an
improvement is rewarded or a deterioration is penalized.106 The difference between
this and other approaches is that reinforcement learning uses direct interaction with
the environment for the learning process. These algorithms are not based on
experience-based supervision or an overall model of the environment. Typical
applications are self-optimizing systems such as in game theory or control theory.
Next we look at some of the models which use these training concepts.
Commonly used machine-learning models are artificial neural networks,107
support-vector machines,108 Bayesian networks,109 and genetic algorithms.110 The
most popular model approach in the field of machine learning is neural networks,
often used in supervised learning. The idea behind this approach is to simulate aspects
of the behavior of neurons in the human brain using the so-called perceptron
algorithm.111 A perceptron or neural network consists of several artificial digital
neurons that are networked along different layers: the input layer, hidden layer and
output layer. This approach is also known as a black-box algorithm because interpret-
able information about the dynamics between input and output layer is not available.
An artificial digital neuron is represented by a nonlinear function, the activation
function and a weight function (transfer function) with variable weight parameters.
The special feature of the nonlinear function is that it has a threshold. If this threshold
value is exceeded by the input value of the function, the function outputs a one, and
otherwise a zero. This behavior can be used to train a specific input-output mapping

106 Sutton and Barto, Reinforcement Learning: An Introduction (MIT Press 2018).
107 Haykin, Neural Networks: A Comprehensive Foundation (Prentice Hall PTR 1994); Bishop, Neural Networks for Pattern Recognition (Oxford University Press 1995).
108 Rosenblatt, “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain” (1958) 65(6) Psychological Review 386.
109 Pearl, Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference (Elsevier 2014).
110 Simon, Evolutionary Optimization Algorithms (John Wiley & Sons 2013).
111 Rosenblatt, “Principles of Neurodynamics. Perceptrons and the Theory of Brain Mechanisms” No VG-1196-G-8. Cornell Aeronautical Lab Inc Buffalo NY, 1961; Minsky and Papert, Perceptrons (MIT Press 1969).


between the input and output layer of this type of network. If a specific network
structure is then designed for a desired application, the network can be trained to a
desired behavior, using the backpropagation algorithm and training data, by setting
the parameters of the network accordingly. In this context, a deep neural network is a
more complex variant of a normal neural network, where, for example, a higher
number of hidden layers is used.112 The hidden layers can generally be seen as layers that are not directly accessible from outside and that encode information after the training phase. The dynamics and properties of these layers are not yet fully understood.
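The threshold behavior of a single artificial neuron described above can be illustrated with a minimal sketch. The example below is hypothetical and assumes only NumPy; it trains one perceptron on the logical AND function with the classic perceptron learning rule (backpropagation, mentioned above, generalizes this idea to networks with hidden layers):

```python
# Minimal perceptron sketch: a weighted sum followed by a threshold activation
# that outputs one or zero, trained on the logical AND function with the
# classic perceptron learning rule. Assumes NumPy only.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # inputs
y = np.array([0, 0, 0, 1])                        # desired outputs (AND)

w = np.zeros(2)       # variable weight parameters
b = 0.0               # bias shifts the threshold
lr = 0.1              # learning rate

def activate(z):
    return 1 if z > 0 else 0   # threshold: exceed it and the neuron outputs a one

for epoch in range(20):
    for xi, target in zip(X, y):
        out = activate(w @ xi + b)        # weighted sum, then threshold
        w += lr * (target - out) * xi     # adjust weights toward the target
        b += lr * (target - out)

print([activate(w @ xi + b) for xi in X])  # expected: [0, 0, 0, 1]
```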
Another machine-learning model is the support-vector machine, which is often
used as a classifier or regressor for pattern recognition tasks. This mathematical
algorithm tries to calculate so-called hyperplanes (decision boundaries) to separate
and therefore classify two or more objects in the feature space, using labeled training
data. The important training data are the points that lie close to the transition from one object to the neighboring object; only these points are needed to span the hyperplane mathematically. These data points are called support vectors and give this model approach its name.
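A compact, hypothetical illustration of this idea (again assuming scikit-learn, which is not among the works cited in this chapter) fits a linear support-vector machine to a handful of labeled points and then reads off the support vectors that span the separating hyperplane:

```python
# Sketch of a linear support-vector machine separating two classes; after
# fitting, only the support vectors define the decision boundary. Assumes scikit-learn.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],    # class 0
              [5.0, 5.0], [5.5, 4.5], [6.0, 5.5]])    # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

print("support vectors:\n", clf.support_vectors_)     # points spanning the hyperplane
print("prediction for [3, 3]:", clf.predict([[3.0, 3.0]]))
```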
Bayesian networks are used for decision-making. They are basically directed
acyclic graphs, but each node represents a conditional probability distribution of a
random variable and each edge, the associated conditional relationships or depend-
encies between the random variables. If one now considers a random variable that is
not conditionally independent, that is, it has relations to other random variables
represented by the connected edges, one can easily recognize the functionality of a
Bayesian net. This node receives input values for its probability function via the edges directed to it; the probability of the random variable is then obtained as an output of that function. If you calculate this for the whole network, you get a compact representation of the joint probability distribution of all variables involved. From this, a conclusion or inference about complex problems such as
unobserved variables can be obtained. Not every Bayesian network is fully specified
because some conditional probability distributions may be unknown. These missing
pieces can be obtained by learning the probability distribution parameters from data,
for example by using maximum likelihood estimation (MLE). Sometimes the
relations between the random variables are unknown. In this case, structure learning
is applied to estimate the structure of the network and the parameters of the local
probability distributions from data. Various optimization-based search approaches
such as the Markov chain Monte Carlo algorithm can be used.
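The following hand-written sketch illustrates the basic mechanics on the smallest possible scale: a two-node network in which "rain" influences "wet grass". The probability values are invented for illustration only; the point is that the joint distribution factorizes along the graph and that an unobserved variable can be inferred from evidence:

```python
# Hand-rolled sketch of a two-node Bayesian network, Rain -> WetGrass.
# The joint distribution factorizes as P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain),
# and the unobserved variable (Rain) is inferred from an observation (wet grass).
# All probability values below are illustrative assumptions.
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True:  {True: 0.9, False: 0.1},   # P(WetGrass | Rain=True)
                    False: {True: 0.2, False: 0.8}}   # P(WetGrass | Rain=False)

def joint(rain, wet):
    # Joint probability obtained from the factorization encoded by the graph.
    return P_rain[rain] * P_wet_given_rain[rain][wet]

# Inference by enumeration: P(Rain=True | WetGrass=True) via Bayes' rule.
evidence = True
numerator = joint(True, evidence)
denominator = sum(joint(r, evidence) for r in (True, False))
print("P(Rain | grass is wet) =", round(numerator / denominator, 3))   # approx. 0.529
```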
The last machine-learning model to be presented here is the genetic algorithm, which belongs to the class of evolutionary algorithms. This algorithm works with
metaheuristics and is based on the idea of natural selection. In general, the algorithm
starts with a population of possible solutions, where each solution has certain
parameters that can be used to mutate or vary it. At the beginning, individuals are

112 Goodfellow et al., Deep Learning, vol 1 (MIT Press 2016).


randomly selected from the starting population, from which the strongest individuals are then selected using an objective function. Next, the parameters of these individuals
are changed according to a measure given by the number of remaining individuals in
this generation. From this, the new generation is created, from which the fittest ones
are selected again. This continues until a previously defined number of generations
or a specific fitness level is reached.
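A toy example, with arbitrarily chosen population size, mutation scale and number of generations, may help to illustrate the selection-and-variation loop just described:

```python
# Toy genetic algorithm sketch: evolve candidate solutions that maximize a
# simple objective function f(x) = -(x - 3)^2 (optimum at x = 3).
# Population size, mutation scale and generation count are arbitrary assumptions.
import random

def fitness(x):
    return -(x - 3.0) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]   # starting population

for generation in range(50):
    # Selection: keep the fittest half of the current generation.
    survivors = sorted(population, key=fitness, reverse=True)[:10]
    # Variation: mutate the survivors' parameters to create the next generation.
    offspring = [x + random.gauss(0, 0.5) for x in survivors]
    population = survivors + offspring

best = max(population, key=fitness)
print("best solution found:", round(best, 3))   # should approach 3.0
```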

1.2.8 Learning in Intelligent and Networked Machines


Now that we have discussed some approaches from the field of machine learning,
we next examine how some of them are used in robotics. One field of application
previously considered is in combination with adaptive impedance control. In
addition to the control of robots, machine learning is also used to avoid complex
manual programming of robotic task execution. One approach is apprenticeship
learning, where the human acts as teacher for the robot system by demonstrating the
task to it.113 The robot then tries to imitate what is shown in order to learn the skills
needed to complete the task. After a short training phase, the system should improve
itself independently, completing the task optimally after some time. Today,
reinforcement learning is often used for this autonomous self-improvement.114 An
explicit application of these learning algorithms is the robotic gripping and manipu-
lation of objects. Here, automatic development of complete scene understanding
using object-centric description is necessary to find generalizable solutions for more
complex manipulation tasks.115 The learned processes are not complete imitations,
but only the interaction points and movements with the object are modeled, which
makes generalization for applications to other systems possible. Another important
technological advance making complex manipulation tasks in robotics autono-
mously solvable was the further development of image-processing algorithms in
combination with powerful object localization in a dynamic environment.116

113 Asfour, Azad, Gyarfas, and Dillmann, “Imitation Learning of Dual-Arm Manipulation Tasks in Humanoid Robots” (2008) 5(2) International Journal of Humanoid Robotics 183–202; Ijspeert, Nakanishi, and Schaal, “Learning Attractor Landscapes for Learning Motor Primitives” in Becker, Thrun, and Obermayer (eds), Advances in Neural Information Processing Systems 15 (MIT Press 2003).
114 Theodorou, Buchli, and Schaal, “A Generalized Path Integral Control Approach to Reinforcement Learning” (2010) 11 Journal of Machine Learning Research 3137‒3181; Peters and Schaal, “Reinforcement Learning of Motor Skills with Policy Gradients” (2008) 21(4) Neural Networks 682‒697.
115 Van Hoof, Kroemer, Ben Amor, and Peters, “Maximally Informative Interaction Learning for Scene Exploration” (2012) Proceedings of the International Conference on Robot Systems (IROS); Petsch and Burschka, “Representation of Manipulation-Relevant Object Properties and Actions for Surprise-Driven Exploration” (2011) Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems 1221‒1227.
116 Mair, Hager, Burschka, Suppa, and Hirzinger, “Adaptive and Generic Corner Detection Based on the Accelerated Segment Test” (2010) Computer Vision-ECCV 2010 183‒196; Burschka and Hager, “V-gps (slam): Vision-Based Inertial System for Mobile Robots” (2004) 1 Robotics and Automation ICRA’04. IEEE International Conference 409‒415.


However, to achieve the next step in robotic manipulation or in the field of learning machines generally, approaches are needed that are even more general and
scalable. A promising approach is the concept of collective learning. This concept is
based on the prediction of a dramatic increase in robots in society over the coming
decades117 and on the idea of ever greater interconnectedness. Today, almost
everyone walks around with a smartphone that can be interpreted as part of a huge
networked cluster of small supercomputers. This trend will not stop at robotics
either, producing networked robots operating via the internet or bringing up entire
robot clusters with possibly highly complex hierarchical network structures. New
communication architectures, planning and control methods will become necessary
for the optimal use of these highly networked robot clusters. A new capability of
such robot clusters would be, for example, to exchange learned information with
each other while they perform complex manipulation or interaction tasks. In this
way, the robots would learn from each other as in a collective, by exchanging already
acquired knowledge about different but similar tasks. This transfer of knowledge, a
crucial aspect of the collective learning concept, will help the networked robots to
master new problems in everyday life more easily or to learn much faster.

1.3 man and machine in the age of machine intelligence


Let us now take a closer look at intelligent systems that are already available. On the
one hand, purely software-based AI systems are becoming more and more prevalent.
These primarily internet- and smart device-based services provide us with useful
knowledge in the best case, and with vast amounts of unsorted and at least partially
questionable information and data in the worst. On the other hand, the types of
robotic systems that we find in the private sector are mobile robots, such as lawn
mowers, vacuum-cleaning systems, unmanned aerial vehicles, and increasingly,
semi-autonomous cars. Due to safety issues when interacting with humans as well
as highly complex and task-specific programming processes, so far articulated robots
are still only found in the industrial sector. Clearly, we are a long way away from
intelligent, complex, and human-friendly robotic systems capable of interacting with
and manipulating our human-centered world.
In order to bridge this gap, a far more effective integration of the algorithmic and
physical worlds is necessary. The emerging discipline of machine intelligence (MI)
provides a new holistic paradigm to address this issue. This discipline, which is the

117 Wilkinson, Bultitude, and Dawson, “Oh Yes, Robots! People Like Robots; The Robot People Should Do Something: Perspectives and Prospects in Public Engagement with Robotics” (2011) 33(3) Science Communication 367–397; Pineau, Montemerlo, Pollack, Roy, and Thrun, “Towards Robotic Assistants in Nursing Homes: Challenges and Results” (2003) 42(3) Robotics and Autonomous Systems 271–281.


reunification of perception (sensing), AI (planning) and robotics (acting) with pervasive control and machine-learning roles, is critical to enabling truly autono-
mous AI robots, autonomous cars, flying taxis, networked cyber-physical systems,
molecular robots for drug delivery and other intelligent systems in our home, work
and healthcare spaces to become a reality.
The long-term vision of the MI discipline is a trustworthy, embodiment-aware
artificial intelligence that is aware both of itself and of its surroundings, and not only
drives, but also adapts its methods of control to the (intelligent) body it is supposed to
control. This advancement will fundamentally redefine the way in which we use and
interact with robotic systems in our daily lives. A human-centered development
approach as well as a strong focus on ensuring the trustworthiness of such increasingly
capable AI systems will be critical. Nevertheless, what is the starting point and what
are the next steps for these systems to reach the stated long-term goal? The following
sections seek to shed some light on these questions from the systems viewpoint.

1.3.1 Flying Robots


Ever cheaper and more powerful computer hardware in ever smaller forms, together
with advances in sensors and real-time signal-processing algorithms, has brought
enormous progress in the field of flying robots. Not only do these small unmanned
aerial vehicles (UAVs) have the ability to stay in the air longer than previous systems,
but their autonomy capabilities have also increased drastically. What does autonomy
mean in the field of flying robots? In general, autonomy in robotics means the ability
of robots to work in unknown, unsafe and unpredictable environments without the
intervention of a human operator. Many aspects of navigation already mentioned in
the section on key technologies play a role here. These include estimating the robot’s
position, mapping the environment, creating trajectories and deciding or interpreting
the created maps. Especially in the field of flying robots, computational algorithms for
aerodynamic modeling and wind estimation are important. Novel sensor systems are
crucial to ensure that the flying robot can use these algorithms in real time. The focus
here is on the fusion of exteroceptive sensors such as cameras and laser rangefinders
with proprioceptive sensors such as an inertial measurement unit, to form a multi-
modal sensor system. Modern UAVs today have the capability to use six stereo
cameras simultaneously in real time combined with various other sensors to perform
occupancy grid mapping, motion planning, visual odometry, state estimation and
person tracking using deep learning algorithms. These high-tech systems come with
actuators, sensors and computing systems that are integrated into a lightweight structure weighing about one kilogram, and they manage a flight time of about 16 minutes. The
purchase price of these systems is around €2,500. Less intelligent flying robots, those
with limited or non-existent obstacle avoidance, cost about €200‒1,000, weigh several
hundred grams and have an average flight time of 10–30 minutes.
Looking at the missing pieces of these systems from a scientific point of view,
generalizable approaches to aerodynamic modeling are still lacking. Developing
generally valid models would reduce the development time and the costs of these
systems. Another problem is to find an elegant and at best purely model-based
approach to distinguish aerodynamic forces from collision and interaction forces.
A secure physical human‒flying-robot interaction interface still requires a lot of
research before it can enter the market as a product. Flight time would also have
to be extended to reach a level suitable for everyday use. This could be achieved, for
example, by further reducing the total weight with new materials or structural
approaches. This development would also increase the safety of human‒robot
interaction, since less energy would be transferred to the human body in the event
of a collision. It is clear from all these factors that there is still a long way to go before
small and affordable fully autonomous flying robots become ubiquitous.

1.3.2 Mobile Ground Robots


In the history of mobile robotics, the Shakey system can be seen as the first mobile
robotic system to be used in practice. This system laid the foundations for technolo-
gies such as hierarchical control architecture more than 40 years ago. Since then
much research has been done in the field of mobile robot platforms and many
different approaches for these systems have been developed for uses ranging from
industrial applications to applications in disaster zones or in general environments
dangerous to humans. In order for mobile robots to move from the laboratory
environment to applications for everyday life, research and development must focus
on the safe human‒robot interaction capabilities of these systems. One robot
developed specifically for safe human‒robot interaction is called Rollin’Justin. This
system is very powerful but its development did not focus on cost-effective produc-
tion and it therefore cannot be easily commercialized in the near future. One key
element to enabling safe human‒robot interaction is the use of impedance control
in mobile platforms. Until now, this approach has been rare and can only be found
in research work, if at all. If the research focus were to be increasingly directed
toward safe human‒robot interaction with the goal of bringing mobile robot tech-
nology to an affordable product, these systems could become much more common
in everyday life and contribute to shaping our society. A comparison of the available
mobile platforms is shown in Figure 1.1.

1.3.3 Tactile Robots


For more than 50 years, position-controlled rigid robots have been supporting
assembly and welding in industry. Since these robots were developed to perform
heavy work requiring high force, their systems are inappropriate for safe close
interaction with humans and they are therefore usually separated from humans by
a safety fence. In recent decades, the paradigm for the use of robots has changed.
Sensitive manipulation and close physical human‒robot interaction have become
the order of the day. To achieve this, highly integrated lightweight designs with low

TIAGo Base (PAL Robotics, Spain). Dimensions: Ø 54 x 30 cm. Speed: ≤ 1 m/s. Payload: 50 kg. Sensor technology: laser scanner, IMU (6 DoF). Capabilities: autonomous navigation. Mobility: leveled floor. Usability: expert knowledge required.

TORU (Magazino, Germany). Dimensions: 138 x 69 x 300 cm. Speed: ≤ 1.5 m/s. Payload: 60 kg. Sensor technology: laser scanner, bumper, distance sensors, (3D) cameras. Capabilities: autarkic robot, central fleet management. Mobility: leveled floor. Usability: expert knowledge required.

KMP 1500 (KUKA AG, Germany). Dimensions: 200 x 80 x 67 cm. Speed: ≤ 1 m/s. Payload: 1500 kg. Sensor technology: 2x laser scanner, bumper. Capabilities: autonomous navigation. Mobility: leveled floor. Usability: expert knowledge required.

Turtlebot (Yujin Robot, South Korea). Dimensions: Ø 35 x 50 cm. Speed: ≤ 0.7 m/s. Payload: 5 kg. Sensor technology: bumper, cliff sensor, Kinect, IMU (1 DoF). Capabilities: open source, open hardware, modular design. Mobility: leveled floor. Usability: expert knowledge required.

LD Platform (OMRON Corporation, Japan). Dimensions: 70 x 50 x 38 cm. Speed: ≤ 1.35 m/s. Payload: 90 kg. Sensor technology: laser scanner, bumper, sonar. Capabilities: autonomous navigation, optional 3D perception. Mobility: leveled floor. Usability: expert knowledge required.

Vector (Waypoint Robotics, USA). Dimensions: 67 x 50 x 31 cm. Speed: ≤ 2 m/s. Payload: 136 kg. Sensor technology: GPS, laser scanner. Capabilities: autonomous navigation. Mobility: leveled floor. Usability: expert knowledge required.

figure 1.1 Overview of available mobile robotic systems



inertia and high active compliance have been developed and implemented. The
result is systems such as the Barrett WAM arm118 and the DLR lightweight robot
series,119 whose arm technology later led to the LBR iiwa robot from the company KUKA. One of the most modern, human-centered lightweight robot systems
developed to date is Franka Emika’s Panda system.120 A high-precision force and
impedance control system allows the system to perform sensitive and accurate manipu-
lation and enables a high degree of compliance, which, in conjunction with safety
aspects already considered in the design phase of this robot, guarantee safe human‒
robot collaboration. One of the most important pragmatic aspects of human‒robot
collaboration besides safety is the operating, programming and interaction interface
between human and robot. Many collaborative robots use a tablet computer and
complex software as operating, programming and interaction interface. The Panda
system offers an elegantly designed interface in which the human can interact with the
robot in a natural way via haptic interactions such as tapping on the robot gripper to
stop the robot or to give a process confirmation. In addition, in the teaching mode, it is
possible to teach the compliant robot various work processes by taking it by the hand
and guiding it extremely smoothly through the process. Once the process has been shown, it can be replayed by simply pressing a button. This kind of programming is extended by apps representing two levels of interaction with the robot: the expert, who programs the robot apps, and the user, who does not need any special robotics knowledge. The expert provides the basic robot capabilities, which are
assembled and operated by the user for complex processes and solutions. These basic
robot apps will be shared over a cloud-based robotic app store and made available to a
broad range of users. With the growth of this robotics skills database, many new
applications will emerge, bringing robotics more and more into our daily lives.

1.4 applications and challenges of robotics and ai technologies

1.4.1 From Cleaning Robots to Service Humanoids


Drones in the park, vacuum-cleaning robots at home or lawn-mowing robots in the backyard: nowadays none of these robotic systems is anything special to look at. However,

118 Townsend and Salisbury, “Mechanical Design for Whole-Arm Manipulation, Robots and Biological Systems: Towards a New Bionics?”; Barrett Technology, “Barrett Arm” <http://barrett.com/products-arm.htm>, 25 September 2017.
119 Hirzinger et al., “A Mechatronics Approach to the Design of Lightweight Arms and Multifingered Hands” Robotics and Automation, 2000. Proceedings, ICRA’00. IEEE International Conference on Robotics and Automation, vol 1 IEEE, 2000; Albu-Schäffer et al., “The DLR Lightweight Robot: Design and Control Concepts for Robots in Human Environments” (2007) 34(5) Industrial Robot: An International Journal 376‒385.
120 Franka Emika GmbH, Franka Emika, <https://www.franka.de/>, 17 January 2019.


finding extraordinarily intelligent service robots that can act in a social manner similar to humans, for example while supporting elderly people in their everyday life, remains a gap in the technology. Furthermore, technologies available today
are not able to adapt to short-term changes, are not user friendly in terms of
“programmability” and do not learn from experience. In addition, unlike the case
of industrial robots, security aspects have not been considered in these systems.
Early approaches in this direction can already be seen, for example in the user
interface developed by Franka Emika for their robotic arm system. Nevertheless,
what is still missing in these systems is the possibility of improving learned abilities
autonomously. Intelligent service robots have to be able to adapt to new conditions.
They have to meet the “lifelong learning” paradigm in order to be also accepted by
older people, who may be more skeptical about new technologies. In addition,
specific design and technology decisions regarding the acceptance and usability of
these robots need to be made in the development phase of these systems if they are
to be usable in the private sector.
A promising subfield of service robots are humanoids. As we have seen, service
robots should be human centered from the beginning of their development, espe-
cially from the point of view of safety. For this reason, systems like the NASA
Robonaut, DLR’s Justin or Boston Dynamics’ Atlas system are not considered
here. Figure 1.2 gives a current overview of existing service-oriented humanoid
systems or those under development.
One of the first complex service humanoids available was the PR2 system from
Willow Garage.121 It consists of a mobile motion platform, two grab arms and
numerous sensors to navigate in space by using position control. In addition to
“pick-and-place” tasks, the user can teach this humanoid simple motion sequences.
PR2 has relatively simple interaction channels such as motion control via a gamepad
or tablet. Other service robots such as the Care-O-Bot 4 from Fraunhofer IPA,122 the
Tiago system from PAL Robotics123 and the HSR robot from Toyota124 have similar
capabilities to the PR2, but some systems also have additional human interaction
channels such as voice command input. The Care-O-Bot 4 can even gesticulate and
interact with people via facial expressions or by touch from its built-in display.
Furthermore, all of the humanoids mentioned here can be teleoperated to a certain
extent. Two systems that stand out here are the Twendy-One robot from Waseda

121 Willow Garage Inc, PR2, <www.willowgarage.com/pages/pr2/overview>, 25 September 2017; Bohren et al., “Towards Autonomous Robotic Butlers: Lessons Learned with the pr2” 2011 IEEE International Conference on Robotics and Automation (ICRA) 2011.
122 Fraunhofer-Gesellschaft, Fraunhofer-Institut für Produktionstechnik und Automatisierung, Care-O-Bot 4, <www.care-o-bot-4.de/>, 25 September 2017.
123 PAL Robotics, SL, TiaGo, <http://tiago.pal-robotics.com/>, 25 September 2017.
124 Toyota Motor Corporation, Human Support Robot (HSR), <www.toyota-global.com/innovation/partner_robot/family_2.html>, 25 September 2017; Hashimoto et al., “A Field Study of the Human Support Robot in the Home Environment” 2013 IEEE Workshop on Advanced Robotics and Its Social Impacts (ARSO) 2013.


PR2 (Willow Garage, 2008). Control concept: position control. Teaching ability: simple movements. Navigation: for mobile platform. Manipulation skills: pick & place, connecting the mains plug. Learning ability: movement learning. Human and environmental observation: none. Active physical interaction: none. Interaction channels / status indication: none. External devices: gamepad. Robot gestures: none. Remote control: teleoperation. User level: expert. Complexity of tasks: pick-up/bringing services. Acceptance: medium. Sympathy: medium. Fields of application: household, nursing assistance.

Care-O-Bot 4 (Fraunhofer IPA, 2015). Control concept: position control. Teaching ability: none. Navigation: for mobile platform. Manipulation skills: pick & place. Learning ability: none. Human and environmental observation: none. Active physical interaction: none. Interaction channels / status indication: display, LEDs, sound, text-to-speech, gestures. External devices: touchscreen. Robot gestures: body gestures. Remote control: teleoperation, telepresence. User level: informed layperson. Complexity of tasks: pick-up/bringing services. Acceptance: not evaluated. Sympathy: not evaluated. Fields of application: household, nursing assistance, entertainment.

Tiago (PAL Robotics, 2015). Control concept: position control. Teaching ability: simple movements. Navigation: for mobile platform. Manipulation skills: pick & place. Learning ability: none. Human and environmental observation: none. Active physical interaction: none. Interaction channels / status indication: voice commands. External devices: none. Robot gestures: none. Remote control: teleoperation, telepresence. User level: expert. Complexity of tasks: pick-up/bringing services. Acceptance: not evaluated. Sympathy: medium. Fields of application: research.

HSR (Toyota, 2012). Control concept: force control. Teaching ability: simple movements. Navigation: for mobile platform. Manipulation skills: pick & place. Learning ability: none. Human and environmental observation: none. Active physical interaction: none. Interaction channels / status indication: display, voice commands. External devices: tablet, mobile phone, joystick. Robot gestures: none. Remote control: teleoperation. User level: informed layperson. Complexity of tasks: pick-up/bringing services. Acceptance: not evaluated. Sympathy: medium. Fields of application: household, assistance with reduced mobility.

Twendy-One (Sugano Laboratory, 2009). Control concept: SEA control. Teaching ability: simple guiding, simple movements. Navigation: none. Manipulation skills: pick & place, complex tactile manipulation. Learning ability: none. Human and environmental observation: none. Active physical interaction: help to get up. Interaction channels / status indication: LEDs, voice commands, tactile skin. External devices: none. Robot gestures: none. Remote control: none. User level: expert. Complexity of tasks: pick-up/bringing services, passive standing-up aid. Acceptance: high. Sympathy: high. Fields of application: household, assistance with reduced mobility.

RIBA II (RIKEN, 2011). Control concept: position control, tactile sensors. Teaching ability: simple movements. Navigation: none. Manipulation skills: none. Learning ability: none. Human and environmental observation: voice location. Active physical interaction: lifting bedridden people. Interaction channels / status indication: display, voice commands, tactile sensors. External devices: joystick. Robot gestures: none. Remote control: none. User level: expert. Complexity of tasks: complex two-handed lifting function. Acceptance: medium. Sympathy: high. Fields of application: care.

GARMI (Franka Emika, under development). Control concept: torque sensors, whole-body control. Teaching ability: complex two-arm or full-body processes. Navigation: full-body navigation, collision avoidance, human-aware motion planning. Manipulation skills: one- and two-armed sensitive manipulation, assembly, pick & place. Learning ability: tasks, movement patterns, handling and assembly tasks, sensitive interaction. Human and environmental observation: kinematic human model, face recognition, environmental recognition. Active physical interaction: ambidextrous help with complex processes. Interaction channels / status indication: LEDs, speech, gestures, understandable robot actions. External devices: touchpad/tablet, virtual/augmented reality glasses, audiovisual and haptic teleoperation console. Robot gestures: whole-body gesture engine. Remote control: semi-autonomous telepresence/teleoperation. User level: everybody. Complexity of tasks: pick-up and delivery services, complex everyday tasks, complex multimodal HRI. Acceptance: still to be evaluated. Sympathy: still to be evaluated. Fields of application: universal everyday assistant for elderly people.

figure 1.2 Overview of existing and upcoming service-oriented humanoid systems



University125 and the RIBA II robot from Riken.126 Both systems have special features
making human‒robot interactions possible. Twendy-One has the ability to actively help a person stand up from a seated position. It also has a tactile skin, which enables
complex tactile manipulations. The RIBA II system is designed to be able to lift and
relocate bedridden people, reducing the burden on medical staff.
In general, service robots in nursing have the potential to partially compensate for the shortage of care staff and to enable older people to live independently for as long as possible. The
value of direct human‒robot interactions, apart from these approaches to physical
interaction with the patient, has so far gone largely unnoticed. The systems pre-
sented here are not yet equipped with the necessary capabilities to perform smaller
pick-up and delivery services or even sensitive manipulation tasks such as tying shoe
laces. In general, there is great potential for helping humans in daily tasks and for
human‒robot communication through haptic gestures.
The company Franka Emika is currently working on a humanoid service robot
called GARMI, which will provide a sensitive human‒robot interaction. GARMI
will be equipped with two multi-sensorial robotic arms, which will have soft-robotic
features and the solutions required for direct human interaction and safe human‒
robot interaction. In addition, the small robot will have a multisensory “head” and
an agile platform, allowing it to move from a standing position in the desired
direction. It should be able to perform both simple tasks and pick-up services, but
also to be remotely controlled by relatives and professional helpers.

1.4.2 Production and Logistics


Low-cost and flexible national production of the next generation of industrial robots
will eliminate the need to exploit developing countries. Robotics will finally live up
to its original credo of freeing humanity from slavery. These new industrial robots
will be highly networked and mobile with extensive sensory capabilities enabling
them to autonomously perform a wide range of complex manipulation tasks and
safely collaborate with humans. Innovative design concepts with extreme light-
weight construction combined with new control approaches will lead to very low
energy consumption by these systems. Mutual exchange of information and know-
ledge between robots in a collective set-up can lead to a rapid increase in learning
speed. New complex tasks can thus be learned not over weeks, but over hours or
even minutes.

125 Sugano Laboratory, TWENDY-ONE <www.twendyone.com/concept_e.html>, 25 September 2017; Iwata and Sugano, “Design of Human Symbiotic Robot TWENDY-ONE” ICRA’09. IEEE International Conference on Robotics and Automation 2009.
126 Riken, RIBA-II, <www.riken.jp/en/pr/press/2011/20110802_2/>, 25 September 2017; Mukai et al., “Development of a Nursing-Care Assistant Robot RIBA That Can Lift a Human in Its Arms” 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2010.


In the coming decades, the development of autonomous vehicles will also create
major changes. Autonomous vehicles are already widely used today, but mostly in
closed warehouses or in confined areas that have been completely mapped in
advance. These application areas are also predominantly shielded from dynamic
sources of interference such as humans. One example is American online retailer
Amazon’s warehouse system. In a completely systematic environment, hundreds of
robots arrange themselves autonomously to select goods or goods shelves and drive
them to the parcel assembly. Simply put, these robots are nothing more than
powerful cleaning robots that can carry up to 300 kg. A lot of research will still be
required to move this technology on from the retail environment to transporting
people autonomously in our world. However, this next generation of autonomous
ground and air vehicles will not only be able to navigate safely in the real world, but
will also provide much more energy-efficient and environmentally friendly drives.
The interconnectedness of these systems now makes it possible to automate com-
plete logistics chains, and passengers can now be transported on demand, optimally
in terms of both time and energy. Through the temporary networking and coordin-
ation of heterogeneous vehicle fleets, the fundamental principles of public transport
are being redefined.

1.4.3 Robotic Disaster Relief


The application of robots in unsafe environments will be of great importance in our
future world. It will allow us to use technology instead of risking human lives to save
buried or trapped people or to perform highly risky maintenance tasks. The key
technology for these applications is called telerobotics. A technology originally
developed for space applications in the space agencies of the USA, Germany and
Japan, telerobotics has been designed to enable a transparent (bilateral) remote
control of robots in human-unfriendly environments. The first use of such a
technology was in 1993, when the Rotex mission used Shared Autonomy/Supervised
Autonomy on the first Earth-controlled space robot.127 Recently, a Da Vinci master
console (in Baltimore, USA) controlled a DLR lightweight robot (in Oberpfaffen-
hofen, Germany), over 4,000 miles away. The robot was able to recognize its
environment independently and perform selectable semi-autonomous functions
on site with perceptual support. It could initiate the most likely actions desired by
the user, such as gripping an object or inserting it, semi-automatically.128 The aim of
this research was to investigate functional tasks that lie between pure teleoperation
and full autonomy. In order to enable a more natural teleoperation that can also
handle long delays, model-based teleoperation approaches use environmental

127 Hirzinger et al., “Sensor-Based Space Robotics-ROTEX and Its Telerobotic Features” (1993) 9(5) IEEE Transactions on Robotics and Automation 649‒663.
128 Bohren et al. (n 83).


models generated from knowledge gained a priori and updated step by step during
manipulation.129 Thus, the teleoperation remains applicable even in the presence of
delays of up to 4 seconds, as an approach with model-based teleoperation and haptic
feedback has shown.130 Franka Emika goes one step further with the market launch
of the first cloud and distributed telerobotic-capable commercial robot system
Panda. The possibilities of this system were demonstrated in late 2018, when
37 Panda systems were connected in real time, with twelve operating in Düsseldorf
(Germany) and twenty-five in Munich (Germany). As a result, thirty-six robots could
be successfully teleoperated with one robot as an input device, with a maximum
distance of approximately 600 km between them.
The future benefits of this technology will be available in various applications
enabled by its high level of robustness, such as operating in space, defusing bombs,
firefighting or rescue and containment in the event of a nuclear catastrophe.

1.4.4 Multimodal Communication for AI-Enabled Telemedicine


Telemedicine is a technology that has emerged from telerobotics in combination
with real-time 3D visualization of the human body and multimodal communication
technologies.
Multimodal communication represents the future of communications. Instead of
communicating purely via voice, text or video, additional information channels are
used to increase the transparency and interactivity between communicators. One
channel, for example, would be the telepresence channel. This channel can be
attached to a haptic input device with force feedback on one side and a robotic
output device on the other. The robotic system moves according to the user’s input,
but also returns information on haptic interaction to the input device. The user does
not have direct access to the robot’s motion control system via the input device, but
instead gives more abstract high-level commands, which are then translated into the
desired motion. A framework for predictive and semi-autonomous interaction con-
trol in combination with a robot-side action recommendation system makes sugges-
tions to the user for further action based on the local information. This telepresence
channel is also available to be used for telemedicine. If an authorized physician uses
this interface, a module will be unlocked which enables the use of diagnostic
devices at the patient’s site, intelligent processing and visualization, and secure
handling of sensitive medical data.
Crucial to the process of justifiable diagnosis at a distance is real-time 3D
visualization of the human body, one element of which is the acquisition of the
129 Sayers, Paul, Whitcomb, and Yoerger, “Teleprogramming for Subsea Teleoperation Using Acoustic Communication” (1998) 23(1) IEEE Journal of Oceanic Engineering 60‒71; Stoll, Letschnik, Walter, Artigas, Kremer, Preusche, and Hirzinger, “On-Orbit Servicing” (2009) 16(4) Robotics Automation Magazine IEEE 29‒33.
130 Bohren et al. (n 83).


figure 1.3 Telemedicine case scenario

kinematics of the body. Today, motion-capturing systems equipped with infrared cameras for 3D detection of retroreflective markers positioned on anatomical land-
marks are used for this purpose. By synchronizing the real-time data of human
movements with musculoskeletal biomechanical models131 and dynamic models of
the internal organs, as well as 3D visualization models of the patient, it is now
possible to provide the physician with the patient’s digital twin. The medical data
obtained during examination by diagnostic devices, such as ultrasound, are then
displayed and synchronized with the digital twin.
The next paragraph describes a typical telemedicine scenario (see Figure 1.3).
Here, the humanoid GARMI is used as a teleoperated robot on the patient’s
premises.
Telemedicine emergency: shortly after his daily nap, Heinz suddenly feels
unwell. He calls out to GARMI: “I don’t feel well. Please call a doctor.” GARMI
comes immediately and establishes contact with the emergency doctor. At the
doctor’s office, Heinz’s emergency call appears on the user avatar remote station
display. The doctor can react immediately to the emergency as he is connected to
GARMI. After a brief analytical dialog, the doctor lets GARMI perform an ultra-
sound and ECG examination. The ultrasound images and the ECG are transmitted
to the doctor in real time. From the analysis of the transmitted data, which is
supported by machine-learning algorithms, the doctor is able to quickly identify
an emergency and immediately call the emergency service.

1.4.5 The Future of Medicine with Molecular Robots


The next step in medicine will be in the direction of personalized diagnostics and
therapy locally at the site of the disease. The vision is to develop an intelligent
medical machine that can perform measurements in the human body on the
cellular level and, if necessary, treat directly. Such treatment could be performed
in the future by molecular robots.

131 Cavallaro, Rosen, Perry, and Burns, “Real-Time Myoprocessors for a Neural Controlled Powered Exoskeleton Arm” (2006) 53(11) IEEE Transactions on Biomedical Engineering 2387–2396; Jäntsch, Non-linear Control Strategies for Musculoskeletal Robots, PhD Thesis, Technische Universität München 2014.


Molecular robots are small autonomous synthetic systems that can be used for
numerous medical purposes. Different molecule chains can map both structural
and functional properties of the molecular robot. Internal sensors will make it
possible to explore the human body and examine areas of medical interest. Through
controlled movement, they can penetrate the body, move to the treatment site (such
as a tumor) and perform medical treatment only where it is needed. In addition,
these robots will be able to take tissue samples and control the delivery of drugs
based on sophisticated micro sensors. The movement and control mechanisms used
here can be chemical, electromagnetic, bio-hybrid cell-driven or completely new
mechanisms that are yet to be researched. Robotic theory should be translated to
molecular and cellular-level systems, the dynamics of which are explained via first-
order principle-based machine-learning algorithms. In addition, the practical closed-
loop control and analysis of these systems via macro-robotic human‒machine
interaction technologies should be explored, enabling a multitude of applications
ranging from basic understanding of cellular dynamics and control to various
medical applications such as targeted drug transportation.
Cellular manipulation is one field of research that will serve as an indispensable
basis for molecular robotics. The mechanisms to be researched may be used to
communicate with cells in a natural way and, if necessary, to control them. For
example, it will be possible to have cells targeting certain positions, proliferating,
producing certain proteins or, if the cell is harmful to the body, to have it removed
through the body’s own degradation system. This research field combines concepts
from biology research (cell biology, genetics, biochemistry, biophysics, etc.) with
approaches from modern engineering sciences (systems theory, control engineering,
computer science, information theory, robotics, AI, etc.) to create a standardized
analysis environment for cell research. Over the next few years, this field will provide
completely new insights into how cells function or communicate and can be
expected to deliver new technologies.

1.5 conclusion
This chapter has shown the current technological status of robotics and AI and has
examined current problems, as well as providing an insight into the possible future
of these technologies in the age of machine intelligence. MI will change our
everyday life and our society. It offers a lot of potential to deal with existing problems
as well as those that society can already anticipate. The responsibility that comes
with this technology should not be underestimated. The focus must be on a
trustworthy, safe and human-centered development of this technology. Framework
conditions, for example, must be created that prohibit the exploitation of this
technology to the detriment of individuals and humanity as a whole.

2

Regulating AI and Robotics

Ethical and Legal Challenges

Martin Ebers*

introduction
Rapid progress in AI and robotics is challenging the traditional boundaries of law.
Algorithms are widely employed to make decisions that have an increasingly far-
reaching impact on individuals and society, potentially leading to manipulation,
biases, censorship, social discrimination, violations of privacy and property rights,
and more. This has sparked a global debate on how to regulate AI and robotics.
This chapter outlines some of the most urgent ethical and legal issues raised by
the use of self-learning algorithms in AI systems and (smart) robotics and provides an
overview of key initiatives at the international and European levels on forthcoming
regulation and ethics. The chapter does not aim at definitive answers; indeed, the
policy debate is better served by refraining from rushing to solutions. What is needed
is a more precise inventory of the concrete ethical and legal challenges that can
strengthen the foundations of future evidence-based AI governance.

2.1 scenario

2.1.1 The Use of Algorithms by Businesses and Governments


Algorithms permeate our lives in numerous ways, performing tasks that until
recently could only be carried out by humans. Modern artificial intelligence (AI)
technologies based on machine-learning algorithms and big-data-powered systems
can perform sophisticated tasks – such as driving cars, analyzing medical data, or
evaluating and executing complex financial transactions – without active human

* This work was supported by Estonian Research Council grant no PRG124 and by the Research
Project “Machine learning and AI powered public service delivery”, RITA1/02-96-04, funded by
the Estonian Government. The chapter was submitted to the publisher in April 2019 and has
not been updated since, apart from all internet sources which were last accessed in April 2020.


control or supervision. Algorithms also play an important role in everyday decisions. They influence nearly every aspect of our lives:

• Self-learning algorithms determine the results of web searches, select the ads and news we read, and decide which purchase offers are made when
we shop online.1
• Dynamic pricing algorithms automatically evaluate events on (online)
markets so that traders can adjust their prices to the respective market
conditions in milliseconds.2
• Software agents optimize portfolios, assess credit risks, and autonomously
carry out the most favorable transactions in currency trading. On the
financial markets, algorithmic trading (including high-frequency
trading) generates more than 70% of the trading volume. In the FinTech
market, Robo-advisors are used for investment advice, brokerage, and
asset management.3
• Algorithms also play an increasing role in making substantive decisions.
Many important decisions which were historically made by people are
now either made by computers or at least prepared by them. We live in a
“scored society.”4 Companies from various industries collect, analyze,
acquire, share, trade, and utilize data on billions of people in order to
discern patterns, predict the likely behavior of people through scoring
systems, and act accordingly. Some algorithmic scores have existential
consequences for people. For example, they decide to an increasing
extent whether someone is invited for a job interview, approved for a
credit card or loan, or qualified to take out an insurance policy.
• Governmental institutions have become increasingly dependent on algo-
rithmic predictions. Tax offices have started using algorithms to predict
abuse and fraud in tax returns and to allocate cases for human review.5
Criminal law enforcement agencies use algorithms to detect, respond to,

1 Christl, “Corporate Surveillance in Everyday Life. How Companies Collect, Combine, Analyze, Trade, and Use Personal Data on Billions.” A Report by Cracked Labs, June 2017 <http://crackedlabs.org/en/corporate-surveillance>.
2 Chen, Mislove, and Wilson, “An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace” (2016) Proceedings of the 25th International Conference on World Wide Web 1339–1349 <www.ccs.neu.edu/home/amislove/publications/Amazon-WWW.pdf>.
3 BI Intelligence, “The Evolution of Robo-Advising: How Automated Investment Products Are Disrupting and Enhancing the Wealth Management Industry” (2017); Finance Innovation and Cappuis Holder & Co., “Robo-Advisors: une nouvelle réalité dans la gestion d’actifs et de patrimoine” (2016); OECD, “Robo-Advice for Pensions” (2017).
4 Citron and Pasquale, “The Scored Society: Due Process for Automated Predictions” (2014) 89 Washington Law Review 1.
5 DeBarr and Harwood, “Relational Mining for Compliance Risk,” presented at the Internal Revenue Service Research Conference, 2004 <www.irs.gov/pub/irs-soi/04debarr.pdf> <https://perma.cc/Y9F8-RWNK>.


and predict crime (predictive policing).6 In the USA, algorithmic prognosis instruments are already being used by courts to calculate the
likelihood of an accused person committing another crime on parole.7
In China, the government is implementing a “social credit system”
which is intended to standardize the assessment of citizens’ and busi-
nesses’ economic and social reputations.8
• In the health sector, medical expert systems based on self-learning algo-
rithms evaluate the medical literature and personal data of patients,
assisting physicians with their diagnosis and treatment, whether by read-
ing medical images and records, detecting illnesses, predicting unknown
patient risks, or selecting the right drug.9
• To an increasing extent, embodied AI systems also operate physically in
the world. They have left the factories and come into our lives as intelli-
gent robotic assistants, vacuum cleaners, drones, and automated cars. AI
systems are also an essential component of developing the emerging
Internet of Things (IoT)10 – a network of physical devices which are
embedded with electronics, software, sensors, and network connectivity
that enable them to collect and exchange data.

6 Barrett, “Reasonably Suspicious Algorithms: Predictive Policing at the United States Border” (2017) 41(3) NYU Review of Law & Social Change 327; Ferguson, “Predictive Policing and Reasonable Suspicion” (2012) 62 Emory Law Journal 259, 317; Rich, “Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment” (2016) 164 University of Pennsylvania Law Review 871; Saunders, Hunt, and Hollwood, “Predictions Put into Practice: A Quasi Experimental Evaluation of Chicago’s Predictive Policing Pilot” (2016) 12 Journal of Experimental Criminology 347.
7 Such processes are used at least once during the course of criminal proceedings in almost every US state; Barry-Jester, Casselman, and Goldstein, “The New Science of Sentencing,” The Marshall Project, 4 August 2015, <www.themarshallproject.org/2015/08/04/the-new-science-of-sentencing#.xXEp6R5rD>. More than 60 predictive tools are available on the market, many of which are supplied by companies, including the widely used COMPAS system from Northpointe.
8 Hvistendahl, “In China, a Three-Digit Score Could Dictate Your Place in Society” Wired (14 December 2017) <www.wired.com/story/age-of-social-credit>; Botsman, “Big Data Meets Big Brother as China Moves to Rate Its Citizens” Wired UK (21 October 2017) <https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion>; Chen, Lin, and Liu, “‘Rule of Trust’: The Power and Perils of China’s Social Credit Megaproject” (2018) 32(1) Columbia Journal of Asian Law 1 <https://ssrn.com/abstract=3294776>, pointing out that the Social Credit System has not – at least for now – employed AI technologies, real-time data or automated decisions, despite foreign media reports to the contrary.
9 Abu-Nasser, “Medical Expert Systems Survey” (2017) 1(7) International Journal of Engineering and Information Systems 218; Gray, “7 Amazing Ways Artificial Intelligence Is Used in Healthcare,” 20 September 2018 <www.weforum.org/agenda/2018/09/7-amazing-ways-artificial-intelligence-is-used-in-healthcare>.
10 The combination of AI, advanced robots, additive manufacturing, and the Internet of Things will usher in the Fourth Industrial Revolution; World Economic Forum, “Impact of the Fourth Industrial Revolution on Supply Chains,” October 2017 <www.weforum.org/whitepapers/impact-of-the-fourth-industrial-revolution-on-supply-chains>.


• Last but not least, new devices make it possible to connect the human
brain to computers. Brain‒computer interfaces (BCIs) enable informa-
tion to be transmitted directly between the brain and a technical circuit.
In this way, it is already possible for severely paralyzed people to com-
municate with a computer solely through brain activity.11 Researchers at
Elon Musk’s company Neuralink predict that machines will be con-
trolled in the future solely by thoughts.12 What is more, Facebook is
researching a technology that sends thoughts directly to a computer in
order to make it possible to “write” one hundred words per minute
without any muscle activity.13 Thus, the boundary between man and
machine is becoming blurred. Human and machine are increasingly
merging.
The technological changes triggered by AI and smart robotics raise a number of
unresolved ethical and legal questions which will be discussed in this chapter.
Before addressing these issues more fully, it is important to take a closer look at
the question of what we actually mean when we speak of “algorithms, AI and
robots,” whether common definitions are necessary from a legal point of view,
and, more generally, how AI systems and advanced robotics differ fundamentally
from earlier technologies, making it so difficult for legal systems to cope with them.

2.1.2 Concepts and Definitions

2.1.2.1 Algorithms, AI and Robots: Do We Need All-Encompassing Definitions?
Algorithms are by no means new. For decades, they have served as integral com-
ponents of every computer program. Generally speaking, algorithms can be under-
stood as “sets of defined steps structured to process instructions/data to produce an
output.”14 From this point of view, every piece of software is composed of
algorithms.
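Purely by way of illustration (the example below is hypothetical and not drawn from the sources cited in this chapter), even a few lines of code implementing a simple scoring rule already constitute an algorithm in this sense:

```python
# A trivially simple "algorithm" in the above sense: defined steps that
# process input data and produce an output. The weights and threshold are
# invented for illustration only.
def simple_score(income: float, existing_debt: float) -> str:
    score = 0.7 * income - 1.2 * existing_debt            # step 1: weigh the inputs
    return "approve" if score > 10_000 else "review"      # step 2: map the score to a decision

print(simple_score(income=40_000, existing_debt=5_000))   # -> "approve"
```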

11
Blankertz, “The Berlin Brain – Computer Interface: Accurate Performance from First-Session
in BCI-naïve Subjects” (2008) 55 IEEE Transactions on Biomedical Engineering 2452; Nicolas-
Alonso and Gomez-Gil, “Brain Computer Interfaces” (2012) 12(2) Sensors 1211 <www.ncbi.nlm
.nih.gov/pmc/articles/PMC3304110>.
12
<www.theverge.com/2017/2/13/14597434/elon-musk-human-machine-symbiosis-self-driving-cars>.
13
<https://techcrunch.com/2017/04/19/facebook-brain-interface/?guccounter=1>.
14
Kitchin, “Thinking Critically about and Researching Algorithms” (2017) 20(1) Information,
Communication and Society 1‒14. According to Miyazaki, the term “algorithm” emerged in
Spain during the twelfth century when scripts of the Arabian mathematician Muhammad ibn Mūsā al-Khwārizmī were translated into Latin. These scripts describe “methods of addition,
subtraction, multiplication and division with the Hindu-Arabic numeral system.” Thereafter,
“algorism” meant “the specific step-by-step method of performing written elementary arith-
metic”; Miyazaki, “Algorhythmics: Understanding Micro-temporality in Computational


This definition is on the one hand too broad and on the other too narrow, since a
purely technical understanding of algorithms as computer code does not go far
enough in assessing their legal and social implications. As Kitchin15 points out,
algorithms “cannot be divorced from the conditions under which they are
developed and deployed.” Rather, “algorithms need to be understood as relational,
contingent, contextual in nature, framed within the wider context of their socio-
technical assemblage.”16
Popular definitions of AI are equally unrefined.17 AI is a catch-all term referring to
the broad branch of computer science that studies and designs intelligent
machines.18 The spectrum of applications using AI is already enormous, ranging
from virtual assistants, automatic news aggregation, image and speech recognition,
translation software, automated financial trading, and legal eDiscovery to self-driving
cars and automated weapon systems.
From a legal standpoint, this lack of definitional clarity is sometimes regarded as
problematic. Scholars emphasize that any regulatory regime must define what
exactly it is that the regime regulates, and that we must therefore find a common
definition for the term “artificial intelligence.”19 Others believe that an all-
encompassing definition is not necessary at all, at least for the purposes of legal

Cultures” (2012) 2 Computational Culture <http://computationalculture.net/article/algorhythmics-understanding-micro-temporality-in-computational-cultures>.
15
Kitchin (n 14); Seaver, “Algorithms as Culture: Some Tactics for the Ethnography of Algorith-
mic Systems” (2017 July–September) Big Data & Society 1, suggested thinking of algorithms
not “in” culture, but “as” culture: part of broad patterns of meaning and practice that can be
engaged with empirically. Dourish, “Algorithms and Their Others: Algorithmic Culture in
Context” (2016 July‒September) Big Data & Society 1, 3, notes that “the limits of the term
algorithm are determined by social engagements rather than by technological or material
constraints.”
16
Cf. also Section 2.2.4, with reference to three dimensions that can be found in every ADM
system, i.e., the process level, the model level, and the classification level.
17
The High Level Expert Group on AI (AI HLEG), set up by the EU Commission, proposes the
following updated definition: “Artificial intelligence (AI) systems are software (and possibly also
hardware) systems designed by humans that, given a complex goal, act in the physical or digital
dimension by perceiving their environment through data acquisition, interpreting the collected
structured or unstructured data, reasoning on the knowledge, or processing the information,
derived from this data and deciding the best action(s) to take to achieve the given goal. AI
systems can either use symbolic rules or learn a numeric model, and they can also adapt their
behaviour by analysing how the environment is affected by their previous actions”; AI HLEG,
“A Definition of AI: Main Capabilities and Disciplines,” Brussels, 9 April 2019, https://ec
.europa.eu/newsroom/dae/document.cfm?doc_id=56341.
18
McCarthy, “What Is Artificial Intelligence?” 2007, www-formal.stanford.edu/jmc/whatisai/.
Russell and Norvig summarize eight definitions of AI differentiated by how they reflect
expectations of human thinking and behavior or (machine) rational thinking and behavior;
Russell and Norvig, Artificial Intelligence: A Modern Approach (3rd edn, Pearson 2011) 1 et seq.
19
Scherer, “Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strat-
egies” (2016) 29(2) Harvard Journal of Law and Technology 353, 359 et seq. Cf. also Lea, “Why We
Need a Legal Definition of Artificial Intelligence,” The Conversation, 2 September 2015 <http://
theconversation.com/why-we-need-a-legal-definition-of-artificial-intelligence-46796>.


research and regulation.20 After all, AI systems pose very different problems
depending on who uses them, where, and for what purpose. For example, an
autonomous weapon system can hardly be compared to a spam filter, even though
both are based on an AI system. Indeed, this example alone illustrates the futility of
lawmakers considering a general Artificial Intelligence Act that would regulate the
whole phenomenon top down, administered by an Artificial Intelligence Agency.
Accordingly, there is no need for a single all-encompassing definition for “algo-
rithms” and “AI.” Rather, it is more important to understand the different character-
istics of various algorithms and AI applications and how they are used in practice.
The same applies to the term “robot,” for which no universally valid definition
has yet emerged.21 Admittedly, at the international level some definitions can be
found. For example, the International Standards Organization defines a robot as an
“actuated mechanism programmable in two or more axes with a degree of auton-
omy, moving within its environment, to perform intended tasks.”22 This interpret-
ation, however, is a functional rather than legal definition for the purpose of
technical standards. Ultimately, all attempts at providing an encompassing defin-
ition are a fruitless exercise because of the extremely diverse nature of robots,
ranging from driverless cars, prosthetic limbs, orthotic exoskeletons, and manufac-
turing (industrial) robots to care robots, surgical robots, lawn mowers, and vacuum
cleaners. Rather than finding a common definition, greater insight can be gained
from keeping all these robots separate, looking at their peculiarities and the differ-
ences between them.
For our purposes, it is therefore sufficient to use a broad definition according to
which a robot is a machine that has a physical presence, can be programmed, and
has some level of autonomy depending, inter alia, on the AI algorithms used in such
a system; it is, in short, “AI in action in the physical world.”23
In the absence of a universally accepted characterization, this chapter uses the
terms AI/algorithmic/self-learning/intelligent/smart/autonomous and/or robotic
systems/machines interchangeably to refer to AI-driven systems with a high degree
of automation.

20
Jabłonowska, Kuziemski, Nowak, Micklitz, Pałka, and Sartor, “Consumer law and artificial
intelligence. Challenges to the EU consumer law and policy stemming from the business’s use
of artificial intelligence.” Final report of the ARTSY project, European University Institute
(EUI) Working Papers, LAW 2018, 11, p 4.
21
By contrast, the EU Parliament calls for a uniform, Union-wide definition of robots in its 2017
resolution; European Parliament, Resolution of 16 February 2017 with recommendations to the
Commission on Civil Law Rules on Robotics, P8_TA(2017)0051. Critical Lohmann, “Ein
europäisches Roboterrecht – überfällig oder überflüssig?” (2017) 168 Zeitschrift für Rechtspolitik
(ZRP) 169.
22
ISO 8373, 2012, available at <www.iso.org/obp/ui/#iso:std:iso:8373:ed-2:v1:en>. Additionally,
ISO makes a distinction between industrial robots and service robots, as well as between
personal service robots and service robots for personal use.
23
Cf. AI HLEG, A Definition of AI (n 17) 4.


2.1.2.2 The Rise of Learning Algorithms


A particularly important subfield of AI is machine learning (ML). Instead of
programming machines with specific instructions to accomplish particular tasks,
ML algorithms enable computers to learn from “training data,” and even improve
themselves without being explicitly programmed. Although the idea of creating
“learning machines” was already present in the early AI years,24 it is only recently
that developments have brought algorithms to a new level, leading to an AI spring
that outshines all the previous ones.
Over the years, ML has developed in a number of different directions. By and
large, they can be classified into three broad categories, depending on their learning
pattern: supervised, unsupervised, and reinforcement learning.25
In a supervised learning setting, the algorithm uses a sample of labeled data to
learn a general rule that maps inputs onto outputs.26 For example, if the algorithm
needed to learn how to recognize cats, the developer would give the system many
examples of pictures of cats and the corresponding interpretation (that is, whether a
cat is or is not in that picture). After the learning period, the system, through its ML
algorithm, will then be able to generalize to know also how to interpret pictures of
cats never seen before.
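The cat-recognition example can be sketched in a few lines; the sketch below is only an illustration of the principle, assuming scikit-learn and NumPy are available and using synthetic feature vectors instead of real pictures:

```python
# Minimal supervised-learning sketch with synthetic data (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 100 labeled "pictures": feature vectors tagged 1 ("cat") or 0 ("no cat").
X_train = rng.normal(size=(100, 20))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # hidden labeling rule

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)  # learning period

# After training, the model generalizes to inputs it has never seen before.
X_new = rng.normal(size=(5, 20))
print(model.predict(X_new))
```

A real image classifier would of course work on pixel data with a far more powerful model, but the division of labor is the same: labeled training data in, a general mapping rule out.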
In an unsupervised learning setting, on the other hand, the algorithm attempts to
identify hidden structures and patterns from unlabeled data.27 This learning method
is especially useful if data is rather unstructured. It can also be used to build better
supervised learning algorithms, for example, by combining the multitude of pixels of
a picture into a small number of important recognizable features (such as the
structures of eyes, nose, mouth), which can then serve as an input for a supervised
learning facial recognition algorithm.
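The unsupervised case can be sketched in the same spirit (again on synthetic data, with scikit-learn assumed): a dimensionality-reduction step compresses many raw features into a few informative ones, roughly analogous to distilling eye, nose, and mouth structures from a multitude of pixels, and a clustering step then groups similar examples without ever being told what the groups mean.

```python
# Minimal unsupervised-learning sketch: no labels are provided (illustration only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))            # unlabeled data, e.g. raw pixel values

# Compress 50 raw features into 3 informative ones (could feed a supervised model).
X_reduced = PCA(n_components=3).fit_transform(X)

# Group similar examples into clusters; the algorithm itself finds the structure.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_reduced)
print(clusters[:10])
```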
Finally, in the reinforcement learning approach, the algorithm is not told how to
“behave,” but must learn in an (unknown but fixed) environment in which actions

24
The idea of “learning machines” was raised as early as 1950 by Turing, “Computing Machinery
and Intelligence” (1950) 49 Mind 433, 456 (suggesting that machines could simulate the child-
brain which is “subjected to an appropriate course of education”). Just a few years later, in 1952,
Samuel would then go on to create the first computer learning program, a Checkers-playing
program which improved itself through self-play; Samuel, “Some Studies in Machine Learning
Using the Game of Checkers” (1959) 3 IBM Journal of Research and Development 210.
25
Anitha, Krithka, and Choudhry (2014) 3(12) International Journal of Advanced Research in
Computer Engineering & Technology 4324 <http://ijarcet.org/wp-content/uploads/IJARCET-
VOL-3-ISSUE-12-4324-4331.pdf>; Buchanan and Miller, “Machine Learning for Policymakers.
What It Is and Why It Matters” Harvard Kennedy School, Belfer Center for Science and
International Affairs, Paper, June 2017; Mohri, Rostamizadeh, and Talwalkar, “Foundations of
Machine Learning” (2012). Cf. also Haddadin and Knobbe, Chapter 1 in this book.
26
Anitha, Krithka, and Choudhry (n 25), 4325 et seq.
27
Anitha, Krithka, and Choudhry (n 25), 4328 et seq.


yield the best (scalar) reward.28 ML applications based on this approach are used
especially in a dynamic environment, such as driving a vehicle or playing a game
(such as DeepMind’s AlphaGo).
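Reinforcement learning, finally, can be illustrated with a toy example: the five-state “corridor” below is invented for illustration, but the update rule is the standard tabular Q-learning scheme, in which the system is never told how to behave and learns solely from a scalar reward received at the goal.

```python
# Toy reinforcement-learning sketch: tabular Q-learning in a 5-state corridor.
import random

n_states, actions = 5, [-1, +1]            # the agent may step left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1      # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != n_states - 1:               # the rightmost state is the goal
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda x: Q[(s, x)]))
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

# After training, the learned policy prefers moving right (+1) in every state.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
```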

2.1.3 Overview
Before considering the legal and ethical problems posed by autonomous systems in
detail, it is worth taking a broader look at the general characteristics of algorithmic
systems, which are ultimately responsible for the irritations and disruptive effects we
are currently observing worldwide in all legal systems.

2.2 the problematic characteristics of ai systems from a legal perspective

2.2.1 Complexity and Connectivity


Some of these characteristics, especially those regarding complexity and connectiv-
ity, are already known in connection with other IT systems. The increasing inter-
connectivity of computers leads to a multiplicity of actions and actors. This applies
in particular to smart objects in the IoT. The individual consumer who acquires a
smart object is regularly confronted with a large number of potential contractual
partners who owe various services (hardware, digital content, digital services, end-
user license agreements with third parties), all of which are required together for the
IoT to function properly.29 As a result, it is often no longer clear to the individual
with whom they have concluded a contract. Moreover, there is a serious problem of
proof: although the purchaser cannot always ascertain why their product does not
work (i.e., whether it is due to hardware or digital content), the burden of proof for
the existence of a defect lies in principle with them, so that they are also burdened
with the costs of determining its cause.
It can also be the case that the individual AI system works flawlessly on its own
and does not exhibit any problematic behavior at all, but that a functional failure
and/or damage occurs through the interaction of different software agents. Some
consider the so-called Flash Crash on 6 May 201030 to be just such an event: US$1
trillion in market value vanished in less than an hour, and trading had to be
28
For a comprehensive introduction to reinforcement learning see Sutton and Barto, Reinforce-
ment Learning – An Introduction (MIT Press 2017).
29
Wendehorst, “Sale of Goods in the Digital Age – From Bipolar to Multi-party Relationships” in
UNIDROIT (ed), Eppur si muove: The Age of Uniform Law. Essays in honour of Michael
Joachim Bonell to celebrate his 70th birthday 2 (UNIDROIT 2016) 1873‒1887.
30
Commodity Futures Trading Commission/Securities & Exchange Commission (2010), “Find-
ings Regarding the Market Events of May 6, 2010,” Report of the Staffs of the CFTC and SEC
to the Joint Advisory Committee on Emerging Regulatory Issues<www.sec.gov/news/studies/
2010/marketevents-report.pdf>. See also Kirilenko, Kyle, Samadi, and Tuzun, “The Flash


suspended. When such an event occurs, it destroys the assumptions about the individuality of actors that are constitutive for the attribution of action and responsibility. Both the actor and the causal relationships are difficult, if not impossible, to identify.
In order to address these problems, various solutions have been proposed. For
contractual claims there have been discussions as to whether the doctrine of privity
of contract must be overcome by, for example, accepting linked contracts31 or
through the concept of a contractual network.32 For non-contractual claims, some
scholars propose a pro-rata liability for all those involved in the network, requiring
actors themselves to stand up for the unlawful behavior of the networked algo-
rithms,33 whereas others are in favor of attributing legal responsibility not to people,
organizations, networks, software agents, or algorithms, but rather to risk pools and
the decisions themselves.34

2.2.2 From Causation to Correlation


Another characteristic of AI systems in the context of big-data analysis is a shift “from
causation to correlation.”35 Most data-mining techniques rely on inductive know-
ledge and correlations identified within a dataset. Instead of searching for causation
between the relevant parameters, powerful algorithms are used to spot patterns and
statistical correlations.36

Crash: High-Frequency Trading in an Electronic Market” (2017) Journal of Finance, <https://ssrn.com/abstract=1686004> or <http://dx.doi.org/10.2139/ssrn.1686004>.
31
Forgó, in Forgó and Zöchling-Jud, “Das Vertragsrecht des ABGB auf dem Prüfstand: Überle-
gungen im digitalen Zeitalter, Gutachten Abteilung Zivilrecht, Verhandlungen des zwanzig-
sten österreichischen Juristentages” (Manz 2018) 276 et seq.
32
Cf. Cafaggi, “Contractual Networks and the Small Business Act: Towards European Prin-
ciples?” EUI Working Paper Law No 2008/15 <https://cadmus.eui.eu/handle/1814/8771>;
Idelberger, “Connected Contracts Reloaded – Smart Contracts As Contractual Networks” in
Grundmann (ed), European Contract Law in the Digital Age (Intersentia 2018) 205 et seq.
33
Spiecker, “Zur Zukunft systemischer Digitalisierung – Erste Gedanken zur Haftungs- und
Verantwortungszuschreibung bei informationstechnischen Systemen” (2016) Computer und
Recht (CR) 698, 703.
34
Teubner, “Digitale Rechtssubjekte? Zum privatrechtlichen Status autonomer Softwareagen-
ten” (2018) 218 Archiv für die civilistische Praxis (AcP) 155.
35
Mayer-Schönberger and Cukier, Big Data: A Revolution that Will Transform How We Live,
Work and Think (Murray 2013) 14, 15, 18 and 163: “Big Data does not tell us anything about
causality.”
36
Some commentators believe that new data-mining techniques will free science of the con-
straints of theory, establishing a world in which the search for causation will no longer be
paramount as correlation takes the center stage. Chris Anderson refers to this phenomenon as
“the end of theory”; Anderson, “The End of Theory,” Wired (July 2008) 108. Critically, Skopek,
“Big Data’s Epistemology and Its Implications for Precision Medicine and Privacy” in Cohen,
Lynch, Vayena, and Gasser (eds), Big Data, Health Law, and Bioethics (Cambridge University
Press 2018) 30 et seq.


Relying on correlations when statistical analysis indicates a significant relationship between factors provides clear benefits in terms of speed and costs.37 However, it
becomes problematic when correlation is increasingly seen as sufficient grounds for
directing action without first establishing causality. Data analysis, actions, and far-
reaching decisions (e.g., scoring values or a medical diagnosis) relying on mere
correlations in probability values might be severely flawed. First and foremost,
relying on correlations without investigating causal effects risks correlations being
“forced” on the data.38 As Marcus and Davis explain, big data detecting correlations
“never tells us which correlations are meaningful. A big-data analysis might reveal,
for instance, that from 2006 to 2011 the United States murder rate was well correlated
with the market share of Internet Explorer: Both went down sharply. But, it’s hard to
imagine there is any causal relationship between the two.”39 Moreover, even if a
strong statistical correlation is found, this only says something about a particular
(sub)group of persons, but not about the individual belonging to that (sub)group.
Finally, pure correlation statements do not allow individuals to engage in self-
improvement. How, for example, should a policyholder react if they are informed
that they are in a higher tariff bracket not because their driving is risky, but because a
big-data analysis has shown that their Facebook “likes” indicate an increased
accident risk? Thus, finding causation can be crucial in promoting the quality of
the entire process and ensuring that in the end individuals are treated fairly.
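Marcus and Davis’s point that big data “never tells us which correlations are meaningful” is easy to reproduce numerically. The short sketch below uses invented, purely illustrative figures (not the actual murder-rate or browser statistics) simply to show that two causally unrelated series that both happen to decline will exhibit a near-perfect correlation:

```python
# Spurious-correlation sketch with invented, purely illustrative numbers.
import numpy as np

murder_rate = np.array([5.8, 5.7, 5.4, 5.0, 4.8, 4.7])   # illustrative only
ie_market_share = np.array([80, 75, 68, 60, 52, 45])      # illustrative only

r = np.corrcoef(murder_rate, ie_market_share)[0, 1]
print(f"Pearson correlation: {r:.2f}")   # close to +1.0, yet no causal link
```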

2.2.3 Autonomy
Probably the biggest problem is the growing degree of autonomy of AI systems and
smart robotics.40 Self-learning systems are not explicitly programmed; instead, they

37
Zarsky, “Correlation versus Causation in Health-Related Big Data Analysis. The Role of
Reason and Regulation” in Cohen et al. (n 36) 42, 50.
38
Silver, The Signal and the Noise. Why So Many Predictions Fail – but Some Don’t (The
Penguin Press 2012) 162.
39
Marcus and Davis, “Eight (No, Nine!) Problems with Big Data” New York Times (6 April 2014)
<www.nyti.ms/1kgErs2>. Cf. also Kosinski, Stillwell, and Graepel, “Private Traits and Attri-
butes are Predictable from Digital Records of Human Behavior” (2013) Proceedings of the
National Academy of Sciences of the United States of America (PNAS) 5802 <www.pnas.org/
content/110/15/5802.full>, stating a correlation between high intelligence and Facebook likes of
“thunderstorms,” “The Colbert Report,” and “curly fries,” while users who liked the “Hello
Kitty” brand tended to be higher in openness and lower in conscientiousness, agreeableness,
and emotional stability.
40
In the discussion, various criteria are offered as the starting point from which an AI system can
be regarded as autonomous. What is clear, however, is that autonomy seems to be a gradual
phenomenon. On the different concepts of autonomy cf. Bertolini, “Robots as Products: The
Case for a Realistic Analysis of Robotic Applications and Liability Rules” (2013) 5(2) Law,
Innovation and Technology 214, 220 et seq.; Floridi and Sanders, “On the Morality of Artificial
Agents” in Anderson and Anderson (eds), Machine Ethics (Cambridge University Press 2011)
184, 192; Zech, “Zivilrechtliche Haftung für den Einsatz von Robotern: Zuweisung von
Automatisierungs- und Autonomierisiken” in Gless and Seelmann (eds), Intelligente Agenten


are trained by thousands and millions of examples, so that the system develops by
learning from experience. The increasing use of ML systems poses great challenges
for legal systems. With a certain level of automation, it seems impossible to ascertain
with certainty whether the programmer, the producer, or the operator is responsible
for actions caused by such systems. Specific problems arise in particular from the
point of view of foreseeability and causation.
As regards foreseeability, we have already seen numerous instances of AI making
decisions that a person would not have made or would have made differently.
A particularly fascinating example highlighted by Mathew Scherer41 comes from
C-Path, a machine-learning program for the detection of cancer. Pathologists had
believed that the study of tumor cells is the best method for diagnosing cancer, whereas
studying the supporting tissue (stroma) might only aid in cancer prognosis. But in a
large study, C-Path found that the properties of stroma were actually a better prognostic
indicator for breast cancer than the properties of the cancer cells themselves – a
conclusion that contradicted both common sense and predominant medical think-
ing.42 Another example concerns AlphaGo, a computer program developed by Google
DeepMind that defeated Lee Sedol, the South Korean world champion Go player, in a
five-game match in March 2016. As DeepMind noted on their blog, “during the games
AlphaGo played a handful of highly inventive winning moves, one of which–move
37 in game 2–was so surprising it overturned hundreds of years of received wisdom and
has been intensively examined by players since. In the course of winning, AlphaGo
somehow taught the world completely new knowledge about perhaps the most studied
game in history.”43 Both examples show that AI systems may act in unforeseeable ways,
as they come up with solutions that humans may not have considered, or that they
considered and rejected in favor of more intuitively appealing options.
The experiences of a self-learning AI system can also be viewed, as Scherer
correctly points out, as a superseding cause – that is, “an intervening force or act
that is deemed sufficient to prevent liability for an actor whose tortious conduct was
a factual cause of harm”44 – of any harm that such systems cause. This is especially
true when an AI system learns not only during the design phase, but also after it has
already been launched on the market. In this case, even the most cautious designers,

und das Recht, (Nomos Verlagsgesellschaft 2016) 163, 170 et seq., fn. 16. For the different levels
of automation for self-driving cars, see the categories proposed by SAE International (Society of
Automotive Engineers) and DOT (US Department of Transportation); DOT, “Federal Auto-
mated Vehicles Policy” (September 2016) 9, available at <www.nhtsa.gov/technology-innov
ation/automated-vehicles-safety>.
41
Scherer (2016) 29(2) Harvard Journal of Law and Technology 353, 363‒364.
42
Beck et al., “Systematic Analysis of Breast Cancer Morphology Uncovers Stromal Features
Associated with Survival” (2011) 108(3) Science Translational Medicine 1.
43
Hassabis, “The Mind in the Machine: Demis Hassabis on Artificial Intelligence” Financial
Times (21 April 2017) <www.ft.com/content/048f418c-2487-11e7-a34a-538b4cb30025>.
44
Restatement (Third) of Torts: Phys. & Emot. Harm § 34 cmt. b (AM. LAW INST. 2010).


programmers, and manufacturers will not be able to control or predict what an AI system will experience in the environment.
For all these reasons, self-learning systems with a high degree of automation cause
considerable irritations in legal systems.45

2.2.4 Algorithms As Black Boxes


A particular concern in relation to advanced ML techniques is the opacity of many
algorithmic decision-making (ADM) systems. The notion of black-box AI is used to
refer to scenarios where we can see only input data and output data for algorithm-
based systems without understanding exactly what happens in between.46
Explainability is relevant for a number of reasons.47 For a researcher or developer,
it is crucial to understand how their system or model is working in order to debug or
improve it. For those affected by an algorithmic decision, it is important to compre-
hend why the system arrived at this decision in order to develop trust in the
technology, and – if the ADM process is illegal – initiate appropriate remedies
against it. Last but not least, explainability enables experts (and regulators) to audit
ADM and verify whether legal regulatory standards have been complied with.
According to Gunning48 and Waltl and Vogl,49 an ADM system has a high degree
of explainability if the following questions can be answered:

 Why did that output happen?
 Why not some other output?
 For which cases does the machine produce a reliable output?
 Can you provide a confidence score for the machine’s output?
 Under which circumstances, i.e., state and input, can the machine’s
output be trusted?
 Which parameters affect the output most (negatively and positively)?
 What can be done to correct an error?
In order to answer these questions, it is helpful to distinguish the following three
dimensions that can be found in every ADM system: the process level, the model
level, and the classification level.50
45
Cf. Section 2.5.
46
Additionally, it might be that the inputs themselves are entirely unknown or known only
partially.
47
Anand et al., “Effects of Algorithmic Decision-Making and Interpretability on Human Behav-
ior: Experiments Using Crowdsourcing” (2018) <www.l3s.de/~gadiraju/publications/
HCOMP18.pdf>.
48
Gunning, “Explainable Artificial Intelligence (XAI)” (2017) <www.darpa.mil/attachments/
XAIProgramUpdate.pdf>.
49
Waltl and Vogl, “Explainable Artificial Intelligence – The New Frontier in Legal Informatics”
Jusletter IT (22 February 2018).
50
Waltl and Vogl, “Increasing Transparency in Algorithmic Decision-Making with Explainable
AI” (2018) Datenschutz und Datensicherheit (DuD) 613.


The process level refers to the different steps an AI system has gone through in order
to make an autonomous decision, usually beginning with the data-acquisition phase,
followed by data pre-processing, the selection of features, the training and application
of the AI model, and the post-processing phase, in which steps are taken to improve
and revise the output of the AI model. Exact knowledge of these steps is necessary to
understand decisions. If, for example, a discriminatory decision is based on biased
training data, precise knowledge of the data-acquisition phase is required. The model
level, on the other hand, refers to the different types of algorithms that are used for
decision-making, for example, decision trees, Bayesian networks, support-vector
machines, k-nearest neighbors, or neural networks. This must be distinguished from
the classification level, which provides information about which attributes (e.g.,
gender, age, salary) are used in the model and what weight is given to each attribute.
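At the classification level, the weights of a simple, inherently interpretable model can be read off directly. The sketch below (with invented attribute names and synthetic data, assuming scikit-learn) merely shows what such an inspection looks like; opaque models would require dedicated explanation techniques instead:

```python
# Classification-level sketch: which attributes does the model use, with what weight?
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
attributes = ["age", "salary", "years_employed"]       # invented attribute names
X = rng.normal(size=(500, len(attributes)))
y = (0.8 * X[:, 1] - 0.3 * X[:, 0] > 0).astype(int)    # synthetic decision rule

model = LogisticRegression().fit(X, y)
for name, weight in zip(attributes, model.coef_[0]):
    print(f"{name}: {weight:+.2f}")   # e.g. salary weighs heavily, age counts against
```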
Opacity in ML algorithms can have quite different causes.51 First, it might be that
algorithms are kept secret intentionally for the sake of competitive advantage,52
national security,53 or privacy.54 Keeping an AI system opaque can also be important
to ensure its effectiveness, for example preventing spambots from using the disclosed
algorithm to attack the system.55 Moreover, corporations might wish to protect their
ADM system to avoid or confound regulation, and/or to conceal manipulation or
discrimination of consumers.56 Second, opacity can be an expression of technical
illiteracy. Writing and reading code as well as designing algorithms requires expert-
ise that the majority of the population does not have. Third, it may be that opacity
arises due to the unavoidable complexity of ML models. As Burrell notes, in the era
of big data, “Billions or trillions of data examples or tens of thousands of properties of
the data (termed ‘features’ in ML) may be analyzed. . . . While datasets may be
extremely large but possible to comprehend, and code may be written in clarity, the
interplay between the two in the mechanism of the algorithm is what yields the
complexity (and thus opacity).”57
Apart from that, it is important to understand that different classes of ML
algorithms have different degrees of transparency as well as performance.58 Thus,

51
Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algo-
rithms” (2016 January‒June) Big Data & Society 1.
52
Kitchin (n 14).
53
Leese, “The New Profiling: Algorithms, Black Boxes, and the Failure of Anti-discriminatory
Safeguards in the European Union” (2014) 45(5) Security Dialogue 494.
54
Mittelstadt, Allo, Taddeo, Wachter, and Floridi, “The Ethics of Algorithms: Mapping the
Debate” (2016 July‒September) Big Data & Society 1, 6.
55
Sandvig, Hamilton, Karahalios, and Langbort, “Auditing Algorithms: Research Methods for
Detecting Discrimination on Internet Platforms” in Annual Meeting of the International
Communication Association (2014) <http://social.cs.uiuc.edu/papers/pdfs/ICA2014-Sandvig
.pdf>, 1, 9.
56
Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information
(Harvard University Press 2015) 2.
57
Burrell (n 51) 5.
58
Waltl and Vogl (n 49).


for example, deductive and rule-based systems (such as decision trees) have a
high degree of transparency: since each node represents a decision, the way to
the respective leaf can be understood as an explanation for a concrete decision.
By comparison, artificial neural networks (ANN), especially deep learning
systems, show a very high degree of opacity. In such a network, all learned
information is not stored at a single point but is distributed all over the neural
net by modifying the architecture of the network and the strength of individual
connections between neurons (represented as input “weights” in artificial net-
works). Therefore, ANN systems possess a high degree of unavoidable complex-
ity and opacity. On the other hand, when it comes to performance, it is precisely
ANNs that show a much higher degree of accuracy and effectiveness than
decision trees.59
We are therefore faced with a dilemma: How can human-interpretable systems be
designed without sacrificing performance?
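The trade-off can be made tangible by training both kinds of model on the same synthetic task (again assuming scikit-learn): the decision tree can be printed as a human-readable decision path, whereas what the neural network has learned is spread over well over a thousand numeric connection weights that resist any comparable reading.

```python
# Transparency vs. opacity sketch: decision tree (readable rules) vs. neural network.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                    random_state=0).fit(X, y)

# The tree can be rendered as human-readable if/else rules ...
print(export_text(tree, feature_names=[f"x{i}" for i in range(6)]))

# ... while the network stores what it has learned in distributed weights.
print(sum(w.size for w in net.coefs_), "learned connection weights")
```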

2.3 fundamental questions


The use of AI systems and smart robots – in addition to the problems discussed
above – raises a number of fundamental questions.

2.3.1 Replacement of Humans by Machines: To What Extent?


Arguably the first and most fundamental question is to what extent we, as a society,
are willing to replace humans with machines. This question arises in many areas,
but above all when decisions are no longer made by people: When should a human
decision be replaced with an algorithm? Which decisions should in any case be
made by a human being? Are there certain decisions that must always be made by
humans for deontological or other (ethical/legal) reasons? To what extent should an
algorithm be able to influence a human decision?
Such questions are currently being discussed, particularly with regard to the use
of lethal autonomous weapon systems (LAWS): Is it right for machines to have the
power of life and death over humans or the ability to inflict serious injury? Are
LAWS both inherently unethical and unlawful under current international humani-
tarian law? Do we need a new international agreement?60 The consensus seems to

59
Waltl and Vogl (n 58).
60
Melzer, Targeted Killing in International Law (Oxford University Press 2008); Wagner, “The
Dehumanization of International Humanitarian Law: Legal, Ethical, and Political Implica-
tions of Autonomous Weapons Systems” (2014) 47 Vanderbilt Journal of Transnational Law
1371; Crawford, “The Principle of Distinction and Remote Warfare” (2016) Sydney Law School
Research Paper No 16/43; Ohlin, “Remoteness and Reciprocal Risk” (2016) Cornell Legal
Studies Research Paper No 16–24.


be that the decision to kill a human person in a concrete combat situation cannot be
delegated to a machine.61
The question as to whether decisions should be delegated to machines also arises
in many other cases, especially when decisions by states are involved:

 To what extent can administrative decisions be automated? Is the idea of algorithmic regulation62 in line with the nondelegation doctrine, the principles of procedural due process, equal protection, and/or the principles of reason-giving and transparency?
 How far should the judiciary go in using AI systems to resolve a dispute or
as a tool to assist in judicial decision-making?63 How can we ensure that
the design and implementation of AI tools and services in the judicial
system are compatible with fundamental rights, especially the guarantees
of the right of access to the judge, the right to a fair trial (equality of arms
and respect for the adversarial process), and the rule of law?
 What are the advantages and drawbacks of legal automation?64 How can
the law govern human behavior through codes, IT architectures, and
design? Should legislators be allowed to adopt “personalized laws” by
tailoring laws/legal provisions to the individual needs and characteristics
of addressees?65
How about the private sector? To what extent may private companies delegate
decisions to an algorithm and which decisions should be reserved for humans
alone?66 How do ADM procedures impact consumers’ autonomy and freedom to
make decisions, as well as how they access products and services?67

61
European Parliament, Resolution of 12 September 2018 on autonomous weapon systems,
P8_TA-PROV(2018)0341; Scharre, “The Trouble with Trying to Ban ‘Killer Robots,’” World
Economic Forum, 4 September 2017 <www.weforum.org/agenda/2017/09/should-machines-
not-humans-make-life-and-death-decisions-in-war/>.
62
Cf. Coglianese and Lehr, “Regulating by Robot: Administrative Decision Making in the
Machine-Learning Era” (2017) 105 Georgetown Law Journal 1147; <https://ssrn.com/abstract=
2928293>.
63
Cf. Council of Europe, “European Commission for the Efficiency of Justice (CEPEJ),
European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their
environment,” adopted by the CEPEJ during its 31st Plenary meeting (Strasbourg, 3‒4 Decem-
ber 2018), CEPJ (2018)14 (Council of Europe, Ethical Charter).
64
Pagallo and Durante, “The Pros and Cons of Legal Automation and Its Governance” (2016) 7
European Journal of Risk Regulation 323.
65
Porat and Strahilevitz, “Personalizing Default Rules and Disclosure with Big Data” (2014) 112
Michigan Law Review 1417; Ben-Shahar and Porat, “Personalizing Negligence Law” (2016) 91
NYU Law Review 627; Hacker, “Personalizing EU Private Law. From Disclosures to Nudges
and Mandates” (2017) 25 European Review of Private Law (ERPL) 651. Moreover, see (2019)
86.2 University of Chicago Law Review, a special issue on “Personalized Law.”
66
Möslein, “Robots in the Boardroom: Artificial Intelligence and Corporate Law” in Barfield and
Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (2018) <https://ssrn.com/
abstract=3037403>.
67
Cf. Section 2.7.1.4.


At present, there is no legal system in the world that provides satisfactory answers
to these questions. In the European Union, Art 22 GDPR68 prohibits fully auto-
mated decisions. However, this provision has a rather limited scope of application.
First, it establishes numerous exceptions in Art 22(2) GDPR. And second, it only
covers decisions “based solely on automated processing” of data (Art 22(1) GDPR).
Since most algorithmically prepared decisions still involve a human being, the
majority of ADM procedures is not covered by the prohibition of Art 22 GDPR.69
The policy decision as to which decisions must be reserved for humans is by no
means an easy one,70 as the transfer of decision-making power to machines brings
great advantages, especially in terms of efficiency and costs. The political decision
not to transfer certain tasks to machines can thus lead to economic loss. Moreover,
in most cases it is impossible to make a clear distinction between purely machine
and purely human decisions. Rather, many decisions are made in a more or less
symbiotic relationship between humans and machines. For this reason, it is very
difficult to determine at what point in this continuum the “essence of humanity” is
compromised.

2.3.2 Brain‒Computer Interfaces and Human Enhancement


An equally fundamental question is to what extent the use of brain‒computer
interfaces (BCIs) should be permitted. This problem arises in particular when a
68
GDPR Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April
2016 on the protection of natural persons with regard to the processing of personal data and on
the free movement of such data, and repealing Directive 95/46/EC (General Data Protection
Regulation) OJ 2016 L 119/1.
69
It is still unclear which type of human participation deprives a decision of its automated nature.
Art 29 Working Party (WP) argues that a decision cannot be regarded as wholly automated if an
automated profile is accompanied by an “additional meaningful intervention carried out by
humans before any decision is applied to an individual”; Article 29 Data Protection Working
Party, Guidelines on Automated individual decision-making and Profiling for the purposes of
Regulation 2016/679, adopted on 3 October 2017, as last Revised and Adopted on 6 February
2018, WP251rev.01, p 8. Bygrave argues that decisions formally attributed to humans but
originating “from an automated data-processing operation the result of which is not actively
assessed by either that person or other persons before being formalised as a decision” would fall
under the scope of “automated decision-making”: Bygrave, “Automated Profiling: Minding the
Machine: Article 15 of the EC Data Protection Directive and Automated Profiling” (2001) 17
Computer Law & Security Review 17. However, as Wachter, Mittelstadt, and Floridi correctly
point out, whereas the EP’s proposed amendments suggested the words “based solely or
predominantly on automated processing,” the final text did not adopt the word “predomin-
antly,” suggesting that a strict reading of “solely” was intended: Wachter, Mittelstadt, and
Floridi, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the
General Data Protection Regulation” (2017) 7(2) International Data Privacy Law 76, 92. The
EP Amendments are available at: <www.europarl.europa.eu/sides/getDoc.do?pubRef=-%2F
%2FEP%2F%2FTEXT%2BREPORT%2BA7-2013-0402%2B0%2BDOC%2BXML%2BV0%2F
%2FEN&language=EN>.
70
Burri, “Künstliche Intelligenz und internationales Recht” (2018) Datenschutz und Datensi-
cherheit (DuD) 603, 606 et seq.


healthy person connects his body with a BCI in order to be more efficient (BCI
enhancement). The blurring of the distinction between man and machine makes it
more difficult to assess the limits of the human body and raises questions concerning
free will and moral responsibility.71
Should everyone be free to expand and influence their cognitive, mental, and
physical abilities beyond the boundaries of the natural? Is such a fusion socially
desirable and ethically acceptable? If we restrict individual enhancement, should
those limits include only biological considerations (in order to restore the body to a
“normal” state) or psychological ones as well? Does our existing liability framework
provide appropriate remedies for those who suffer injuries caused by BCI systems,
especially in cases where users may be able to send thoughts or commands to other
people, including unintended commands? Is the existing data protection law suffi-
cient or do we need to protect highly sensitive personal BCI data emanating from
the human mind in a particular way? What precautions must be taken against brain
spyware?
Leading international neuroscientists facing such questions demand ethical and
legal guidelines for the use of BCI.72

2.4 safety and security issues


The use of AI and smart robotics also raises a number of safety and security issues.

2.4.1 Superintelligence As a Safety Risk?


The AI safety problem is often associated with the concern that a “superintelli-
gence” – or artificial general intelligence (AGI) – will inevitably turn against
humanity and trigger a “post-human” future.73 Various (global and local) solutions
have been proposed to address this concern,74 in particular: (i) “no AI” solutions

71
Schermer, “The Mind and the Machine. On the Conceptual and Moral Implications of
Brain‒Machine Interaction” (2009) 3(3) Nanoethics 217.
72
Clausen et al., “Help, Hope, and Hype: Ethical Dimensions of Neuroprosthetics” (2017) 356
Science 1338 et seq. <http://science.sciencemag.org/content/356/6345/1338>. Bostrom and
Sandberg, “Cognitive Enhancement: Methods, Ethics, Regulatory Challenges” (2009) 15 Sci
Eng Ethics 311; Holder et al., “Robotics and Law: Key Legal and Regulatory Implications of the
Robotics Age (part II of II)” (2016) 32 Computer Law & Security Review 557, 570 et seq.
73
Bostrom, Superintelligence (Oxford University Press 2014); Russell, “3 Principles for Creating
Safer AI” (2017), retrieved from <www.youtube.com/watch?v=EBK-a94IFHY>; Yudkowsky,
“Artificial Intelligence as a Positive and Negative Factor in Global Risk” in Bostrom and
Cirkovic (eds), Global Catastrophic Risks (Oxford University Press 2008) 308–348.
74
Turchin and Denkenberger, “Classification of the Global Solutions of the AI Safety Problem”
PhilArchive copy v1 <https://philarchive.org/archive/TURCOT-6v1>; Sotala and Yampolskiy,
“Responses to Catastrophic AGI Risk: A Survey,” last modified 13 September 2013 <https://
iopscience.iop.org/article/10.1088/0031-8949/90/1/018001>.


consisting of an international ban on AI, legal/technical relinquishment, destruction of the capability to produce AI, and a slowdown of AI creation; (ii) the “one AI”
solution in which the first AI will become dominant and prevent the development of
other AIs; (iii) “many AIs” solutions in which a network of AIs may provide global
safety; and (iv) solutions in which “humans are incorporated inside AI.”
Such discussions, however, ultimately lead in the wrong direction. Not only is
there controversy among experts as to whether superintelligence will ever happen75
and whether – once created – it might do something dangerous,76 but what is more,
the ongoing discussion about a rising superintelligence obscures our view of the
actual safety and security problems we are facing today.

2.4.2 Current Safety Risks


First of all, one might wonder whether existing (product) safety rules are sufficient to
ensure an adequate level of safety. Special safety requirements exist above all in the
field of robotics. The ISO and IEC standards governing robot safety include:

 Industrial robots, ISO 10218-1 and ISO 10218-2:2011
 Personal care robots, ISO 13482:2014
 Collaborative robots, ISO/TS 15066:2016
 Robotic lawn mowers, IEC 60335-2-107:2017
 Surgical robots, IEC 80601-2-78:2019
 Rehabilitation robots, IEC 80601-2-77.
In Europe, these safety requirements are translated into national law by the EU
Machinery Directive 2006/42. Whether the international standards are fit to deal
with innovative robots with machine intelligence is highly controversial. The
International Federation of Robotics believes that existing safety standards are
sufficient to cover current developments in the use of AI in robots in commercial
applications, and that no additional regulation is required.77 By contrast, the Euro-
pean Commission’s evaluation report of the Machinery Directive is more cautious,
75
According to a survey by Müller and Bostrom, which gathered opinions from the world’s top
100 most cited AI researchers, the median estimate for the time of emergence of what might be
labelled human-level AI is 2050, with experts forecasting the emergence of superintelligence by
the turn of the century: Müller and Bostrom, “Future Progress in Artificial Intelligence:
A Survey of Expert Opinion” in Müller (ed), Fundamental Issues of Artificial Intelligence
(Springer 2016) 553 et seq. According to the survey by Grace et al., there is a “50% chance AI
will outperform humans in all tasks in 45 years”; Grace, Salvatier, Dafoe, Zhang, and Evans,
“When Will AI Exceed Human Performance? Evidence from AI Experts,” last revised 3 May
2018, arXiv:1705.08807.
76
Cf. Häggström, “Remarks on Artificial Intelligence and Rational Optimism” in European
Parliament (ed), Should We Fear Artificial Intelligence? (March 2018) PE 614.547, 19, 21.
77
International Federation of Robotics, “Artificial Intelligence in Robotics,” May 2018 <https://ifr
.org/downloads/papers/Media_Backgrounder_on_Artificial_Intelligence_in_Robotics_May_2018
.pdf>.


highlighting that the suitability of the Directive may be tested when it comes to
AI-powered advanced robots and autonomous self-learning systems.78 In the same
vein, the UK Science and Technology Committee maintains that so far, according
to experts, “no clear paths exist for the verification and validation of autonomous
systems whose behavior changes with time.”79 Another report notes that regulation
lags behind and is not yet consolidated, resulting in gaps and overlaps between
standards.80
International standard-setting organizations also see a need for action. Work in
this area has already started with the Joint Technical Committee 1 between ISO and
IEC (JTC 1) and its subcommittee (SC) 42 (JTC 1/SC 42)81 led by the American
National Standards Institute (ANSI)82 and US secretariat. Similar initiatives have
been taken since 2018 by the European standardization organizations CEN and
CENELEC.83

2.4.3 Security Risks Due to Malicious Use of AI


Security issues also play a crucial role. AI is a dual-use technology that can be used
both for beneficial and harmful ends, bringing enormous security risks not only to
individuals, governments, industries, and organizations but also to the future of
humanity. Malicious use of AI could, as a recent report suggests,84 threaten physical
security (e.g., non-state actors weaponizing consumer drones), digital security (e.g.,
through criminals training machines to hack), and political security (e.g., through
privacy-eliminating surveillance, profiling, and repression, or through automated
and targeted disinformation campaigns). As AI capabilities become more powerful
and widespread, the authors of the report expect (i) an expansion of existing threats
(because the costs of attacks may be lowered and AI might enable larger-scale and
more numerous attacks), (ii) an introduction of new threats (enabling tasks that
would be otherwise impractical for humans), and (iii) a change to the typical

78
Commission Staff Working Document, “Evaluation of the Machinery Directive,” SWD (2018)
161 final 38.
79
UK Science and Technology Committee, “Robotics and Artificial Intelligence, Fifth Report,”
Session 2016‒17, HC 145<www.publications.parliament.uk/pa/cm201617/cmselect/cmsctech/
145/145.pdf>.
80
Jacobs, “Report on Regulatory Barriers, Robotics Coordination Action for Europe Two,” Grant
Agreement Number: 688441, 3 March 2017.
81
<www.iso.org/committee/6794475.html>.
82
<www.ansi.org/>.
83
Schettini Gherardini, “Is European Standardization Ready to Tackle Artificial Intelligence?,”
19 September 2018 <www.linkedin.com/pulse/european-standardization-ready-tackle-artificial-
bardo/>.
84
Brundage et al., “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and
Mitigation,” arXiv preprint arXiv:1802.07228, 2018. Cf. also King et al., “Artificial Intelligence
Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions” (2019) Science and
Engineering Ethics <https://doi.org/10.1007/s11948-018-00081-0>.


character of threats (because AI enables more effective, finely targeted, difficult-to-attribute attacks).
In light of these considerations, one key question for future regulation is: What
safeguards should be put in place to prevent the malicious use of AI systems and
smart robots? Are the existing security regulations sufficient or do we need new rules
specifically tailored to the risks posed by AI?

2.5 accountability, liability, and insurance for autonomous systems
Closely related to these questions is the issue of accountability and liability for
autonomous systems.

2.5.1 Emerging Questions


The use of semi-autonomous and autonomous systems leads to a loss of human
control over the system and its “actions.” With the increasing independence of
technical systems, people’s ability to influence technology is diminishing. The more
complex the tasks assigned to machines, the greater the probability that the result
will not correspond to the user’s, the systems owner’s/keeper’s, and/or the manufac-
turer’s ideas and wishes.
This growing degree of autonomy inevitably raises the question of who is respon-
sible if the autonomous AI system “makes” a declaration of intent to conclude a
contract, “violates” a contractual obligation, or “commits” a wrong or even a crime.
All major legal systems around the world are based on the premise that only natural
and legal persons have legal capacity and are thus actors. From this anthropocentric
perspective, technical artifacts are seen only as tools used by humans. It is precisely
this perspective, however, that turns out to be problematic as the degree of auton-
omy of machines increases. With increasing automation, it becomes more and more
difficult to identify a responsible person as the author of declarations of intent, to
whom it is possible to assign responsibility in order to establish liability:

 Is it even possible to attribute a computer-generated declaration to a human if the person in question has no concrete idea what exactly the system will do?
 What happens if the software agent, like a falsus procurator, misrepre-
sents a third party as the principal?
 Who is liable to pay damages if a largely autonomous machine causes
damage? The manufacturer of the machine who has originally
developed the autonomous system? The operator who is actually run-
ning the system by providing the required data, overseeing possible
machine-learning processes and pushing necessary updates? The systems

owner/keeper or the user of the autonomous system? Or does the injured party, in the end, have to bear the costs, since no responsible person can be found?
 Do we need special rules in contract and tort law to tackle the allocation
problems caused by the use of autonomous systems?

2.5.2 Overview of Opinions


All these questions have triggered a lively debate in the literature both in the USA
and in Europe.85 The solutions proposed to overcome these difficulties vary widely.
For contract law, for example, consideration is being given to (i) modifying
contract doctrine by relaxing the requirement of intentionality in contract-making;
(ii) understanding computers as mere tools or legal agents; (iii) denying validity to
transactions generated by autonomous systems; and (iv) granting legal personhood to
software agents.
A similarly broad spectrum of opinion exists in tort law. Here, the suggestions
range from: (i) applying or expanding existing doctrines, for example by treating AI
systems as we would employees or other assistants, minors, or animals – or by
drawing on the existing liability measures such as the guardian liability in France;
(ii) revising product liability law; (iii) introducing new strict liability regimes; to,
once again, (iv) granting legal personhood to software agents.

2.5.3 Revising (Product) Liability Law in the European Union

2.5.3.1 Product Liability Law


In the European Union, product liability has been fully harmonized in all Member
States through the Product Liability Directive 85/374/EEC, which establishes a

85
Cf. the extensive references in note 94. For the discussion in the USA, cf. moreover Geistfeld,
“A Roadmap for Autonomous Vehicles: State Tort Liability, Automobile Insurance, and
Federal Safety Regulation” (2017) 105(6) California Law Review 1611; Hubbard, “‘Sophisticated
Robots’: Balancing Liability, Regulation, and Innovation” (2014) 66(5) Florida Law Review
1803; Karnow, “The Application of Traditional Tort Theory to Embodied Machine Intelli-
gence” in Calo, Froomkin, and Kerr (eds), Robot Law (Edward Elgar Publishing 2016) 51 et
seq.; Selbst, “Negligence and AI’s Human Users,” Boston University Law Review, forthcoming
<www.ssrn.com/abstract=3350508>. For the European discussion cf. Pagallo, The Laws of
Robots: Crimes, Contracts, and Torts, (Springer 2013); Ebers, “La utilización de agentes
electrónicos inteligentes en el tráfico jurídico: ¿Necesitamos reglas especiales en el Derecho
de la responsabilidad civil?,” InDret 3/2016 <www.indret.com/pdf/1245.pdf>; Ebers, “Autono-
mes Fahren: Produkt- und Produzentenhaftung” in Oppermann and Stender-Vorwachs (eds),
Autonomes Fahren (CH Beck 2017) 93 et seq. <https://ssrn.com/abstract=3192911>; Wagner,
“Produkthaftung für autonome Systeme” (2017) 217 Archiv für die civilistische Praxis (AcP) 707.
Cf. also Navas, Chapter 5, and Janal, Chapter 6 in this book.


system of strict liability, that is, liability without fault, for producers when a defective
product causes physical or material damage to the injured person. Whether this
directive is sufficient to take into account the special features of AI systems and
robots is controversial.
First of all, it is not clear whether the directive, with its definition of “product,”86
also covers non-tangible AI software and especially cloud technologies. Second, the
directive only applies to products and not to services.87 Companies providing
services such as (real-time) data services, data access, data-analytics tools, and
machine-learning libraries are therefore not liable under the Product Liability
Directive88 so that national (non-harmonized) law decides whether the (strict)
liability rules developed for product liability can be applied accordingly to services.
Third, there is the problem that, under Art 4 Product Liability Directive, the
injured party must prove that the product was defective when it was put into
circulation. This is precisely what is difficult with learning AI systems. Is an unin-
tended autonomous behavior of an AI system or an advanced robot a defect? Can
the producer invoke the “development risks defence” admitted by Art 7(e) of the
directive and claim an exemption from liability, arguing that he could not have
foreseen that the product would not provide the safety a person could expect? How
can a defect be proven at all,89 if the product’s behavior changes over its lifetime through learning experiences over which the manufacturer no longer has any influence once the product is launched onto the market? And what about cybersecurity? Could a software vulnerability (for instance, a cyber-attack, a failure to update security software, or a misuse of information) be considered a defect?

86
According to Art 2(1) Product Liability Dir., “product” means all movables even if incorporated
into another movable or into an immovable. The directive, however, is silent on whether
movables need to be tangible. Given that Art 2(2) explicitly includes an intangible item like
electricity, this could mean that tangibility is not a relevant criterion in terms of the directive.
On the other hand, it could be argued that electricity is an exception which cannot be
generalized.
87
Cf. ECJ, 21.12.2011, case C-495/10 (Dutrueux), ECLI:EU:C:2011:869; Commission Staff
Working Document, Evaluation of Council Directive 85/374/EEC of 25 July 1985 on the
approximation of the laws, regulations, and administrative provisions of the Member States
concerning liability for defective products, SWD(2018) 157 final 7. Cf. also the failed proposal
for a Council Directive on the Liability of Suppliers of Services, COM(90) 482 final, OJ 1990
C 12/8. The new Digital Content Directive (DCD) does not change this either, as damages are
left to national law; cf. Art 3(10) DCD.
88
Service providers could only be liable if they manufacture the product as part of their service; if
they put their name, trade mark, or other distinguishing feature on the product; or if the they
import the product into the EU. However, they do not incur any product liability for the service
rendered by them.
89
According to Borghetti, “How Can Artificial Intelligence be Defective?” in Lohsse, Schulze,
and Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Nomos
2019) 63, 71, “defectiveness is not an adequate basis for liability,” because in most circum-
stances, “it will be too difficult or expensive to prove the algorithm’s defect.”


Finally, the question arises whether the definition of damages is adequately laid
out in the directive, since it does not cover all types of possible damages, especially
with regard to the damages which can be caused by new technological develop-
ments, such as economic losses, privacy infringements, or environmental damages.
With these factors in mind, the European Commission is currently in the process
of assessing whether national and EU safety and liability frameworks are fit for
purpose considering these new challenges, or whether any gaps should be addressed.
In the coming months, a report is to be drawn up on this subject, supplemented by
guidance on the interpretation of the Product Liability Directive in light of techno-
logical developments, to ensure legal clarity for consumers and producers in the
event of defective products.90

2.5.3.2 Beyond Product Liability Law


Beyond product liability law, the issue remains as to when other persons are liable,
in particular the operator, the owner/keeper, or the user. As these persons do not
usually act negligently due to the high degree of autonomy of the AI system,91 they
can only be held accountable if there is strict liability. However, such a liability
regime is usually lacking. Many legal orders are based on the principle of fault
liability and only have specific rules of strict liability which are not open to
analogy.
In its Civil Law Rules on Robotics resolution of 16 February 2017, the European
Parliament suggested introducing a system of registration for specific categories of
advanced robots and adopting a future legislative instrument that should be based
either on strict liability or on a risk management approach, in each case supple-
mented by an obligatory insurance scheme backed up by a fund to ensure that
reparation can be made for damages in cases where no insurance cover exists.92
Which persons should be liable has been left open by the European Parliament
resolution,93 which merely emphasizes in general terms that, according to the risk
management approach, the person liable should be the one who is able “to minim-
ize risks and deal with negative impacts.” Once the parties bearing the ultimate
responsibility have been identified, “their liability should be proportional to the
actual level of instructions given to the robot and of its degree of autonomy.”
According to the European Parliament, therefore, the greater a robot’s learning
capability or autonomy, and the longer a robot’s training, the greater the responsi-
bility of its trainer should be.

90
European Commission, Communication “Coordinated Plan on Artificial Intelligence,” COM
(2018) 795 final 8.
91
Selbst (n 85).
92
European Parliament, Resolution (n 21), Nos 2, 53, 57, 58.
93
Critical Lohmann (n 21) 170.


Overall, the European Parliament’s proposals remain very vague. There is no detailed discussion of who should be liable and under what conditions, nor does the resolution take into account the numerous proposals discussed by scholars.

2.5.4 A Specific Legal Status for AI and Robots?


Another option that has been discussed for some time to overcome the autonomy
problem is the conferral of (limited) legal personhood on robots and AI systems.94
This idea was taken up by the European Parliament in its resolution of 16 February
2017, with the suggestion that the legislature should consider:
creating a specific legal status for robots in the long run, so that at least the most
sophisticated autonomous robots could be established as having the status of
electronic persons responsible for making good any damage they may cause, and
possibly applying electronic personality to cases where robots make autonomous
decisions or otherwise interact with third parties independently.95

This proposal has been strongly criticized, including in an open letter from a group of “Artificial Intelligence and Robotics Experts” in April 2018,96 which called for the idea of creating the legal status of an “electronic person” to be discarded from both a technical perspective and a normative, in other words legal and ethical, viewpoint.
Indeed, the introduction of a legal personhood for AI systems and/or robots is
problematic for several reasons. First, it is questionable how AI systems and/or robots
can be identified at all. Should personhood be conferred on the hardware, the
software, or some combination of the two? To make matters worse, the hardware
and software may be dispersed over several sites and maintained by different
individuals. They may be copied, deleted, or merged with other systems at very
low cost. Even if software agents and/or robots had to be registered in the future,
there would be a number of cases in which the “acting” machine could not be

94
Solum, “Legal Personhood for Artificial Intelligence” (1992) 70 North Carolina Law Rev 1231;
Karnow, “Liability for Distributed Artificial Intelligence” (1996) 11 Berkeley Technol Law J 147;
Allen and Widdison, “Can Computers Make Contracts?” (1996) 9 Harvard Journal of Law &
Technology 26; Sartor, “Agents in Cyber Law” in Proceedings of the Workshop on the Law of
Electronic Agents, CIRSFID (LEA02) (Gevenini 2002) 7; Teubner, “Rights of Non-humans?
Electronic Agents and Animals as New Actors in Politics and Law” (2006) 33 Journal of Law &
Society 497, 502; Matthias, Automaten als Träger von Rechten. Plädoyer für eine Gesetzesänder-
ung, PhD Thesis, Berlin 2007; Chopra and White, A Legal Theory for Autonomous Artificial
Agents, 2011. For an overview of the different concepts cf. Koops, Hildebrandt, and Jaquet-
Chiffelle, “Bridging the Accountability Gap: Rights for New Entities in the Information
Society?” (2010) 11(2) Minnesota Journal of Law, Science & Technology 497; Pagallo, “Apples,
Oranges, Robots: Four Misunderstandings in Today’s Debate on the Legal Status of AI
Systems” (2018) Philosophical Transactions of the Royal Society A376.
95
European Parliament, Resolution (n 21), No 59.
96
<www.robotics-openletter.eu/>.


identified as a person at all. The introduction of a specific legal status for machines
would therefore by no means solve all liability problems.
The second problem is that the electronic agent would have to be equipped with
its own assets in order to compensate victims. Such a solution raises, first of all, the
question of who should make the assets available: The manufacturer? The operator?
The keeper/owner or the user? All of them? Or the robot itself, depending on the
profit it makes? Additionally, it remains unclear how the relevant funds should be
paid out in the event of damages. If strict liability were applied here, it is not clear
what advantages the introduction of a legal personhood would bring over introdu-
cing a stricter tort law. All these considerations show that creating a legal person-
hood for machines does not seem economically very efficient, as the same purpose
can be more easily achieved simply by introducing strict liability and/or requiring
insurance.97
Last but not least, many fear that the agenthood of artificial agents could be a
means of shielding humans from the consequences of their conduct.98 Damages
provoked by the behavior and decisions of AI systems would not fall on the
manufacturers, keepers, etc. Instead, only AI systems would be liable. Moreover,
there is the danger of machine insolvency: “Money can flow out of accounts just as
easily as it can flow in; once the account is depleted, the robot would effectively be
unanswerable for violating human legal rights.”99
All in all, the decision to confer a legal personality on an autonomous system
would most likely lead to more questions and problems than solutions.

2.6 privacy, data protection, data ownership, and access to data

2.6.1 The Interplay between Data and Algorithms


The current success of AI systems is based not only on the accessibility of cheap,
robust computational power and ever more sophisticated algorithms, but also – and
above all – on the availability of large amounts of data.
The more data is available to a learning algorithm, the more it can learn. In a
ground-breaking paper, Banko and Brill showed in 2001 that the amount of data
used to train ML algorithms has a greater effect on prediction accuracy than the type

97
Nevejans, “Citizens’ Rights and Constitutional Affairs – Legal Affairs, European Civil Law Rules
in Robotics.” Study, European Union 2016 <https://www.europarl.europa.eu/RegData/etudes/
STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf> 15; Keßler, “Intelligente Roboter – neue
Technologien im Einsatz” (2017) MultiMedia und Recht (MMR) 593.
98
Bryson, Diamantis, and Grant, “Of, for, and by the People: The Legal Lacuna of Synthetic
Persons” (2017) 25 Artificial Intelligence and Law 273.
99
Bryson, Diamantis, and Grant (n 98) 288.


of ML method used.100 Or, as Peter Norvig, chief scientist at Google, puts it: “We
don’t have better algorithms than anyone else. We just have more data.”101 This is
precisely one of the reasons why some of the most successful companies today are
those that have the most data on which to train their algorithms.
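
The point can be illustrated with a short, purely illustrative sketch (not drawn from Banko and Brill’s original experiments): two standard text classifiers are trained on increasingly large slices of the same corpus, so that the effect of data volume on accuracy can be compared with the effect of swapping the model. The dataset, the two models, and the slice sizes are assumptions chosen only for illustration.

```python
# Illustrative sketch only: compares the effect of training-set size vs. model choice
# on prediction accuracy, in the spirit of the observation cited above.
# The dataset (20 newsgroups), the two models, and the slice sizes are assumptions.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load a standard text-classification corpus and vectorize it.
data = fetch_20newsgroups(subset="all", remove=("headers", "footers", "quotes"))
X = TfidfVectorizer(max_features=20000).fit_transform(data.data)
X_train, X_test, y_train, y_test = train_test_split(
    X, data.target, test_size=0.2, random_state=0
)

models = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

# Train each model on progressively larger fractions of the training data.
for frac in (0.01, 0.1, 0.5, 1.0):
    n = max(1, int(frac * X_train.shape[0]))
    for name, model in models.items():
        model.fit(X_train[:n], y_train[:n])
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{name:>20} | {n:6d} training examples | accuracy = {acc:.3f}")
```

In line with the observation cited above, one would typically expect the accuracy gains from larger training slices to outweigh the differences between the two models, although results will of course vary with the corpus and the models chosen.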
The race for AI is particularly influenced by the network effects that are already
known from the platform economy: the more users a company has, the more
personal data can be collected and processed to train the algorithms. This in turn
leads to better products and services, which results in more customers and more
data. In view of these network effects, some fear that the market for AI systems will
become oligopolistic with high barriers to entry.102 According to Pedro Domingos,
“Control of data and ownership of the models learned from it is what many of the
twenty-first century’s battles will be about – between governments, corporations,
unions, and individuals.”103
A number of very different questions arise from consideration of these points:
When should companies and governments be allowed to process personal data
using big-data analysis? Is (European) data protection law compatible with big-
data and AI systems? Who “owns” personal and non-personal data? How can
companies protect investments that flow into big-data analysis? Should we recognize
“data ownership” or “data producer’s rights”? To what extent must competitors be
given the opportunity to gain access to data from other companies?

2.6.2 Privacy, Data Protection, and AI Systems

2.6.2.1 How AI Systems and Robots Threaten Privacy


AI systems challenge current understandings of privacy. Most AI technologies have a
deleterious impact on the right to privacy. On the one hand, AI systems based on
ML cannot work without data. On the other hand, without AI systems it would not
be possible to “understand” many of the unstructured masses of data. In a nutshell:
100
Banko and Brill, “Scaling to Very Very Large Corpora for Natural Language Disambiguation,”
paper presented at Proceedings of the 39th Annual Meeting on Association for Computational
Linguistics, 2001.
101
Norvig, quoted by Cleland, “Google’s ‘Infringenovation’ Secrets,” Forbes, 3 October 2011 <www
.forbes.com/sites/scottcleland/2011/10/03/googles-infringenovation-secrets/#78a3795430a6>.
102
Mayer-Schönberger and Ramge, Reinventing Capitalism in the Age of Big Data (John Murray
2018). Some critics point out that as few as seven for-profit institutions – Google, Facebook,
IBM, Amazon, Microsoft, Apple, and Baidu in China – hold AI capabilities that vastly outstrip
all other institutions; Iyengar, “Why AI Consolidation Will create the Worst Monopoly in US
History,” TechCrunch, 24 August 2016 <https://techcrunch.com/2016/08/24/why-ai-consolida
tion-will-create-the-worst-monopoly-in-us-history/>; Quora, “What Companies Are Winning
the Race for Artificial Intelligence?” Forbes, 24 February 2017 <www.forbes.com/sites/quora/
2017/02/24/what-companies-are-winning-the-race-for-artificial-intelligence/#7a5025eaf5cd>.
103
Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will
Remake Our World (Basic Books 2015) 45.


personal data is increasingly both the source and the target of AI applications.
Accordingly, AI technologies create strong incentives to collect and store as much
additional data as possible in order to gain meaningful new insights. This trend is
further reinforced by the shift to ubiquitous tracking and surveillance through
“smart” devices and other networked sensors omnipresent in the IoT. AI amplifies
large-scale surveillance through techniques that analyze video, audio, images, and
social media content across entire populations. The spread of smart robots in
everyday life contributes to this development. As Ryan Calo104 points out, robots
not only greatly facilitate direct surveillance; they also introduce new points of
access to historically protected spaces. Moreover, through becoming increasingly
human-like, the social nature of robots may lead to new varieties of highly sensitive
personal information.
In light of this development, there is growing doubt as to whether the existing data
protection rules are sufficient to ensure adequate protection. This is particularly the
case in countries such as the USA, where data protection legislation is a patchwork
of sector-specific laws that fail to adequately protect privacy.105

2.6.2.2 Friction between Big-Data Practices Based on AI and the GDPR


The same cannot be said for the European Union. Since the General Data Protec-
tion Regulation (GDPR) became applicable in May 2018, a high standard of personal
data protection has been introduced in all Member States – at least in theory.
However, there are increasing doubts as to whether the GDPR properly addresses
the surge in big-data practices and AI systems.
The GDPR applies to all personal data, meaning any information relating to an
identified or identifiable natural person (Art 4(1) GDPR). As most of the data that
drives AI systems is either directly linked to a person, or, if anonymized, at least
identifiable by an algorithm,106 the GDPR will, as a rule, apply both when AI is under
development (since it governs the collection and use of data in generating ML
models) and also, under certain limited conditions, when it is used to analyze or
reach decisions about individuals. However, there are no data protection rights or

104
Calo, “Robots and Privacy” in Lin, Abney, and Beke (eds), Robot Ethics: The Ethical and
Social Implications of Robotics (MIT Press 2012) 187 et seq.
105
According to Solove, “Privacy and Power: Computer Databases and Metaphors for Information
Privacy” (2001) 53 Stanford Law Review 1393, 1430, the US system of data protection is one
which “uses whatever is at hand [. . .] to deal with the emerging problems created by the
information revolution.”
106
In the era of big data, anonymous information can be de-anonymized by employing related and
non-related data about a person; Barocas and Nissenbaum, “Big Data’s End Run around
Anonymity and Consent” in Julia Lane et al. (eds), Privacy, Big Data and the Public Good
(Cambridge University Press 2014) 49 et seq.; Floridi, The 4th Revolution (Oxford University
Press 2014) 110; Rubinstein and Hartzog, “Anonymization and Risk” (2016) 91 Washington Law
Review 703, 710‒711.


obligations concerning the ML models themselves in the period after they have
been built but before any decisions have been taken about using them. As a rule, ML
models do not contain any personal data, but only information about groups and
classes of persons.107 Although algorithmically designed group profiles may have a
big impact on a person,108 (ad hoc) groups are not recognized as holders of privacy
rights. Hence, automated data processing by which individuals are clustered into
groups or classes (based on their behavior, preferences, and other characteristics)
creates a loophole in data protection law, pointing toward the need to recognize in
the future some type of “group privacy” right.109
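
For readers who want to see what such group-level processing looks like in code, the following is a minimal, purely hypothetical sketch: individual behavioral records are clustered, and the fitted model retains only aggregate group profiles (cluster centroids) rather than the underlying personal records, which is the kind of group- or class-level information referred to above. All feature names and values are invented.

```python
# Illustrative sketch: individuals are clustered into behavioral groups; the fitted
# model stores only group-level centroids, not the underlying personal records.
# All feature names and values are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(seed=0)

# Hypothetical per-user behavioral features:
# [pages visited per day, average session minutes, share of night-time activity]
users = rng.normal(loc=[20, 15, 0.2], scale=[8, 6, 0.1], size=(1000, 3))

scaler = StandardScaler().fit(users)
model = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaler.transform(users))

# What the model retains: one aggregate profile (centroid) per group.
group_profiles = scaler.inverse_transform(model.cluster_centers_)
for i, profile in enumerate(group_profiles):
    print(f"group {i}: pages/day={profile[0]:.1f}, "
          f"minutes/session={profile[1]:.1f}, night share={profile[2]:.2f}")

# A new individual can still be assigned to a group profile on the fly.
new_user = scaler.transform([[35, 40, 0.6]])
print("assigned group:", int(model.predict(new_user)[0]))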
Beyond the issue of group privacy there is a series of further issues that show how
little the GDPR takes into account the peculiarities of AI systems, self-learning
algorithms, and big-data analytics, as many basic concepts and rules are in tension
with these practices.110
First of all, the principle of purpose limitation (Art 5(1)(b) GDPR) is at odds with
the prospect of big-data analyses.111 According to this principle, personal data must
be collected for specified, explicit, and legitimate purposes and not further processed
in a way incompatible with those purposes. However, analyzing big data quite often
involves methods and usage patterns which neither the entity collecting the data nor
the data subject considered or even imagined at the time of collection. Additionally,

107
This could change due to evolving technologies. Cf. in particular Veale, Binns, and Edwards,
“Algorithms that Remember: Model Inversion Attacks and Data Protection Law,” Philosophical
Transactions of the Royal Society A 376: 20180083, <http://dx.doi.org/10.1098/rsta.2018.0083>,
with the assumption that new forms of cyber attacks are able to reconstruct training data (or
information about who was in the training set) in certain cases from the model.
108
As Hildebrandt, “Slaves to Big Data. Or Are We?” (2013) IDP Revista De Internet, Derecho y
Política 27, 33 et seq., notes: “If three or four data points of a specific person match inferred data
(a profile), which need not be personal data and thus fall outside the scope of data protection
legislation, she may not get the job she wants, her insurance premium may go up, law
enforcement may decide to start checking her email or she may not gain access to the
education of her choosing.”
109
For further discussion, see Mittelstadt, “From Individual to Group Privacy in Biomedical Big
Data” in Cohen, Lynch, Vayena, and Gasser (eds), Big Data, Health Law, and Bioethics
(Cambridge University Press 2018) 175 et seq.; Taylor, Floridi, and van der Sloot (eds), Group
Privacy: New Challenges of Data Technologies (1st edn, Springer 2017).
110
Zarsky, “Incompatible: The GDPR in the Age of Big Data” (2017) 47(4) Seton Hall Law Review
995; Humerick, “Taking AI Personally: How the E.U. Must Learn to Balance the Interests of
Personal Data Privacy & Artificial Intelligence” (2018) 34 Santa Clara High Tech Law Journal
393. In contrast, the Information Commissioner’s Office (ICO) in the UK does “not accept the
idea that data protection, as currently embodied in legislation, does not work in a big data
context,” ICO, “Big Data, Artificial Intelligence, Machine Learning, and Data Protection,”
20170904 (Version 2.2) 95. Cf. also Pagallo, “The Legal Challenges of Big Data: Putting
Secondary Rules First in the Field of EU Data Protection” (2017) 3 European Data Protection
Law Review 36, with reference to two possible solutions to make the collection and use of Big
Data compatible with the GDPR: the use of pseudonymization techniques and the exemption
of data processing for statistical purposes.
111
Forgó, Hänold, and Schütze, “The Principle of Purpose Limitation and Big Data” in Corrales,
Fenwick, and Forgó (eds), New Technology, Big Data and the Law (Springer 2017) 17 et seq.


when it comes to ML algorithms it may be difficult to define the purpose of processing already at the stage of data collection, because it is not possible to predict what the algorithm will learn. Informing data subjects of future forms of processing may prove costly, difficult, or even impossible.
The principle of data minimization (Art 5(1)(c) GDPR) also represents a challen-
ging issue. Both big-data and ML algorithms need a large amount of data to produce
useful results. Arguably, the principle of data minimization does not mean that data
controllers should always collect as little data as possible, but only that the quantity of data collected must be adequate for, and related to, the purpose.112 Nevertheless,
this principle potentially undermines the utility and benefits of big-data analyses.
A third problem is that the GDPR establishes a special regime for particularly
sensitive data, for example, data revealing not only racial or ethnic origin, political
opinions, religious or philosophical beliefs, or trade union membership, but also
genetic data, biometric data, and data concerning health, sex life, or sexual orienta-
tion (Art 9 GDPR). While the justification for setting a higher level of protection for
special categories of data is intuitive, new forms of enhanced analytics challenge the
ability to draw a clear distinction between “normal” personal data and “sensitive”
data. After all, even an analysis merely relying on “regular” categories can quickly
end up revealing sensitive data.
Finally, AI-driven technologies also call into question another fundamental
principle of data protection law, namely the principle of consent. How can data
controllers possibly provide consent notices to individuals for potential secondary
purposes that are yet to exist or have not been conceived? How can individuals have
information regarding all of the possible implications communicated to them in
comprehensible form, and be afforded the opportunity to understand what it is that
they are being asked to consent to? How can algorithm-based profiling, nudging,
and manipulation113 be reconciled with freedom of choice and the idea of data
protection as data subjects’ control over their information?114
All these considerations show how little the new GDPR is compatible with big-
data analysis and AI products. Whether companies can comply with the require-
ments of the GDPR has yet to be proven. At the end of the day, much will depend
on how the regulation is interpreted by the courts and applied in practice. In this

112
Noto La Diega, “Against the Dehumanisation of Decision-Making. Algorithmic Decisions at
the Crossroads of Intellectual Property, Data Protection, and Freedom of Information” (2018)
9(1) jipitec (Journal of Intellectual Property, Information Technology and E-Commerce Law) 1.
113
Cf. Section 2.7.1.
114
Council of Europe, “Report on Artificial Intelligence. Artificial Intelligence and Data Protection:
Challenges and Possible Remedies,” report by Alessandro Mantelero, T-PD(2018)09Rev <https://
rm.coe.int/artificial-intelligence-and-data-protection-challenges-and-possible-re/168091f8a6> 7.
To address these issues, legal scholars have highlighted the potential role of transparency, or risk
assessment as well as more flexible forms of consent, such as broad consent and dynamic consent;
Mantelero, “Regulating Big Data. The guidelines of the Council of Europe in the Context of the
European Data Protection Framework” (2017) 33(5) Computer Law and Security Review 584.


respect, two (extreme) scenarios are conceivable.115 On the one hand, the GDPR
might allow EU citizens to benefit from enhanced data protection, while still
enjoying the innovations data analytics bring about.116 On the other hand, the
GDPR could threaten the development of AI, creating high market-entry barriers
for companies developing and/or using AI systems. According to this view, over-
regulation of personal data would lead to limited research and use of AI products.
Recent surveys show that such a scenario is not unlikely: many companies see data
protection as an obstacle to competition and are already complaining that AI
products cannot be developed and distributed in the EU due to the strict rules.117
For all these reasons, a thorough balancing seems necessary. If the EU wants to
keep up with the global race to AI, it must carefully balance its interest in protecting
personal data against its interest in developing new AI technologies.

2.6.3 Data Ownership v Data Access Rights

2.6.3.1 Protection of Data As (Intellectual) Property Rights?


Data has become the “new currency” in the digital world.118 Data is collected by a
variety of companies and converted into a valuable commercial product, which
pays for many of the “free” services most consumers nowadays take for granted.
Originally, Art 3(1) of the proposal for an EU Digital Content Directive119 explicitly
mentioned the possibility of regarding personal data as a counter-performance (con-
sideration) for the services received.120 In business-to-business (B2B) relationships, the
possibility that (non-personal) data can be the subject of contractual agreements as
115
Zarsky (n 110).
116
Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and
Technology (Edward Elgar Publishing 2015) 211.
117
Cf. Delponte, “European Artificial Intelligence (AI) leadership, the path for an integrated
vision,” Study requested by the ITRE committee of the European Parliament, PE 626.074,
September 2018, Figure 3 (Key barriers inhibiting faster deployment of AI systems in Europe)
17. According to surveys conducted by Bitkom, Germany’s IT and telecommunications indus-
try association, almost two-thirds of companies in Germany also say that data protection is an
obstacle to the use of new technologies; (2018) Redaktion MMR-Aktuell 406071.
118
Eggers, Hamill, and Ali, “Data as currency” (2013) 13 Deloitte Review 18 ff.; Langhanke and
Schmidt-Kessel, “Consumer Data as Consideration” (2015) Journal of European Consumer and
Market Law (EuCML) 218; The European Commission, “Communication Building a Euro-
pean Data Economy,” COM(2017) 9 final, predicts that the value of the European data
economy will increase to EUR 643 billion by 2020, representing 3.17% of the overall EU GDP.
119
Art 3(1) of the Proposal for a Directive on certain aspects concerning contracts for the supply of
digital content, COM(2015) 634 final, stated that the Directive “shall apply to any contract
where the supplier supplies digital content to the consumer or undertakes to do so and, in
exchange, a price is to be paid or the consumer actively provides counter-performance other
than money in the form of personal data or any other data.”
120
By contrast, Art 3(1) of the Digital Content Directive no longer uses the term “counter-
performance” in order to mitigate the concerns about treating personal data as a commodity;
cf. the concerns of the European Data Protection Supervisor, Opinion 4/2017 on the Proposal

commodities has long been recognized.121 However, the problem with every “con-
tractual” approach is that contractual obligations are only binding inter partes.
Consequently, third parties cannot be prevented legally by contracts from using the
data. In light of these considerations, there is an intensive discussion, especially in
Europe, about whether a(n) (intellectual) property right in personal and/or non-
personal data with erga omnes effect should be recognized.122

personal data The discussion about possible property rights in data is not new. US
scholars have been debating whether personal information should be viewed as
property since the early 1970s.123 The current debate, however, is based on very
different premises. As Purtova points out, the propertization of personal information
was viewed in the USA mainly as an alternative to the existing data protection regime
and one of the ways to fill in the gaps in the US data protection system.124 It is different
in Europe, where the GDPR provides a comprehensive set of data protection rules
that in the end would interfere with the recognition of property rights in personal data.
First of all, as the European Commission points out, such a property right would be
incompatible with the fact that “the protection of personal data enjoys the status of a
fundamental right in the EU.”125 In addition, a property right in personal data would
be inconsistent with Art 7(3) GDPR, according to which consent can be withdrawn
even against the will of the entitled legal entity. Finally, even if a right to one’s data was
constituted, it would remain a challenge to assign such a right to one single person, as
most personal data relates to more than one data subject.126

non-personal data Admittedly, these problems do not exist with non-personal data (“pure” machine-generated data). As non-personal data is neither protected by
data protection law nor as such by (European) IP law,127 some scholars have recently

for a Directive on certain aspects concerning contracts for the supply of digital content (14
March 2017) 7‒9 and 16‒17.
121
COM (2017) 228 final, under 3.2; SWD (2017) 2 final 16; cf. Berger, “Property Rights to Personal
Data? – An Exploration of Commercial Data Law” (2017) Zeitschrift für geistiges Eigentum
(ZGE) 340: “data contract law lies at the heart of commercial data law.”
122
For an overview of the academic discussion in several countries cf. Osborne Clarke LLP,
“Legal Study on Ownership and Access to Data,” Study prepared for the European Commis-
sion DG Communications Networks, Content & Technology, 2016.
123
Westin, Privacy and Freedom (Atheneum 1967); Lessig, “Privacy as Property” (2002) 69(1)
Social Research: An International Quarterly of Social Sciences 247; Schwartz, “Property, Privacy
and Personal Data” (2004) 117(7) Harvard Law Review 2055.
124
Purtova, “Property Rights in Personal Data: Learning from the American Discourse” (2009) 25
Computer Law & Security Review 507.
125
European Commission, “Staff Working Document on the free flow of data and emerging
issues of the European data economy accompanying the document Communication Building
a European data economy,” 10 January 2017, SWD (2017) 2 final 24.
126
Purtova, “Do Property Rights in Personal Data Make Sense after the Big Data Turn?: Individ-
ual Control and Transparency” (2017) 10(2) Journal of Law and Economic Regulation 64.
127
Raw machine-generated data are not protected by existing IP rights since they are not deemed
to be the result of an intellectual effort and/or have no degree of originality. Likewise, the

argued in favor of the creation of a new property right with the objective of
enhancing the tradability of anonymized machine-generated data.128 The European
Commission also temporarily considered the introduction of a “data producer’s
right” with the aim of “clarifying the legal situation and giving more choice to the
data producer, by opening up the possibility for users to utilize their data.”129
There are serious concerns about the introduction of such a right, however. First,
there is no practical need for such a property right, since companies can effectively
control the access to “their” data by technical means. Second, companies “possess-
ing” data are protected through a number of other legal instruments (e.g., tort and
criminal law) against destruction, certain impediments to access and use, and against
compromising their integrity.130 Third, the legal discussion has shown that the
specification of the subject matter and the scope of protection seems to be extremely
difficult in regard to data.131 Last but not least, the introduction of an exclusive right to
data carries the serious risk of an inappropriate monopolization of data.132 Granting
data holders an absolute (intellectual) property right over data would strengthen their
(dominant) position, increasing entry barriers for competitors.
It is therefore fitting that the European Commission no longer appears to be
pursuing the discussion on the introduction of data ownership rights and is instead
concentrating on the question of how to deal with data-driven barriers to entry.

2.6.3.2 Access to Data


The European Commission acknowledges a growing concern that the control of
large volumes of data could lead to situations of market power.133 In the same vein, the

Database Directive 96/9/EC does not protect data as such, but only data originating from a
protected database. Similarly, the Trade Secrets Directive 2016/943, does not grant an absolute
right to data but is based on the maintenance of factual secrecy; as Wiebe, “Protection of
Industrial Data – A New Property Right for the Digital Economy?” (2016) Gewerblicher
Rechtsschutz und Urheberrecht, Internationaler Teil (GRUR Int.) 877, points out: “Once
secrecy is lost, legal protection is lost as well.”
128
Cf. in particular Zech, “Data as a Tradeable Commodity” in de Franceschi (ed), European
Contract Law and the Digital Single Market. The Implications of the Digital Revolution
(Intersentia 2016) 51 et seq.; Becker, “Rights in Data. Industry 4.0 and the IP-Rights of the
Future” (2017) 9 ZGE/Intellectual Property Journal (IPJ) 253.
129
European Commission, “Communication ‘Building a European Data Economy,’” COM
(2017) 2 final 13; cf. moreover “Commission Staff Working Document” (n 125) 33 et seq.
130
Kerber, “A New (Intellectual) Property Right for Non-personal Data? An Economic Analysis”
(2016) GRUR Int. 989.
131
Wiebe (n 127) 881‒883.
132
Max Planck Institute for Innovation and Competition, “Position Statement of 26 April 2017 on
the European Commission’s ‘Public consultation on Building the European Data Economy’”
6; Drexl, “Neue Regeln für die Europäische Datenwirtschaft? Ein Plädoyer für einen wettbe-
werbspolitischen Ansatz – Teil 1” (2017) Neue Zeitschrift für Kartellrecht (NZKart) 339, 343.
133
EU Commissioner Vestager, “Competition in a Big Data World,” paper presented at the
Digital Life Design (DLD) Conference, 2016. Cf. moreover Rubinfeld and S Gal, “Access
Barriers to Big Data” 2017 (59) Arizona Law Review 339; Vezzoso, “Competition Policy in a


Organisation for Economic Cooperation and Development (OECD) points out that
larger incumbents – due to the network effects previously discussed134 – are likely to
benefit from significant advantages over smaller firms and “second movers” in
collecting, storing, and analyzing large and heterogeneous types of data.135 Smaller
firms and new entrants might therefore face barriers to entry, preventing them from
developing algorithms that can effectively exert competitive pressure.
Some argue that we only need to apply competition law and split up internet
giants, as was the case with Standard Oil or AT&T in decades past.136 Others believe
that the appropriate remedy against a concentration of data in too few hands is
aggressive anti-trust action and a mandate for companies to share proprietary data
proportional to market share. In this spirit, Mayer-Schönberger and Ramge propose
in their book Reinventing Capitalism a progressive data-sharing mandate which
would require Facebook (and any similarly structured powerful player) to share
proprietary data proportional to their market share.137 However, neither demand can
in practice be realized on the basis of current competition law. According to many
legal systems, an unbundling of an entire company is only permissible – if at all – in
cases where it repeatedly violates competition law in a particularly serious
manner.138 The essential facility doctrine, under which a company in a dominant
position must grant access to a facility under specific conditions,139 does not help
either, because this doctrine only applies under “extraordinary circumstances.”140

World of Big Data” in Olleros and Zhegu (eds), Research Handbook on Digital Transform-
ations (Edward Elgar Publishing 2016) 400 et seq.
134
Cf. Section 2.6.1.
135
OECD, “Big Data: Bringing Competition Policy to the Digital Era,” 2016, <https://www.oecd
.org/competition/big-data-bringing-competition-policy-to-the-digital-era.htm>.
136
In this sense, for example Galloway, “Silicon Valley’s Tax-Avoiding, Job-Killing, Soul-Sucking
Machine,” Esquire (March 2018) <www.esquire.com/news-politics/a15895746/bust-big-tech-sil
icon-valley/?src=nl&mag=esq&list=nl_enl_news&date=020818>.
137
Mayer-Schönberger and Ramge (n 102).
138
For the EU, cf. Regulation 1/2003, recital (12): “Changes to the structure of an undertaking as it
existed before the infringement was committed would only be proportionate where there is a
substantial risk of a lasting or repeated infringement that derives from the very structure of the
undertaking.” For the USA, cf. Sec 2 of the Sherman Antitrust Act 1890: “Every person who
shall monopolize, or attempt to monopolize, or combine or conspire with any other person or
persons, to monopolize any part of the trade or commerce among the several States, or with
foreign nations, shall be deemed guilty [. . .].”
139
For the US, see MCI Commc’ns Corp v American Tel & Tel Co, 708 F.2d 1081, 1132–33 (7th
Cir. 1983); Maurer and Scotchmer, “The Essential Facilities Doctrine: The Lost Message of
Terminal Railroad” 10 March 2014, UC Berkeley Public Law Research Paper No 2407071,
<https://ssrn.com/abstract=2407071>; Pitofsky, Patterson, and Hooks, “The Essential Facilities
Doctrine under US Antitrust Law” (2002) 70 Antitrust Law Journal 443, 448. For the EU, see
ECJ, 6.4.1995, joined cases C‑241–242/91 P (RTE and ITP/Kommission – “Magill”), ECLI:EU:
C:1995:98; 29.4.2004, case C‑418/01 (IMS Health), ECLI:EU:C:2004:257; CFI, 17.9.2007, case
T‑201/04 (Microsoft/Commission), ECLI:EU:T:2007:289; Evrard, “Essential Facilities in the
European Union: Bronner and Beyond” (2004) 10 Columbia Journal of European Law 491.
140
On the question of whether data can be regarded as an essential facility, cf. from a US
perspective Sokol and Comerford, “Antitrust and Regulating Big Data” (2016) 23 George Mason


Apart from this, anti-trust law is a very limited tool for mandating access to data,
for three main reasons. First, in dynamic multi-sided markets it is very difficult to
prove the existence of a monopolistic position and/or market dominance141 and
establish clear criteria for exploitative abuse in regard to data. Second, competition
law is generally unable to limit the price that can be set by the data monopolist in
exchange for access. And third, anti-trust law does not deal effectively with situations
in which market power arises from oligopolistic coordination.142
For all these reasons, it seems more promising to create specific statutory data
access rights. In the European Union, such rights already exist in specific contexts.143
Accordingly, there are models upon which the European legislature could build.
A general right of access to data applicable to all sectors, on the other hand, does not
seem appropriate. Rather, a targeted approach is to be preferred144 which, depending
on the sector, attempts to balance the legitimate interest of persons in access to
external data with the legitimate interest of data generators (or data holders) in the
protection of their investment and – where personal data is involved – the interests of
data subjects.

2.7 algorithmic manipulation and discrimination of citizens, consumers, and markets
Self-learning algorithms are used by many companies, political parties, and other
actors to influence and manipulate citizens and consumers through microtargeting.
This raises the question of how the law can provide adequate safeguards against such

Law Review, 1129, 1158 et seq.; Balto, “Monopolizing Water in a Tsunami: Finding Sensible
Antitrust Rules for Big Data,” 2016 <http://ssrn.com/abstract=2753249>. For the European
perspective cf. Graef, “Data as Essential Facility. Competition and Innovation on Online
Platforms,” PhD Thesis, KU Leuven 2016 <https://core.ac.uk/download/pdf/34662689.pdf>;
Lehtioksa, “Big Data as an Essential Facility: The Possible Implications for Data Privacy,”
Master’s Thesis, University of Helsinki 2018 <https://www.paulo.fi/sites/default/files/inline-files/
Lehtioksa%20Jere_pro%20gradu.pdf>; Telle, “Kartellrechtlicher Zugangsanspruch zu Daten
nach der essential facility doctrine” in Hennemann and Sattler (eds), Immaterialgüter und
Digitalisierung (Nomos 2017) 73‒87.
141
Traditional approaches to market definition fail with digital platforms because many platforms
(i) work with free goods and services and (ii) are characterized by having several market sides,
which makes it very difficult to assess the competitive powers at play; cf. Podszun and Kreifels,
“Digital Platforms and Competition Law” (2016) EuCML 33.
142
OECD, Directorate for Financial and Enterprise Affairs Competition Committee, “Competi-
tion Enforcement in Oligopolistic Markets” Issues paper by the Secretariat, 16‒18 June 2015,
DAF/COMP(2015)2.
143
Cf. for example Art 6‒9 Regulation 715/2007/EC, Art 35‒36 Directive 2015/2366/EU, Art 27,
30 Regulation 1907/2006/EC, Art 30, 32 Directive 2009/72/EC and Recital 11 Directive 2010/40/
EU. The right to portability embodied in Art 20 GDPR is also based on the ratio to avoid lock-
in effects and to improve the switching process from one service provider to another.
144
Similarly, Max Planck Institute for Innovation and Competition, “Position Statement of
26 April 2017 on the European Commission’s ‘Public consultation on Building the European
Data Economy’” 11.


practices. Another problem closely related to algorithmic decision-making is the risk of discrimination. Many studies indicate that algorithms are often not value neutral,
but biased and discriminatory. Here, too, the question arises as to what extent
citizens and consumers can and should be protected. Beyond these issues, the
phenomenon of algorithmic manipulation and discrimination also poses interesting
competition law questions in cases where algorithms interact collusively.

2.7.1 Profiling, Targeting, Nudging, and Manipulation of Citizens and Consumers

2.7.1.1 The Technique of Behavioral Microtargeting


In recent years, behavioral microtargeting has developed into a new and promising
business strategy. The technique of behavioral microtargeting allows companies to
address people individually according to their profile, which is created algorithmic-
ally from personal data about the individual’s behavior and personality.145
By and large, behavioral microtargeting is based on three elements. The psycho-
metric analysis of individuals requires, first, the collection of large amounts of data.
In a second step, the collected data is evaluated by machine-learning algorithms in
order to analyze and predict certain personal traits of users: their character strengths,
but also their cognitive and volitional weaknesses. In this regard, several studies by researchers from the University of Cambridge have shown that the analysis of (neutral) Facebook “likes” allows far-reaching conclusions to be drawn about the personality of an individual.146 According to these studies, an average of 68 Facebook “likes”
suffices to determine the user’s skin color with 95% accuracy, sexual orientation
(88% accuracy), and affiliation to the Democratic or Republican party (95% accur-
acy). In addition, the studies claim that it is possible to use Facebook “likes” to
predict religious affiliation, alcohol, cigarette, and drug consumption, and whether
or not a person’s parents stayed together until that person reached the age of 21. With
the input of even more Facebook “likes,” the algorithm was able to evaluate a person
better than their friends, parents, and partners could, and could even surpass what
the person thought they knew about themselves.147
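
The second step described above, predicting a personal trait from “like” data, can be illustrated with a deliberately simplified sketch: a logistic regression classifier is trained on a synthetic, binary user-by-item matrix to predict a single hypothetical binary trait. This is not the method used in the cited Cambridge studies (which worked with dimensionality reduction over millions of real profiles); every value here is randomly generated for illustration only.

```python
# Toy sketch of trait prediction from "like" data: a logistic regression is trained
# on a binary user-by-item matrix to predict one hypothetical binary trait.
# All data are synthetic; this is not the model from the studies cited above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(seed=1)

n_users, n_items = 5000, 300
likes = rng.binomial(1, 0.05, size=(n_users, n_items))      # sparse-ish like matrix

# Synthetic ground truth: the trait correlates with a small subset of the items.
weights = np.zeros(n_items)
weights[:10] = 2.0
trait = rng.binomial(1, 1 / (1 + np.exp(-(likes @ weights - 1.0))))

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.2, random_state=1
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC on held-out users: {auc:.2f}")
```

Even this toy setup shows the basic mechanics the chapter describes: once enough labeled profiles exist, seemingly innocuous behavioral signals can be turned into probabilistic predictions about individual traits.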

145
Calo, “Digital Market Manipulation” (2014) 82(4) The George Washington Law Review 995,
1015 et seq.; O’Neil, Weapons of Math Destruction (Crown 2016) 194 et seq.; European Data
Protection Supervisor (EDPS), “Opinion 3/2018 on online manipulation and personal data,” 19
March 2018.
146
Kosinski, Stillwell, and Graepel (2013) 110(15) PNAS 5802, <www.pnas.org/content/110/15/5802
.full>; Youyou, Kosinski, and Stillwell, “Computer-Based Personality Judgments Are More
Accurate than Those Made by Humans” (2015) 112(4) PNAS 1036 <www.pnas.org/content/112/
4/1036.full>.
147
Summarizing Grassegger and Krogerus, “The Data That Turned the World Upside Down,”
Motherboard, 28 January 2017 <https://motherboard.vice.com/en_us/article/mg9vvn/how-our-
likes-helped-trump-win>.


The processed data can be used, in a third step, in a variety of ways. Companies can
tailor their advertising campaigns but also their products and prices specifically to the
customer profile,148 credit institutions can use the profiles for credit rating,149 insurance
companies can better assess the insured risk,150 HR departments can pre-select candi-
dates,151 and parties can use the data for political campaigns – a practice which in the
end led to the well-known Cambridge Analytica scandal.152 In the USA, the judicial
system is now using big-data analysis to predict the future behavior of criminals.153

2.7.1.2 Behavioral Economics and Behavioral Microtargeting


Combining big data with findings in behavioral economics leads to some note-
worthy insights on microtargeting. For some time now, economists have been
shifting away from the paradigm of economic neoclassicism, the homo oeconomicus, which rests on the assumption that individuals make rational decisions. By contrast, behavioral economics has been able to show that humans have only limited rationality, primarily because of cognitive limitations of the human mind (bounded rationality), but also because humans often take actions that they know to be in conflict with their own long-term interests (bounded willpower) and because of their concern for others (bounded self-interest).
Modern market research tries to exploit these vulnerabilities by combining such insights with big data. In this respect, mounting empirical evidence shows that companies are exploiting, or even deliberately trying to induce, irrational behavior:

• In 2014, Facebook manipulated the newsfeeds of over half a million users in order to alter the emotional content of users’ posts, showing in this experiment that user feelings can be deliberately manipulated by certain messages (so-called emotional contagion).154

148
Hofmann, “Der maßgeschneiderte Preis” (2016) Wettbewerb in Recht und Praxis (WRP) 1074;
Zuiderveen Borgesius, and Poort, “Online Price Discrimination and EU Data Privacy Law”
(2017) 40 Journal of Consumer Policy 347.
149
Cf. Citron and Pasquale (2014) 89 Washington Law Review 1; Zarsky, “Understanding Discrim-
ination in the Scored Society” (2014) 89 Washington Law Review 1375.
150
Cf. Swedloff, “Risk Classification’s Big Data (R)evolution” (2014) 21(1) Connecticut Insurance
Law Journal 339; Helveston, “Consumer Protection in the Age of Big Data” (2016) 93(4)
Washington University Law Review 859.
151
Cf. O’Neil (n 145) 105 et seq.
152
Cf. the speech by Alexander Nix, ex CEO of Cambridge Analytica, at the 2016 Concordia
Annual Summit in New York, <www.youtube.com/watch?v=n8Dd5aVXLCc>; moreover
Rubinstein, “Voter Privacy in the Age of Big Data” (2014) 5 Wisconsin Law Review 861;
Hoffmann-Riem (2017) 142 Verhaltenssteuerung durch Algorithmen, Archiv des öffentlichen
Rechts (AöR) 1.
153
Angwin et al., “Machine Bias,” 23 May 2016 <www.propublica.org/article/machine-bias-risk-
assessments-in-criminal-sentencing>.
154
Goel, “Facebook Tinkers with Users’ Emotions in News Feed Experiment, Stirring Outcry”
New York Times (29 June 2014) <www.nytimes.com/2014/06/30/technology/facebook-tinkers-

• In early 2017, it also became known that Facebook Australia had offered its advertisers software that could accurately locate psychologically unstable, depressed teenagers.155
• In 2012, Microsoft registered a patent on “Targeting Advertisements Based on Emotion.”156 And in 2013, Samsung filed the patent “Apparatus and methods for sharing user’s emotion.”157

2.7.1.3 Algorithmic Echo Chambers, Filter Bubbles, and Fake News: A Danger to Democracy?
The use of algorithms to channel information on social media platforms and search
engines has led to a growing fear that the use of content-filtering and content-
removing AI systems as well as social media bots spreading political messages will
have a detrimental effect on the right to freedom of information, the right to
freedom of expression, media pluralism, and political discourse in general. Since
the US elections in 2016, public concern about the creation and dissemination of
fake news and its influence over democratic decision-making processes has
also grown.
Indeed, algorithm-based search engines and social networks can channel and
control a variety of factors that affect how opinions are formed. In many cases,
algorithms (and social bots) determine which content is selected, processed, and
published; sometimes algorithms and social bots are even used to create new
content. The “master” of the algorithm is thus to a large extent also the “ruler” of
public opinion: whoever configures the respective algorithm makes essential deci-
sions regarding the information displayed and thus influences opinion.
The use of algorithms combined with the increasing monopolization of market
power and knowledge in the platform economy158 can lead in particular – so it is
feared – to “echo chambers,” in which people encounter only information that
confirms their existing political views.159 A related theory about “filter bubbles”

155 Davidson, “Facebook Targets ‘Insecure’ Young People” The Australian (1 May 2017); cf. also <www.news.com.au/technology/online/social/leaked-document-reveals-facebook-conducted-research-to-target-emotionally-vulnerable-and-insecure-youth/news-story/d256f850be6b1c8a21aec6e32dae16fd>.
156 Microsoft Corporation (2012) “Targeting Advertisements Based on Emotion,” US 20120143693 A1, <www.google.com/patents/US20120143693>.
157 Samsung Electronics Co., Ltd. (2013) “Apparatus and Method for Sharing User’s Emotion,” US 20130144937 A1, <www.google.com/patents/US20130144937>.
158 Cf. Sections 2.6.1 and 2.6.3.2.
159 Sunstein, #Republic: Divided Democracy in the Age of Social Media (Princeton University Press 2017).


claims that algorithms cause bubbles of like-minded content around news users.160
For these reasons, there are serious concerns both in the USA and in Europe that
(media) diversity could be drastically reduced.161 Moreover, AI systems create new
opportunities to enhance “fake news” by simplifying the production of high-quality
fake video footage, automating the writing and publication of fake news stories, and
microtargeting citizens by delivering the right message at the right time to maximize
persuasive potential.162
In light of these considerations, there are a number of (regulatory) issues for
discussion.163 Are information intermediaries such as Facebook and Google simply
hosts of user-created content, or have they already turned into media companies
themselves? At what point is it no longer justifiable to maintain the differences in
(self-) regulation between traditional media and these new platforms in terms of
advertising regulation, taxation, program standards, diversity, and editorial independ-
ence? What are the responsibilities of information intermediaries regarding fake
news and the filtering of information in general? Should users be (better) informed
about the personalization of (news) content? Do we want to legislate to limit the
personalization of information/communication? Is it perhaps even necessary to
regulate the algorithm itself in order to ensure adequate diversity of media and
opinion?
Although these questions certainly need to be addressed, it should also be noted
that there is still no established scientific evidence for the existence of echo
chambers and filter bubbles. Recently published studies claim that these fears might
be blown out of proportion, because most people already have media habits that
help them avoid “echo chambers” and “filter bubbles.”164 Moreover, it is unclear to
what extent political bots spreading fake news succeed in shaping public opinion,
especially as people become more aware of these bots’ existence.165 In this light, the

160 Pariser, The Filter Bubble: What the Internet Is Hiding from You (Penguin Books 2012).
161 Epstein, “How Google Could End Democracy” US News & World Report (9 June 2014) <www.usnews.com/opinion/articles/2014/06/09/how-googles-search-rankings-could-manipulate-elections-and-end-democracy>. See also the 2016 Report of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, David Kaye, to the 32nd session of the Human Rights Council (A/HRC/32/38), noting that “search engine algorithms dictate what users see and in what priority, and they may be manipulated to restrict or prioritise content.”
162 Brundage et al. (n 84) 43 et seq.
163 Helberger, Kleinen-von Königslöw, and van der Noll, “Regulating the New Information Intermediaries as Gatekeepers of Information Diversity” (2015) 17(6) Info 50 <www.ivir.nl/publicaties/download/1618.pdf>.
164 Dubois and Blank, “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media” (2018) 21(5) Information, Communication & Society 1; Moeller and Helberger, “Beyond the Filter Bubble: Concepts, Myths, Evidence and Issues for Future Debates,” 25 June 2018 <http://hdl.handle.net/11245.1/478edb9e-8296-4a84-9631-c7360d593610>.
165 Nyhan, “Fake News and Bots May Be Worrisome, but Their Political Power Is Overblown” The New York Times (13 February 2018) <www.nytimes.com/2018/02/13/upshot/fake-news-and-bots-may-be-worrisome-but-their-political-power-is-overblown.html>; Brundage et al. (n 84), 46. Cf. also Kalla and Broockman, “The Minimal Persuasive Effects of Campaign Contact in General Elections: Evidence from 49 Field Experiments” (2018) 112(1) American Political Science Review 148‒166, <https://ssrn.com/abstract=3042867>.


call for legislation appears premature. What is needed above all is further empirical
research examining the effect of algorithm-driven information intermediaries more
closely.

2.7.1.4 Manipulation of Consumers: The Case of Exploitative Contracts


The use of microtargeting techniques also leads to new forms of information
asymmetries between contractual partners, and to an erosion of private autonomy.166
AI-driven big-data profiling techniques give companies the opportunity to gain
superior knowledge about customers’ personal circumstances, behavioral patterns,
and personality, including future preferences. These insights enable companies to
tailor contracts in ways that maximize their expected utility by exploiting the
behavioral vulnerabilities of their clients. Behavioral economics has identified
hundreds of effects, all of which demonstrate that human decision-making behavior,
while irrational in many situations, is nevertheless predictable and can be exploited
accordingly. Microtargeting makes it possible, for instance, to offer products exactly
when the customer can only make suboptimal decisions – for example, due to the
time of day or a previous event. This so-called emotional targeting is already being
used by many companies. For example, the US advertising company MediaBrix
developed a system that analyzes the emotions of computer players in real time and
then addresses them directly through personalized advertising at particularly suitable
moments (such as breakthrough moments).167
This example alone demonstrates that behavioral microtargeting has a high
potential for abuse: based on the findings of behavioral economics, companies
can exploit or even induce suboptimal decision-making behaviors in their
customers.
Existing European consumer and data protection law as well as national contract
law arguably fail to provide sufficient instruments to effectively sanction such
behavior.
First of all, it is questionable whether microtargeting can be classified as an unfair
commercial practice according to the Unfair Commercial Practices Directive

166 Mik, “The Erosion of Autonomy in Online Consumer Transactions” (2016) 8(1) Law, Innovation and Technology 1, <http://ink.library.smu.edu.sg/sol_research/1736>; Sachverständigenrat für Verbraucherfragen (SVRV), “Verbraucherrecht 2.0, Verbraucher in der digitalen Welt,” December 2016, 58 et seq., <www.svr-verbraucherfragen.de/wp-content/uploads/Gutachten_SVRV-.pdf>.
167 Pritz, “Mood Tracking: Zur digitalen Selbstvermessung der Gefühle” in Selke (ed), Lifelogging (Springer VS 2016) 127, 140 et seq.


(UCPD). As Eliza Mik168 and others169 have pointed out, the main weaknesses of
the UCPD lie in the definitions and assumptions underlying the concepts of
“average” and “vulnerable” consumers (which disregard the findings in behavioral
economics and cognitive science), as well as the narrow definition of aggressive
practices such as undue influence, which requires the presence of pressure. It
therefore fails to address subtler forms of manipulation. A similar picture
emerges for European data protection law, which suffers above all from an over-
reliance on control and rational choice that vulnerable users are unlikely to exert.170
Whether these gaps in protection can be compensated by (national) contract law
is also questionable since it is difficult to subsume microtargeting under any of the
traditional protective doctrines – such as duress, mistake, undue influence, misrep-
resentation, or culpa in contrahendo.171 At the end of the day, the impact of
microtargeting on customer behavior appears to be too subtle to be covered by
common concepts of contract law, despite the fact that such a technique affects one
of its central values: autonomy.
Future regulation will therefore have to evaluate the extent to which customers
should be protected from targeted advertisements and offers that seek to exploit their
vulnerabilities. This is by no means an easy task because – as Natali Helberger172
rightly points out – there is a very fine line between informing, nudging, and
outright manipulation.

2.7.2 Discrimination of Citizens and Consumers

2.7.2.1 How AI Systems Can Lead to Discrimination


The widespread use of algorithms for preparing or even making decisions, some of
which may have existential significance for people, is being increasingly criticized
by policymakers around the world on the grounds of discrimination.173 In fact, a

168 Mik (n 166).
169 Ebers, “Beeinflussung und Manipulation von Kunden durch ‘Behavioral Microtargeting’” (2018) MultiMedia und Recht (MMR) 423; Duivenvoorde, “The Protection of Vulnerable Consumers under the Unfair Commercial Practices Directive” (2013) 2 Journal of European Consumer and Market Law 69.
170 Hacker, “Personal Data, Exploitative Contracts, and Algorithmic Fairness: Autonomous Vehicles Meet the Internet of Things” (2017) 7 International Data Privacy Law 266 <https://ssrn.com/abstract=3007780>.
171 Cf. Mik (n 166).
172 Helberger, “Profiling and Targeting Consumers in the Internet of Things – A New Challenge for Consumer Law,” in Schulze and Staudenmayer (eds), Digital Revolution: Challenges for Contract Law in Practice (Harvard University Press 2016) 135 et seq., 152.
173 Executive Office of the [US] President, “Preparing for the Future of Artificial Intelligence” (Report, 2016) 30‒32; European Parliament, Resolution of 14 March 2017 on fundamental rights implications of big data: privacy, data protection, non-discrimination, security and law-enforcement (March 2017), Art 19‒22; and for Germany: Wissenschaftliche Dienste des Deutschen Bundestags, “Einsatz und Einfluss von Algorithmen auf das digitale Leben,” Aktueller Begriff (27 October 2017).


number of examples show that ADM procedures are by no means neutral, but can
perpetuate and even exacerbate human bias in various ways.
Examples include a chatbot used by Microsoft that unexpectedly learned how to
post racist and sexist tweets,174 face-recognition software used by Google which
inadvertently classified black people as gorillas,175 and the COMPAS algorithm,
which is increasingly being used by US courts to predict the likelihood of recidivism
of offenders. As the news portal ProPublica revealed in 2016, COMPAS judged
black and white prisoners differently. Among other things, it was found that the
probability that black inmates were identified as high risk but did not re-offend was
twice as high as for white inmates. Conversely, white inmates were more likely to be
classified as low risk but later to re-offend.176
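
The disparity ProPublica described is, at bottom, a difference in group-wise error rates. The following minimal Python sketch (with a handful of invented records and hypothetical field names, not the actual COMPAS data) shows how such false positive and false negative rates are computed per group:

```python
# Minimal sketch with invented records (not the real COMPAS data):
# "reoffended" is the observed outcome, "high_risk" the algorithm's prediction.
records = [
    {"group": "black", "reoffended": False, "high_risk": True},
    {"group": "black", "reoffended": True,  "high_risk": True},
    {"group": "black", "reoffended": False, "high_risk": False},
    {"group": "black", "reoffended": True,  "high_risk": True},
    {"group": "white", "reoffended": False, "high_risk": False},
    {"group": "white", "reoffended": True,  "high_risk": False},
    {"group": "white", "reoffended": False, "high_risk": False},
    {"group": "white", "reoffended": True,  "high_risk": True},
]

def error_rates(rows):
    """Return (false positive rate, false negative rate) for a list of records."""
    fp = sum(1 for r in rows if r["high_risk"] and not r["reoffended"])
    tn = sum(1 for r in rows if not r["high_risk"] and not r["reoffended"])
    fn = sum(1 for r in rows if not r["high_risk"] and r["reoffended"])
    tp = sum(1 for r in rows if r["high_risk"] and r["reoffended"])
    fpr = fp / (fp + tn) if (fp + tn) else 0.0  # wrongly flagged as high risk
    fnr = fn / (fn + tp) if (fn + tp) else 0.0  # wrongly cleared as low risk
    return fpr, fnr

for group in ("black", "white"):
    rows = [r for r in records if r["group"] == group]
    fpr, fnr = error_rates(rows)
    print(f"{group}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")
```

Kleinberg et al. (n 176) show that, outside of special cases, error rates of this kind cannot be equalized across groups while the risk scores remain equally well calibrated for each group, which is one reason why the fairness of such scores remains contested.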
There can be various reasons for this type of discrimination.177
Discrimination occurs primarily at the process level178 when the algorithmic
model is fed with biased training data. Such bias can take two forms.179 One occurs
when errors in data collection lead to inaccurate depictions of reality due to
improper measurement methodologies, especially when conclusions are drawn
from incorrect, partial, or nonrepresentative data. This type of bias can be addressed
by “cleaning the data” or improving the data-collection process. The second type of
bias occurs when the underlying process draws on information that is inextricably
linked to structural discrimination, exhibiting long-standing inequality. This
happens, for example, when data on a job promotion is collected from an industry
in which men are systematically favored over women. In this scenario, the data basis
itself is correct. However, by using this kind of data in order to decide whether
employees are worthy of promotion, a discriminatory practice is perpetuated into
the future.

174 See Vincent, “Twitter Taught Microsoft’s Friendly AI Chatbot to Be a Racist Asshole in Less than a Day” The Verge (24 March 2016) <www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist>.
175 Barr, “Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms” Wall Street Journal (1 July 2015).
176 See Larson et al., “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, 23 May 2016 <www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm>; Kleinberg et al., “Inherent Trade-Offs in the Fair Determination of Risk Scores,” Working Paper (2016) <https://arxiv.org/abs/1609.05807>, 5‒6.
177 See for example Barocas and Selbst, “Big Data’s Disparate Impact” (2016) 104 California Law Review 671, 680; Kroll et al., “Accountable Algorithms” (2017) 165 University of Pennsylvania Law Review 633, 680 et seq.
178 For the different dimensions (process, model, and classification level) cf. Section 2.2.4.
179 Crawford and Whittaker, “The AI Now Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,” 2016.


Apart from biased training data, discrimination can also be caused at the classifi-
cation level180 by feature selection, for example by using certain protected character-
istics (such as race, gender, or sexual orientation) or by relying on factors that
happen to serve as proxies for protected characteristics (e.g., using place of residence
in areas that are highly segregated).181
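
To illustrate how such a proxy can operate, consider the following Python sketch, in which all data and numbers are invented for illustration: a trivially simple rule is learned from historical lending decisions from which the protected attribute has been removed, yet because place of residence is correlated with group membership, the learned rule reproduces the historical disparity.

```python
# Illustrative sketch with invented data: historical lending decisions were biased
# against group B, and group B is concentrated in postcode "20". A rule learned
# without the protected attribute, but with postcode as a feature, reproduces the bias.
import random

random.seed(0)

def make_applicant():
    group = random.choice(["A", "B"])
    # Residence is strongly correlated with group membership.
    if group == "A":
        postcode = "10" if random.random() < 0.9 else "20"
    else:
        postcode = "20" if random.random() < 0.9 else "10"
    # Historical (biased) lending decision: group B was approved far less often.
    approved = random.random() < (0.7 if group == "A" else 0.3)
    return {"group": group, "postcode": postcode, "approved": approved}

history = [make_applicant() for _ in range(10_000)]

# "Training": approval rate per postcode; the protected attribute is never used.
approval_rate = {}
for pc in ("10", "20"):
    subset = [a for a in history if a["postcode"] == pc]
    approval_rate[pc] = sum(a["approved"] for a in subset) / len(subset)

def model_approves(applicant):
    # Approve whenever the applicant's postcode historically had a majority of approvals.
    return approval_rate[applicant["postcode"]] > 0.5

applicants = [make_applicant() for _ in range(10_000)]
for grp in ("A", "B"):
    subset = [a for a in applicants if a["group"] == grp]
    share = sum(model_approves(a) for a in subset) / len(subset)
    print(f"Group {grp}: share approved by the learned rule = {share:.2f}")
```

The point of the sketch is that simply deleting the protected attribute from the training data does not prevent indirect discrimination as long as correlated features remain available to the model.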

2.7.2.2 Anti-discrimination Law


Although there is extensive anti-discrimination legislation in both the USA and the
European Union, the problem of algorithmic discrimination has been insufficiently
addressed on both sides of the Atlantic. In the USA, this is partly due to the fact that
anti-discrimination legislation is limited primarily to the employment sector.182
There are a number of other reasons why discriminatory algorithmic systems often
escape the doctrinal categories of US anti-discrimination law, or, more precisely,
Title VII of the Civil Rights Act of 1964. As Barocas and Selbst have highlighted, this
is mainly the case because (i) the disparate treatment doctrine focuses on human
decision makers as discriminators without taking into account unintentional dis-
crimination and (ii) decision makers can often escape disparate impact liability if the
factors used for data mining are job related.
Likewise, EU anti-discrimination law does not provide adequate protection
against algorithmic discrimination.183 Problems arise, first of all, with regard to the
limited scope of EU anti-discrimination directives. Although the Race Equality
Directive 2000/43/EC and the Gender Equality Directive 2004/113/EC extend equal
treatment principles beyond employment matters far into general contract law, their
scope is nevertheless limited, because they only apply (i) to race and gender
discrimination and (ii) when goods or services are “available to the public.”184 Both
limitations appear to be problematic. On the one hand, the respective directives do
not cover other discriminatory factors such as religion or belief, disability, age, sexual
orientation, or financial status and willingness to pay,185 nor (new) types of AI-driven
180 Cf. again Section 2.2.4.
181 Kroll et al. (n 177) 681 et seq.
182 For a comparison between US and EU anti-discrimination law cf. de Búrca, “The Trajectories of European and American Antidiscrimination Law” (2012) 60 American Journal of Comparative Law 1.
183 Cf. Hacker, “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law” (2018) 55 Common Market Law Review (CMLR) 1143.
184 Art 3(1)(h) Race Equality Directive 2000/43/EC; Art 3(1) Gender Equality Directive 2004/113.
185 On the problem of so-called first-degree price-discrimination, see European Data Protection Supervisor (2015), Opinion No 7/2015, Meeting the challenges of big data: A call for transparency, user control, data protection by design and accountability <https://edps.europa.eu/data-protection/our-work/publications/opinions/meeting-challenges-big-data_en>; Article 29 Working Party, Opinion 03/2013 on purpose limitation <http://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf>; Bar-Gill, “Algorithmic Price Discrimination: When Demand Is a Function of Both Preferences and (Mis)perceptions,” 29 May 2018, The Harvard John M Olin Discussion Paper Series, No 05/2018; Harvard Public Law Working Paper No 18-32 <www.ssrn.com/abstract=3184533>; Zuiderveen Borgesius and Poort (n 148). EU competition law prohibits different prices only if a company abuses its dominant position; cf. esp Art 102(2)(a) and (c) TFEU. EU consumer protection rules, in particular the Unfair Commercial Practices Directive 2005/29/EC, also leave traders free to set prices as long as they inform consumers about the prices and how they are calculated; European Commission, “Guidance on the Implementation/Application of Directive 2005/29/EC on Unfair Commercial Practices,” SWD(2016) 163 final 134.

differentiations which treat people unequally because they belong to a specific
group (for example the group of cat lovers or Nike shoe wearers).186 On the other
hand, there is the problem that, due to the use of microtargeting, offers and contracts
are increasingly tailored and personalized, which raises the question of whether
such goods or services are any longer “available to the public.”187
Moreover, anti-discrimination law does not address the possibility that a predic-
tion may prove to be wrong in a particular case. If, for example, the predictive model
is based on the assumption that 80% of the people living in a certain area pay their
bills late, and a company denies loans to all people living there, it also denies loans
to the 20% who pay their bills on time.188 In this case, too, the outcome of the
assessment is of course unfair. Such a result is, however, not due to a discriminatory
practice, but to the fact that statistical models do not consider individual cases but
rather generalize them. In these scenarios, the tricky question is what degree of
individual fairness is required and how much generalization can be accepted.
Finally, many biased decisions which amount to indirect discrimination can be
justified if the predictive task of the ADM process furnishes a legitimate aim (such as
future job performance, creditworthiness, etc.).189 In these cases, the victim has to
prove the model wrong by establishing, for example, that the seemingly high predict-
ive value of the AI system stems from biased training data. Doing so is no easy task,
however, as victims of algorithmic discrimination will be unable to establish even a
prima facie case of discrimination without access to the data and algorithms, and in
many cases do not even know they have been the victim of discrimination at all.

2.7.2.3 Discussion
In view of this situation, various solutions are being discussed for both the USA and
the European Union.
With regard to individual enforcement, the following measures in particular are
proposed: (i) information rights regarding the scoring process; (ii) duties to provide

186 Martini, Chapter 3 in this book; Zuiderveen Borgesius, “Discrimination, artificial intelligence, and algorithmic decision-making,” Study for the Council of Europe, 2018 <https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73> 35 et seq.
187 Hacker (n 183) 1156 et seq.; Busch, “Algorithmic Accountability,” ABIDA Project Report (March 2018) <www.abida.de/sites/default/files/ABIDA%20Gutachten%20Algorithmic%20Accountability.pdf> 47.
188 Zuiderveen Borgesius (n 186) 36.
189 Hacker (n 183) 1160 et seq.; Zuiderveen Borgesius (n 186) 19 et seq.


consumers with tools for interactive modeling; (iii) access rights to data sets; or,
alternatively, (iv) a right to confidential review (e.g. by trusted third parties) of the
logics of predictive scoring, including the source code, in order to challenge
decisions based on ADM procedures. In the EU, it is disputed above all whether
a right to explanation of automated decision-making can be derived from the GDPR
itself.190
In addition to individual remedies, a number of other measures have been
proposed, ranging from (i) controlling the design stage to (ii) licensing and auditing
requirements for scoring systems to (iii) ex-post measures by public bodies.
In this vein, some authors propose for the USA an oversight by regulators, such as
the Federal Trade Commission (under its authority to combat unfair trading
practices) with the possibility of accessing scoring systems, testing hypothetical
examples by IT experts, issuing impact assessments evaluating the system’s negative
effects, and identifying risk mitigation measures.191
For the EU, some scholars suggest that the enforcement apparatus of the GDPR
should be harnessed and used by national data protection authorities, making use of
algorithmic audits and data protection impact assessments to uncover the causes of
bias and enforcing adequate metrics of algorithmic fairness.192
Although (European) data protection law can surely help to mitigate risks of
unfair and illegal discrimination, the GDPR is no panacea. As Zuiderveen Borgesius
points out, there are five plausible reasons for this.193 First, data protection authorities have
limited financial and human resources to take effective action. Many authorities
may also lack the necessary expertise to detect and/or evaluate algorithmic discrim-
ination. Second, the GDPR only covers personal data, not the ML models them-
selves.194 Third, the regulation is vaguely formulated, which makes it difficult to
apply its norms. Fourth, a conflict between data protection and anti-discrimination
law arises when the use of sensitive personal data is necessary for avoiding discrimin-
ation in data-driven decision models.195 And fifth, even if data protection authorities

190 Wachter, Mittelstadt, and Floridi (2017) 7(2) International Data Privacy Law 76. Cf. also Sancho, Chapter 4 in this book.
191 Citron and Pasquale (2014) 89 Washington Law Review 1. For a detailed overview of the various regulatory proposals, see Mittelstadt, Allo, Taddeo, Wachter, and Floridi (2016 July–September) Big Data & Society 13.
192 Hacker (n 183). Cf. also Mantelero, “Regulating Big Data” (2017) 33(5) The Computer Law and Security Review 584; Wachter, “Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR” (2018) 34(3) The Computer Law and Security Review 436; Wachter and Mittelstadt, “A Right to Reasonable Inferences: Re-thinking Data Protection Law in the Age of Big Data and AI” (2019) Columbia Business Law Review 494 <https://ssrn.com/abstract=3248829>.
193 Zuiderveen Borgesius (n 186) 24 et seq.
194 Cf. Section 2.6.2.2.
195 Žliobaité and Custers, “Using Sensitive Personal Data May Be Necessary for Avoiding Discrimination in Data-Driven Decision Models” (2016) 24 Artificial Intelligence and Law 183.


are granted extensive powers of control, the black-box problem196 still remains. In
this respect, Lipton reminds us that the whole reason we turn to machine learning
rather than “handcrafted decision rules” is that “for many problems, simple, easily
understood decision processes are insufficient.”197
For all these reasons, data protection law is not a cure-all against discrimination.
Rather, further research is needed on the extent to which data protection law can
contribute to the fight against algorithmic discrimination, whether there are still
deficiencies to be addressed by other areas of law (such as consumer law, competi-
tion law, and – when ADM systems are used by public bodies – administrative law
and criminal law), or whether we need completely new rules.

2.7.3 Market Manipulation: The Case of Algorithmic Collusion


The increasingly widespread use of algorithms raises concerns over anti-competitive
behavior, as they enable companies to achieve and sustain collusion without any
formal agreement or human interaction.198 This applies in particular to dynamic
pricing algorithms. As the OECD points out in a recent report, pricing algorithms
are “fundamentally affecting market conditions, resulting in high price transparency
and high-frequency trading that allows companies to react fast and aggressively.”199
In concrete terms, such algorithms provide companies with the ability to evaluate a
wide range of information relevant to pricing, in particular information about
competitors’ pricing behavior, the current demand situation, price elasticity, and a
number of other factors. On this basis, companies can adjust their own prices for
thousands of products automatically and adapt them to the respective market
situation in (milli)seconds.
According to Stucke and Ezrachi,200 the following scenarios for algorithmic
collusion can be distinguished:

196 Cf. Section 2.2.4.
197 Lipton, “The Myth of Model Interpretability,” KDnuggets, 27 April 2015 <www.kdnuggets.com/2015/04/model-interpretability-neural-networks-deep-learning.html>.
198 Stucke and Ezrachi, “Artificial Intelligence and Collusion: When Computers Inhibit Competition,” University of Tennessee, Legal Studies Research Paper Series #267, 2015 <https://ssrn.com/abstract=2591874>; Ezrachi and Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press 2016); id., “Two Artificial Neural Networks Meet in an Online Hub and Change the Future (of Competition, Market Dynamics and Society),” 2017 <https://ssrn.com/abstract=2949434>; Mehra, “Antitrust and the Robo-Seller: Competition in the Time of Algorithms” (2016) 100 Minnesota Law Review 1323; Oxera, “When Algorithms Set Prices: Winners and Losers,” 2017 <www.oxera.com/publications/when-algorithms-set-prices-winners-and-losers/>; Woodcock, “The Bargaining Robot,” CPI Antitrust Chronicle (May 2017) <https://ssrn.com/abstract=2972228>.
199 OECD, “Algorithms and Collusion: Competition Policy in the Digital Age,” 2017 <www.oecd.org/competition/algorithms-collusion-competition-policy-in-the-digital-age.htm> 51.
200 Stucke and Ezrachi (n 198).


1. Pricing algorithms can be used to enforce a previously agreed-upon pricing arrangement. This was the case, for example, with the so-called poster cartel, which was prosecuted by both the US and UK authorities.201
2. Competitors may use the same pricing algorithm, which may be pro-
grammed to prevent competition. Again, competition law provides suffi-
cient means to address such behavior: if companies exchange their
algorithms with rivals, it is a clear violation of competition law. In
addition, collusive behavior can also occur when competitors purchase
similar algorithms and data sets from the same third party. In this
scenario, what is known as a “hub and spoke” cartel may exist where
coordination is, willingly or not, caused by competitors using the same
“hub” to develop their pricing algorithms.202
A particular problem arises in the third constellation, in which competing com-
panies use their own algorithms and datasets without evidence of an agreement
between them. In this case, too, the use of pricing algorithms can lead to a
restriction of competition. The high market transparency and the homogeneity
of products in online trading facilitate parallel behavior. This situation is exacer-
bated if profit-maximizing algorithms are used. As pricing algorithms “observe”
each other’s price strategies and react directly to them, it is likely that a higher anti-
competitive price will prevail. Since algorithms react immediately to any price
change, companies have little incentive to gain an advantage through price
undercutting. Recent studies show that this scenario is indeed very likely.203
Autonomous pricing algorithms may independently discover that they can make
the highest possible profit if they avoid price wars. As a result, they may learn to
collude even if they have not been specifically instructed to do so, and even if they
do not communicate with one another. This is particularly problematic because in
most countries (including the United States and EU Member States) such “tacit”
collusion – not relying on explicit intent and communication – is currently treated
as not illegal.
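
The experiments reported by Calvano et al. (n 203) can be illustrated with a deliberately simplified Python sketch: two independent Q-learning agents repeatedly choose between a low and a high price in a stylized duopoly, observe only last period's prices and their own profit, and are never instructed or enabled to communicate. The payoff numbers and learning parameters below are invented for illustration and are not taken from the study:

```python
# Deliberately simplified sketch: two independent Q-learning pricing agents in a
# stylized duopoly. All payoffs and parameters are invented for illustration.
import random

random.seed(1)

PRICES = (0, 1)  # 0 = low (competitive) price, 1 = high (supra-competitive) price
# Stylized per-period profit: (my_price, rival_price) -> my profit.
PROFIT = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 0.0, (1, 1): 1.5}

ALPHA, GAMMA, EPISODES = 0.1, 0.95, 100_000
STATES = [(i, j) for i in PRICES for j in PRICES]  # last period's price pair

# One Q-table per firm, mapping (state, own_action) to an estimated value.
q = [{(s, a): 0.0 for s in STATES for a in PRICES} for _ in range(2)]

def choose(firm, state, epsilon):
    """Epsilon-greedy choice over the firm's own Q-values."""
    if random.random() < epsilon:
        return random.choice(PRICES)
    return max(PRICES, key=lambda a: q[firm][(state, a)])

state = (0, 0)
for t in range(EPISODES):
    eps = max(0.01, 1.0 - t / 30_000)  # exploration decays over time
    a0 = choose(0, state, eps)
    a1 = choose(1, state, eps)
    rewards = (PROFIT[(a0, a1)], PROFIT[(a1, a0)])
    next_state = (a0, a1)
    for firm, action in ((0, a0), (1, a1)):
        best_next = max(q[firm][(next_state, a)] for a in PRICES)
        target = rewards[firm] + GAMMA * best_next
        q[firm][(state, action)] += ALPHA * (target - q[firm][(state, action)])
    state = next_state

# Greedy policies learned for the state in which both firms charged the high price.
print([max(PRICES, key=lambda a: q[f][((1, 1), a)]) for f in range(2)])
```

Whether the agents end up sustaining the high price pair depends on the payoffs and learning parameters chosen; the legally relevant point of the sketch is that nothing in such code amounts to an agreement or to communication between the firms, which is precisely why this form of "tacit" algorithmic coordination is hard to capture under existing cartel prohibitions.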

201 US Department of Justice (DOJ) 2015, “Former E-Commerce Executive Charged with Price Fixing in the Antitrust Division’s First Online Marketplace Prosecution,” Justice News of the US Department of Justice, Office of Public Affairs <www.justice.gov/opa/pr/former-e-commerce-executive-charged-price-fixing-antitrust-divisions-first-online-marketplace>. For the UK see <www.gov.uk/government/news/cma-issues-final-decision-in-online-cartel-case>.
202 Ezrachi and Stucke, Virtual Competition (2016) 46 et seq. For the EU, cf. also the Eturas case where a booking system was employed as a tool for coordinating the actions of the firms; ECJ, 21.1.2016, case C-74/14 (Eturas), ECLI:EU:C:2016:42.
203 Calvano, Calzolari, Denicolo, and Pastorello, “Artificial Intelligence, Algorithmic Pricing and Collusion” (20 December 2018) <https://ssrn.com/abstract=3304991>. In contrast, cf. also Schwalbe, “Algorithms, Machine Learning, and Collusion” (1 June 2018) <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3232631s> (“problem of algorithmic collusion rather belongs to the realm of legal sci-fi”).


In addition, autonomous pricing algorithms give rise to new problems with
respect to liability,204 auditing, and monitoring205 as well as enforcement.206
The same is true for other forms of market manipulation, for example, high-
frequency trading strategies such as “quote stuffing” (creating a lag in data availabil-
ity in order to enhance latency arbitrage opportunities) and “spoofing” (placing large
orders to create the impression of large demand or supply for a security, with the
intention of driving the prevailing market price in a particular direction).207 Here,
too, the problem of attribution arises: as algorithmic systems interact at higher levels
of automation and connectivity,208 it becomes increasingly difficult to trace their
behavior to a particular human actor and/or company.

2.8 (International) Initiatives to Regulate AI and Robotics

2.8.1 Overview
The previous overview shows that the use of AI systems and smart robotics raises a
number of unresolved ethical and legal issues. Despite these findings, there is
currently not a single country in the world with legislation that explicitly takes into
account the problematic characteristics of autonomous systems209 in general. With a
few exceptions,210 there are also no special rules for AI systems and smart robotics in
particular.

204 Cf. Mehra (n 198) 1366 et seq. See also Sections 2.2.3 and 2.5.
205 Cf. Ezrachi and Stucke, “Algorithmic Collusion: Problems and Counter-Measures,” OECD Roundtable on Algorithms and Collusion, 31 May 2017 <www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=DAF/COMP/WD%282017%2925&docLanguage=En> 25: “Due to their complex nature and evolving abilities when trained with additional data, auditing these networks may prove futile. The knowledge acquired by a Deep Learning network is diffused across its large number of neurons and their interconnections, analogous to how memory is encoded in the human brain.”
206 For further information see Ezrachi and Stucke, Virtual Competition (2016) 203 et seq.
207 Fisher, Clifford, Dinshaw, and Werle, “Criminal Forms of High Frequency Trading on the Financial Markets” (2015) 9(2) Law and Financial Markets Review 113.
208 For the connectivity problem, see Section 2.2.1. On the problem of autonomy, see Sections 2.2.3 and 2.5.
209 Cf. Section 2.2.
210 Special regulation exists above all for self-driving vehicles, drones, and high-frequency trading. In the USA, most of the states have either enacted legislation or executive orders governing self-driving vehicles; cf. National Conference of State Legislatures, “Autonomous Vehicles State Bill Tracking Database” <www.ncsl.org/research/transportation/autonomous-vehicles-legislative-database.aspx>. In 2017, the House of Representatives passed a bill for a Self-Drive Act which was supposed to lay out a basic federal framework for autonomous vehicle regulation but, ultimately, failed to be considered on the Senate floor. In the EU, the Regulation on Civil Aviation 2018/1139 addresses issues of registration, certification, and general rules of conduct for operators of drones – however, without regulating civil liability directly; cf. Bertolini, “Artificial Intelligence and civil law: liability rules for drones,” Study commissioned by the European Parliament’s Policy Department for Citizens’ Rights and Constitutional Affairs at the request of the JURI Committee, PE 608.848, November 2018. In addition, the EU enacted provisions on High Frequency Trading, explained in this book by Spindler, Chapter 7. Moreover, in France, the Digital Republic Act (Loi No 2016-1321 du 7 octobre 2016 pour une République numérique), provides that, in the case of state actors taking a decision “on the basis of algorithms,” individuals have a right to be informed about the “principal characteristics” of the decision-making system. For more details see Edwards and Veale, “Enslaving the Algorithm: From a ‘Right to an Explanation’ to a ‘Right to Better Decisions’?” (2018 May/June) IEEE Security & Privacy 46.


Admittedly, many countries and sometimes also international/intergovernmental
organizations have rules, laws, and norms that are relevant for AI and robotics –
ranging from constitutional principles (rule of law, democracy),211 human rights,212
and (international) humanitarian law213 to administrative and criminal law protect-
ing inter alia fair procedures,214 special laws that could help to mitigate the described
problems such as data protection law, cybersecurity law, product safety and product
liability law, competition law, and consumer law, and many other fields. These laws,
however, were not made with AI and smart robotics in mind. Accordingly, it is
difficult to gauge to what extent existing legislation sufficiently regulates the negative
implications of AI.
Since the beginning of 2017, many governments in the world have begun to
develop national strategies for the promotion, development, and use of AI systems.
Still, as Tim Dutton – a senior policy advisor to the Canadian government who
regularly updates a summary of the different AI policies – observes, no two strategies
are alike.215 Instead, national (and international) initiatives focus on a wide variety of
aspects, such as research and development programs, skills and education, data and
digital infrastructure, technical standardization, AI-enhanced public services, ethics
and inclusion, and sometimes also legal standards. While some countries have laid

211 Cf. for example Council of Europe, “Ethical Charter” (n 63).
212 Cf. Council of Europe, “Algorithms and Human Rights, Study on the Human rights dimensions of automated data processing techniques and possible regulatory implications,” Council of Europe study, DGI(2017)12, prepared by the Committee of Experts on Internet Intermediaries (MSI-NET), 2018; Berkman Klein Center, “Artificial Intelligence & Human Rights: Opportunities and Risks,” 25 September 2018.
213 Margulies, “The Other Side of Autonomous Weapons: Using Artificial Intelligence to Enhance IHL Compliance” (12 June 2018) <https://ssrn.com/abstract=3194713>.
214 On AI and administrative law cf. Oswald and Grace, “Intelligence, Policing and the Use of Algorithmic Analysis: A Freedom of Information-Based Study” (2016) 1(1) Journal of Information Rights, Policy and Practice; Cobbe, “Administrative Law and the Machines of Government: Judicial Review of Automated Public-Sector Decision-Making,” 6 August 2018 <https://ssrn.com/abstract=3226913>; Coglianese and Lehr (2017) 105 Georgetown Law Journal 1147; <https://ssrn.com/abstract=2928293>.
215 Dutton, “An Overview of National AI Strategies,” 28 June 2018 <https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd>. Cf. also the overview by Thomas, “Report on Artificial Intelligence: Part I – the existing regulatory landscape,” 14 May 2018 <www.howtoregulate.org/artificial_intelligence/>.


down specific and comprehensive AI strategies (China, the UK, France), some are
integrating AI technologies within national technology or digital roadmaps (Den-
mark, Australia), while still others have focused on developing a national AI R&D
strategy (USA).216
In the USA, most notably, the government already relied heavily on the liberal
notion of the free market under the Obama administration.217 In its report
“Preparing for the Future of Artificial Intelligence,” published in October 2016,218
the White House Office of Science and Technology Policy (OSTP) explicitly
refrains from a broad regulation of AI research and practice. Instead, the report
highlights that the government should aim to fit AI into existing regulatory schemes,
suggesting that many of the ethical issues related to AI can be addressed through
increasing transparency and self-regulatory partnerships.219 The Trump administra-
tion, too, sees its role not in regulating AI and robotics but in “facilitating AI R&D,
promoting the trust of the American people in the development and deployment of
AI-related technologies, training a workforce capable of using AI in their occupa-
tions, and protecting the American AI technology base from attempted acquisition
by strategic competitors and adversarial nations” – thus maintaining US leadership
in the field of AI.220
By contrast, the AI strategy of the European Union, published in April 2018,221
focuses not only on the potential impact of AI on competitiveness but also on its
social and ethical implications.
The following sections provide a brief overview of the EU’s AI strategy, the efforts
of the most important international organizations in this field, and the individual
and collective efforts of companies and industries/branches at self-regulation.
National AI strategies, on the other hand, are beyond the scope of this chapter
and are not discussed here.

216 Delponte (n 117) 22.
217 For a detailed discussion of the various AI strategies in the US, the EU, and the UK, see Cath, Wachter, Mittelstadt, Taddeo and Floridi, “Artificial Intelligence and the ‘Good Society’: The US, EU, and UK approach” (2018) 24(2) Science and Engineering Ethics 505.
218 Executive Office of the President National Science and Technology Council Committee on Technology, “Preparing for the Future of Artificial Intelligence” (OSTP report), 2016, Washington, DC, USA <https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf>. The report followed five workshops and a public request for information, cf. OSTP report 12.
219 OSTP report (n 218) 17.
220 Trump, Executive Order on Maintaining American Leadership in Artificial Intelligence, issued on 11 February 2019 <www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/>. Cf. also Shepardson, “Trump Administration Will Allow AI to ‘Freely Develop’ in US: Official,” Technology News, 10 May 2018 <www.reuters.com/article/us-usa-artificialintelligence/trump-administration-will-allow-ai-to-freely-develop-in-u-s-official-idUSKBN1IB30F>.
221 European Commission, “Communication ‘Artificial Intelligence for Europe,’” COM(2018) 237 final.


2.8.2 European Union

2.8.2.1 The European Parliament’s Resolution of February 2017


In the European Union, it was above all the European Parliament (EP) that first
developed a strategy for EU-wide regulation of AI and robotics. In February 2017, the
EP passed a resolution “with recommendations to the Commission on Civil Law
Rules on Robotics.”222 The resolution calls for the creation of a “European Agency
for Robotics and AI” consisting of regulators and external experts who could provide
the “technical, ethical and regulatory expertise needed to support the relevant
public actors, at both Union and Member State level, in their effort to ensure a
timely, ethical and well-informed response to the new opportunities and chal-
lenges,”223 and could monitor robotics-based applications, identify standards for best
practice and, where appropriate, recommend regulatory measures, define new
principles, and address potential consumer protection issues.224 The resolution also
recommends introducing an EU-wide registration system for specific categories of
advanced robots.225
In addition, the EP proposes to develop a robotics charter consisting of a code of
ethical conduct for researchers and designers to “act responsibly and with absolute
consideration for the need to respect the dignity, privacy and safety of humans.”226
Furthermore, the EP asks the European Commission to clarify the liability of industry
and autonomous robots when harm or damages occur and to adopt new rules on
liability if necessary.227

2.8.2.2 The European Economic and Social Committee’s Opinion on AI as of May 2017

Shortly after the EP published its resolution, the European Economic and Social
Committee (EESC) presented an opinion on AI at the end of May 2017,228 which
provided not only an in-depth analysis of different types and subfields of AI, but

222 European Parliament, Resolution (n 21). The resolution does not include unembodied AI. Instead, AI is understood as an underlying component of “smart autonomous robots.” Critically, Cath et al. (n 217).
223 European Parliament, Resolution (n 21) No 16.
224 European Parliament, Resolution (n 21) No 17.
225 European Parliament, Resolution (n 21) No 2.
226 European Parliament, Resolution (n 21) 19.
227 European Parliament, Resolution (n 21) Nos 49 et seq. For details regarding the recommendations of the EP relating to liability, cf. Sections 2.5.2 and 2.4.
228 European Economic and Social Committee, “Opinion, Artificial intelligence – The consequences of artificial intelligence on the (digital) single market, production, consumption, employment and society (own-initiative opinion), Rapporteur: Catelijne Muller, INT/806.”


also general recommendations, including a human-in-command approach for
“responsible AI.” The opinion identifies eleven areas where AI poses societal and
complex policy challenges: ethics, safety, privacy, transparency and accountability,
work, education and skills, (in-)equality and inclusiveness, law and regulation,
governance and democracy, warfare, and superintelligence.

2.8.2.3 The European Commission’s AI Strategy and the Work of the High-Level Expert Group on AI

On 25 April 2018, two weeks after 25 European countries had signed the Declaration
of Cooperation on AI with the goal of building on “the achievements and investments
of Europe in AI” and agreed to shape a European approach on AI,229 the European
Commission published its communication “Artificial Intelligence for Europe.”230
The document – complemented by another communication of 7 December
2018231 – outlines three pillars as the core of the proposed strategy: (i) boosting the
EU’s technological and industrial capacity and AI uptake across the economy, (ii)
preparing for socio-economic changes brought by AI, and (iii) ensuring an appropri-
ate ethical and legal framework based on the Union’s values and in line with its
Charter of Fundamental Rights.
To support the strategy’s implementation, the Commission established the High-
Level Expert Group on Artificial Intelligence232 (AI HLEG) and mandated it with
the drafting of two documents in particular: (i) AI ethics guidelines building on the
work of the European Group on Ethics in Science and New Technologies233 and of
the European Union Agency for Fundamental Rights,234 and (ii) policy and invest-
ment recommendations. At the same time, the European AI Alliance,235 an open
multi-stakeholder platform with over 2,700 members, was set up to provide broader
input for the work of the AI HLEG.

229 Declaration “Cooperation on Artificial Intelligence,” Brussels, 10 April 2018 <https://ec.europa.eu/digital-single-market/en/news/eu-member-states-sign-cooperate-artificial-intelligence>.
230 European Commission, “Communication ‘Artificial Intelligence for Europe,’” COM(2018) 237 final.
231 European Commission, “Communication ‘Coordinated Plan on Artificial Intelligence,’” COM(2018) 795 final.
232 <https://ec.europa.eu/digital-single-market/en/high-level-group-artificial-intelligence>.
233 European Group on Ethics in Science and New Technologies, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems,” Brussels, 9 March 2018 <https://ec.europa.eu/info/news/ethics-artificial-intelligence-statement-ege-released-2018-apr-24_en>.
234 The European Union Agency for Fundamental Rights (FRA), an independent EU body funded by the EU budget, started a new project, Artificial Intelligence, Big Data and Fundamental Rights, in 2018 with the aim of helping create guidelines and recommendations in these fields. Cf. <https://fra.europa.eu/en/about-fra/introducing-fra>.
235 <https://ec.europa.eu/digital-single-market/en/european-ai-alliance>.


At the end of 2018, the AI HLEG presented its first draft, “Ethics Guidelines for
Trustworthy AI.”236 After an open consultation which generated feedback from
more than 500 contributors, the AI HLEG published the final version at the
beginning of April 2019.237 The guidelines are neither an official document from
the European Commission nor legally binding. They are also not intended as a
substitute for any form of policy making or regulation, nor to deter from the creation
thereof.238
One of the main goals of the guidelines is to ensure that the development and use
of AI follows a human-centric approach, according to which AI is not seen as a
means in itself but as a tool to enhance human welfare and freedom. To this end,
the AI HLEG propagates “trustworthy AI” which is (i) lawful, complying with all
applicable laws and regulations; (ii) ethical, ensuring adherence to ethical principles
and values; and (iii) robust, both from a technical and social perspective. The
document aims to offer guidance on achieving trustworthy AI by setting out in
Chapter I the fundamental rights and ethical principles AI should comply with.
From those fundamental rights and principles, Chapter II derives seven key require-
ments (human agency and oversight; technical robustness and safety; privacy and
data governance; transparency; diversity, non-discrimination, and fairness; societal
and environmental wellbeing; and accountability), which then lead in Chapter III
to a concrete but non-exhaustive assessment list for applying the requirements,
thereby offering AI practitioners guidance.

2.8.2.4 Next Steps


Starting in June 2019, the European Commission will launch a targeted piloting phase,
focusing in particular on the assessment list which the AI HLEG has drawn up for
each of the key requirements.239 The feedback on the guidelines will be evaluated
by the end of 2019. Building on this evaluation, the AI HLEG will review and update
the guidelines at the beginning of 2020. In parallel, the AI HLEG is also working on
policy and investment recommendations on how to strengthen Europe’s competi-
tiveness in AI.
The work of the AI HLEG is accompanied by evaluations of the current EU safety
and liability framework. To this end, the Commission intends, with the help of other
236 The European Commission’s High Level Expert Group on Artificial Intelligence, Draft “Ethics Guidelines for Trustworthy AI,” Working Document for stakeholders’ consultation, Brussels, 18 December 2018 <https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai>.
237 AI HLEG, “Ethics Guidelines for Trustworthy AI (Ethics Guidelines),” Brussels, 8 April 2019 <https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419>. Moreover, the AI HLEG published the document “A Definition of AI,” cf. note 17.
238 AI HLEG, “Ethics Guidelines” (n 237) 3.
239 European Commission, “Communication ‘Building Trust in Human-Centric Artificial Intelligence,’” COM(2019) 168 final 7.


expert groups, (i) to issue a guidance document on the interpretation of the Product
Liability Directive in light of technological developments by mid-2019 and (ii) to
publish, also by mid-2019, a report on the broader implications for, potential gaps in,
and orientations for the liability and safety frameworks for AI, the IoT, and
robotics.240

2.8.3 International Organizations


Beyond the European Union, several international organizations have also taken the
initiative to reflect on the future legal framework for AI and robotics. These include
the Council of Europe, the OECD, and the United Nations.

2.8.3.1 Council of Europe


The Council of Europe has already dealt with AI systems in the past, particularly
with regard to big-data analyses and their implications for data protection law. In
addition to the Data Protection Convention 108,241 the Council of Europe adopted
several guidelines and recommendations which are important for AI systems, espe-
cially on profiling,242 big data,243 and the police sector.244
Most recently, the Convention’s Consultative Committee published a report by
Alessandro Mantelero on “Artificial Intelligence and Data Protection: Challenges
and Possible Remedies,”245 as well as Guidelines on Artificial Intelligence.246 In
addition the Council of Europe published a study on “Algorithms and Human
Rights” prepared by the Committee of Experts on Internet Intermediaries,247 and
another study on “Discrimination, artificial intelligence, and algorithmic decision
making” written by Zuiderveen Borgesius.248

240 European Commission, “Communication ‘Artificial Intelligence for Europe,’” COM(2018) 237 final 16.
241 Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, European Treaty Series – No 108.
242 Council of Europe, “The Protection of Individuals with Regard to Automatic Processing of Personal Data in the Context of Profiling,” Recommendation CM/Rec(2010)13 and explanatory memorandum <https://rm.coe.int/16807096c3>.
243 Council of Europe, “Guidelines on the Protection of Individuals with Regard to the Processing of Personal Data in a World of Big Data,” T-PD(2017)01 <https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806ebe7a>.
244 Council of Europe, “Practical Guide on the Use of Personal Data in the Police Sector,” T-PD(2018)01 <http://rm.coe.int/t-pd-201-01-practical-guide-on-the-use-of-personal-data-in-the-police-/16807927d5>.
245 Council of Europe (n 114).
246 Council of Europe, “Guidelines on Artificial Intelligence and Data Protection,” T-PD(2019)01 <https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8>.
247 Council of Europe, “Algorithms and Human Rights” (n 212).
248 Zuiderveen Borgesius (n 186).


In addition, at the end of 2018, the Council of Europe’s European Commission for the Efficiency of Justice adopted the European Ethical Charter on the Use of
Artificial Intelligence in Judicial Systems and their Environment.249 The Charter is
the first European instrument to set out five principles that should apply to the
automated processing of judicial decisions and data, based on AI techniques:
the principle of respect for fundamental rights, the principle of non-discrimination,
the principle of quality and security, the principle of transparency, and the principle
“under user control” which should ensure that users are informed actors and in
control of the choices made.

2.8.3.2 OECD
The Organisation for Economic Cooperation and Development (OECD) has been
working on AI for several years.250 In 2018, it created an expert group (AIGO) to
provide guidance in scoping principles for AI in society. The expert group’s aim is to
help governments, business, labor, and the public maximize the benefits of AI and
minimize its risks. The expert group plans to develop the first intergovernmental
policy guidelines for AI, with the goal of presenting a draft recommendation to the
next annual OECD Ministerial Council Meeting in May 2019.251
Moreover, the OECD is planning to launch in 2019 a policy observatory on AI: “a
participatory and interactive hub which would bring together the full resources of
the organization in one place, build a database of national AI strategies and identify
promising AI applications for economic and social impact.”252

2.8.3.3 United Nations


The United Nations (UN) has also been discussing the use of AI systems for some
time. Since 2014, under the aegis of the Convention on Certain Conventional
Weapons (CCW), experts have been meeting annually to discuss questions related
to lethal autonomous weapon systems (LAWS).253
Since 2017, the “AI for Good” series has been the leading UN platform for dialog
on AI. At the 2018 summit, which generated AI-related strategies and supporting

249 Council of Europe, “Ethical Charter” (n 63).
250 <www.oecd.org/going-digital/ai/oecd-initiatives-on-ai.htm>.
251 <www.oecd.org/going-digital/ai/oecd-moves-forward-on-developing-guidelines-for-artificial-intelligence.htm>.
252 <www.oecd.org/going-digital/ai/oecd-moves-forward-on-developing-guidelines-for-artificial-intelligence.htm>.
253 Cf. especially “Report of the 2017 UN Group of Governmental Experts on Lethal Autonomous Weapons Systems,” 20 November 2017. Moreover, see the European Parliament’s resolution of 12 September 2018 on autonomous weapon systems, P8_TA-PROV(2018)0341.


projects connecting AI innovators with public- and/or private-sector decision-makers, more than 30 UN agencies met to discuss their roles in AI and solidify
the UN-wide partnership. The results are published in a report which outlines the
diverse activities taking place across the UN system.254

2.8.4 Industry Initiatives and Self-Regulation at International Level


Over the last few years, several initiatives have emerged – propelled by the individual
and collective efforts of researchers, practitioners, companies, and industries –
aiming to develop ethical principles, best practices, and codes of conduct for the
development and use of AI systems and robots.
The following initiatives and organizations, among others, are particularly note-
worthy: AI Now Institute,255 Association for Computing Machinery (ACM) with its
Committee on Professional Ethics256 and the Public Policy Council,257 the Asilo-
mar Principles of the Future of Life Institute,258 the Foundation for Responsible
Robotics,259 Google’s AI Principles,260 The Institute of Electrical and Electronics
Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent
Systems,261 OpenAI,262 Partnership on AI,263 Software and Information Industry Association (SIIA),264 and The World Economic Forum’s Center for the Fourth
Industrial Revolution.265
Of the initiatives mentioned here, the principles developed by the IEEE are likely
to be the most comprehensive and influential. The IEEE is the world’s largest
technical professional body and plays an important role in setting technology
standards. The current version of the treatise “Ethically Aligned Design”266 contains
more than 100 recommendations for technologists, policymakers, and academics.

254 <www.itu.int/pub/S-GEN-UNACT-2018-1>.
255 <https://ainowinstitute.org/>.
256 <https://ethics.acm.org/2018-code-draft-2/>.
257 <https://acm.org/public-policy/usacm>.
258 <https://futureoflife.org/ai-principles/>.
259 <http://responsiblerobotics.org>.
260 <www.blog.google/technology/ai/ai-principles/>.
261 <https://ethicsinaction.ieee.org/>.
262 <https://openai.com/>.
263 <www.partnershiponai.org/>. The Partnership on AI is an industry-led, non-profit consortium set up by Google, Apple, Facebook, Amazon, IBM, and Microsoft in September 2016 to develop ethical standards for researchers in AI in cooperation with academics and specialists in policy and ethics. The consortium has grown to over 50 partner organizations.
264 SIIA, “Ethical Principles for Artificial Intelligence and Data Analytics,” 2017 <www.siia.net/LinkClick.aspx?fileticket=b46tNqJuiJA%3d&tabid=577&portalid=0&mid=17113>.
265 <www.weforum.org/center-for-the-fourth-industrial-revolution/areas-of-focus>.
266 IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, “Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems,” Version 2 <http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html>.


They represent the collective input of several hundred participants from six contin-
ents. The goal of “Ethically Aligned Design” is “to advance a public discussion
about how we can establish ethical and social implementations for intelligent and
autonomous systems and technologies, aligning them to defined values and ethical
principles that prioritize human well-being in a given cultural context.”267
Finally, it should be noted that international standard-setting organizations are
also currently in the process of developing guidance for AI systems. To this end, in
2018 the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) created a joint committee on AI, which will provide guidance to other committees that are developing AI applications.268
Similar efforts are currently being made by the three European standards insti-
tutions: CEN, CENELEC, and ETSI.

2.9 governance of algorithms: regulatory options

2.9.1 Should AI Systems and Robotics be Regulated by Ethics or Law?


While governments, international organizations, companies, and industries around
the world have begun developing ethical guidelines and standards and started
discussing the future legal framework for AI and robotics, there is currently no
consensus on what concrete measures should be taken going forward.
Today, many efforts focus on developing ethical principles. However laudable
this work may be, it should be clear that soft law as such will not suffice. Work on
ethical principles and guidelines can lay the groundwork for subsequent legislation,
providing orientation on the possible content of legal rules. However, the main
problem is that ethical guidelines and self-regulatory initiatives by industries are non-
binding.269 In addition, these principles are often too abstract to provide detailed
guidance. As Ben Wagner has pointed out, “[M]uch of the debate about ethics
seems increasingly focused on companies avoiding regulation. Unable or unwilling
to properly provide regulatory solutions, ethics is seen as the ‘easy’ or ‘soft’ option
which can help structure and give meaning to existing self-regulatory initiatives.”270
Indeed, ethical guidelines and self-regulation should not be used as an escape from
(hard) regulation.
267 <https://standards.ieee.org/industry-connections/ec/autonomous-systems.html?utm_medium=undefined&utm_source=undefined&utm_campaign=undefined&utm_content=undefined&utm_term=undefined>.
268 <https://iecetech.org/Technical-Committees/2018-03/First-International-Standards-committee-for-entire-AI-ecosystem>.
269 Saurwein, Just, and Latzer, “Governance of Algorithms: Options and Limitations” (2015) 17(6) Info 35.
270 Wagner, “Ethics as an Escape from Regulation: From Ethics-Washing to Ethics-Shopping?” in Hildebrandt (ed), Being Profiled. Cogitas Ergo Sum (Amsterdam University Press 2018) 108 et seq.


2.9.2 General Regulation versus Sector-specific Regulation


This raises the difficult question of which AI and robotics applications and which
sectors require regulation. AI and robotic systems are used in many different sectors
and for many different purposes, and often they do not threaten fundamental values.
An AI-based spam filter does not carry the same risks as an AI system used by courts
to predict the recidivism of offenders.
Even for AI systems that make decisions about humans, the problems arising from
the use of algorithms can be quite different depending on the type of algorithm
used, its purpose, the field of application, and the actors involved. Accordingly, a
one-size-fits-all approach would be inappropriate. Rather, policymakers and scholars
should determine the need for legislative action on a sector-specific basis, taking into
account the different risks and legal interests at stake.

2.9.3 Guiding Questions for Assessing the Need to Regulate


In order to gauge the need for new rules in a particular sector, we could consider,
according to Paul Nemitz,271 the following questions.
First, policymakers might ask which rules apply in a particular sector, whether these
rules apply to AI and robotics, and whether they address the challenges in a sufficient
and proportionate manner. Hence, before making a new law, we should first deter-
mine the scope of the applicable rules, their underlying principles and goals, their
ability to be applied in a specific context, and whether they are appropriate for tackling
the problems posed by intelligent machines. In this context, policymakers should also
take into account whether a particular action is legal under the existing law only
because the action is performed by a machine and not by a human being. If this is the
case, we should consider codifying the principle that an action carried out by AI is
illegal if the same action carried out by a human would be illegal.
A second aspect would be to evaluate whether regulatory principles found in
specific bodies of law should be generalized for intelligent machines. For example,
in most areas of sensitive human‒machine interaction, and in particular in the law on
pharmaceuticals, there is not only a far-reaching obligation to test products and
undergo an authorization procedure before placing the product on the market, but
also an obligation to monitor the effects of the product on humans. As Nemitz points
out, “AI may be a candidate for such procedures and obligations, both on a general
level, and with specific mutations, if developed for or applied in specific domains.”272

271 Nemitz, “Constitutional Democracy and Technology in the Age of Artificial Intelligence” (2018) Philosophical Transactions of the Royal Society A 376.
272 Nemitz (n 271) 11.


A third way to assess the risks of intelligent systems and the corresponding need for
regulation is to carry out an algorithmic impact assessment.273 In this regard, inspir-
ation can be drawn from Art 35(1) GDPR which requires a data protection impact
assessment when a practice is “likely to result in a high risk to the rights and
freedoms of natural persons,” especially when using new technologies. The intro-
duction of such an impact assessment – combined with the obligation to monitor
the risks of intelligent systems during its use – could strengthen the necessary dialog
between companies and policymakers and at the same time help to implement a
general culture of responsibility in the tech industry.274
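To make this proposal more tangible, the following minimal sketch shows how the triage stage of such an algorithmic impact assessment could look in code. The criteria, weights, threshold, and class names are hypothetical illustrations chosen for this example only; they are not taken from Art 35 GDPR or from any concrete legislative proposal discussed in this chapter.

```python
# Illustrative sketch only: criteria, weights and threshold are hypothetical,
# not taken from the GDPR or from any concrete legislative proposal.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    decides_about_humans: bool   # does the system make decisions about people?
    uses_sensitive_data: bool    # e.g. health, ethnicity, political opinions
    large_scale: bool            # affects tens of thousands of cases or more
    self_learning: bool          # model changes its behaviour after deployment
    public_sector_use: bool      # deployed by a state authority

def impact_assessment_required(profile: SystemProfile, threshold: int = 2) -> bool:
    """Return True if the profile suggests a 'high risk' triggering a full assessment."""
    score = sum([
        profile.decides_about_humans,
        profile.uses_sensitive_data,
        profile.large_scale,
        profile.self_learning,
        profile.public_sector_use,
    ])
    return score >= threshold

# Example: a recidivism-scoring tool used by courts would clearly cross the threshold.
recidivism_tool = SystemProfile(True, True, True, False, True)
print(impact_assessment_required(recidivism_tool))  # True -> full assessment and ongoing monitoring
```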

2.9.4 Level of Regulation: Global, International, National, or Regional?


Given that AI and robotic systems are technologies with a global impact, some argue
for worldwide regulation.275 According to Turchin and Denkenberger,276 such regu-
lation could take the form of a UN agency similar to the International Atomic Energy
Agency but with much tighter and swifter control mechanisms, equivalent to a world
government designed specifically for AI and robotics. The creation of such an agency
is, however, unlikely in view of the fact that the UN currently receives less support from its member states and from international politics more broadly. Of course, this does not rule out the
possibility that non-global solutions could reach the global level, especially if an
external transfer mechanism is added such as an international agreement, or if a
system based on local solutions becomes an influential global player.
For the European Union, the question of the level at which regulation should
take place also arises. Since many areas of law have already been harmonized,
current EU legislation should be re-evaluated to ensure that it is fit for intelligent
machines. Any other approach would inevitably lead to a patchwork of national
legislation, hampering the development and deployment of these systems. In this
vein, in a recent resolution the European Parliament called for an “internal market
for artificial intelligence” and called on the Commission “to evaluate whether it is
necessary to update policy and regulatory frameworks in order to build a single
European market for AI.”277

273 Reisman et al. discuss “algorithmic impact assessments” in the US; Reisman, Schultz, Crawford, and Whittaker, “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” AI Now Institute 2018 <https://ainowinstitute.org/aiareport2018.pdf>.
274 The added value of such an algorithmic impact assessment compared to the procedure under Art 35 GDPR could lie especially in the fact that important aspects beyond data protection could be analyzed.
275 Cf. for example Elon Musk, quoted by Morris, “Elon Musk: Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization,’” 2017 <http://fortune.com/2017/07/15/elon-musk-artificial-intelligence-2/>.
276 Turchin and Denkenberger (n 74).
277 European Parliament, Resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics, P8_TA-PROV(2019)0081, No 119.


2.9.5 Instruments for Modernizing the Current Legal Framework


Legislators have a wide range of instruments at their disposal for adjusting and
updating the current regulatory and institutional framework. These instruments
include the following:

• Regulation of research and development by banning certain algorithms or systems,278 by denying research funds to systems with a high risk of
misuse,279 and/or by requiring that certain normative or ethical standards
be taken into account at the development stage (legality/ethics by design,
in particular auditability by design),280 following the “privacy by design”
approach well known in data protection law.281
• Premarket approval systems requiring that certain algorithms designed for
use in certain applications must undergo a testing phase and obtain
approval from an agency before deployment,282 and/or introducing an
obligatory algorithmic impact assessment,283 following the model of the
data protection impact assessment as foreseen in Art 35(1) GDPR.
• Monitoring and oversight by regulatory bodies in order to safeguard against
undue risks and harm to the public, especially auditing mechanisms for
algorithms consisting of testing, validation and/or verification of system
performance and impact carried out by internal or external auditors.284
• Ex-post regulation by private enforcement, especially by introducing
“notice-and-take-down” procedures285 and/or by updating liability/tort
law.286

278 Cf. for example Art 22(1) GDPR (prohibition of fully automated decisions).
279 This option is being considered in particular by the UK House of Lords Select Committee on AI; cf. Thomas (n 215).
280 Cf. Dignum et al., “Ethics by Design: Necessity or Curse?” in Conitzer, Kambhampati, Koenig, Rossi, and Schnabel (eds), AIES 2018, Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, 2018 60 et seq.; Leenes and Lucivero, “Laws on Robots, Laws by Robots, Laws in Robots: Regulating Robot Behaviour by Design” (2014) 6(2) Law, Innovation and Technology 194 <https://ssrn.com/abstract=2546759>.
281 Cavoukian, Privacy by Design: Take the Challenge (Information and Privacy Commissioner of Ontario, Canada 2009).
282 Tutt, “An FDA for Algorithms” (2017) 69 Administrative Law Review 83 <https://ssrn.com/abstract=2747994>.
283 For the US cf. Reisman, Schultz, Crawford, and Whittaker, “Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability,” AI Now Institute, 2018 <https://ainowinstitute.org/aiareport2018.pdf>. For the EU cf. Martini, Chapter 3 in this book.
284 Adler, Falk, and Friedler et al., “Auditing Black-Box Models for Indirect Influence,” 2016 <http://arxiv.org/abs/1602.07043>; Diakopoulos, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures” (2015) 3(3) Digital Journalism 398; Kitchin (n 14); Sandvig, Hamilton, Karahalios, and Langbort (n 55).
285 For the Notice and Take-Down (N&TD) procedure in the USA, see Section 512(c) of the US Digital Millennium Copyright Act (DMCA). For the EU, see Art 15 E-Commerce Directive 2000/31/EC.
286 See Section 2.5.3.


• Co-regulation, i.e., regulatory cooperation between state authorities and the industry using, for example: (i) schemes allowing companies to
certify algorithms or products on the basis of voluntary algorithmic
accountability standards which could be developed by standard-setting
organizations;287 (ii) seals of quality; or (iii) the New Approach regulatory
policy, which has been applied for many years in the area of EU product
safety law,288 creating a presumption of conformity if products comply
with harmonized standards.
• Accompanying measures such as (i) creating a (EU) regulatory agency for AI and robotics;289 (ii) introducing ethical review boards to assess the potential damages and benefits to society; (iii) developing a framework for explainable AI (XAI), covering both transparency (simulatability, decomposability, algorithmic transparency) and interpretability (textual descriptions, visualizations, local explanations, examples),290 in order particularly to provide for ex-ante/ex-post explanations about the system’s functionality and ex-post explanations about specific decisions (see the sketch after this list); (iv) creating a right to know whether a person is interacting with a human being or a machine and whether they are subject to automated decision-making;291 and (v) a right to opt out or withdraw from automated decision-making.292
• Improving cooperation between the public and private sectors and aca-
demia in order to reinforce knowledge sharing and promote education
and training for designers on ethical implications, safety and fundamen-
tal rights, as well as for consumers on the use of robotics and AI.
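As an illustration of the kind of ex-post explanation of a specific decision mentioned in the list above, the following sketch applies a simple permutation-based feature attribution to an opaque scoring function. This is only one of many possible XAI techniques, and the scoring function, feature names, and figures are invented for the example.

```python
# Illustrative only: the scoring function and features are invented; permutation-based
# attribution is just one simple way to approximate a local explanation.
import random

FEATURES = ["income", "age", "years_at_address", "prior_defaults"]

def black_box_score(applicant: dict) -> float:
    """Stand-in for an opaque credit-scoring model."""
    return (0.5 * applicant["income"] / 100_000
            + 0.2 * applicant["years_at_address"] / 10
            - 0.8 * applicant["prior_defaults"]
            + 0.1 * applicant["age"] / 100)

def local_explanation(applicant: dict, background: list, trials: int = 200) -> dict:
    """Estimate how much each feature contributes to this applicant's score
    by replacing it with values drawn from a background population."""
    base = black_box_score(applicant)
    attribution = {}
    for feature in FEATURES:
        diffs = []
        for _ in range(trials):
            perturbed = dict(applicant)
            perturbed[feature] = random.choice(background)[feature]
            diffs.append(base - black_box_score(perturbed))
        attribution[feature] = sum(diffs) / trials
    return attribution

background_population = [
    {"income": random.uniform(20_000, 120_000), "age": random.randint(20, 70),
     "years_at_address": random.randint(0, 20), "prior_defaults": random.randint(0, 3)}
    for _ in range(500)
]
applicant = {"income": 35_000, "age": 29, "years_at_address": 1, "prior_defaults": 2}
print(local_explanation(applicant, background_population))
```

A positive attribution marks a feature that, compared with the background population, pushes this applicant’s score up; a negative one marks a feature that pulls it down. Output of this kind is the sort of information a duty to explain specific decisions could require.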
Which of these tools is most suitable, and which combination of instruments would constitute a “multi-level legislation” approach, cannot be answered
in general terms. The choice of regulatory instrument and the intensity of interven-
tion ultimately depend on the type of algorithmic system, its area of application
(especially whether the system is used in the public or private sector) and – last but
not least – on the degree of risk and the legal interests at stake.

287 Cf. in this respect the certification procedures envisaged in Art 42 GDPR.
288 Busch, “Towards a ‘New Approach’ in European Consumer Law: Standardisation and Co-Regulation in the Digital Single Market” (2016) 5 Journal of European Consumer and Market Law 197.
289 European Parliament, Resolution (n 21) No 16. For the USA, cf. Calo, “The Case for a Federal Robotics Commission,” 1 September 2014 <https://ssrn.com/abstract=2529151>; Brundage and Bryson, “Smart Policies for Artificial Intelligence,” August 29, 2016 <https://arxiv.org/abs/1608.08196>.
290 Cf. regarding these different (sub)categories Lipton (n 197).
291 Cf. AI HLEG, “Ethics Guidelines” (n 237) 34.
292 Ibid.


2.9.6 A Plea for an Innovation-friendly Regulation


AI and robotics are fast-developing technologies. Adopting statutes or treaties may
take years or even decades, whereas technology develops quickly, outpacing any
attempt at regulating it. This “pacing problem”293 is exacerbated by the well-known
Collingridge Dilemma,294 according to which at the early stages of a new technol-
ogy, regulation is difficult due to lack of information, while by the time a technol-
ogy’s undesirable consequences are discovered, it is so entrenched in our
daily lives and economy that any control faces resistance from users, developers, and
investors.
As AI and robotic systems already permeate our lives to a large extent, the need to
address these regulatory challenges is even more urgent.
In order to deal with these problems, many scholars have suggested specific
regulatory tools that could be considered in the creation of a future regulatory
framework for AI and robotics, including:

• Phrasing statutes and guidelines in a technology-neutral way in order to ensure equal treatment295 and sustainable rules.296
• Using multi-level legislation, especially by combining statutory rules with
guidelines that can be adopted, evaluated, and amended easily by regu-
latory bodies.297
• Enhancing flexibility through temporary regulation by using “experimen-
tal legislation.”298
• Creating special zones for empirical testing and development in the form
of a living lab,299 or “regulatory sandboxes”300 in which the regulator

293 Marchant, Allenby, and Herkert (eds), The Growing Gap between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem (Springer 2011); Hagemann et al., “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future” (2018) <https://ssrn.com/abstract=3118539> 24.
294 Collingridge, The Social Control of Technology (Pinter 1980) 11 et seq.
295 Reed, “Taking Sides on Technology Neutrality” (2007) 4(3) SCRIPTed 263.
296 Greenberg, “Rethinking Technology Neutrality” (2016) 100 Minnesota Law Review 1495.
297 Koops, “Should ICT Regulation Be Technology-Neutral?” in Koops et al. (eds), Starting Points for ICT Regulation (2006) <https://ssrn.com/abstract=918746>.
298 Fenwick, Kaal, and Vermeulen, “Regulation Tomorrow: What Happens When Technology Is Faster than the Law?” (2017) 6(3) American University Business Law Review 561 <www.aublr.org/wp-content/uploads/2018/02/aublr_6n3_text_low.pdf>; Guihot, Matthew, and Suzor, “Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence” (28 July 2017) 20 Vanderbilt Journal of Entertainment & Technology Law 385 <https://ssrn.com/abstract=3017004> 50.
299 The model for such a living lab is the Robot Tokku created by the Japanese government in the early 2000s; cf. Pagallo, “LegalAIze: Tackling the Normative Challenges of Artificial Intelligence and Robotics through the Secondary Rules of Law” in Corrales, Fenwick, and Forgó (eds), New Technology, Big Data and the Law (2017) 281 et seq., 293 et seq.
300 Cf. UK Financial Conduct Authority, “Regulatory Sandbox Lessons Learned Report,” 2017 <www.fca.org.uk/publications/research/regulatory-sandbox-lessons-learned-report>.


provides selected firms wishing to bring innovative products or services to market with an opportunity to roll out and test them within a designated
domain for a specified period, subject to monitoring and oversight by the
relevant regulator but without being forced to comply with the applic-
able set of rules and regulations.
• Creating a Governance Coordination Committee to “provide oversight,
cultivate public debate, and evaluate the ethical, legal, social, and
economic ramifications of (. . .) important new technologies.”301
• Implementing feedback processes in a dynamic regulatory framework that
facilitate the enhancement of information for regulation in order to
“enable rule makers to adapt to regulatory contingencies if and when
they arise because a feedback effect provides relevant, timely, decentral-
ized, and institution-specific information ex-ante.”302
• Applying a data-driven approach that enables dynamic regulation in
order to identify what, when, and how to regulate.303
All these innovative regulatory techniques (and more) should be considered to deal
with the manifold problems of AI and robotic systems. Since the risks of these
systems are highly context specific, there is no one-size-fits-all solution. Instead,
there is a need for multi-level legislation and a mix of different regulatory tools.
Attention should therefore shift to a mixed approach of abstract and concrete rules
that combines different governance measures mutually enabling and complement-
ing each other.

2.10 outlook
The uncertainties outlined above call for further risk and technology assessment to develop a
better understanding of AI systems and robotics, as well as their social implications,
with the aim of strengthening the foundations for evidence-based governance.
Collaboration with computer science and engineering is necessary in order to assess
the potential drawbacks and benefits, identify and explore possible developments,
and evaluate whether ethical and legal standards can be integrated into autonomous
systems (ethics/legality by design). Likewise, expertise from economics, political
science, sociology, and philosophy is essential to evaluate more thoroughly how
AI technologies affect our society. Since technical innovations know no boundaries,

301 Marchant and Wallach, “Coordinating Technology Governance” (2015) XXXI(4) Issues in Science and Technology <https://issues.org/coordinating-technology-governance/>.
302 Kaal and Vermeulen, “How to Regulate Disruptive Innovation – From Facts to Data,” 11 July 2016 <https://ssrn.com/abstract=2808044> 25.
303 Kaal and Vermeulen (n 302); Roe and Potts, “Detecting New Industry Emergence Using Government Data: A New Analytic Approach to Regional Innovation Policy” (2016) 18 Innovation 373.


an international perspective is required. In this respect, the initiatives being undertaken at European and international levels are important and laudable.
Regulators should consider not only existing laws and their underlying principles
and goals, but also the regulatory bodies involved in the various sectors, different
codes of conducts and international standards, ethical guidelines, and much more.
This multiplicity of perspectives and approaches requires an oversight and coordin-
ation of various principles, rules, codes, and interests.
In this spirit, policymakers should avoid premature, innovation-inhibiting regula-
tion – but rather promote research and development projects that are committed to
fundamental human values. Whether current developments require regulation, or whether such regulation would be premature for the time being, is indeed an open
question. There is no one-size-fits-all solution. Instead, the need for new rules must
be evaluated for each sector and for every application separately, considering the
respective risks and legal interests involved.
We may think not only of “soft law” guidelines and ethical codes by industry bodies and of updated sets of rules using traditional methods of regulation such as
research and development oversight, product licensing, auditing mechanisms, co-
regulation, and/or ex-post public or private enforcement, but also of new, more fluid
regulatory tools such as (data-driven) experimental legislation or regulatory
sandboxes.
What is necessary is a multi-level approach, combining different governance
measures that mutually enable and complement each other, in order to find the
right balance between keeping up with the pace of change and protecting people
from the harm posed by AI and robotic systems, while at the same time creating a
regulatory environment that avoids overregulation but allows for innovation and
further development. Above all, much more research and debate is required to
determine which rules, if any, are needed.

3

Regulating Algorithms

How to Demystify the Alchemy of Code?

Mario Martini*

introduction
Despite their profound and growing influence on our lives, algorithms remain a
partial “black box.” Keeping the risks that arise from rule-based and learning systems
in check is a challenging task for both society and the legal system. This chapter
examines existing and adaptable legal solutions and complements them with further
proposals. It designs a regulatory model in four steps along the time axis: preventive
regulation instruments; accompanying risk management; ex post facto protection;
and an algorithmic responsibility code. Together, these steps form a legislative
blueprint to further regulate artificial intelligence applications.

3.1 algorithms as key to a digital cognitive world: tomorrow’s leviathan?
Software applications1 constitute key elements of our society and economy in the
digital age. Their underlying algorithms2 act as prioritization machines and oracles

* The essay is part of the project “Algorithm Regulation in the Internet of Things” externally funded by the German Federal Ministry of Justice and Consumer Protection. It summarizes the central findings of the project – as well as the paper by Martini, “Algorithmen als Herausforderung für die Rechtsordnung” (2017) 72 Juristenzeitung (JZ) 1017 and the book Martini, Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz (Springer 2019) on which this article is based. The author thanks especially Michael Kolain, Anna Ludin, Jan Mysegades, and Cornelius Wiesner for their very helpful participation. The article was finished in June 2019. Internet sources referred to are also from this date.
1 The term “software application” is herein understood as a code-based overall system which has an external relationship to users.
2 “Algorithms” are step-by-step instructions for solving a (mathematical) problem. As such, they are not a phenomenon of the digital age. In this chapter references to “algorithms” mean computational algorithms in the sense of a formalized procedure that can be transformed into a programming language in finite time. See as well e.g. Güting and Dieker, Datenstrukturen und Algorithmen (4th edn, Springer 2018) 33; Zweig and Krafft, “Fairness und Qualität algorithmischer Entscheidungen” in Kar, Thapa, and Parycek (eds), (Un)Berechenbar? (ÖFIT 2018) 204, 207.


of knowledge shaping our reconstruction of reality. They determine what we buy, what we read and how we learn by providing personalized offerings.3 A few words
are enough to play our favourite song, find the best restaurant and fulfil our worldly
desires – as in the case of six-year-old Brook from Texas, who asked
Amazon’s voice assistant Alexa for a dollhouse and cookies. Shortly thereafter, two
kilograms of biscuits and a house for her doll were delivered to her parents’ doorstep.
It was not only Brook who was pleased about this – even the local television station
CW6 News covered the story. The news anchor quoted the little girl’s words: “Alexa,
order me a dollhouse.” Unintentionally, Alexa’s sisters in the TV viewers’ living
rooms took the quoted instruction literally: the reporter had unexpectedly ordered
the same dollhouses for his viewers as little Brook had done before.4 Voice assistants
do not distinguish who is giving them an order – their owner, a little girl or a news
anchor’s voice transmitted through TV.
Prospectively, similar devices will not only be able to understand what we order,
but also to analyse how we instruct them. In such an algorithm-driven world, all
kinds of data are of crucial importance and value – and can thus be used for multiple
purposes, far beyond processing an order. Apart from obvious information like
purchase history, even information such as tone of voice or keyboard typing patterns5
can be of interest.
Based on these technical tools, data analysis through algorithms predicts future
behavior as if looking into a crystal ball. Meanwhile, the oracles of this new “smart”
world make us and our lives more and more transparent. Algorithms allow com-
panies to process previous purchase behavior, place of residence and other data they
might even retrieve without the customer’s awareness.
At the same time, algorithmic classifications increasingly affect existential spheres
of life. They decide the conditions under which we receive a loan and whether we
are invited for a job interview. Euphoria quickly meets dystopia. Fear rises of an alchemical imperialism that degrades the individual into a ward of data domination. It alternates with an awestruck amazement at the technical blessings that
make life easier, such as autonomous cleaning robots or new possibilities for image
and speech recognition. As a result, we willingly indulge in a digital trade: we sell

3 Regarding this development, see Coglianese and Lehr, “Regulating by Robot” (2017) 105 Georgetown Law Journal 1147, 1149 ff.; Tutt, “An FDA for Algorithms” (2017) 69 Administrative Law Review 83, 85 ff.
4 Leininger, “How to Keep Alexa from Buying a Dollhouse without Your OK” (CNN online, 6 January 2017) <http://edition.cnn.com/2017/01/06/tech/alexa-dollhouses-san-diego-irpt-trnd/index.html>.
5 Technical analysts’ methods can draw conclusions about consumerism from it, see e.g. Epp, Lippold, and Mandryk, “Identifying Emotional States using Keystroke Dynamics” in Tan (ed), Proceedings of the 29th Annual ACM CHI Conference on Human Factors in Computing Systems (2011) 715.


the soul of our personal data to eat from the tree of knowledge of good and evil in
digital paradise.
Algorithms can not only recognize the risk of depression or Parkinson’s disease in an individual’s voice, but also compose music and copy a Rembrandt painting true to the
original. Their predictions have even made their way into law enforcement.6 The
US state of Wisconsin, for example, uses a system called COMPAS to calculate an
accused’s likelihood of recidivism. Judges incorporate the algorithmic evaluation
into their appraisal.7 The influence of algorithms (fortunately) does not yet reach
this far in European courts. However, German tax authorities are already operating
an automated decision-making system: starting in 2018, tax returns in general are no
longer being processed by a human tax official, but by a computer system
(Section 155(4) Sentence 1 of the German Fiscal Code (Abgabenordnung – AO)).
The software divides each tax declaration into risk groups.8 It works like a traffic light
system. Red signifies that a tax official should take a closer look. Green indicates “no
in-depth examination necessary”: the tax assessment reaches the citizen fully auto-
mated without human oversight.
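The traffic-light logic can be pictured with a short sketch. The indicators and thresholds below are freely invented for illustration; the actual risk-selection parameters used by the tax administration are, as noted in the next section, not publicly accessible.

```python
# Purely illustrative: the indicators and thresholds are invented; the real
# risk-selection parameters used by the tax authorities are not disclosed.
def classify_tax_return(declared_income: float, deductions: float,
                        deviation_from_last_year: float) -> str:
    """Return 'green' (fully automated assessment) or 'red' (human review)."""
    risk_points = 0
    if deductions > 0.4 * declared_income:        # unusually high deductions
        risk_points += 1
    if abs(deviation_from_last_year) > 0.5:       # large jump compared to last year
        risk_points += 1
    return "red" if risk_points >= 1 else "green"

print(classify_tax_return(declared_income=48_000, deductions=3_000,
                          deviation_from_last_year=0.05))   # green: no human involved
print(classify_tax_return(declared_income=48_000, deductions=25_000,
                          deviation_from_last_year=0.8))    # red: flagged for an official
```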

3.2 out of control? risk potentials of ai as prediction machines

3.2.1 Opacity
Algorithms have (at least from a lay person’s perspective) a lot in common with the
mysticism of Kabbalah. For most (advanced) software applications, the user cannot
see how they operate; underlying algorithms and their decision-making criteria
remain a magic formula. The criteria used by the algorithm of German credit
agency SCHUFA to assess the creditworthiness of customers are not known, nor
are the parameters applied in the risk selection of the German automated tax assessment
accessible.9 If the software used by tax authorities flagged all tax declarations of those
who had already filed an objection, of those in which an advice-intensive tax

6 See for several examples Rieland, “Artificial Intelligence Is Now Used to Predict Crime. But Is It Biased?” (Smithsonian.com, 5 March 2018) <www.smithsonianmag.com/innovation/artificial-intelligence-is-now-used-predict-crime-is-it-biased-180968337>.
7 For the suggested discriminatory tendency of this particular software (especially based on race), see e.g. Angwin and others, “Machine Bias” (ProPublica, 23 May 2016) <www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing>. For the rebuttal by the software developers see Dieterich, Mendoza, and Brennan, COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity (Northpointe Inc 2016).
8 See Section 88(5) AO; on that topic e.g. Martini and Nink, “Wenn Maschinen entscheiden – vollautomatisierte Verwaltungsverfahren und der Persönlichkeitsschutz” (2017) 36 NVwZ-Extra 10/2017 1, 8.
9 See Section 88(5) Sentence 4 AO.


consultant had participated, or those of a certain minority, for further inspection, the
individual probably would not notice.
However, tax officials and social security officers have developed audit routines
based on their intuition in the analogue past as well – and not always without
prejudice. No one can read a human officer’s free mind, which makes decisions
according to values that are beyond external control. Their decisions cannot be
technically reconstructed on the basis of stored data in order to detect discrimination
or other errors. But there is a difference: software decides not dozens or hundreds of
cases, but tens of thousands or more. The decision of an algorithm unfolds over an
enormous range.10
From a technical perspective, the supervision of algorithms becomes more and
more like squaring a circle: machine-learning systems do not typically follow a fixed
scheme.11 Their learning process requires a permanent dialogue between data and
models mutually affecting each other, in which the algorithms evolve to be faster
and more precise due to “learning” by experience.12 Neural networks, as one kind of
adaptive system, work by emulating the functions of the human brain. Their output
depends on countless weighted individual decisions of millions of network nodes.
They decide praeter propter (more or less) autonomously how to react to new situations and how to
weigh different criteria.13 As a consequence, the results of a self-learning algorithmic
decision cannot easily be reproduced. Once in the world, even developers of
machine-learning systems do not necessarily understand exactly how or why the
algorithmic oracle acts in the way that it does.14 Errors in such an arcane system
cannot be prevented, traced or checked in the traditional way.
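The following toy network, written from scratch on invented data, illustrates the point in miniature: the output is the product of many weighted node activations, and two training runs that differ only in their random starting weights can end up behaving alike while storing quite different internal parameters. Real systems multiply this effect across millions of weights.

```python
# Toy illustration with invented data: a two-layer network whose output depends on
# many weighted node decisions, and whose learned weights differ from run to run.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)          # XOR toy problem

def train(seed: int, hidden: int = 8, epochs: int = 5000, lr: float = 0.5):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(size=(hidden, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)                          # hidden node activations
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)               # backpropagate squared error
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return W1, out

W1_a, out_a = train(seed=1)
W1_b, out_b = train(seed=2)
print(np.round(out_a.ravel(), 2), np.round(out_b.ravel(), 2))  # predictions of the two runs
print(np.allclose(W1_a, W1_b))                                 # False: different internal weights
```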

10 There is another legally relevant difference. Unlike humans, whose decision programs cannot be programmed ex ante, software is necessarily dependent on such values. Society must give algorithms the frame for value decisions before they are applied in real life. Computer-generated decisions antedate the time of decision determination. Thus, the algorithm forces early decisions on what a legitimate evaluation should look like. The software system then implements these guidelines with relentless consistency.
11 To take an example, in 2016, Google announced that its translation service – based on several layers of neural networks – could now translate between two languages although it was never taught to do so. Even Google could not explain how this worked in detail, providing only theories and potential interpretations about what happened; Schuster, Johnson, and Thorat, “Zero-Shot Translation with Google’s Multilingual Neural Machine Translation System” (Google AI Blog, 22 November 2016) <https://research.googleblog.com/2016/11/zero-shot-translation-with-googles.html>.
12 For a summary and several references see Tutt (n 3) 94 ff.; Surden, “Machine Learning and the Law” (2014) 89 Washington Law Review 87, 89 ff.; for more in-depth basics see the prologue of Flach, Machine Learning (CUP 2012) 1 ff.
13 For an older, but still quite instructive introduction see Chapter 1 of Haykin, Neural Networks (2nd edn, Pearson Education 1999) 23 ff.; see also Goodfellow, Bengio, and Courville, Deep Learning (MIT Press 2016) 164 f.
14 There are various reasons for this “technical opacity,” e.g. the connection with other systems, libraries or data bases, cf. Kroll and others, “Accountable Algorithms” (2017) 165 University of Pennsylvania Law Review 633, 648; the sheer extent of variables and code, cf. Edwards and Veale, “Slave to the Algorithm?” (2017) 16 Duke Law & Technology Review 18, 59, 61; or the special properties of the respective systems, such as neural networks, cf. Yosinski and others, “Understanding Neural Networks through Deep Visualization” [2015] ArXiV 1, 9. For an overview see Burrell, “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms” (2016) 3 Big Data & Society 1.


If it remains unclear how and based on which data an algorithm makes its
decision, this lack of transparency touches on what Art 8 of the Charter of Fundamental Rights
of the European Union terms “protection of personal data.”15 This fundamental
right consists in protecting the decision about who is authorized to collect and use
one’s data.16 Lack of access to the software applications’ mode of operation can
impede the legal protection of these rights: if a person does not know and under-
stand the data base, the sequence of actions and the weighing of the decision
criteria, they are not able to decide for themselves who can draw conclusions and
for what purpose, or to check the legality of data processing that relates to them.
Chilling effects may arise when it is not clear whether or not suspected surveillance
of one’s behavior is actually taking place, which can thus curtail the fundamental
right to privacy.17

3.2.2 Unlawful Discrimination as Ethical and Legal Challenge


Despite this lack of transparency, the mathematical and logical formulae underlying an
algorithm promise objectivity. Algorithms know no envy, no antipathy or

15 In German constitutional law this fundamental right is called “the right to informational self-determination.” The historical starting point of the right to informational self-determination in Germany was the census of 1983. Thousands of people protested against it. “Don’t count us, count your days,” the protesters chanted. On the night before the classic Dortmund v Hamburg football match, activists even painted an appeal in big letters on the stadium turf: “Boycott and sabotage the census!” The text could not be removed in time for the game. With the approval of the Federal President, the text was promptly supplemented to read: “The Federal President: DO NOT boycott and sabotage the census.” Germans are traditionally sceptical about bundling data in one hand. They are “world champions” in data protection. The experience of totalitarian dictatorships echoes particularly strongly in their collective consciousness. However, the amount of data collected by modern big-data collectors nowadays is far greater than what the Federal Republic and the former East German secret service, the Stasi, could ever have collected together. Over time, though, the German population seems to have become more and more relaxed about sharing and disclosing their personal data. In everyday use, they increasingly value the benefits provided by modern digitalization techniques more highly than the protection of their privacy.
16 However (at least under German law), this fundamental right is not an absolute right. The right to informational self-determination is subject to prior rights of third parties or public interest pursuant to Art 2(1) German Basic Law.
17 The term “chilling effect” has its origin in the case law of the US Supreme Court. In Europe it first found its way into the jurisprudence of the European Court of Human Rights and was introduced into German law by the German Federal Constitutional Court (Bundesverfassungsgericht – BVerfG) as Einschüchterungseffekte (intimidation effects) after the court recognized those effects in several legal situations, directly transforming the ECHR’s adjudication of discrimination, see BVerfG, 3.3.2004, BVerfGE 109, 279 (354 f.); BVerfG 2.3.2010, BVerfGE 125, 260 (335).


fluctuations in the blood sugar level before and after meals18 or any other circum-
stances beyond the essential facts to be decided on. Depending on their coding,
algorithms can make more consistent and unprejudiced decisions than the average
human. This raises the question: Are algorithms possibly even better decision-
makers, even if they are not transparent, e.g., in the equal allocation of places at a
public university?
Although the decisions of algorithms follow logical patterns, they are the product
of human programming and its preconditions and are therefore not free from bias.
They encode the values and assumptions of their creators. Hence, algorithms are
only as meticulous – not to say impartial – as the people who program them. Hidden
prejudices can creep into algorithms unnoticed not only through programming, but
also due to an inadequately selected data base.19
This effect was demonstrated involuntarily by the experimental beauty contest
Beauty Artificial Intelligence. It was the first beauty contest carried out exclusively
on the basis of the decision-making power of machine-learning algorithms. Some 6,000
people from 100 countries were judged by artificial intelligence. The result was
surprising in one respect: only one out of 44 winners was a person with dark skin.20
The algorithm turned out to at least partly gauge beauty by race.
Rationally it should not come as a surprise that the system paid no attention to
diversity. Its machine-learning algorithm had been fed with images of white beaut-
ies. Microsoft’s self-learning chat bot Tay’s test run on Twitter performed even
worse: the bot mutated into a racist and sexist Holocaust denier after just
several hours of interaction with not always benevolent internet users.21
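The mechanism behind both failures can be reproduced in a few lines. In the synthetic example below, a deliberately crude scorer is “trained” on data in which one group is heavily over-represented; it then treats typical members of the under-represented group as outliers. The data, groups, and scoring rule are invented solely to make the effect visible.

```python
# Synthetic illustration: a crude scorer trained almost exclusively on examples
# from group A rates typical members of group B as outliers.
import numpy as np

rng = np.random.default_rng(0)

# Two groups whose (invented) feature distributions differ slightly.
group_a = rng.normal(loc=0.0, scale=1.0, size=(950, 5))   # 95% of the training data
group_b = rng.normal(loc=2.0, scale=1.0, size=(50, 5))    # only 5% of the training data
training_data = np.vstack([group_a, group_b])

# "Training": the model simply memorizes the average training example.
centroid = training_data.mean(axis=0)

def score(sample: np.ndarray) -> float:
    """Higher score = closer to what the model has learned to treat as 'normal'."""
    return -float(np.linalg.norm(sample - centroid))

typical_a = np.zeros(5)          # a typical member of group A
typical_b = np.full(5, 2.0)      # a typical member of group B
print(score(typical_a))          # close to the centroid -> high score
print(score(typical_b))          # far from the centroid -> markedly lower score
```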
Complex algorithms, whether deterministic or with learning capacities, ultim-
ately base their decisions on stochastic inferences that only determine correlations.
Thus, by their very nature, algorithms do not offer explanations of cause and

18 See Danziger, Levav, and Avnaim-Pesso, “Extraneous Factors in Judicial Decisions” (2011) 108 PNAS 6889, 6889 f. According to the findings of the investigation, hungry judges tend to soften their sentences after the meal break. However, the results are not empirically validated. The study suffered from methodological shortcomings: in particular, it failed to take into account the special features of judicial termination practice in the courts examined, which may have distorted the study results. See Glöckner, “The Irrational Hungry Judge Effect Revisited: Simulations Reveal That the Magnitude of the Effect Is Overestimated” (2016) 11 Judgment and Decision Making 601, 602 ff.; Weinshall-Margel and Shapard, “Overlooked Factors in the Analysis of Parole Decisions” (2011) 108 PNAS E833.
19 Cf. Hacker, “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against Algorithmic Discrimination under EU Law” (2018) 55 Common Market Law Review 1143, 1147; Martini, Blackbox Algorithmus – Grundfragen einer Regulierung Künstlicher Intelligenz (Springer 2019) 47 ff., 239 ff.
20 Levin, “A Beauty Contest Was Judged by AI and the Robots Didn’t Like Dark Skin” The Guardian online (8 August 2016) <www.theguardian.com/technology/2016/sep/08/artificial-intelligence-beauty-contest-doesnt-like-black-people>.
21 Gibbs, “Microsoft’s Racist Chatbot Returns with Drug-Smoking Twitter Meltdown” The Guardian online (30 March 2016) <https://www.theguardian.com/technology/2016/mar/30/microsoft-racist-sexist-chatbot-twitter-drugs>.


effect.22 A profiling algorithm, rather, bases its assumptions about individuals on correlated group probabilities and decides how to weigh each criterion more and more
autonomously. It assigns individuals to defined subgroups based on shared charac-
teristics (e.g., tennis fans, frequent buyers or people with a certain political belief ),
treats them as part of these groups and adapts its differentiation criteria accordingly.
The more social and economic power decision-making algorithms are given, the
more their stochastically operating classifications will risk discrimination. In a digital
society, it will become increasingly common for individuals to experience unequal
treatment as a result of algorithmic differentiation – not because they fulfil certain
characteristics, but because an algorithm assigns these characteristics to them on the
basis of a group classification.
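A minimal sketch, with entirely invented groups and probabilities, shows how this group-based logic plays out: the individual inherits a risk estimate from the groups an algorithm assigns them to, not from their own conduct.

```python
# Invented example: the individual inherits a default-risk estimate from the
# groups an algorithm has assigned them to, not from their own payment history.
GROUP_DEFAULT_RATES = {          # hypothetical correlations learned from past data
    "tennis_fan": 0.04,
    "frequent_buyer": 0.03,
    "lives_in_unprivileged_area": 0.12,
}

def estimated_default_risk(assigned_groups: list) -> float:
    """Average the default rates of all groups the person is assigned to."""
    rates = [GROUP_DEFAULT_RATES[g] for g in assigned_groups]
    return sum(rates) / len(rates)

# Two applicants with identical personal payment histories but different group labels:
print(estimated_default_risk(["tennis_fan", "frequent_buyer"]))               # 0.035
print(estimated_default_risk(["tennis_fan", "lives_in_unprivileged_area"]))   # 0.08
```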
Problems arise as soon as an algorithm discriminates against groups or people on
statistical grounds that we perceive to be unethical.23 For example, algorithms might
decide to step up administrative audits of welfare beneficiaries because of their
migrant background if they identify a statistical correlation between certain places of
origin and a raised incidence of welfare fraud or insolvency. This can even accentu-
ate existing structural disparities.24 Place of residence – unprivileged or upscale
neighbourhood – can be decisive for a loan application or the dynamic price of
goods. Even something as simple as somebody’s name – Hans, Mehmet or Igor –
can lead an algorithm to sort the person in a discriminatory manner.25 Apple
computer users can receive higher price quotes for online products and services
than the users of other computer products based on the assumption that Apple users
generally have higher incomes.26
The criminal prognosis software COMPAS27 is also suspected of producing a racial divide. It discriminates indirectly between black and white accused (although it is at least supposed to prevent ethnic biases).28 In fact, the gap between predicted and actual recidivism rates concerning black people is twice as wide as the one

22 Mayer-Schönberger and Cukier, Big Data (John Murray 2013) 248; Martini, “Big Data als Herausforderung für den Persönlichkeitsschutz und das Datenschutzrecht” (2014) 129 Deutsches Verwaltungsblatt (DVBl) 1481, 1485.
23 See also European Parliament resolution of 14 March 2017, margin nos 19 ff., 31; Mittelstadt and others, “The Ethics of Algorithms: Mapping the Debate” (2016) Big Data & Society 1, 5 ff.
24 O’Neil, Weapons of Math Destruction (Crown Random House 2016) 7: “downward spiral.”
25 For the risks and chances online personality tests pose for job applicants, Lischka and Klingel, Wenn Maschinen Menschen bewerten (Bertelsmann Stiftung 2017) 22 ff.; see also Weber and Dwoskin, “Are Workplace Personality Tests Fair?” The Wall Street Journal (29 September 2014) <www.wsj.com/articles/are-workplace-personality-tests-fair-1412044257>; O’Neil, “How Algorithms Rule Our Working Lives” The Guardian (1 September 2016) <www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives> as a summary of the more exhaustive O’Neil (n 24).
26 See Mattioli, “On Orbitz, Mac Users Steered to Pricier Hotels” The Wall Street Journal (23 August 2012) <www.wsj.com/articles/SB10001424052702304458604577488822667325882>.
27 See Section 3.1.
28 Angwin and others (n 7) describe racial discrimination by the criminal prognosis software COMPAS. The manufacturer of the software has issued a statement claiming methodical errors in this assessment: Dieterich, Mendoza, and Brennan (n 7) 2 f.; on the whole topic, see also Martini (n 19) 55 ff.


concerning whites. The predictive power of the software also proved weak in
relation to a randomly selected group of people. Its hit rate was 65 per cent; the rate
of the human comparison group was 63 per cent. In the end, COMPAS turned out
to be not much better than a coin toss.29
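The kind of disparity at issue can be made tangible with a small, entirely hypothetical calculation. The confusion-matrix counts below are invented (they are neither the ProPublica nor the Northpointe figures); they merely show how two groups can be scored with identical overall accuracy while one group bears roughly twice the false positive rate.

```python
# Hypothetical confusion-matrix counts (not real COMPAS data): each entry is
# (predicted high risk?, actually reoffended?) -> number of defendants per group.
counts = {
    "group_1": {("high", "yes"): 300, ("high", "no"): 200,
                ("low", "yes"): 100, ("low", "no"): 400},
    "group_2": {("high", "yes"): 250, ("high", "no"): 90,
                ("low", "yes"): 210, ("low", "no"): 450},
}

for group, c in counts.items():
    total = sum(c.values())
    accuracy = (c[("high", "yes")] + c[("low", "no")]) / total
    # false positive rate: labelled high risk among those who did NOT reoffend
    fpr = c[("high", "no")] / (c[("high", "no")] + c[("low", "no")])
    # false negative rate: labelled low risk among those who DID reoffend
    fnr = c[("low", "yes")] / (c[("low", "yes")] + c[("high", "yes")])
    print(f"{group}: accuracy={accuracy:.2f}, false positive rate={fpr:.2f}, "
          f"false negative rate={fnr:.2f}")
```

With these invented figures both groups are scored with 70 per cent accuracy, yet one group carries a false positive rate of roughly one in three while the other carries roughly one in six: equal accuracy can conceal very unequal error burdens.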
An algorithm does not ‒ and cannot ‒ know when it crosses the line of unlawful
discrimination, exemplified by Art 21 of the Charter of Fundamental Rights of the
European Union or Protocol 12 to the European Convention on Human Rights, as
well as Art 3 of the German Constitution (Grundgesetz - GG).30 Algorithms do not
recognize that there are ethical limits to evaluating personality profiles, nor are they
aware of the thin red line between agreeable and unlawful judgement on ethical
parameters. Algorithms have no ethical compass. They do not aim to do justice
to the individual; they lack empathy and social skills. An algorithmic process
can thus not generally be considered better or worse than a human decision maker.
Rather, an algorithm must be programmed with the conditions under which it can exploit
its advantages and avoid unethical decisions.

3.2.3 Monopolization of Market Power and Knowledge: Influencing the Formation of Political Opinion
Algorithmic decision-making changes the logic of economic market power: com-
panies like Amazon are predicting the needs of their customers increasingly well
using their data arsenal. The more accurate their forecast, the more they can target a
potential buyer by adapting their product offerings to individual needs. Consumers
no longer have to spend hours searching for a particular product or service. They
may possibly even expect to receive personalized discounts if algorithms are aware of
a particular need. In fact, companies might even offer to sell merchandise or services
to potential customers before the customers have thought about them themselves.
The growing database enables an exclusive evaluation potential that optimizes
the way customers are approached and thus entrenches the company’s competitive position.
With a growing number of users, combined with network and platform effects, this development
can give rise to overwhelming market power. In the best-case scenario, a rapidly
increasing spiral of market power acts as an essential building block for economic
growth and prosperity. In the worst-case scenario, big-data players monopolize the
market and undermine the foundations of the disruptive power of market mechanisms.
Under big-data conditions the barriers for new market entrants increase with

errors in this assessment: Dieterich, Mendoza, and Brennan (n 7) 2 f.; on the whole topic, see
also Martini (n 19) 55 ff.
29
Dressel and Farid, “The Accuracy, Fairness, and Limits of Predicting Recidivism” (2018) 4
Science Advances 1.
30
This constitutional framework bans discrimination (by state entities) based on sensitive traits
such as race, religion, gender, disability, origin or language. Private entities are subject to the
additional anti-discrimination provisions that exist in most countries.


the amount of data to be processed to solve a task. Attacking the market position of
big-data providers, and thus exploiting the disruptive efficiency of the “market as a
discovery process,”31 becomes more difficult for new entrants. The risk that market power
becomes concentrated in the hands of a few suppliers is real. It can empower a
“surveillance capitalism”32 in which data-collecting companies control processes
and monitor people in a way that undermines market mechanisms.33
The use of algorithms not only changes the design of markets and the way
companies offer their products. It also creates new societal risks for democratic
societies. With the revelations about how Facebook and Twitter might have been
used to influence the electorate in previous US presidential elections, the public
realized that algorithmic decision-making can affect political equality of opportunity
and the democratic chances of participation.34 Social bots applied the rich supply of
big data to place targeted campaign messages far more precisely and expansively
than a human being could. In March 2018 it became known to the public that
Cambridge Analytica had used private information from the Facebook profiles of
more than 50 million users without their permission or knowledge to influence
voters.35 The data were mainly gathered by algorithms, and it was algorithms that
enabled and contributed to their misuse. The discussion on how society should deal
with these new possibilities of automated IT systems is still in its infancy. Establishing
fair competition and counteracting dominant market power is and remains one
of the key tasks the res publica will have to accomplish.

3.3 regulatory steps and proposals for further legislative measures
Algorithms are like scalpels in a digital surgical kit: they bring blessed progress to
mankind, but in the wrong hands they can cause a lot of damage.

31
Von Hayek, Der Wettbewerb als Entdeckungsverfahren (Kieler Inst. für Weltwirtschaft 1968) 1.
32
Zuboff, The Age of Surveillance Capitalism (PublicAffairs 2019) 128 ff.
33
See also Ebers, “Beeinflussung und Manipulation von Kunden durch Behavioral Microtarget-
ing” (2018) 21 MultiMedia & Recht (MMR) 423, 424 ff.; Rainie and Anderson, Code-Dependent:
Pros and Cons of the Algorithm Age (Pew Research Center 2017) 2 with many examples of
unexpected and adverse algorithmic effects and outcomes.
34
Facebook was suspected of deliberately suppressing news from the conservative spectrum and
manipulating the “trending topics” in favor of other political tendencies; Herrman and Isaac,
“Conservatives Accuse Facebook of Political Bias” The New York Times (9 May 2016) <https://
www.nytimes.com/2016/05/10/technology/conservatives-accuse-facebook-of-political-bias.html?
mcubz=1>. In response to these allegations, Facebook published (at least partially) its internal
selection guidelines.
35
See e.g. Ebers (n 33); Rosenberg, Confessore, and Cadwalladr, “How Trump Consultants
Exploited the Facebook Data of Millions” The New York Times (17 March 2018) <www
.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html>; see also The
Guardian online series regarding the scandal: <www.theguardian.com/news/series/cambridge-
analytica-files>.


The Deepfake36 phenomenon illustrates this in an exemplary way. The technique
gives everyone the power to imitate a person’s voice and facial expressions with a
few clicks. A short training phase with original voices and video material is sufficient
to master the tool; the software learns about the specifics of a voice or face,
reconstructs it and can place it in any other audio-visual context. Using the tool,
it becomes easy to forge political or personal statements of other people. Anyone
can now put a video on the web in which US president Donald Trump gives a
speech in front of the White House, quoting Hamlet or announcing the imminent
drop of a nuclear bomb on North Korea. What has previously required extensive
reconstruction by professionals is now open to any amateur who owns a home
computer. For potential victims of bullying this is bad news. Since February 2017,
numerous famous actresses have already involuntarily become part of fake porn
movies. We will have to get used to a world in which it becomes even more difficult
to discern what can still be considered authentic or “true.”

The evil, however, stems not from the technology, but from those who abuse it.
Even in the analogue world society has not banned knives, but punishes the
malignant act of stabbing another person with them. The legislator should thus
not suffocate new technologies, but should ensure that they are applied in a manner
compatible with the public interest, and should prevent abuse.

3.3.1 Collective Data Protection As Part of Consumer Protection in the Digital World
In a world steered by complex machine-learning algorithms, an average consumer is
typically not able to handle the dangers and mitigate the risks associated with
software applications on their own. Even alertness and a high level of knowledge
are not always sufficient to protect individual rights. Moreover, in many cases there
is a gap between a person’s individual appreciation of their own privacy and their
willingness to thoroughly read a declaration of consent. The typical end user hopes
that suppliers will process their data in accordance with the values of the legal system
and in doing so observe the requirements that give rise to a reasonable expectation
of respect for their interest in adequate protection of privacy. The user thus
follows an individually rational, intuitive benefit calculation: they weigh up the
available time resources, the research and preparation effort associated with privacy
protection instruments, and the individual benefit of digital services. This “privacy
paradox”37 is one of the most important reasons why the current data protection
framework encounters difficulties.

36
Oberoi, “Exploring DeepFakes” (goberoi, 5 March 2018) <https://goberoi.com/exploring-deep
fakes-20c9947c22d9>.
37
See e.g. Athey, Catalini, and Tucker, The Digital Privacy Paradox: Small Money, Small Costs,
Small Talk (NBER 2018) 1 ff.; Dienlin and Trepte, “Is the Privacy Paradox a Relic of the Past?”
(2015) 45 European Journal of Social Psychology 285, 286 f. with a sociological problem analysis.


Recognizing this starting position, a new understanding of the collective protection of
privacy as a common basic value is necessary, as well as a more stringent legal system
to analyse and balance the asymmetry between the interests at stake and the structures to
enforce them.38 The legal system needs to gradually adopt a new approach to
collective data protection.
A collective regulating regime includes a procedural approach to the effective
governance of the “black-box algorithm.” Since most people are unable or unwilling
to understand and to control technical systems, the legal system should aim to
implement basic social values directly in the design of technology and to develop
effective mechanisms for their compliance with the law – as well as an effective
examination of privacy policies for surprising or abusive clauses (similar to the law of
general terms and conditions). Such a regulation regime should also include a state-
initiated consumers’ association for privacy protection, such as the British ‘Which?’,
the Dutch ‘Consumentenbond’ or the German ‘Stiftung Warentest’. These kinds of
associations could subject telemedia services to a comparison of the privacy settings
of their services (e.g., with regard to the principle of data minimization, transfers of
personal data to third countries, the depth of data analysis and procedures applied).
Like “traffic lights” that evaluate energy efficiency or the environmental impact of a
product on a scale from green through yellow to red, a data protection “traffic light”
could provide consumers with information on the depth of processing and possible
privacy hazards (insofar as this can be validly mapped despite the complexity of the
products) in comparison to other providers.
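Purely by way of illustration, such a data protection “traffic light” could be thought of as a simple mapping from assessed criteria to a colour, as in the following Python sketch; the criteria, scores and thresholds are invented assumptions, not part of any existing scheme.

    # Minimal sketch (not from the chapter): mapping assessed privacy criteria
    # onto a three-colour "traffic light". All values below are invented.

    CRITERIA = {
        "data_minimization":       2,   # 0 = poor, 1 = acceptable, 2 = good
        "third_country_transfers": 0,
        "depth_of_analysis":       1,
    }

    def traffic_light(scores: dict) -> str:
        total = sum(scores.values())
        maximum = 2 * len(scores)
        ratio = total / maximum
        if ratio >= 0.75:
            return "green"
        if ratio >= 0.4:
            return "yellow"
        return "red"

    print(traffic_light(CRITERIA))  # -> "yellow" for this hypothetical service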
Achieving the goal of finding an approach to collective data protection requires a
social discourse (and agreement) on how we want to live in a society shaped by
algorithms, and what regulation in such a world could and should look like. Among
the important questions to be answered are: How do we open the “black box”
without jeopardizing the legitimate trade secrets involved? How can we prevent illegal
discrimination by artificial intelligence and ensure that its learning process is in line
with legal and ethical requirements?39

However, scientifically founded empirical proof of the phenomenon, which corresponds to everyday
observation, is still missing.
38
For an instructive overview of collective protection literature see Helm, “Group Privacy in
Times of Big Data. A Literature Review” (2016) 2 Digital Culture & Society 137, 139 ff. The new
European provisions of Art 7 General Data Protection Regulation (GDPR), especially para 4,
are steps in the right direction. See also Martini, “Algorithmen als Herausforderung für die
Rechtsordnung” (2017) 72 JZ 1017, 1019.
39
Among the broad scope of literature on possible algorithmic danger and potential tasks of
algorithm regulation, see e.g. Busch, Algorithmic Accountability, ABIDA Project Report,
March 2018, <http://www.abida.de/sites/default/files/ABIDA%20Gutachten%20Algorithmic%
20Accountability.pdf>; Citron and Pasquale, “The Scored Society: Due Process for Automated
Predictions” (2014) 89 Washington Law Review 1; Edwards and Veale (n 14); Ernst, “Algor-
ithmische Entscheidungsfindung und personenbezogene Daten” (2017) 72 JZ 1026; Hoffmann-
Riem, “Verhaltenssteuerung durch Algorithmen” (2017) 142 AöR 1, 24; Kroll, Accountable
Algorithms (2015); Kroll and others (n 14) 636 with many examples; Lanier, Who Owns the


The legal system has a diverse arsenal of conceivable measures at its disposal. They
can be applied at various points on the time axis: preventively (Section 3.3.2), in parallel
to the use of the software applications (Section 3.3.3), ex post in the shape of damages and
legal protection (Section 3.3.4), and by accompanying self-regulation (Section 3.3.5).
However, legislators should not try to crack the regulatory walnut with a sledge-
hammer. Not every software application and not every machine-learning algorithm
poses a threat to fundamental rights justifying regulation. All regulatory efforts
should thus start by trying to determine the right scope of legal obligations: Legisla-
tors should first try to find a general and/or sector-specific list of classification criteria
that form a threshold for particular means of regulation. Only certain types of
algorithms, especially those which are sensitive to fundamental rights, should be
captured by legislation.
The lynchpin for different levels of obligation in a regulatory class system should
always be the sensitivity to fundamental rights and the degree of risk in the individ-
ual case. Particularly important factors in this context are: the type of data processed
by the system (public-sphere data, social-sphere data, special categories of personal
data within the meaning of Art 9 and 10 GDPR); the number of affected persons;
and the extent to which alternative products are available to the data subject. The
class of sensitive products includes in particular: applications that process health
data or can cause physical harm; applications that have a special impact on the
formation of opinion and democratic order (e.g. social bots);40 scoring and profiling
software that is involved in decisions about participation in important aspects of life;
new technologies that enable a particular degree of evaluation intensity (especially
facial recognition, voice and sentiment analyses, smart home applications); human‒machine
collaboration (e.g. exoskeletons41 and cobots); systematic monitoring of work activities42
and of publicly accessible areas; and algorithm-based decision-making procedures in the
judiciary and administration.
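As a purely hypothetical sketch of how such a graduated, risk-based threshold might be operationalized, the following Python fragment scores a few of the factors named above; the factor encoding, weights and class boundaries are invented for illustration only.

    # Hypothetical factors echoing the text: sensitivity of the data processed,
    # number of affected persons, availability of alternatives. All values and
    # thresholds are invented for illustration.

    def regulatory_class(data_sensitivity: int, affected_persons: int,
                         alternatives_available: bool) -> str:
        """Return a rough regulatory class: no obligations, transparency, or ex-ante audit."""
        score = 0
        score += data_sensitivity                      # 0 = public-sphere ... 3 = Art 9/10 data
        score += 1 if affected_persons > 10_000 else 0
        score += 1 if not alternatives_available else 0

        if score >= 4:
            return "ex-ante audit"
        if score >= 2:
            return "transparency obligations"
        return "no specific obligations"

    # A scoring tool using special categories of data, many users, no real alternative:
    print(regulatory_class(data_sensitivity=3, affected_persons=50_000,
                           alternatives_available=False))   # -> "ex-ante audit"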

Future? (Simon & Schuster 2013) 204; O’Neil (n 24); Tufekci, “Algorithmic Harms Beyond
Facebook and Google: Emergent Challenges of Computational Agency” (2015) 13 Colorado
Technology Law Journal 203; Pasquale, The Black Box Society (Harvard UP 2015); Salamatian,
“From Big Data to Banality of Evil” (Heinrich-Böll-Stiftung, 12 April 2014) <https://soundcloud
.com/boellstiftung/vortrag-from-big-data-to-banality-of-evil>; Wischmeyer, Regulierung intelli-
genter Systeme (2018) 143 AöR 1.
40
See e.g. Libertus, “Rechtliche Aspekte des Einsatzes von Social Bots de lege lata und de lege
ferenda” (2018) 62 Zeitschrift für Urheber- und Medienrecht (ZUM) 20; Steinbach, “Social Bots
im Wahlkampf” (2017) 50 Zeitschrift für Rechtspolitik (ZRP) 101.
41
See Martini and Botta, “Iron Man am Arbeitsplatz? – Exoskelette zwischen Effizienzstreben,
Daten- und Gesundheitsschutz” (2018) 35 Neue Zeitschrift für Arbeitsrecht (NZA) 625.
42
See e.g. Brecht, Steinbrück, and Wagner, “Der Arbeitnehmer 4.0?” (2018) 6 Privacy in
Germany (PinG) 10; Byers and Wenzel, “Videoüberwachung am Arbeitsplatz nach dem neuen
Datenschutzrecht” (2017) 72 Betriebs Berater (BB) 2036; Pärli, “Schutz der Privatsphäre am
Arbeitsplatz in digitalen Zeiten – eine menschenrechtliche Herausforderung” (2015) 8 Euro-
päische Zeitschrift für Arbeitsrecht (EuZA) 48.


3.3.2 Preventive Regulatory Instruments

3.3.2.1 Protection against “a Computer Says No” Dystopia: Fully Automated Individual Decision-Making (Art 22 GDPR)
The legislature of the European Union has taken one important step toward the
regulation of algorithms. The GDPR gives data subjects a right of defence against
particularly sensitive automated decision-making procedures (Art 22(1) GDPR).43
The new provision is an expression of the European understanding of human
dignity. Its basic idea is that machines should not degrade humans to mere objects of
their algorithmic decision-making.44
However, the normative radius of Art 22 GDPR is more limited than its heading
“[. . .] including profiling” suggests. First, it establishes numerous exceptions,45 but
more importantly, it only covers decisions being made without any (substantial)
human influence – so any human decision made only with algorithmic support is
not covered.46 Therefore, the pure result of a scoring algorithm as such (lacking a
decision) regularly does not fall within the scope of the regulation that addresses the
perils of algorithmic decision-making. For example, automated assessment of a
person’s creditworthiness is typically not a decision but merely the basis for one.
However, the output of the automated assessment typically determines the credit-
granting decision made by bank employees. Such a scoring result is only subject to
the general rules of the GDPR which govern the processing of personal data
(Recital 72 Sentence 1 GDPR).

43
See also Edwards and Veale (n 14) 44 ff.; Wachter, Mittelstadt, and Floridi, “Why a Right to
Explanation of Automated Decision-Making Does Not Exist in the General Data Protection
Regulation” (2017) 7 International Data Privacy Law 76, 95 f.
44
European Group on Ethics in Science and New Technologies, Statement on Artificial Intelli-
gence, Robotics and “Autonomous” Systems (European Union 2018) 9; Martini, “Art 22
DSGVO” in Paal and Pauly (eds) Datenschutz-Grundverordnung Bundesdatenschutzgesetz:
DS-GVO BDSG (2nd edn, CH Beck 2018) margin nos 1, 8 ff.
45
These exceptions apply in particular to the entering into and performance of a contract as well
as cases where explicit consent is given (Art 22(2) a and c GDPR). Nonetheless, the data
controller has to implement suitable measures to protect the data subject’s rights, freedoms
and legitimate interests including, as a minimum guarantee, the right to express one’s point of
view (Art 22(3) GDPR) e.g., in order to explain complicated contexts or cases of hardship. This
also includes the right of the person concerned to demand a reassessment of the content by the
person responsible. Furthermore, processing must be transparent and fair. The processor must
therefore use suitable mathematical procedures (Recital 71 sub-para 6). The result of the
calculation must be based on correct and up-to-date data: error management with verification
mechanisms is needed to check the data basis and its accuracy for integrity and authenticity.
The person responsible must also use technical and organizational measures to counter risks of
discrimination according to, e.g. gender, genetic or health status (see Recital 71 sub-para
1 sentence 4, sub-para 2 sentence). Compare also Section 3.3.2.2.
46
Art 22(1) refers to “a decision based solely on automated processing.” See also Edwards and
Veale (n 14) 44 ff; Martini (n 38) 1029; Wachter, Mittelstadt, and Floridi (n 43) 96.


As a result, Art 22 GDPR does not fully address the regulatory task for the
majority of algorithmic decision-making procedures. Rather, a regulatory regime
covering the wider field of digital decision support is needed.

3.3.2.2 Transparency Obligations


An important component of any design for the regulation of algorithms is transparency:
only if a person is able to recognize and then prove a legal infringement caused by a
software application can they effectively avert threats to their rights.

(a) ex-ante information In order to enable legal protection, it is advisable that
consumers can clearly identify the use of algorithms – at least in areas with particular
sensitivity to personality rights – and that software applications communicate this
information to the potential subjects of algorithmic decisions.

De lege lata: information about “the existence of automated decision-making” and the “logic involved” – Art 13(2) f and Art 14(2) g GDPR
The GDPR establishes an obligation on transparency (Art 13(2) f, Art 14(2) g, 15(1) h,
Art 5(1) a GDPR) similar to an information leaflet for patients under pharmaceutical
legislation.47 The controller has to provide the data subject not only with the infor-
mation that the decision is made automatically, but also “meaningful information
about the logic involved, as well as the significance and the envisaged consequences
of such processing for the data subject” (Art 13(2) f ). The GDPR does not require the
disclosure of the algorithm’s source code. The transparency obligation is rather
limited to an explanation that clarifies how the algorithm reaches its decisions.
In most cases, general disclosure of an algorithm’s source code would neither be
required nor helpful: just because something is openly accessible does not yet make
it comprehensible for the general public. Even experts who know the code often fail
to predict the exact results of the software – not to mention the complexity of
software applications with millions of lines of code. Furthermore, with certain forms
of machine learning, for example with neural networks, the source code does not
even reveal the dynamic decision patterns. In principle, it is a reasonable comprom-
ise between transparency and secrecy to only establish an information obligation
about the basic decision-making structure underlying the algorithms, thereby
making the models and assumptions that guide the decision comprehensible.
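The following Python sketch illustrates, under invented assumptions, the kind of disclosure meant here: a plain-language summary of the models and assumptions guiding a hypothetical linear scoring system, generated without handing over any source code or exact parameters.

    # Illustrative sketch only: generating "meaningful information about the logic
    # involved" for a hypothetical linear credit-scoring model. What the controller
    # would disclose is the generated summary, not the code or exact weights.

    MODEL = {  # invented factors and weights
        "payment_history":  0.5,
        "current_debt":    -0.4,
        "length_of_credit": 0.1,
    }

    def logic_summary(model: dict) -> str:
        ranked = sorted(model.items(), key=lambda kv: abs(kv[1]), reverse=True)
        lines = ["The score is calculated from the following factors, in order of importance:"]
        for name, weight in ranked:
            direction = "improves" if weight > 0 else "lowers"
            lines.append(f" - {name.replace('_', ' ')} ({direction} the score)")
        return "\n".join(lines)

    print(logic_summary(MODEL))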
But when a state authority uses algorithm-based systems for public applications,
the duty to inform can go beyond the information duties of private controllers. An
obligation to reveal the algorithm’s source code may result from national freedom of
information acts providing a right to access information held by state authorities. For
47
For the fact that its scope is limited to processes of automatic decision making as in Art 22
GDPR, see Section 3.3.2.2.


example, in the Spanish autonomous community of Catalonia as well as in France,
applicants succeeded with an information request aimed at disclosing algorithms
used by the government. They claimed, under the right to information, disclosure of
the algorithm responsible for the admission of students to certain study pro-
grammes.48 The Catalonian authority explicitly stated that even the mathematical
source code of an algorithm (and not just an explanation of how it works) counts as
public information subject to the information access law.
But transparency obligations may conflict with the protection of software develop-
ers’ trade secrets, which are also subject to constitutional protection.49 The producer
of the recidivism assessment software COMPAS, for example, invokes its trade and
business secrets against the obligation to disclose the algorithm under the Freedom of
Information Act in the USA. Under German law, namely Section 6 German Federal
Freedom of Information Act (Informationsfreiheitsgesetz - IFG), the state is entitled to
refuse disclosure of information on this legal ground.50 The company that develops
the software has de lege lata an absolute veto position, even if its algorithms are used
by a state authority. Only if the company consents is there a right to information.

De lege ferenda
(1) Extended disclosure requirements for software applications used for administrative
purposes
When a state authority uses algorithm-based systems for public applications, the
denial of information as provided in Section 6 German Federal Freedom of Infor-
mation Act51 is not appropriate. Public law should include the obligation to disclose
the source code of the software – despite trade secrets – if that is necessary to prove
the correctness of government decisions in individual cases (e.g., on allocation of
places in public kindergartens or in colleges and universities). It is advisable to
modify the software developer’s intellectual property protection from an absolute into
a relative position: source code and other details must be disclosed if there is a
predominant and legally protected public interest52 in the requested information.
48
Generalitat de Catalunya/Comissió de Garantia del Dret d’Accés a la Informació Pública
(GAIP), Resolución de 21 de septiembre de 2016, de estimación de las Reclamaciones 123/2016 y
124/2016 (acumuladas); association Droits des lycéens, press release, 10 December 2017 <www
.droitsdeslyceens.com/medias/files/dl-cp-17-12-13-tirage-sort-audience-ce.pdf>; first decision on
the disclosure of algorithms used by state authorities see Tribunal Administratif de Paris,
decision of 10 March 2016, <www.legalis.net/jurisprudences/tribunal-administratif-de-paris-
5eme-sec-2eme-ch-jugement-du-10-mars-2016/>.
49
See Art 14(1) German Basic Law (Grundgesetz); Art 17 Charter of Fundamental Rights of the
European Union.
50
Exceptions for the protection of special public interests are incorporated in almost all freedom
of information acts around the world. See e.g. Sec 3 IFG.
51
See Section 3.3.2.2.
52
There are cases in which it is justifiable not to disclose at least some critical parts of a software
application, because an obligation to disclose could undermine its task fulfillment (e.g.
requirements of official secrecy) if the integrity of its technical systems were at stake. For


When awarding procurement contracts for software, the state should thus stipu-
late in the contract terms that the software must comply with the transparency
requirements (as well as fairness, user control, accountability and responsibility)
that society places on the state’s software.53 The state can use its power as a buyer on
the market to influence the supply of ethically desirable artificial intelligence in the
interest of its values.

(2) Extension of the information duties beyond procedures within the meaning of Art
22 GDPR to software applications which can have a sensitive effect on the rights of
their users
The obligation to provide information about “the existence of automated decision-
making, [. . .] and [. . .] meaningful information about the logic involved, as well as
the significance and the envisaged consequences of such processing for the data
subject,” which Art 13(2) f, Art 14(2) g and 15(1) h GDPR establish, has no particularly
broad scope of application. It is – like Art 22 GDPR itself – limited to cases of
“automated decision-making” – that is, decisions which are not substantially influ-
enced by human behavior.
The parenthesis “at least in these cases” (“zumindest in diesen Fällen,” “au moins
en pareils cas”) of Art 13(2) f, 14(2) g and 15(1) h GDPR indicates at first glance that
the explanatory obligations may also extend to other forms of processing. The
wording suggests that, in exceptional cases, the data controller may be obliged to
inform data subjects about the decision-making logic of their algorithms beyond the
scope of Art 22 GDPR. But, since the duty to provide information is subject to an
administrative fine (Art 83(5) b GDPR), it is not justifiable under the rule of law to
leave unanswered the question of which cases, beyond Art 22 GDPR, the duty to
provide information extends to. The obligation to provide information on the logic
and scope of systems therefore only extends – at least with sufficient legal clarity and
under the legal sanction regime of the GDPR – to fully automated decisions which
are based on profiling or similarly use personal information for automated decision-
making. Under the GDPR now in force, the information duties of the data control-
ler do not extend to decisions outside the scope of Art 22(1) GDPR.54

example, public details about the crucial limits of the algorithm-based tax return audit
mechanism could undermine the automated system: anyone who knew the limit above which
the audit system for the deduction of donations applies could easily circumvent it. For this
reason, a legal provision stipulates that details of the risk management systems applied may not
be published (Section 88(5) s 4 of the German Fiscal Code (Abgabenordnung - AO)). This
seems adequate. The same is equally true for software applications used by security agencies,
e.g., for purposes of anti-terrorism. The denial of access to the source code and details of
algorithm-based public applications should be limited to such cases. See also Martini and Nink
(n 8) 11.
53
The same applies in cases where the state uses its budget resources to finance investments in
the development of software systems.
54
See also Recital 63 Sentence 3 GDPR and more in detail Martini (n 19) 182 ff.


The legislator should extend the scope of the labelling requirement to decisions that
are supported by algorithms and that tend to constrain fundamental rights. Thus, in
order to achieve transparency, the legislator should establish – beyond current law –
clear information duties for algorithm-based services that are not entirely based on
automated individual decision-making as in Art 22 GDPR. This concerns on the one
hand the duty to give “meaningful information about the logic involved” (at least in
areas that are sensitive to fundamental rights). On the other hand, a transparency
obligation should form part of the regulatory concept, allowing the user to identify
whether risky machine-learning algorithms are used, or if a decision is sensitive to their
fundamental rights for other reasons.55 This applies in particular to profiling procedures
as well as chatbots, social bots and dynamic pricing or blocking software that differenti-
ates by features closely related to protected traits such as religion, race or gender.56 The
labelling obligation should require visual, easy-to-understand symbols that customers
actually see and comprehend – otherwise, the information obligation will end up
simply adding another paragraph to the largely unread privacy policies used today.

(3) Obligation to provide information on the “suitable measures to safeguard” taken or to be taken (Art 22(3) GDPR)
For protective measures as provided in Art 22(3) GDPR, the GDPR does not yet
expressly impose an obligation to provide information. Art 13(2) f, Art 14(2) g and
Art 15(1) h GDPR explicitly refer only to Art 22(1) and (4) – but not to the “appropri-
ate measures to safeguard the rights and freedoms as well as the legitimate interests
of the data subject” of Art 22(3) GDPR.
Interpreted broadly, a duty to provide information on the precautions taken can
be understood as part of the “appropriate measures” set out in Art 22(3) GDPR.
However, the protective measure taken as such must be systematically distinguished
from the information on protective measures.
The legislator should close the existing gap by supplementing Art 13(2) f, Art 14
(2) g and Art 15(1) h GDPR with such an obligation to provide information. Since the
GDPR claims to regulate data protection law uniformly throughout the European
Union, the Member States cannot extend the information and disclosure obligations
of Art 13‒15 GDPR on their own. They have to feed their regulatory ideas into the
European Union’s legislative process.

(b) Ex-post information: obligation to justify algorithm-based decisions


The consumer protection effect of ex-ante information is limited. In many situations
in which the legislator has established information and labelling obligations,
55
See also Tene and Polonetsky, “Big Data for All: Privacy and User Control in the Age of
Analytics” (2013) 11 Northwestern Journal of Technology and Intellectual Property 239, 271. For
systems that are relevant to the development of public opinion and that can be used to determine
personal traits: European Group on Ethics in Science and New Technologies (n 44) 16.
56
On the question of whether price differentiation is already subject to the information obligation
under Art 14 (2) (f ) GDPR, see Martini (n 19) 180 with n 75.


consumers do not perceive the well-intentioned information or cannot fully make sense
of it. Sometimes the transparency obligations even lead to information over-
load, which can make the decision more difficult instead of easier.57 This is because
individuals typically make their everyday decisions within a narrow framework of
time restrictions and competing needs. The findings of empirical studies thus raise
doubts as to whether information obligations actually achieve the normative object-
ive of informed consumer decisions.58 The legislator should thus not have excessively
high expectations of the effects of ex-ante information requirements.59
While the persons concerned often do not read the information provided before the
processing operation, they are in many cases all the more interested in why they
have received a negative decision. Up to now, the duty to shed light on the internal
context and the reasons for a decision has traditionally been limited to public
authorities (see especially Section 39(1) of the German Administrative Procedure
Act). Individuals and companies can keep their motives for a legal transaction to
themselves as an expression of their private autonomy. In particular, the data subject
cannot demand a personalized explanation of a specific decision based on their
rights to information in Art 13‒15 GDPR ‒ those provisions only cover a general
explanation of the functionality of the system.
For the limited scope of automated decision-making as in Art 22 GDPR, Recital
71(1) Sentence 4 implies that a data subject should have the right to “obtain an
explanation of the decision reached.” Surprisingly, this aspect finds no distinct
equivalent in Art 22(3) GDPR. The GDPR obviously did not want to include this
duty generally for automated individual decision-making. The legislator limits the
duty to state reasons to cases where the person concerned has made use of the
opportunity to express their own point of view (even in cases of Art 22 GDPR).
Not only in the case of fully automated software applications, but also in the case
of software integrated into a (human) decision-making process – for example as an
assistance system – can the obligation to justify decisions give an appropriate
impulse toward transparency. A justification allows the data subject to look as far into
the “black box” as is necessary and sufficient to understand the basis of the decision

57
See the experiments of Baron, Beattie, and Hershey, “Heuristics and Biases in Diagnostic
Reasoning” (1988) 42 Organizational Behavior and Human Decision Processes 88, 100, 102 ff.,
108 ff.; see as well Ben-Shahar and Schneider, More than You Wanted to Know (Princeton
University Press 2014) 55 ff.; Baron, Thinking and Deciding (4th edn, reprinted, CUP 2009) 177;
Vaughan, The Thinking Effect (Nicholas Brealey Publishing 2013) 29.
58
See e.g. Kettner, Thorun, and Vetter, Wege zur besseren Informiertheit (conpolicy 2018) 31 ff.
On freedom of information cases against Anglo-American states, compare Roberts, “Dashed
Expectations: Governmental Adaptation to Transparency Rules” in Hood and Heald (eds),
Transparency: The Key to Better Governance? (OUP 2006) 108, 109 ff.; Ben-Shahar and Bar-
Gill, “Regulatory Techniques in Consumer Protection: A Critique of European Consumer
Contract Law” (2013) 50 Common Market Law Review 109, 117 ff.
59
See in detail Edwards and Veale (n 14) 42 f., who doubt the reach of legal transparency
obligations; Martini (n 19) 188 f.


and, if need be, to challenge it.60 The right to demand an explanation can also serve
to discover and prevent discriminatory tendencies that would otherwise be undetect-
able – and thus build trust in digital technologies.
Ideally, the software application should implement a tool in its algorithm-based
process which at least substantiates a decision rejecting (in whole or in part) users’
requests. The clarification should explain in an intelligible way why the unfavour-
able decision determined by the algorithm was taken.61 It should add information on
comparison groups, parameters and principles guiding the decision-making process.
Reflecting on an algorithmic process and implementing such explanatory information in
the software will challenge programmers.62 Especially in the case of complex
machine-learning methods such as neural networks, their creators can often only
say that a decision has been made, but cannot explain the reasons why that
conclusion has been reached. However, technical challenges are no excuse, as long
as the solution is not impossible in a normative sense.63 Research efforts toward a
(more) “explainable artificial intelligence”64 are already underway.65
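A minimal Python sketch of such a justification tool, under invented assumptions (the scoring model, feature names, weights and threshold are placeholders and do not describe any real system): it names the main negative factor behind a rejection and a counterfactual change that would have altered the outcome.

    # Illustrative sketch only: substantiating an unfavourable algorithmic decision
    # with the main factor and a counterfactual. All values are hypothetical.

    WEIGHTS = {"income": 0.4, "years_employed": 0.3, "existing_debt": -0.5}
    THRESHOLD = 1.0  # score needed for approval (assumed)

    def score(applicant: dict) -> float:
        return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

    def explain_rejection(applicant: dict) -> str:
        s = score(applicant)
        if s >= THRESHOLD:
            return "Application approved."
        # rank factors by how strongly they pushed the score down
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        worst = min(contributions, key=contributions.get)
        # simple counterfactual: how much would this factor need to change?
        needed = (THRESHOLD - s) / WEIGHTS[worst]
        return (
            f"Application rejected (score {s:.2f}, required {THRESHOLD}). "
            f"Main negative factor: '{worst}'. In this toy model, a change of about "
            f"{needed:+.2f} in '{worst}' would have altered the outcome."
        )

    print(explain_rejection({"income": 1.0, "years_employed": 0.5, "existing_debt": 1.2}))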
Nevertheless, the obligation to state reasons should not exist unconditionally and
without limits. Legislators intending to introduce an obligation to individually
explain algorithmic decisions to a subject must act with care in order not to interfere
disproportionately with the data controller’s private autonomy, professional freedom
and fundamental rights. In the analogue world, the individual cannot demand to
look into the neural (brain) structures of their contractual partner to obtain a scientific
explanation of every decision.
However, compared to humans, algorithm-based decision-making processes make
different, sometimes surprising mistakes. They operate on a quantitative basis of
similarities in the data that allows them to draw stochastic conclusions: Algorithms
recognize statistical correlations, but do not evaluate causal relations in the real
world. They have no worldview and lack the common sense capable of grasping a

60
See also Mittelstadt and others (n 23) 7; Tutt (n 3) 110.
61
See the normative requirement of Art 12(1)(1) GDPR.
62
Knight, “The Dark Secret at the Heart of AI” (MIT Technology Review online, 11 April 2017)
<www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/>. The US DARPA
(Defense Advanced Research Projects Agency) has started a research project titled “Explainable
Artificial Intelligence”: <www.darpa.mil/program/explainable-artificial-intelligence>.
63
An explanation of decisions implemented in the software could even be helpful for program-
mers to detect mistakes and law infringements early on; Edwards and Veale (n 14) 54 with
further evidence.
64
See also Wachter, Mittelstadt and Russell, “Counterfactual Explanations without Opening the
Black Box: Automated Decisions and the GDPR” (2018) 31 Harvard Journal of Law & Technol-
ogy 841, 841 ff., especially 860 ff.
65
For some early technical approaches, see Binder and others, “Layer-wise Relevance Propaga-
tion for Neural Networks with Local Renormalization Layers” in Wilson, Kim, and Herlands
(eds), Proceedings of NIPS 2016 Workshop on Interpretable Machine Learning for Complex
Systems (2016) 1 ff; on using methods of visualization Yosinski and others (n 14). So far, however,
it has only been possible to get systems to explain specific aspects of their decisions.


decision’s context in social reality. Therefore, it is difficult for a computer to
distinguish, for example, a cat from a dog. In case of doubt, it might use the blue
background of an image as a distinguishing pattern if there is a correlation between
the characteristic “cat” and the background of its training data – mistakes that even a
five-year-old human would not make. Similarly, when recruiting new staff, a
computer will probably suggest a lower performance rating for women, as it deduces
a lower performance level from the lower average wages women receive in its
training material.
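The cat-and-dog example can be made tangible with a toy Python sketch (invented data, purely illustrative): a naive learner that picks whichever single-feature rule best fits its training data will prefer the spurious background colour over the genuinely relevant feature.

    training = [
        # (background, ear_shape, label) – invented toy data
        ("blue", "pointed", "cat"),
        ("blue", "pointed", "cat"),
        ("blue", "floppy",  "cat"),   # a cat photographed with folded ears
        ("grey", "pointed", "dog"),   # a dog with pointed ears
        ("grey", "floppy",  "dog"),
        ("grey", "floppy",  "dog"),
    ]

    def rule_accuracy(feature_index, value_meaning_cat):
        # how often the rule "feature == value means cat" matches the true label
        hits = sum(
            (example[feature_index] == value_meaning_cat) == (example[2] == "cat")
            for example in training
        )
        return hits / len(training)

    scores = {
        "background == 'blue'":   rule_accuracy(0, "blue"),     # 1.0  – spurious but perfect here
        "ear_shape == 'pointed'": rule_accuracy(1, "pointed"),  # 0.67 – the genuine feature looks worse
    }
    best_rule = max(scores, key=scores.get)
    print(scores)
    print("naive learner picks:", best_rule)  # the background colour, not the animal's features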
A duty to justify a decision is appropriate where the computer-typical risk of false
conclusions based on merely fictitious causality materializes and fundamental rights demand
special protection. The obligation to justify a decision should thus be limited to
situations with computer-specific risks and/or where a potential risk for fundamental
rights induces a special need to disclose specific reasons of a single decision – for
example in decisions that concern important life chances (such as renting a house or
finding a job), decisions that considerably curtail a data subject’s legal position or
situations in which a person is granted a benefit on significantly worse terms than
other applicants (e.g., a bank loan interest rate or a purchase price).
The scope of the duty to justify the decision should, as part of a graduated
regulatory approach, correspond to the extent of the risk posed to the right to equal
treatment and other fundamental rights. An explanation is thus regularly limited to
making the decision-guiding principles comprehensible ‒ for example, explaining
the main reasons and factors that lead to a specific decision that can infringe the
legal position.66 The requirement to substantiate the decision finds its limit in
particular where trade secrets (especially the source code) would be disclosed to the public
or where other overriding interests of third parties relevant to their fundamental
rights stand in the way of a substantiation (e.g., where an explanation would also
disclose information about indirectly affected persons, such as personal data of the
reference group).

(c) Right of access by the data subject (Art 15 GDPR): right to information about
profiling results?
Since algorithm-based decisions are only as good as the dataset on which they were
trained, it is particularly important for those affected by the decision to gain insight
into the data basis. This is often the only way for individuals to ensure that no
incorrect decision basis distorting the correctness of the result is used in the process.
It is therefore consistent and meritorious that the GDPR guarantees individuals the
right to obtain information about the personal data that a controller processes
(Art 15(1) GDPR).67

66
See already Martini (n 38) 1020; Wachter, Mittelstadt, and Russell (n 64) 863 ff.
67
Nor can a right to obtain information on the results of a profiling be derived from the
minimum protection rights of Art 22(3) GDPR. See Martini (n 19) 190, 202.


This right does not necessarily include giving individuals the right to view the
profiles created by a system as a result of the processing. Profiles that are created as a
result of processing are indeed personal data. In principle, however, neither the right
to information (Art 15(1) GDPR) nor the right to rectification (Art 16(1) GDPR)
extend to them. Rather, the GDPR is based on the idea that the right to information
does not capture the forum internum of the data controller where his internal
process of forming his opinion, preparing decisions and his business secrets are
concerned (Art 15(4), Recital 63 sentence 5 GDPR). A right to know the processing
results and evaluations that someone has obtained by processing personal data
is, viewed from the other side of the coin, an obligation to disclose one’s opinion about
others. It interferes profoundly with the (negative) freedom of opinion
guaranteed by fundamental rights and strikes at the heart of the privately autonomous
conception of our legal system. The GDPR did not take this step.
A subjective right to gain an insight into the formation of a profile should be
granted in legal relationships characterized by an asymmetry of information and
power. These typically include, for example, the vertical relationship between the
citizen and the government, performance profiles recorded by an education service
provider, or employment relationships68 (provided that no overriding confidentiality
or security interests are in conflict).69

(d) Extension of impact assessment and publication


Anyone wishing to place an algorithm-based application on the market that involves
a “high risk” to the rights and freedoms of the data subject must, in principle, prepare
an impact assessment (Art 35(1)(1) GDPR). De lege lata it is limited to the
“protection of personal data.” It does not directly address discrimination risks and
other legally relevant consequences of algorithm-based procedures. The European
Union legislator should extend this (narrow) focus of the impact assessment to all
risky consequences for the rights and interests of data subjects.70
The GDPR also does not yet formulate any obligation to make the impact
assessment and its individual steps accessible to the public – neither in Art 35 nor
in Art 12 et seq. GDPR. It is appropriate to impose such a normative obligation on
operators of machine-learning algorithms above a certain risk threshold.71 Publication
would then reveal to each user in a comprehensible manner whether and to what extent the
implementation of algorithm-based applications endangers legal interests and

68
In a recent decision a German labor court has rightly stated that the right to access the
collected data on an employee’s professional conduct and performance can be restricted based
on Art 15(4) GDPR if the employer obtained the information from another employee who
might likely suffer disadvantages if the data (and thus his identity) were revealed, LAG Baden-
Württemberg Urt v 20.12.2018, ECLI:DE:LAGBW:2018:1220.17SA11.18.0A, para 182 f.
69
Martini (n 19) 202 ff.
70
Martini (n 19) 209 ff.
71
See Martini (n 38) 1022; Martini (n 19) 202 ff.


interests protected by fundamental rights. Those affected may then – supported by
the media and the public – consciously decide how willing they are to accept the
risky consequences associated with the use of such a service.
A comprehensive impact assessment with publication obligation should not apply
to every software application. Rather, it should be limited to those cases in which the
use of a software application involves atypical and lasting risks for fundamental rights
(e.g. risks arising from data transfers to third parties or the use of deep analytical
instruments such as sentiment analyses). The impact assessment should provide an
overview of the risks a consumer might face, but not extend to trade and business
secrets. If the public sector uses automated administrative procedures, it should (in
contrast to private providers) generally be required to publish an impact assessment
which is not limited to the protection of personal data.72

(e) Special transparency obligations for algorithm-based news aggregators


News aggregators73 like Google News or the newsfeeds of social networks profoundly
influence opinion-forming processes. Given the importance for democratic systems of
a free and pluralistic public discourse, these services should be subject to stronger
transparency requirements than other algorithm-based services – at least if they reach a
critical audience size. Since media transparency is an important cornerstone of the forma-
tion of (informed) public opinion in a free and democratic society, a legal transparency
obligation should not only require news aggregators to fully explain their technical
processes for news selection and personalized prioritization, but also reveal possible
conflicts of interest, such as economic relations connected to the prioritized news.74

3.3.2.3 Ex-ante Audit Mechanisms for Sensitive Software Applications


If the data subject cannot unravel or audit an algorithm sufficiently, but its decisions
will influence their opportunities and modes of social participation, a state supervi-
sion mechanism is an appropriate regulatory measure. A public authority auditing
algorithms75 should (to a certain extent) compensate for the lack of individual audit
capabilities. The audit entity could run a supervisory process validating certain
72
For risk management systems in (automated) taxation procedures, see e.g. Martini and Nink
(n 8) 8 f.
73
A “news aggregator” is an algorithm-based service that automatically collects, organizes and lists
news reports, continuously updating from various sources. Apart from the decision to make use
of a certain algorithm or not, no human consideration is involved in the compilation of the list
regarding the content’s relevance, quality or authenticity.
74
The French legislator has introduced a new digital bill which includes special transparency
obligations set out as fairness principles for platforms reaching a certain threshold of users; cf.
Art 49 loi numérique of 7 October 2016 implemented with Décret n 2017–1435 of 29 September
2017 relatif à la fixation d’un seuil de connexions à partir duquel les opérateurs de plateformes en
ligne élaborent et diffusent des bonnes pratiques pour renforcer la loyauté, la clarté et la
transparence des informations transmises aux consommateurs.
75
Tutt (n 3) 119‒123 calls for a public agency analogous to the US Food and Drug Administration.


quality aspects of the software, such as compliance with the principle of non-
discrimination.
A state audit procedure has to examine not only the source code of deterministic
procedures, but also standardized training processes76 and statistical models of
machine-learning algorithms. A special focus should, for example, be applied to
test data and to whether the software correctly integrates a non-discriminatory
database. At the same time, consistent measures to protect the trade secrets of audited
software systems have to be a key element of an appropriate audit system.77
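By way of illustration only, one narrow building block of such an audit could be a black-box check of approval rates across groups on held-out test data, as in the following Python sketch; the records, group labels and the 0.8 disparity threshold are assumptions for demonstration, not a legal test.

    # Illustrative sketch: auditing a model's outputs for disparate approval rates.
    # Records and the 0.8 disparity threshold are invented for demonstration.

    test_records = [
        # (group, model_decision) – model_decision True means "approved"
        ("A", True), ("A", True), ("A", False), ("A", True), ("A", True),
        ("B", True), ("B", False), ("B", False), ("B", False), ("B", True),
    ]

    def approval_rate(records, group):
        decisions = [d for g, d in records if g == group]
        return sum(decisions) / len(decisions)

    rate_a = approval_rate(test_records, "A")   # 0.8
    rate_b = approval_rate(test_records, "B")   # 0.4
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

    print(f"approval rates: A={rate_a:.0%}, B={rate_b:.0%}, ratio={ratio:.2f}")
    if ratio < 0.8:  # example threshold, not a legal standard
        print("flag for closer examination: possible indirect discrimination")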
It is, however, not reasonable and therefore not necessary to apply a regime of
state supervision to every single software application – just as a bicycle does not, but
a car does need permission to participate in road traffic. A permission to market or
apply certain software products (such as in pharmaceutical law) is possible, but
needs to be strictly limited to dangerous use-case scenarios. Only algorithmic
procedures that bear a sensitivity for fundamental rights should thus be subject to
ex-ante supervisory and standardization procedures.
In case of private providers, an ex-ante evaluation should only be carried out on
software applications that typically involve special risks of discrimination (e.g.,
automated evaluation of job candidates) or have a lasting effect on the life plans
of individuals. In addition, software applications whose errors can lead to sustained
risks to life or limb (e.g., autonomous vehicles, care robots or medical analysis
systems), and sensitive forms of human‒machine collaboration (e.g., exoskeletons
or cobots) could be subject to a prior permission process. The same would apply to
the use of new technologies that allow a particularly high degree of personal
evaluation, especially facial recognition and sentiment analysis.
If the public administration applies methods of algorithmic decision-making (for
example, to allocate places at universities, to undertake tax assessments or to support
decisions of the judiciary), an ex-ante control in the public sector should be
mandatory and extend to a much wider scope.
For certain (sensitive) software applications the EU could, as a complementary measure,
consider setting up a register. A registration requirement would give supervisory
authorities an overview of specifically risky algorithmic practices and could thus
help to improve their supervisory capabilities in individual cases.

3.3.2.4 Anti-discrimination Guardianship

(a) extension of the scope of anti-discrimination law Discrimination
is one of the most serious risks of algorithm-based applications. In addition to the
feeling of being monitored, such applications can at the very least trigger the perceived risk of
76
By exposing machine-learning algorithms to regulated input, training processes form the
pattern recognition and matching of the software and its algorithmic intelligence.
77
See in detail Section 3.3.3.2 with n 86.


being treated unequally in a non-transparent decision-making process. To establish
effective protection against hazards that emanate from algorithms and to prevent
discrimination, transparency obligations should be combined with regulatory
measures.
In the analogue world, the European anti-discrimination Council Directives78 as
well as their national counterparts (in particular the German General Equal Treatment
Act (Allgemeines Gleichbehandlungsgesetz - AGG)) specify the principle of
non-discrimination. They are intended to prevent discrimination against people
who are at special risk of disadvantage – typically minorities and women. Their
regulatory approach is also suitable as a paradigm for the digital world. The legal
provisions do not exclude software-based processes; rather, anti-discrimination legis-
lation is designed to be technology neutral.
However, the scope of anti-discrimination acts is restricted to a limited number of
areas of life in which discrimination is likely to take place:79 employment, educa-
tion, social benefits and – under German law80 – bulk business (sc. civil-law
obligations which typically arise without regard to the specific person in a large
number of cases under comparable conditions). In these areas the legislator recog-
nizes discrimination as particularly harmful.
In any case, neither European nor German anti-discrimination law covers
all the specialized fields in which (machine-learning) algorithms operate:
On the basis of algorithmic classifications, there will be a whole range of situations
in the future, in which people will be treated unequally on the assumption of
belonging to a specific group, for example, the inhabitants of a certain area, the
group of cat lovers, the group of buyers most willing to buy Apple products, etc. In
German law, Section 19 German General Equal Treatment Act already covers a
multitude of constellations de lege lata ‒ but not all of them. As a result, an
extension to all unequal treatments resting on an algorithm-based data evaluation
or an automated decision procedure is worth considering.81
Extending the range of anti-discrimination law to all areas of digital life would
have a powerful impact on private autonomy. This applies in particular if the anti-
discrimination law not only covers all areas of digital life, but adds new forbidden characteristics such as place of residence to the existing prohibitions of discrimination (race, ethnic origin, gender, religion, ideology, disability, age, sexual identity). By virtue of private autonomy, the legal system allows individuals the freedom to treat others differently in legal transactions in general. Price discrimination, for example, is generally permissible.82 Whether and how far an extension of anti-discrimination law is politically desirable, therefore, has to be negotiated in a dialogue between parliament, citizens and the business community.

78
Council Directives 2000/43/EC, 2000/78/EC, 2004/113/EC and 2006/54/EC.
79
For contracts between private individuals that are not the subject of labor law, the anti-
discrimination directives of the EU furthermore only apply to gender and race discrimination.
In this respect, the scope of the German General Equal Treatment Act goes beyond the EU’s
legislation. See Art 1 and Art 3 para 1 of Council Directive 2004/113/EC for gender discrimin-
ation as well as Art 3 para 1h) of Council Directive 2000/43/EC for racial or ethnic discrimin-
ation. See also the exceeding implementation of these directives in Ss 2, 19 para 1 German
General Equal Treatment Act.
80
Section 19(1) of the German General Equal Treatment Act.
81
An alternative way could be to extend the scope of the German General Equal Treatment Act
to certain new constellations (e.g. consumer contracts concluded on the basis of a scoring
algorithm).



(b) onus of proof Since the individual is regularly denied insight into the
opponent’s decision-making processes and documents when algorithms are used,
it will be difficult for them to demonstrate unlawful discrimination, even under the
shifted onus of proof in European anti-discrimination law (e.g. Art 8 of Council
Directive 2000/43/EC). Under the current anti-discrimination law, a plaintiff has to
prove at least “facts from which it may be presumed that there has been direct or
indirect discrimination.” Even this can be very difficult if the plaintiff has no chance
to obtain other data to compare it to their own. The EU legislator should clarify the
provisions regarding the burden of proof in anti-discrimination law by adding that
black-box evaluations are sufficient evidence for algorithm-based procedures.
In German law, the burden-of-proof reversal privilege of Section 22 AGG does not
include injured parties in the important area of bulk business of civil-law transac-
tions, as the systematics of the law show. The legislator should extend the scope of
application of the provision to these transactions. As an expression of the principle
of equality of arms, the legislator should simultaneously impose higher standards of
exculpatory evidence (Entlastungsbeweis) in Section 21 (2) Sentence 2 AGG in
algorithm-based decisions than in human decisions.

(c) technical protection against (indirect) discrimination Machine-learning systems reflect the inequalities and discrimination they find in their data
base, such as lower salaries for women, few women in leadership positions, or higher
stop-and-search rates for persons of color. Their algorithms treat this factual basis as
the norm. If, for example, an employment agency algorithm recognizes that employ-
ers hire women with children less frequently, it will classify them in the category
“more difficult to employ” and, in case of doubt, grant them fewer support meas-
ures. The new Austrian job placement system has been designed in this way since
2019.83 On the one hand, the algorithm only reflects social reality. On the other
hand, if this social reality is discriminatory, the machine-learning system is carrying
out indirect discrimination. It links the existing disadvantages of social reality to further unequal treatment. As captivating as the efficiency logic of an algorithm that soberly asks for correlations is, it can turn out to be inconsistent with the value decisions of our legal system. The programming of algorithms has to react. As part of a concept of technical anti-discrimination protection, the legal system should thus force operators of sensitive software applications to take appropriate precautions in their systems against discrimination caused by the discriminatory database.
82
See e.g. (2016) 4 Neue Zeitschrift für Kartellrecht (NZKart) 554.
83
Fanta, “Österreichs Jobcenter richten künftig mit Hilfe von Software über Arbeitslose” (netz-
politik.org, 13 October 2018) <https://netzpolitik.org/2018/oesterreichs-jobcenter-richten-kuenf
tig-mit-hilfe-von-software-ueber-arbeitslose>.


Such ways to establish fairness by design may consist of operating with standardized
data sets that abstract from features creating discrimination. Possible prejudices and
stereotypes can also be counteracted by redistributing the data to be analysed.
Conclusions can then no longer be drawn on the basis of particularly
discrimination-sensitive selection traits. In the interests of fair and non-discriminatory
procedures, randomness in automated selection decisions can thus be strengthened.
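To make this idea of fairness by design more tangible, the following minimal Python sketch shows the two techniques just described: abstracting from discrimination-sensitive features and rebalancing the data to be analysed. The column names, the list of protected traits and the resampling strategy are hypothetical illustrations, not part of the regulatory proposal discussed here.

    # Minimal sketch of "fairness by design" (hypothetical column names):
    # (1) abstract from discrimination-sensitive selection traits,
    # (2) counteract skewed group representation by resampling the data.
    import pandas as pd

    PROTECTED = {"gender", "ethnic_origin", "religion", "disability", "age"}

    def abstract_features(records: pd.DataFrame) -> pd.DataFrame:
        """Drop discrimination-sensitive traits before training or scoring."""
        return records.drop(columns=[c for c in records.columns if c in PROTECTED])

    def rebalance(records: pd.DataFrame, group_col: str) -> pd.DataFrame:
        """Resample so that every group is represented with equal weight."""
        smallest = records[group_col].value_counts().min()
        parts = [g.sample(n=smallest, random_state=0)
                 for _, g in records.groupby(group_col)]
        return pd.concat(parts, ignore_index=True)

Merely dropping protected columns does not neutralise proxy variables such as place of residence, which is one reason why the redistribution of the data and the strengthening of randomness mentioned above remain relevant.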
In order to prevent the dangers of discrimination posed by machine-learning
algorithms, it is also conceivable to subject key data related to discrimination to a
special context link. This procedure flags (“tags”) the mass data records collected in the course of use with their context of origin. Their parameters can be checked
to see whether they are compatible with analysis purposes. This can help to verify the
admissibility of mass data utilization or evaluation with regard to contextual binding.
The legislator could introduce this technical procedure as a compliance requirement
in the admission process for high-risk software applications.84
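A possible technical reading of this context link, with purely hypothetical context and purpose labels, could look as follows:

    # Sketch of a context link: each mass data record is tagged with the context
    # in which it was collected, and an evaluation only proceeds if that context
    # is compatible with the stated analysis purpose. Labels are hypothetical.
    from dataclasses import dataclass

    COMPATIBLE_PURPOSES = {
        "job_application": {"recruitment_scoring"},
        "fitness_tracking": {"medical_analysis"},
    }

    @dataclass
    class TaggedRecord:
        payload: dict
        origin_context: str  # e.g. "job_application"

    def admissible(record: TaggedRecord, analysis_purpose: str) -> bool:
        """Check contextual binding before the record enters a mass evaluation."""
        return analysis_purpose in COMPATIBLE_PURPOSES.get(record.origin_context, set())

A supervisory authority could then verify the admissibility of a mass data evaluation by inspecting the tags rather than the raw data itself.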

3.3.3 Accompanying Risk Management and Supervision by Public Authorities
Even if the legislator considers preventive regulatory measures necessary, they will
not suffice to address all the risks an individual faces in an environment permeated
with dynamic patterns of algorithmic decision-making.

3.3.3.1 Audit Algorithms and Scrutiny of Practice Data


Since complex software applications are constantly changing their behavior, either
due to updates or dynamic processes of machine learning, they require continuous
scrutiny. A verdict that has laboriously been fought for in court for years is swiftly
overruled by subsequent developments and can thus turn out to be of little help.
Companies utilizing algorithms that have sensitive effects on fundamental rights
should therefore be subject to ongoing monitoring – as part of operator obligations
and a supervisory regime by the state. These measures should ensure that the
algorithm is still based on appropriate prerequisites, such as a proper learning and
training environment, valid test data and a correct database.

84
See in detail: Martini (n 19) 243 ff.


For this purpose, audit algorithms that analyse the decision results of other
algorithms can operate as important testing tools. They follow the guiding principle,
“You shall know them by their fruits.” Audit algorithms can systematically examine
the decisions of an adaptive system or a proprietary software for abnormalities,
bringing evidence of unlawful behavior to light. By applying the same statistical
methods, the audit algorithms can detect which factors are particularly significant in
the decision-making of the system examined. They use artificial intelligence instru-
ments to check and balance themselves.
As algorithms cannot precisely define what unlawfulness is, neither the basic
algorithm nor the audit algorithm can prevent discriminatory behavior. All they can
do is collect evidence for further investigation.
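One very simple check of this kind, run by an audit algorithm over the logged decisions of the examined system, might compare selection rates across groups and flag conspicuous disparities for human follow-up. The threshold and the data layout below are illustrative assumptions, not a standard drawn from the text.

    # Sketch of an audit check over another system's logged decisions:
    # compute per-group selection rates and flag groups whose relative rate
    # falls below an illustrative threshold, as evidence for investigation.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group_label, was_selected) pairs."""
        selected, total = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            total[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / total[g] for g in total}

    def audit(decisions, threshold=0.8):
        rates = selection_rates(decisions)
        best = max(rates.values(), default=0.0)
        return {g: rate / best for g, rate in rates.items()
                if best and rate / best < threshold}

    # audit([("A", True), ("A", True), ("B", True), ("B", False)])
    # -> {"B": 0.5}: group B is selected at half the top rate and gets flagged.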
The scope of administrative supervision includes the mathematical-statistical
validity of the conclusions drawn by a software-based system – such as scoring or
profiling. The probability values on which it is based and the conclusions it draws
from its attitudes should be checked to see whether the assumptions on which the
decision model is based are methodologically correct and consistent with the values
of law and society. Only criteria that are verifiably relevant to the decision may be
included in the decision model. They must justify a well-founded presumption that
there is a relevant link between an input variable and a desired result. The more the
information relates to the private sphere of an individual or even exposes intimate
details, the more relevant the information must be to the subject matter of the decision.
For certain evaluation contexts, the legal system should formulate prohibitions of
use; for example, price differentiation algorithms should – in principle – not be
allowed to take into account the state of health of the person concerned.
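A small sketch of how such supervision of decision criteria could be supported technically is given below; the allow-list and the prohibition list for the one evaluation context shown are purely hypothetical.

    # Sketch of a supervisory check on a decision model's input criteria:
    # every input variable must be on a justified allow-list for the evaluation
    # context, and context-specific prohibitions (here: health status in price
    # differentiation) are enforced. All lists are hypothetical.
    ALLOWED = {"price_differentiation": {"purchase_history", "demand", "time_of_day"}}
    PROHIBITED = {"price_differentiation": {"state_of_health"}}

    def check_decision_model(context: str, input_variables: set) -> list:
        problems = [f"not justified for {context}: {v}"
                    for v in input_variables - ALLOWED.get(context, set())]
        problems += [f"prohibited in {context}: {v}"
                     for v in input_variables & PROHIBITED.get(context, set())]
        return problems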

3.3.3.2 Institutional Supervision Structure


For powerful algorithm regulation, an effective supervisory system is a crucial
bottleneck. Its task is to ensure that the normative and technical-organizational
requirements for (adaptive) software systems are consistently met right down to the
application level. This requires excellent equipment and deep technical expertise.
Although the establishment of a uniform algorithm supervisory authority may be
desirable, it is at the moment not realistic in a supranational entity such as the European
Union, where several Member States have federal structures calling for even more levels
of authority. In Germany, the constitution divides enforcement powers between the
federal and state governments. The complex cross-sectional issue of algorithm regula-
tion is not only close to data protection and media law. It also has a direct impact on anti-
discrimination and competition law. Legislators therefore cannot establish a new
authority without taking essential tasks away from the existing specialized authorities
on the supranational, federal and state level and thus jeopardizing their effectiveness.
However, it is possible and appropriate to set up a federal support unit to provide
expert support to the various existing supervisory authorities. The legislator could
give this new support unit a framework as an authority that monitors markets and
products to establish supervisory mechanisms for certain particularly dangerous
software applications. In Germany, the Physikalisch-Technische Bundesanstalt
(PTB) and the Bundesamt für Sicherheit in der Informationstechnik (BSI) provide
an institutional set-up that could serve as a blueprint.

3.3.3.3 Obligation to Provide Information and to Collaborate, and Risk-Management System
Supervisory bodies should at all times be able to request any information on any of
the system’s components and to request the cooperation of the algorithm oper-
ators.85 This entails technical cooperation and ways “into” the system so that the
audit entity can do its job. One possible component could be the duty to establish
comprehensive APIs for monitoring purposes.
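What such a monitoring API might minimally expose is sketched below, assuming a small Flask service; the endpoint paths, the token check and the data structures are hypothetical placeholders rather than a proposal taken from the text.

    # Sketch of a monitoring API through which a supervisory body could request
    # information on system components and recent decisions. Endpoints, the
    # token check and the data structures are hypothetical placeholders.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    DECISION_LOG = []  # appended to by the production system
    MODEL_METADATA = {"model_version": "n/a", "training_data_hash": "n/a"}

    def is_supervisor(req) -> bool:
        return req.headers.get("X-Supervisor-Token") == "replace-with-real-auth"

    @app.route("/audit/model")
    def model_info():
        if not is_supervisor(request):
            abort(403)
        return jsonify(MODEL_METADATA)

    @app.route("/audit/decisions")
    def recent_decisions():
        if not is_supervisor(request):
            abort(403)
        return jsonify(DECISION_LOG[-1000:])  # most recent entries only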
Confidential instruments like trials in camera could be applied to cater for the
providers’ legitimate interest in secrecy.86 These would ensure that only the courts
can gain access to facts requiring secrecy – not third parties, in particular competi-
tors of a company.
The operator of a software application that poses a substantial risk to personality rights
or is equality sensitive should also be obliged to monitor the algorithmic procedures
used in internal processes by setting up a risk assessment system. Its task is to determine
whether and to what extent the software applications jeopardize legally protected
interests – in order to avoid software applications making unforeseen, unlawful deci-
sions. Providers would be obliged to design technical and organizational methods87 to
prevent infringements and empower supervisors to take effective control measures. To
ensure an effective risk-management model, the legislator could also make obligatory
the appointment of a risk manager for certain algorithmic processes and specify the
manager’s duties by law. The manager’s tasks would include forecasting and identifying
the risks of algorithm-based systems, monitoring and documenting algorithmic deci-
sions, and insisting on remedial action within the company if necessary.
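One technical and organisational measure of the kind referred to above, routing especially sensitive or flagged cases to a human check, could be sketched as follows; the attribute names and the bias flag are assumptions for illustration only.

    # Sketch of a risk-management trigger: decisions that rest on especially
    # sensitive data, or that an internal bias monitor has flagged, are routed
    # to human review instead of being applied automatically. Names are assumed.
    SENSITIVE_ATTRIBUTES = {"religious_belief", "health_data", "ethnic_origin"}

    def route_decision(decision: dict, used_attributes: set, bias_flagged: bool) -> dict:
        if used_attributes & SENSITIVE_ATTRIBUTES or bias_flagged:
            return {"action": "human_review", "decision": decision}
        return {"action": "apply", "decision": decision}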

3.3.3.4 Program Sequences Log


To ensure effective opportunities for consumers to prove discrimination and to supply evidence in court, a comprehensive log of the program sequences should be in place. The documentation should pay special attention to the modelling of the software application as well as its decisions. Learning steps should also be logged if applicable.
85
In this respect, European financial market law, especially the European Directive 2014/65/EU,
can serve as a model for regulation: it grants the national financial supervisory authorities the
right to audit algorithmic high-frequency trading and in particular to inspect the algorithmic
structures at any time in order to avert risks for the financial market or market impairments. See
for further discussion of these questions Martini (n 19) 143 ff.
86
See e.g. Martini (n 22) 1485 f.
87
In cases where especially sensitive data, such as religious belief, are affected, or in case of a risk
of indirect discrimination, these systems can e.g. trigger a (human) check of the automated
decision.


Art 30 GDPR already provides a list of processing activities. But its obligations are
limited to elementary data, in particular the name of the processor, the purposes of
the processing, and so on.88 The procedural list of Art 30 GDPR thus lags behind
reasonable requirements for active logging of the program sequences. Art 5(2) and
Art 24(1)(1) GDPR also do not formulate logging of the processing steps of algorithm-
based systems as a mandatory duty – at least not sufficiently clearly.89 The European
Union legislator should establish such a logging duty and define its scope precisely.
However, a comprehensive log and its evaluation, especially with decentralized or
adaptive systems, can be extremely costly and can quickly become a disproportionate
burden for the service provider. Therefore, the scope of the obligation should
depend on the risks in respect of personality and other fundamental rights, and
should include hardship clauses.
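In technical terms, such a program-sequence log could be an append-only, structured record of every decision and learning step, for example along the following lines; the field names and the JSON-lines format are illustrative choices, not requirements derived from the GDPR.

    # Sketch of a program-sequence log: decisions and learning steps are appended
    # as timestamped, structured records so they can later serve as evidence.
    # Field names and the JSON-lines storage format are illustrative choices.
    import datetime
    import hashlib
    import json

    def log_event(logfile: str, event_type: str, payload: dict, model_version: str) -> None:
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "event_type": event_type,  # e.g. "decision" or "learning_step"
            "model_version": model_version,
            "payload_hash": hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode("utf-8")).hexdigest(),
            "payload": payload,
        }
        with open(logfile, "a", encoding="utf-8") as fh:
            fh.write(json.dumps(record) + "\n")

Where full payload logging would be a disproportionate burden, the hardship clauses mentioned above could allow the record to be reduced to the hash and metadata.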

3.3.4 Ex-post Protection


The structural knowledge asymmetries generated by the use of algorithm-based systems
affect the individual’s chances of taking action against violations of his rights. Where
“black-box algorithms” torpedo the legal defence of consumer rights, the legislator
needs to find suitable solutions regarding liability, procedural law and enforcement.

3.3.4.1 Liability

(a) burden of proof (reverse onus clause) In the absence of insight into
the decision-making process, consumers can hardly prove – or even identify –
infringements, causalities and fault when a service provider or data controller uses
algorithms. This structural asymmetry has similarities to medical malpractice and
producer’s liability. Just as in these cases,90 the legislator should put a reverse onus clause (Beweislastverschiebung) in place as an expression of procedural equality of arms, which will shift the burden of proof against operators of algorithms that are sensitive to fundamental rights. It would then be sufficient for the consumer to prove facts indicating the likelihood of inadmissible parameters or that the decision or analysis is illegal in other ways. The operator of the software application, consequently, has an obligation to provide evidence to the contrary.
88
Hartung, “Art 30 DSGVO” in Kühling and Buchner (eds), Datenschutz-Grundverordnung,
Bundesdatenschutzgesetz (2nd edn, CH Beck 2018) margin nos 16 ff.; Martini (n 44) margin
nos 5 ff.
89
See in detail Martini (n 19) 260 ff.
90
In several constellations under German and European law, the law provides a shift of burden of
proof in contrast to the ordinary procedural principle of production of evidence. A medical
practitioner must prove that a gross error in treatment was not the cause of damage to the
patient if the patient could just prove this treatment error beforehand (section 630h (5)
Sentence 1 BGB). According to section 1(1) of the German Product Liability Act (Produkthaf-
tungsgesetz – ProdHaftG), the person who has put the product on the market must compensate
for damage to life, health or property belonging to another person unless they can prove that
certain statutory exemptions (section 1(2) ProdHaftG) exist, exceptionally excluding their liabil-
ity. The claimant only has to prove the damage, the product defect and the causality of the
defect for the damage beforehand.



(b) strict liability? Software applications do not pose a general threat in the
same way as driving motor vehicles, performing surgery or keeping animals. But
in particularly sensitive fields of application (such as digitized medical applications
and nursing robots), a similarly strict liability for damages caused by automated
processes is reasonable to compensate for injuries such as potential violations of
important legal interests like life and limb. In such cases, a compulsory insurance
could complement the liability scheme. Offerors who profit from software applica-
tions should have to vouch for their mistakes and risks – even if the faults are due to
emergent (unpredictable) system behavior.91
Some even argue for a liability of the intelligent system itself.92 As a legal entity,
such an “electronic person” would be the algorithmic equivalent of a corporation as
a legal entity. It could provide considerable savings on transaction costs for eco-
nomic players and might guarantee seamless liability. Whether a legal personality is
needed, however, is another matter: mechanical systems do not (yet) possess the
freedom to make their own decisions. They are based on the programming of
natural persons and (until now) set in use by other natural persons, to whom their
behavior can be attributed. Therefore, from today’s perspective, it is not necessary to
construct a separate legal entity for this purpose.

3.3.4.2 Expansion of the Procedural Scope of Action

(a) authority to issue a legal warning to competitors In order to effectively protect consumers against improper and opaque software applications, the legislator can take advantage of competitors’ vigilance and expertise. Competitors have their own economic incentive to prevent the use of unlawful algorithms by other market players. The law should thus extend opportunities to issue a formal warning concerning the use of discriminatory or otherwise infringing software applications.93
91
See e.g. (concerning the German General Equal Treatment Act) Bundesarbeitsgericht
(Federal Labour Court) (BAG) “Anspruch nach AGG wegen Stellenausschreibung ‘junges
und dynamisches Unternehmen’ 23.11.2017” (2018) 71 NJW 1497, 1499.
92
See Allen and Widdison, “Can Computers Make Contracts?” (1996) 9 Harvard Journal of Law
& Technology 26, 35 ff.; European Parliament, Civil Law Rules on Robotics, European
Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil
Law Rules on Robotics (2015/2103(INL)), para 53 f.; Karnow, “The Encrypted Self: Fleshing
Out the Rights of Electronic Personalities” (1994) 13 The John Marshall Journal of Information
Technology & Privacy Law 1, 4; Schweighofer, “Vorüberlegungen zu künstlichen Personen:
autonome Roboter und intelligente Softwareagenten” in Schweighofer and Lachmayer (eds),
Auf dem Weg zur ePerson (2001) 45, 49 ff.; Solum, “Legal Personhood for Artificial Intelli-
gences” (1992) 70 North Carolina Law Review 1231.


In the same breath, it should prevent the misuse of the right to issue a warning,
establishing a system of checks and balances. The right to issue warnings should be
limited in scope. Competition law could, for example, limit the number of eligible
competitors, establish a qualitative lower threshold to exclude minor infringements
from claims and regulate the compensation costs for warnings (e.g., by capping
them to a maximum amount and excluding contingency fees) in order to decrease
incentives for resourceful law firms to misuse the instrument.

(b) right to representative action by consumers’ associations Consumers who suffer from legal infringements that do not have a lasting impact on
their entire lives will generally avoid taking on the financial risks and bureaucratic
burdens that court proceedings entail. The prospect of long litigation and the
uncertainty of its success, as well as its procedural formalism, are a quick deterrent
to the affected parties. From an economic point of view, the expenditure associated
with judicial claim enforcement in the form of time and financial risk often does not
justify a lawsuit. The social value of performing control processes is often greater
than the value for the individual.
As guardians of consumer interests, consumers’ associations should thus obtain the
right to bring a representative action focused on anti-discrimination, data protection and
other fundamental rights. For these associations, economies of scale make it practical to
take on the risk of judicial proceedings. In the best case, the combination of technical
and legal expertise helps to discover and fight discriminatory and other unlawful
algorithmic patterns more effectively. Therefore, the competence for representative
action led by consumers’ associations should extend to the field of software applications
that can affect sensitive fundamental rights. This would enable associations to act against
unlawful algorithmic decision-making independently of the individual case.94
93
Like section 12, 8 (3) No 1, (1) with section 5 (1) Sentences 1, 2 No 6 with Sentence 3(3) German
Act against Unfair Competition (Gesetz gegen den unlauteren Wettbewerb – UWG); Art 11 of
the European Parliament and Council Directive 2005/29/EC. The extent to which the rules of
the Basic Data Protection Ordinance are rules of market conduct within the meaning of the
UWG is currently under discussion. See Heinrich Amadeus Wolff, “UWG und DS-GVO:
Zwei separate Kreise?” (2018) 9 Zeitschrift für Datenschutz (ZD) 248. Since most of the
provisions of the GDPR regulate conduct in the market, but not in the interest of the market,
however exclusively for the protection of privacy, a legislative reform of (national) competition
law would help to clarify which data controllers’ duties can be subject to a warning – and
which cannot. For the extent to which the GDPR is intended to regulate market behaviour in
the interest of the market participants, and the extent to which it still leaves the national
legislators their own scope for regulation, see in detail Martini (n 19) 299 ff.


An alternative form of (state-sponsored) arbitration would be a body, as a form of alternative dispute settlement, with special expertise in algorithmic procedures. It
could also reduce the number of lawsuits and the cost of enforcement for con-
sumers, improving effective enforcement of the law.

3.3.4.3 Extended Legal Competence for Civil Courts


If the law wants to protect citizens against the risks of discriminatory and anti-
competitive algorithms, it can also expand the binding effect of civil court
judgements. The wealth of legal evidence brought before the civil courts in a
discrimination case could then be accessible to third parties as well. The civil courts
would therefore have to acquire a secondary competence to, for example, issue erga
omnes injunctions against the provider of a proven discriminatory software applica-
tion, when they deal with anti-discrimination act procedures.95
However, legislators who want to anchor preventive competence as an additional
competence must bear in mind the structural differences between administrative
proceedings and civil proceedings. In particular, there is no inquisitorial principle in
civil proceedings in Germany which the competent authority must observe in order to
issue a lawful prohibition order under public law. In addition, the defendant is
particularly in need of protection in such proceedings. The effects of prohibition competence clearly reach beyond a civil-law, inter partes valid injunction judgement. An additional preventive competence can therefore only be justified objectively if an inquisitorial principle is applied in the respective judicial proceedings (see for example Section 86(1) Administrative Court Code [Verwaltungsgerichtsordnung]).

94 These representative actions must not be confused with class actions, as they are known in the USA. Class actions do not fit into every legal system. The German legal system has not incorporated any form of legal action in which immediate rights are granted to a person by
court decision, when the respective person has not deliberately participated in the court
proceedings. But Germany has newly implemented a model law suit for consumers’ associ-
ations as a preliminary step to further individual actions (Section 606 Code of Civil Procedure
[ZPO]; Musterfeststellungsklage (“model proceeding”); see the legislator bill of the Federal
Government, BT-Drs 19/2439>. This action combines the benefits of a class action and a
consumers’ association action. In a model case, a collective action can clarify legal issues in
one procedure. The individuals can subsequently benefit from the results of the model case for
their individual (damage) claim. The need for a form of representative action became
politically relevant with the Volkswagen scandal, see Stadler, “Musterfeststellungsklagen im
deutschen Verbraucherrecht” (2018) 33 Verbraucher und Recht (VuR) 83, 83. Section 606 ZPO
follows the example of the only existing form of a collective action in Germany initiated with
the Capital Markets Model Case Act (Gesetz über Musterverfahren in kapitalmarktrechtlichen
Streitigkeiten – KapMuG) in 2005. The scope of the KapMuG covers damage claims in the
field of capital markets. Section 606 ZPO addresses further economic sectors to strengthen
consumer rights. Meanwhile, the European legislature aims higher with a proposal for a
directive on representative actions for the protection of the collective interests of consumers
(Proposal for a Directive of the European Parliament and the Council on representative
actions for the protection of the collective interests of consumers, and repealing Directive
2009/22/EC, 11/04/2018, COM(2018) 184 final) facilitating damage claims brought by qualified
entities. The Commission’s proposal goes beyond the German form of model proceedings.
However, its scope is restricted to certain sectors such as financial services, energy, telecommu-
nications, health and the environment (13). It would be advisable to extend its scope to
infringements of personality rights by algorithmic applications.
95
Another regulatory option to facilitate litigation could be an intervention right for consumers’
associations in civil proceedings that allows them to bring an action for an injunction against
the algorithm-based application in question.


A right to publicize such judgements may also be appropriate. The winning party
may then have the judgement published at the expense of the unsuccessful party if it
has a legitimate interest. The pillory effect emanating from these proceedings is
deliberately intended, according to the will of the legislator, to create additional
incentives to refrain from unlawful conduct as a preventive measure.

3.3.5 Self-Regulation: Algorithmic Responsibility Code with a Declaration of Conformity
Due to their design, algorithmic software applications appear as black boxes not only
to customers, but also to experts and state agencies. Regulated self-regulation appears
to be a suitable legal instrument for responding. Because of the limited audit
capacities of the state and the dynamics of adaptive software applications, it is
advisable to include providers in the regulation of their software systems. They have
superior expertise regarding the risks triggered by their software applications, as well
as evolving possible effective mechanisms to solve these problems. Moreover, early
involvement of providers could increase acceptance of later restrictions and willing-
ness to follow the rules.
However, self-regulation has not yet turned out to be a persuasive method of data
protection.96 The efforts to encourage Facebook to fight hate speech in an appro-
priate and timely manner on its platform through self-regulation prove this.97
Without incentives and sanctions, self-regulation commitments rarely achieve the
desired effects. A modified model of self-regulation – a legal structure with “teeth” –
may help to involve the economic entities concerned in the execution of binding
rules. Stronger statutory minimum requirements regarding the content of a codex as
well as obligations to inform could complement the self-regulatory regime.
96
Martini, “Do it yourself im Datenschutzrecht” (2016) 35 NVwZ-Extra 1, 9 f.; Meltzian, “§ 38a
BDSG” in Wolff and Brink (eds), Datenschutzrecht in Bund und Ländern (CH Beck 2013)
margin no 3; Petri, “§ 38a BDSG” in Simitis (ed), Bundesdatenschutzgesetz (8th edn, Nomos
2014) margin no 16.
97
Germany has therefore taken a different regulatory approach with the enactment of the
Network Enforcement Act (Netzwerkdurchsetzungsgesetz – NetzDG). It pursues a good idea:
Facebook and similar entities have to delete illegal content within very short periods of time or
face a penalty. This legal structure strangles freedom of speech, because social networks like
Facebook have a strong incentive to delete rather than not delete to avoid the risk of sanctions,
even if the deleted content is protected by freedom of speech. The German NetzDG lacks
“equality of arms” for freedom of speech, in particular fast and effective legal protection against
deletions as well as insults. What is required pro futuro is a procedural mechanism that ensures
that all legitimate conflicting interests involved in the complex process of weighing up are
given appropriate legal protection.


The hybrid approach used with the German Corporate Governance Codex98 –
set out in Section 161 German Companies Act (Aktiengesetz – AktG) – can serve as a
possible paradigm for a regulatory model.99 The Codex is not a statute. Rather, it
brings together diverse experience in a private panel of experts from the business
world.100 Section 161 AktG obliges those companies subject to stock exchange
trading to declare whether they have followed its recommendations. If a provider
does not comply with the recommendations, it must state its reasons. The regulation
mechanism of the Codex is based on the principle of “comply or explain.” It follows
the basic idea “let the market decide.”
In line with this concept, the legislator should establish an obligation for providers
of software applications that are particularly sensitive to fundamental rights to
commit themselves to an “Algorithmic Responsibility Codex.” A government com-
mission (consisting of elected deputies representing all sectors, including IT experts,
data protectors and consumer associations) would work out the content of the
Codex. Just as listed companies have to disclose their information for potential
investors to ensure transparency of investments, service providers (that offer
algorithm-assisted software applications posing a threat to personality and other
fundamental rights)101 should publicly comply with the rules of conduct for the

98
See Regierungskommission Deutscher Corporate Governance, Deutscher Corporate
Governance-Kodex, <www.dcgk.de/de/kodex/aktuelle-fassung/praeambel.html> (11.03.2019).
99
On the economic function of corporate governance codes see v Werder, “Ökonomische
Grundfragen der Corporate Governance” in Hommelhoff, Hopt, and v Werder (eds), Hand-
buch Corporate Governance (2nd edn, CH Beck 2009) 3. On the German code see Lutter,
“Deutscher Corporate Governance Kodex” in Hommelhoff et al. (eds) 123; for the British
Corporate Governance Code see Financial Reporting Council, “The UK Corporate Govern-
ance Code” (2016) <www.frc.org.uk/getattachment/ca7e94c4-b9a9-49e2-a824-ad76a322873c/
UK-Corporate-Governance-Code-April-2016.pdf>. A brief overview of international codes is
given by Wymeersch, “Corporate Governance Regeln in ausgewählten Rechtssystemen” in
Hommelhoff et al. (eds) 137; on the historical development of those codes see Hopt, “Die
internationalen und europarechtlichen Rahmenbedingungen der Corporate Governance” in
Hommelhoff et al. (eds) 39.
100
Its impact and meaningfulness are not free of criticism. The criticism extends in particular to
constitutional concerns over the influence of private parties in state legislation, which grants
private parties a higher binding power than the constitution possibly permits. At worst, the
Corporate Governance Codex gives private individuals the opportunity to impose declaration
obligations on other legal entities. The Codex is also suspected of being more of a fig leaf of
regulation than an effective instrument for improving corporate culture. See e.g. Habersack,
”Staatliche und halbstaatliche Eingriffe in die Unternehmensführung (Gutachten E)” in
Deutscher Juristentag (ed), Verhandlungen des 69. Deutschen Juristentages (CH Beck 2012)
E 57 f.
101
In order to achieve its normative mission in the digital world, the application of the Declaration
of Conformity (ideally incorporated in European law) should not depend on the requirement
of a branch in a certain country, but should – just like the other obligations to which service
providers are subject – follow the lex loci solutionis, where an offer is made to citizens of the
European Union. EU law applies irrespective of whether the supplier is located in the
European Union (see also Art 3(2) GDPR).


The Algorithmic Responsibility Codex should be more than a normative symbol
that jumps like a tiger and lands like a decorative bedside rug. Thus, the Codex
would not only call for ethically justifiable action, but would also punish a violation
of its own promises: The supervisory authority could impose fines for false declar-
ations. Besides, the market could subsequently penalize the company with loss of
reputation. The effects of self-binding and truth-claiming caused by an explanation
may create pressure to make correct declarations. The resulting public commitment
to minimum standards could help to compensate the structural asymmetries
between users and service providers.

3.4 conclusion
We increasingly fail to understand how algorithms work. Conversely, algorithms are
becoming better and better at learning how we work. In the digital age – not
different from previous ages – the state is expected to protect the individual’s
autonomy and informational self-determination from impairments. This obligation
involves establishing an efficient audit system capable of handling the diverse and
growing use of (machine-learning) algorithms and ensuring the embedding of basic social and ethical values into automated systems. The GDPR has already cautiously raised
its regulatory index finger. However, it only provides effective answers for fully
automated decisions (Art 22 GDPR). Further regulatory steps should follow. The
whole spectrum of algorithmic processes that assist human decisions and shape our
daily lives cries out for tailored solutions.
Reasonable regulatory instruments are (inter alia): an algorithm audit entity with
inspection rights; cooperation obligations for the operators as well as the duty to
inform about the logic and scope of an algorithm-based procedure (not only in cases
of automated individual decision-making as in Art 22 GDPR); the obligation to
publish comprehensive impact assessments (not only with regard to data protection)
and install risk-management systems (for algorithmic systems involving special
dangers for the rights of third parties); and an extension of the scope of application
of (European) anti-discrimination legislation. An Algorithmic Responsibility Codex,
following praeter propter the regulatory concept of the UK and German Corporate
Governance Codes, would be a useful addition to this regulation bundle.
As isolated national solutions cannot suffice to tackle transnational malpractices
executed by algorithms, the regulatory challenges should be solved on the highest
normative level possible – based on the common values of the European human
rights tradition. Apart from the efficiency of a harmonized regulation on EU level,
the regulatory competence to control algorithm-based procedures is in any case no
longer predominantly in the hands of the national states. It is the European Union
that is competent in this field, especially regarding data protection law, according to
Art 16 para 2 Treaty on the Functioning of the European Union (TFEU). The
regulatory competence of the national legislature (with regard to the proposed
measures) is mainly limited to procedural rights, in particular the structuring of
rights of consumer associations, powers of warning notices in competition law and
the allocation of the burden of proof in civil-law suits.
Regulation is not a goal in itself. Rather, it is necessary to build confidence among
users in the new digital offering: only when the commitment of algorithmic systems
follows clear rules can trust be established. Trust building is a central task of a legal
system promoting welfare – just as state regulation has in the past contained the
dangers posed by cars or pharmaceuticals in order to ensure their suitability and
reliability for mass consumption.
Yet, regulation should not simply be exhausted in “German angst” and algorith-
mic necromancy. Regulatory ambitions notwithstanding, the legislature must be
careful not to overreact to the digital progression of society by obstructing the
potential for innovation offered by modern software applications. They should in
particular not burden innovative start-up structures with a set of regulatory instru-
ments that do not leave adequate scope for development. The intensity of regulation
should correspond to companies’ profit chance, and the size and level of risk they
pose. Establishing a graded regulatory system based on a (sector- and/or application-
specific) diagnosis of how sensitive software applications are to fundamental rights
will be a challenge worth accepting. What is needed in this process is a healthy
balance between the risk of suffocating innovation and the foundations of a digital
humanism. In the tradition of the Enlightenment era, the categorical imperative
should point the way ahead for the digital world. Technology should always serve
the people – not the other way around.

4

Automated Decision-Making under Article 22 GDPR

Towards a More Substantial Regime for Solely Automated Decision-Making

Diana Sancho

introduction
Machine-learning algorithms are used to profile individuals and make decisions
based on them. The European Union is a pioneer in the regulation of automated
decision-making. The regime for solely automated decision-making under Article
22 of the General Data Protection Regulation (GDPR), including the interpretative
guidance of the Article 29 Working Party (WP29, replaced by the European Data
Protection Board under the GDPR), has become more substantial (i.e., less formal-
istic) than was the case under Article 15 of the Data Protection Directive. This has
been achieved by: endorsing a non-strict concept of ‘solely’ automated decisions;
explicitly recognising the enhanced protection required for vulnerable adults and
children; linking the data subject’s right to an explanation to the right to challenge
automated decisions; and validating the ‘general prohibition’ approach to Article 22
(1). These positive developments enhance legal certainty and ensure higher levels of
protection for individuals. They represent a step towards the development of a more
mature and sophisticated regime for automated decision-making that is committed
to helping individuals retain adequate levels of autonomy and control, whilst
meeting the technology and innovation demands of the data-driven society.

4.1 algorithms and decision-making


The development of machine-learning algorithms and their growing use in
decision-making processes in the era of big data pose significant challenges for
consumers and regulators.1 Such algorithms are sophisticated and process infor-
mation in an opaque manner which may not always be intelligible to the average
person.2 Profiling practices based on them are said to optimise the allocation of
resources by allowing private and public parties to personalise their products and
make more efficient choices.3 However, they can also be used to exploit consumers’
vulnerabilities and influence their attitudes and choices, which may result in unfair
discrimination, financial loss and loss of reputation.4
This chapter examines the legal mechanisms available in data protection law to
safeguard individuals from decisions which result from automated processing and
profiling. It considers, in particular, how the regime for automated decision-making
under the General Data Protection Regulation balances the interests of consumers
and their fundamental right to data protection against the demands of the data-
driven industry, such as the development of new products and services based on
artificial intelligence and machine-learning technologies.5 It will thus focus on
Article 22 GDPR and related provisions, and take a commercial perspective.6
The chapter has the following sections. Section 4.2 examines profiling and
automated decision-making and assesses the operation of Article 22 on procedural
grounds. Section 4.3 evaluates the concept of automated decision-making referred to
in Article 22(1) and demonstrates how the WP29 has helped this concept to become
more substantial (i.e., less formalistic). Section 4.4 analyses the so-called right to
human intervention and whether the legitimate interests of the controllers have any role to play as a basis for processing under Article 22. Section 4.5 examines the interplay between Article 22 and the information rights under Articles 13(2)(f), 14(2)(g) and 15(1)(h). A conclusion follows in Section 4.6.

1 See Mayer-Schoenberger, ‘The Rise of Big Data’ (2013) Foreign Affairs (FA) 27 ff.; from an institutional perspective, Hijmans, The European Union as Guardian of Internet Privacy. The Story of Art 16 TFEU (2016) 511 ff. See also Kamarinou, Millard, and Singh, ‘Machine Learning with Personal Data’, Queen Mary University Legal Studies Research Paper 247, 2016 <https://ssrn.com/abstract=2865811>.
2
See Burrell, ‘How the Machine ‘Thinks’: Understanding Opacity in Machine Learning
Algorithms’ (2016) Big Data & Society (BD&S)1 ff.
3
See Surblyte, ‘Data as a Digital Resource’, Max Planck Institute for Innovation & Competition
Research Paper No 16-12, 2016 <https://ssrn.com/abstract=2849303>. Information Commis-
sioner Officer (ICO), ‘Big Data, Artificial Intelligence, Machine Learning and Data Protection’
15 ff. <https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-
protection.pdf>. Lohsse, Schulze, and Staudenmayer (eds), Trading Data in The Digital
Economy: Legal Concepts and Tools (2017) 13 ff.
4
See O’Neil, Weapons of Math Destruction (2016) 10 ff.; Zarsky, ‘Mine Your Own Business!:
Making the Case for the Implications of the Data Mining of Personal Information in the Forum
of Public Opinion’ (2003) Yale Journal of Law and Technology (YJLT)19; Federal Trade Commis-
sion (FTC), Big Data: A Tool for Inclusion or Exclusion? (2016) <www.ftc.gov/system/files/
documents/reports/big-data-tool-inclusion-or-exclusion-understanding-issues/160106big-data-rpt
.pdf> 3 ff.; Centre for Information Policy and Leadership (CIPL), ‘Comments on the Article
29 Data Protection Working Party’s Guidelines on Automated Individual Decision-Making and
Profiling’ 2017. <www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_comments_
to_wp29_guidelines_on_automated_individual_decision-making_and_profiling.pdf>, 1 ff. Also,
Navas, Inteligencia Artificial. Tecnología y Derecho 2017, 63 ff.
5
Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of
personal data and on the free movement of such data (General Data Protection Regulation),
OJ 2016 L 119/1. On the interests at stake, see ICO (n 3) 94 ff., also CIPL (n 4) 1 ff.
6
The applicability of data protection law to private parties is not explicitly referred to in Article 16
TFEU, the legal basis for the GDPR, yet it is accepted that secondary EU law has extended the
application of data protection rights and obligations to private parties: see Surblyte (n 3) 15 ff.;
and Kokott and Sobotta, ‘The Distinction between Privacy and Data Protection in the Jurispru-
dence of the CJEU and the ECtHR’ (2013)3 International Data Privacy Law (IDPL) 226 ff.



4.2 automated processing, profiling, and automated decision-making

4.2.1 A Dynamic Process


‘Automated processing’ and ‘profiling’ are separate legal categories. Processing is a
generic concept, broadly defined as ‘any operation or set of operations which is
performed on personal data or on sets of personal data, whether or not by automated
means’.7 The term ‘automated’ is commonly used to qualify the way in which
information is processed, in a structured and non-manual form.8 Profiling, on the
other hand, is a type of automated processing that seeks to categorise individuals.
Article 4(4) GDPR defines profiling as ‘any form of automated processing of
personal data consisting of the use of personal data to evaluate certain personal
aspects relating to a natural person, in particular to analyse or predict aspects
concerning that natural person’s performance at work, economic situation, health,
personal preferences, interests, reliability, behaviour, location or movements’. Pro-
filing relies on data-mining techniques (procedures in which large sets of data are
analysed for patterns and correlations), which are then used to build predictions and
anticipate individuals’ needs.9
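A minimal sketch of profiling in this sense, using purely invented features and labels to predict a personal aspect, might look like this:

    # Minimal sketch of profiling in the sense of Article 4(4) GDPR: personal
    # data are analysed for patterns and then used to predict a personal aspect
    # (here an invented "reliability" label). Data and model are illustrative.
    from sklearn.linear_model import LogisticRegression

    # training data: [age, income (thousands), years_at_address] -> observed label
    X = [[25, 28, 1], [47, 52, 10], [33, 39, 4], [58, 61, 22]]
    y = [0, 1, 0, 1]

    model = LogisticRegression().fit(X, y)
    predicted = model.predict([[41, 45, 7]])[0]  # a prediction about a natural person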
Methods that the GDPR employs to protect individuals operate at different levels.
Territorially, the GDPR has extended its reach further by ensuring its application to
processing activities which may be carried out by controllers who are either estab-
lished in the Union or target (or monitor) consumers in the Union.10 This is
complemented by a comprehensive regime on international transfers of personal
data that ensures the export of the European standard of protection abroad. On a
substantial level, the GDPR provides a robust regulatory framework for data process-
ing. This consists of general processing principles, detailed rights for data subjects
and risk management duties for controllers (i.e., data protection impact assessments),

7
Article 4(2) GDPR.
8
WP29, Opinion 4/2007 on the concept of personal data, WP136, 4 ff.
9
See Hildebrandt, ‘Defining Profiling: A New Type of Knowledge’ in Hildebrandt and Gutwirth
(eds), Profiling the European Citizen, Cross-Disciplinary Perspectives (Springer Netherlands
2008) 17 ff.
10
On Article 3 GDPR, see Svantesson, ‘The extraterritoriality of EU Data Privacy Law – Its
Theoretical Justifications and Its Practical Effects on US Businesses’ (2014) Stanford Journal of
International Law (SJIL) 55 ff.; Alsenoy and Koekkoek, ‘Internet and Jurisdiction after Google
Spain’ (2015) 5 International Data Privacy Law (IDPL) 105 ff.; Sancho, ‘The Concept of
Establishment and Data Protection Law: Rethinking Establishment’ (2017) 42 European Law
Review (EL Rev) 491 ff.


Several classifications have been proposed to explain the usual development
stages of automatic processing and profiling.11 Although the language they use to
describe the different phases of processing varies, they all tend to identify the
following three stages of processing: collection, analysis and application.12
At the collection stage, the user (i.e., the controller) gathers personal data from a
variety of sources, not merely from the data subjects.13 Massive amounts of personal
data are collected from internet resources, mobile devices and apps, through
ambient intelligent technologies embedded in everyday objects (e.g., furniture,
vehicles and clothes) and from the human body itself (e.g., biometric data).14
The value of data is often unknown at collection and can only be attained after
the data is (re)processed over and over again for different purposes.15 In the analytical
stage, potent computational frameworks are used to store, combine and analyse large
quantities of data in order to generate new information. Data mining increasingly
relies on machine-learning algorithms to profile individuals.16 These differ from
traditional algorithms in that they feed on a vast amount of data and can adopt their
own operating rules.17

11
For a general classification on the lifecycle of personal data processing, see OECD, ‘Exploring
the Economics of Personal Data: A Survey of Methodologies for Measuring Monetary Value’
OECD Digital Economy Papers No 220, 2013 <http://dx.doi.org/10.1787/5k486qtxldmq-en> 11
ff.; also FTC (n 4) 3 ff. Specifically on profiling, Hildebrandt, ‘The Dawn of a Critical
Transparency Right for the Profiling Era’, in Bus et al. (eds), Digital Enlightenment Yearbook
(2012) 44 ff.; also Kamarinou, Millard, and Singh (n 1) 8 ff.
12
Interestingly, almost none of the available classifications explicitly consider the expiration/
destruction of data as the final stage of processing; see Moerel and Prins, ‘Privacy for the Homo
Digitalis: Proposal for a New Regulatory Framework for Data Protection in the Light of Big
Data and the Internet of Things’ (2016) <https://ssrn.com/abstract=2784123> 12 ff.
13
See Rubinstein, ‘Big Data: The End of Privacy or a New Beginning?’ (2013)3 International
Data Privacy Law (IDPL) 74 ff.
14
Ibid. See also Rouvroy, Privacy, Data Protection, and the Unprecedented Challenges of Ambient
Intelligence. Studies in Ethics, Law and Technology (Berkeley Electronic Press 2008) <https://
ssrn.com/abstract=1013984>, 1 ff. Tene and Polonetsky, ‘Big Data for All: Privacy and User
Control in the Age of Analytics’ (2013) Northwestern Journal of Technology and Intellectual
Property (NJTIP) 255 ff.
15
See Mayer-Schonberger and Padova, ‘Regime Change? Enabling Big Data through Europe’s
new Data Protection Regulation’ (2016) Columbia Science and Technology Law Review
(Colum Sci & Tech L Rev). See also Custers and Ursic, ‘Big Data and Data Reuse: A
Taxonomy of Data Reuse for Balancing Big Data Benefits and Personal Data’ (2016) 6
International Data Privacy Law (IDPL) 13 ff.
16
See Anrig, Browne, and Gasson, ‘The Role of Algorithms in Profiling’ in Hildebrandt and
Gutwirth (eds), Profiling the European Citizen, Cross-Disciplinary Perspectives (2008) 66 ff.
17
See Burrell, BD&S (2016) 6 ff.; O’Neil (n 4) 76 ff.; and Singh, Walden, Crowcroft, and Bacon,
‘Responsibility & Machine Learning: Part of a Process’ (2016) <https://ssrn.com/abstract=
2860048>.


Finally, at the application stage, controllers implement the outcomes resulting from automated processing, including profiling, and make decisions based on them
(e.g., they apply a score, a recommendation, a trend). There are two possibilities,
depending on whether the controller implements the algorithm output straightfor-
wardly, or relies on human analysts to make a decision. The first type of automated
decision-making is referred to as solely automated decision-making and falls within
the scope of Article 22, whereas the latter is excluded from this provision.
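
The three stages, and the two ways in which the application stage can play out, can be pictured with a short sketch. The following Python fragment is purely illustrative: every name, figure and the toy scoring rule is invented for exposition and drawn neither from the GDPR nor from any real profiling system.

```python
# Illustrative only: a toy pipeline tracing collection, analysis and application.
# All data sources, fields and thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class Profile:
    """Result of the analysis stage: new information inferred about an individual."""
    subject_id: str
    score: float


def collect(subject_id: str) -> dict:
    # Collection stage: personal data gathered from several sources,
    # not only from the data subject (devices, apps, ambient sensors, etc.).
    return {"subject_id": subject_id, "income": 32_000, "late_payments": 2}


def analyse(data: dict) -> Profile:
    # Analysis stage: large quantities of data are combined and mined;
    # here a stand-in scoring rule replaces a machine-learning model.
    score = min(1.0, data["income"] / 50_000) - 0.1 * data["late_payments"]
    return Profile(subject_id=data["subject_id"], score=score)


def apply_outcome(profile: Profile, human_analyst_decides: bool = False) -> str:
    # Application stage: the controller acts on the profile.
    decision = "grant credit" if profile.score >= 0.7 else "refuse credit"
    if human_analyst_decides:
        # A human analyst who genuinely assesses the output before deciding
        # takes the decision outside the scope of Article 22.
        return f"{decision} (subject to human analyst's assessment)"
    # Applied straightforwardly, the decision is based solely on automated
    # processing and, if its effects are significant, falls under Article 22.
    return decision


print(apply_outcome(analyse(collect("subject-001"))))
```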

4.2.2 The Procedural Design of Article 22


Two aspects of the procedural design of Article 22 deserve attention. On the one
hand, the protection that this provision affords is intended to apply to the application
stage of processing only. Like the Data Protection Directive (DPD),18 the way in
which the GDPR delivers protection is based on the idea of single and static
processing operations to which the rights of the data subjects are attached.19 This
contrasts, however, with the dynamic nature of automated processing and profiling
in the era of big data, where data is reprocessed for different purposes and is said to
develop at a distance from the individual.20 On the other hand, it is also relevant to
notice that the right not to be subject to automated decision-making under Article
22 is codified last on the list of data subjects’ rights (found in Articles 12‒22).21
The procedural design of Article 22 GDPR mirrors that of Article 15 DPD, which
was also meant to apply to the application stage only and was codified last.22 This
may have facilitated the exercise of the right not to be subject to automated decision-
making under the Data Protection Directive or at least may have helped data
subjects become more aware of it.23 However, whether this design is well suited to
the GDPR is questionable. First, the long list of rights that precede Article 22,
18
Directive 95/46/EC on the protection of individuals with regard to the processing of personal
data and on the free movement of such data, OJ 1995 L 281/31.
19
Critically, Hert and Papakonstantinou, ‘The New General Data Protection Regulation: Still a
Sound System for the Protection of Individuals?’ (2016) Computer Law & Security Review
(CL&SR) 184 ff.
20
Different taxonomies of data exist: OECD (n 11) 11 ff. (referring to the categories of ‘data
volunteered’, ‘observed’, and ‘inferred’); Abrams, ‘The Origins of Personal Data and Its
Implications for Governance, Information Accountability Foundation’ (2014) <https://ssrn
.com/abstract=2510927>; also Schneier, ‘A Taxonomy of Social Networking Data’ <http://
ComputingNow.computer.org>.
21
After the general provision on transparency and modalities (Article 12), the right to information
(Articles 13 and 14), the right of access (Article 15), the right to rectification (Article 16), the right
to erasure (Article 17), the right to restriction of processing (Article 18), the right to portability
(Article 20) and the right to object (Article 21).
22
At the end of Section VII of the Data Protection Directive.
23
For a study on the exercise of the rights of data subjects under the DPD, see European Union
Agency for Fundamental Rights, ‘Access to Data Protection Remedies in EU Member States’
(2014) <http://fra.europa.eu/en/publication/2014/access-data-protection-remedies-eu-member-
states> 27 ff.


including significant new additions (portability, erasure and restriction of processing) may reduce Article 22’s visibility for the average data subject.24 Second, the
processing phases discussed above are not necessarily linear, as data processing
increasingly occurs in real time.25
If automated decision-making is now the rule rather than the exception, enhan-
cing the visibility of the provision(s) which regulate it seems reasonable in legislative
policy terms. Recent international developments appear to follow this approach. In
the modernised Convention 108 (Council of Europe), the right not to be subject to
solely automated decision-making is codified first on the list.26 Under the GDPR,
however, choosing to define solely automated decision-making in paragraph (1) of
Article 22 (rather than adding a definition to the list in Article 4, for instance) and
formulating the right in this provision in negative terms (as the right ‘not to be
subject to . . .’) has caused significant interpretative difficulties. These issues are
further discussed in Section 4.4.

4.3 which decisions?

4.3.1 Classification
Different types of decisions derived from automated processing, including profiling,
can be distinguished. The nature of the agent making the decision represents an
obvious first classification criterion27 distinguishing human-based decisions from
machine-based decisions. Under this classification, automated decisions would
typically equate to machine-based decisions. An alternative approach to the same
criterion, however, would also consider the degree of human involvement in the
automated decision-making process.
This approach is different from the previous case, in that the nature of the agent
involved is not conclusive as regards the ‘automated’ character of the decision. Since
most automated decisions happen to be machine based, it could be argued that this
approach is of little practical relevance. However, in an increasingly sophisticated

24
As Blume notes, ‘Communicativity does not seem to be the strength of the GDPR’; see ‘The
Myths Pertaining to the Proposed General Data Protection Regulation’ (2014) 4 International
Data Privacy Law (IDPL) 273 ff.
25
Moerel and Prins (n 12) 22 ff.; also Kamarinou, Millard, and Singh, (n 1).
26
Article 9(1)(a) states, ‘the right not to be subject to a decision significantly affecting him or her
based solely on an automated processing of data without having his or her views taken into
consideration’; see modernised Convention for the Protection of Individuals with Regard to the
Processing of Personal Data, as it will be amended by its Protocol CETS No [223], at <https://rm.coe
.int/16808ade9d>. On the other hand, Article 9(1)(c) refers to the right to obtain ‘knowledge of
the reasoning underlying data processing where the results of such processing are applied to
him or her’.
27
For a typology of types of profiling, see Hildebrandt (n 9) 25 ff.


processing context, a definition of automated decision-making not strictly relying on the absence of human elements may present some advantages, as discussed later.
Automated decision-making can also be classified according to whether the
recipient is an individual or a group.28 An individual decision is directed towards a
specific individual (e.g., someone who is offered personalised interest rates), whereas
a group decision relates to a group of individuals sharing common attributes (e.g.,
consumers aged 20‒29, or those living in a certain neighbourhood).
The effects of a decision do not represent constitutive elements of the notion
‘decision’, for a decision exists regardless of its effects. Lawmakers, however, may
take specific effects into consideration as qualifying requirements of the applicable
regime. From this point of view, the effects of a decision can be considered
qualitatively, if decisions are required to impact upon their recipients in certain
ways.29 Effects can also be considered quantitatively, either by reference to an
individual (e.g., who is admitted to school or university) or by reference to a group
(e.g., whose members are offered insurance at higher premiums).
How the concept of solely automated decisions under Article 22(1) GDPR has
integrated these criteria is now examined.

4.3.2 Analysis

4.3.2.1 Actor
Article 22(1) states that the relevant decision has to be ‘based solely on automated
processing, including profiling’. Two interpretations of automated decisions under
Article 22(1) are possible. First, a strict interpretation excludes the application of this
provision if the automated decision-making process has involved any form of human
participation.30 This focuses on the nature of the agent making the decision
under the first criterion above. By contrast, the notion of automated decision-
making referred to in Article 22(1) can also be defined by reference to the degree
of human autonomy involved (or the lack of it). For the purpose of the definition of
solely automated decisions under Article 22(1), this would imply that human involve-
ment in the decision-making process is not to mechanically exclude the application
of this provision. Under this second interpretation, the key question is not whether a
28
Ibid 20 ff.
29
For example, individuals can challenge the legality of a Union act if they demonstrate that they
are ‘individually’ and ‘directly’ concerned by it [under Article 263 TFEU and the Plaumann
test, European Court of Justice (ECJ) 15.7.1963 case 25/62 (Plaumann/Commission), ECLI:
EU:C:1963:17]; see, on the EU regime for judicial review, Hartley, The Foundations of
European Union Law (7th edn, OUP 2010) 370 ff.
30
On the strict approach, for example, see Savin, ‘Profiling and Automated Decision Making in
the Present and New EU Data Protection Frameworks’, Paper Presented at 7th International
Conference Computers, Privacy, and Data Protection, Brussels, Belgium, 2014, 1 ff.; also
Hildebrandt (n 9) 51 ff.


specific decision can be categorised as human or machine based but whether it retains its automated nature in the case of human involvement.
Bygrave’s requirement of real and influential human participation can be used to
determine which type of human participation deprives a decision of its automated
nature.31 According to this author, involvement of human actors in the decision-
making process that is merely nominal (i.e., participation lacking any real influence
on the outcome) must not prevent the application of Article 22. In a context in
which controllers can be seen to operate increasingly with automated systems of
evaluation and profiling, permitting Article 22(1) to capture truly automated deci-
sions despite nominal human involvement is to be welcomed. This interpretation is
also easy to justify on teleological grounds. If the rationale of the regime for solely
automated decisions is the need to preserve some degree of autonomy of human
intervention in decision-making,32 decision-making processes involving human
nominal participation present the same risks as those completely lacking human
involvement. Furthermore, it should be noted that a strict interpretation of
Article 22(1) creates an incentive for controllers to make human actors implement
routine procedures to prevent the application of the protective regime for automated
decision-making.
In its guidelines on automated decision-making and profiling, the WP29 con-
firmed the proposed second interpretation of the definition of solely automated
decision-making under Article 22(1).33 The guidelines read: ‘[T]he controller cannot
avoid the Article 22 provisions by fabricating human involvement. For example, if
someone routinely applies automatically generated profiles to individuals without
any actual influence on the result, this would still be a decision based solely on
automated processing.’34 This is a positive development for the reasons just dis-
cussed. And so, three types of decisions resulting from automated processing and
profiling may arise. These are: (i) decisions where the automated output applies
straightforwardly; (ii) automated decisions with human nominal involvement, where
a human actor intervenes in the application of the automated output without
revising or assessing it; and (iii) human-based decisions, where a human analyst
revises the automated output and makes a decision. The interpretation that has been

31
Expressed as the possibility for a person to ‘actively exercise [. . .] real influence on the
outcome’, see Bygrave, ‘Minding the Machine: Article 15 of the EC Data Protection Directive
and Automated Profiling’ (2001) Computer Law & Security Report (CL&SR) 9 ff.
32
See Mendoza and Bygrave, ‘The Right Not to Be Subject to Automated Decisions Based on
Profiling’, University of Oslo Faculty of Law Legal Studies. Research Paper Series, (2017) 7 ff.;
also Rouvroy, ‘Des données sans personne: le fétichisme de la donnée à caractère personnel à
l’épreuve de l’idéologie des Big Data’ (2014) <https://works.bepress.com/antoinette_rouvroy/55/
> 12 ff.
33
WP29, ‘Guidelines on Automated individual decision-making and Profiling for the purpose of
Regulation 2016/679, WP251rev.01’; the Guidelines were adopted on 6 February 2018 (they
revise a previous draft version which was adopted on 3 October 2017).
34
Ibid 21 ff.


adopted enhances the level of protection by stretching the scope of application of Article 22 to cover cases (i) and (ii), whereas the strict interpretation discussed above
would limit the applicability of this provision to case (i).
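
A brief sketch may help to visualise how the three categories above, and Bygrave’s ‘real and influential’ participation test, could be operationalised when auditing a decision pipeline. It is an assumption-laden illustration: the labels and the boolean ‘real influence’ flag are mine, not categories defined in the GDPR or the guidelines.

```python
# Illustrative classification of decision pipelines along the lines of cases (i)-(iii).
# The labels and flags below are expository assumptions, not legal definitions.

from enum import Enum, auto


class DecisionType(Enum):
    SOLELY_AUTOMATED = auto()           # (i) automated output applied straightforwardly
    NOMINAL_HUMAN_INVOLVEMENT = auto()  # (ii) human applies the output without assessing it
    HUMAN_BASED = auto()                # (iii) human analyst revises the output and decides


def classify(human_involved: bool, human_has_real_influence: bool) -> DecisionType:
    if not human_involved:
        return DecisionType.SOLELY_AUTOMATED
    if not human_has_real_influence:
        # Rubber-stamp or "fabricated" involvement lacking real influence:
        # on the reading adopted by the WP29, still solely automated.
        return DecisionType.NOMINAL_HUMAN_INVOLVEMENT
    return DecisionType.HUMAN_BASED


def within_article_22(kind: DecisionType) -> bool:
    # Under the interpretation discussed above, cases (i) and (ii) are captured
    # by Article 22; only genuinely human-based decisions (iii) fall outside it.
    return kind is not DecisionType.HUMAN_BASED


print(within_article_22(classify(human_involved=True, human_has_real_influence=False)))  # True
```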

4.3.2.2 Recipient
The regime for solely automated decisions under Article 22 GDPR applies to
individual decisions (the provision’s title reads, ‘Automated individual decision-
making, including profiling’). Moreover, the protection granted in Article 22 operates
regardless of whether the data subject plays an active role in requesting the decision
(e.g., the data subject applies for a loan) or whether a decision is made about them
(e.g., the data subject is excluded from an internal promotion within an organisa-
tion). Article 22(1) also stipulates that automated decision-making targets decisions
on the ‘data subject’ rather than the natural person.35 The explicit reference to the
data subject in paragraph (1) implies that Article 22 is intended to apply to a decision
resulting from the processing of personal data of an identified or identifiable person.
This creates uncertainty as to whether the regime for solely automated decision-
making under Article 22 applies to individual decisions on data subjects based on the
processing of anonymised data.36 The Guidelines do not explicitly address this
point.37
The WP29 has confirmed that children’s personal data are not completely
excluded from automated decision-making under Article 22(1). The WP29 does not
consider that Recital 71 constitutes an absolute prohibition on solely automated
decision-making in relation to children.38 This is an important clarification which
reconciles the complete ban in Recital 71 with silence in the main text of the
GDPR.39 The WP29 has taken the view that controllers should not rely on the
derogations in Article 22(2) to justify solely automated decision-making in relation to
children (contractual necessity, imposed by law or based on the data subject’s explicit
consent), unless it is ‘necessary’ for them to do so, ‘for example to protect [children’s]

35
This, however, was the intention of the Commission in its 2012 Proposal; see Vermeulen,
‘Regulating Profiling in the European Data Protection Regulation; An Interim Insight Into the
Drafting of Article 20’ (2013) Centre for Law, Science and Technology Studies, <https://ssrn
.com/abstract=2382787> 8 ff.
36
As discussed in Kamarinou, Millard, and Singh, (n 1); also Savin (n 30) 9 ff.
37
The Guidelines recommend controllers are able to perform anonymisation and pseudonymisa-
tion techniques in the context of profiling; see WP29, ‘Guidelines, WP251rev.01’, 11 ff and 32 ff.
38
‘[R]ecital 71 says that solely automated decision-making, including profiling, with legal or
similarly significant effects should not apply to children. Given that this wording is not
reflected in the Article [22] itself, WP29 does not consider that this represents an absolute
prohibition’, ibid 28 ff.
39
Veale and Edwards, ‘Clarity, Surprises, and Further Questions in the Article 29 Working Party
Draft Guidance on Automated Decision-Making and Profiling’ (2018) Computer Law &
Security Review 398, 403 ff.


welfare’.40 Although this language may require further clarification (by the European
Data Protection Board or the ECJ in the context of a dispute),41 the references to
Recitals 71 and 38 and the view taken on Article 22 clearly suggest that the WP29 is
advocating the introduction of a restrictive system of solely automated decision-
making in relation to children.42 This is further confirmed by the WP29 continuing
to state that controllers processing children’s data under Article 22 must provide
suitable safeguards, as is required in Article 22(2)(b) and Article 22(2)(a) and (c).43

4.3.2.3 Effects
Automated decisions under Article 22(1) are required to have ‘legal effects’ on or
‘similarly significantly affect’ the recipient. Since decisions producing ‘legal effects’
on data subjects impact on their legal rights or legal status,44 they are more easily
objectified: for example, decisions granting or denying social benefits guaranteed by
law or decisions on immigration status when entering the country.45 However, in the
absence of objective standards, the meaning of the phrase ‘similarly significantly
affects him or her’ remains contextual and subjective; typical examples include
automatic refusal of credit applications and automatic e-recruitment practices, as
reported in Recital 71. The WP29 has stated that the effects of the processing must be
‘sufficiently great or important to be worthy of attention’.46 It has also provided some
guidance on which decisions may have the potential for this. According to WP29,
these are decisions that ‘significantly affect the circumstances, behaviour or choices
of the individuals concerned, have a prolonged or permanent impact on the data
subject, or, at its most extreme, lead to the exclusion or discrimination of
individuals’.47
Targeted advertising does not ordinarily produce decisions which could ‘similarly
and significantly’ affect individuals (e.g., banners automatically adjusting their
content to the user’s browsing preferences, personalised recommendations and
updates on available products).48 Some scholars, however, prefer not to exclude
the application of Article 22(1) to targeting advertising practices that systematically
40
Ibid 28 ff.
41
For example, how the requirement ‘necessary’ for the controller is to be interpreted, or whether
there are any other valid examples apart from welfare cases.
42
Industry representatives, however, advocate for a more flexible approach, see Centre for Infor-
mation Policy and Leadership (CIPL), ‘GDPR Implementation in Respect of Children’s Data
and Consent’ (2018) 23 ff, available at <www.informationpolicycentre.com/uploads/5/7/1/0/
57104281/cipl_white_paper_-_gdpr_implementation_in_respect_of_childrens_data_and_
consent.pdf>.
43
Ibid 28 ff.
44
See Bygrave (n 31) 7 ff.
45
WP29, ‘Guidelines, WP251rev.01’, 21 ff.
46
Ibid.
47
Ibid.
48
Mendoza and Bygrave (n 32) 20 ff.; Bygrave (n 31) 9 ff.; Savin (n 30) 4 ff.


and repeatedly discriminate.49 Importantly, the guidelines on automated decision-making of the WP29 have confirmed this approach. The WP29 lists some particular
circumstances which may increase the likelihood of targeted advertising being
caught under Article 22.50 Within the category of vulnerable adults, in particular,
WP29 considers the situation of individuals in financial difficulties. It uses the
example of individuals who incur further debt as a result of being systematically
targeted for on-line gambling services.51 Regardless of how such circumstances may
be interpreted in the context of a specific dispute, these are positive developments
which could assist in raising awareness of the special need for protection of vulner-
able individuals. The guidelines do not specifically elaborate on targeted advertising
aimed at children. This can be complemented by the ICO’s recent consultation, which
provides some guidance on the specific criteria that may be used to assess the impact
of targeted advertising on children.52
There is also a collective dimension to individual automated decision-making. In
the insurance sector, for example, big-data applications may be used to benefit
policy holders who represent a lower risk than the average (by offering them
discounts), whereas people belonging to a high-risk group may be subject to higher
premiums or not offered insurance at all.53 An increasing number of scholars
advocate the inclusion of a collective dimension to profiling and automated
decision-making under data protection law.54 It should be noted that the GDPR
has introduced a new provision that facilitates the protection of collective interests
through the action of representative bodies (i.e., Article 80). However, as to the
automated decision-making regime, Article 22(1) makes it explicit that what triggers
the applicability of this provision are the effects of the decision on the data subject
(‘. . . which produces legal effects concerning him or her or similarly significantly affects him or her’).

49
Mendoza and Bygrave (n 32) 12 ff.; also O’Neil (n 4) 164 ff., discussing examples in the
insurance sector.
50
Such as the intrusiveness of the profiling process; the expectations and wishes of the individ-
uals; the way the advert is delivered; and particular vulnerabilities of data subjects (WP29,
Guidelines, WP251rev.01, 22 ff.).
51
Ibid.
52
These include the choice and behaviours the controllers seek to influence, the way in which
these might affect the child, and the child’s increased vulnerability to this form of advertising:
ICO, ‘Consultation: Children and the GDPR guidance’ (2018) 5 ff., available at <https://ico
.org.uk/media/about-the-ico/consultations/2172913/children-and-the-gdpr-consultation-guidance-
20171221.pdf>.
53
ICO (n 3) 21 ff. (para 37); Moerel and Prins (n 12) on ‘pay how you drive’ 25 ff.
54
For example, O’Neil (n 4) 200 ff.; Zarsky (n 4) 19–20 ff.; Mantelero, ‘Personal Data for
Decisional Purposes in the Age of Analytics: From an Individual to a Collective Dimension
of Data Protection’ (2016) Computer Law & Security Review (CL&SR) 238 ff.; Mantelero and
Vaciago, ‘Data Protection in a Big Data Society. Ideas for a Future Regulation’ (2015) Digital
Investigation (DI) 107 ff.; Baruh and Popescu, ‘Big Data Analytics and the Limits of Privacy
Self-Management’ (2017) New Media and Society (NMS) 590 ff.; Hildebrandt (n 9) 52 ff.


Under a literal approach, therefore, it would appear that solely automated decision-making under Article 22 is not concerned with collective effects of automated individual decisions. This would imply that members of a group could only
be granted the protection in Article 22 if they claimed the application of this
provision as individual data subjects. Similarly, a person could request the applica-
tion of the protective regime under Article 22 when the adverse consequences of a
decision for him or her were formed by reference to a group to which this person
had been ascribed. The WP29 guidelines seem to support this approach.55

4.4 the right to human intervention and article 22


The rationale behind the regime for automated decision-making under Article 22 is
linked to the right to human intervention. Some scholars approach this right from
the intrusiveness of machine decisions and the need to preserve the autonomy of
human intervention in decision-making.56 There is also a more pragmatic under-
standing of this right which places an emphasis on the individual’s right to contest
an automated decision.57 It is the language and structure of Article 22 that give rise to
the variety of meanings attached to the right to human intervention.

4.4.1 Prohibition
Article 22(1) can be interpreted as a prohibition.58 Paragraph (1) is worded negatively
as it refers to the right of the data subject ‘not to be subject to . . .’. This corresponds
to a negative obligation for the controller (not to subject data subjects to solely
automated decisions). As a prohibition, Article 22(1) bans solely automated decision-
making categorically, unless one of the derogations in paragraph (2) applies (i.e.,
data subject’s explicit consent, where the decision is necessary for entering into or
performing a contract or is authorised by law). Under this approach, the law sets a
standard whereby the interests of data subjects not to be subject to automated
decision-making override the interests of controllers in engaging with it. The
resulting regime is both rigid and strict: rigid, because the legal standard is fixed
and allows no room for balancing competing interests (i.e., the ground of processing
based on the legitimate interests of the controller plays no role); and strict, because
the chosen legal standard ensures a high level of protection to individuals by default
55
See example on page 22, which refers to the situation of an individual who is deprived of credit
opportunities because of the behaviour of customers living in the same geographical area as
him or her (WP251rev.01, 22 ff.).
56
For example, Mendoza and Bygrave (n 32) 7 ff.
57
Ibid 16 ff.
58
Discussing this, Wachter, Mittelstadt, and Floridi, ‘Why a Right to Explanation of Automated
Decision-Making Does Not Exist in the General Data Protection Regulation’ (2017) 7 Inter-
national Data Privacy Law (IDPL) 94 ff.; also Mendoza and Bygrave (n 32) 9 ff.


(i.e., solely automated decision-making is unlawful, unless one of the derogations in paragraph (2) applies). And so, when Article 22(1) is interpreted as a prohibition,
‘human intervention’ contributes to preserving human autonomy by becoming a
constitutive element of the decision-making process. In this case, it can be said that
the right to human intervention protects the interests of individuals ex ante and is an
essential element of the decision-making process.

4.4.2 Right
Article 22(1) can also be interpreted as granting data subjects the right not to be
subject to automated decision-making. Under this interpretation, the interests of
controllers and data subjects are on an equal footing unless the data subject objects
to automated decision-making. If the latter enters an objection, the right not to be
subject to solely automated decision-making prevails. Compared to Section 4.4.1
(i.e., Article 22(1) as a prohibition), this interpretation is also rigid but less strict. It is
rigid because no competing interests are to be balanced against each other (i.e., the
law tolerates solely automated decisions based on the legitimate interests of control-
lers, unless the data subject lodges an objection). If the data subject objects, solely
automated decision-making is prohibited. It is less strict, however, because the
protection relies entirely on the data subject, who has to actively exercise the right
not to be subject to solely automated decision-making. Overall, this interpretation is
more beneficial to controllers than the previous one.
Under this approach, the right to human intervention may be operated in one of
two ways. Before any decision is formulated, Article 22(1) can be relied upon pre-
emptively to avoid solely automated decision-making. In this case, the right to
human intervention would reach the decision-making process ex ante, as in Section
4.4.1. On the other hand, if the data subject objects to a solely automated decision
already taken, the right to human intervention would apply ex post as a safeguard for
fair processing.

4.4.3 Derogations
Article 22(2) on automated decision-making admits one interpretation only.
According to this provision, controllers’ interests in carrying out solely automated
decision-making based on the explicit consent of the data subject (Article 22(2)(c)), contractual necessity (Article 22(2)(a)) or authorised by law (Article 22(2)(b)) prevail over
the data subjects’ right not to be subject to solely automated decision-making. The
rule in Article 22(2) is most beneficial to private controllers. Although data protec-
tion authorities may have interpreted the ground ‘contractual necessity’ narrowly,59
59
The WP29 has clarified that this ground has to be construed narrowly and ‘does not cover
situations where the processing is not genuinely necessary for the performance of a contract,


this ground does not require the data subject to provide consent to the processing.
Turning to consent, the GDPR requires it to be ‘explicit’. The WP29 has stated that
an obvious way to comply with this is to obtain written statements signed by the data
subject.60 The WP29 has also clarified that, in the digital context, this requirement
can be satisfied by the data subject by filling in an electronic form, sending an email,
uploading a scanned document (that carries the signature of the data subject) or
using an electronic signature.61
It is noteworthy that the rule in Article 22(2), although striking the balance in
favour of controllers (who can engage in solely automated decision-making under
certain conditions), is formulated in terms just as rigid as the rule in Sections 4.4.1
and 4.4.2 (i.e., those that interpret Article 22(1) as a prohibition and as a right,
respectively). Under Article 22(2), the legislator sets a fixed standard according to
which, if the controller demonstrates explicit consent or contractual necessity (or the
decision is authorised by law), the processing is lawful. As regards the right to human intervention, it materialises here in Article 22(3) GDPR as a safeguard and
operates ex post only.

4.4.4 The WP29 Guidelines


WP29 endorses the interpretation of Article 22(1) as a ‘general prohibition’.62 As a
result, the regime for solely automated decision-making is rigid and strict: solely
automated decision-making is categorically prohibited unless the controller demon-
strates the data subject’s explicit consent or contractual necessity for such automated decision-making (or that the processing is authorised by law). As
already discussed, this interpretation prevents the legitimate interests of controllers
from playing any role as the legal basis for solely automated decision-making.
Moreover, interpreting Article 22 as a system of general prohibition/derogations
supports an understanding of the right to human intervention as a right operating
ex ante under Article 22(1) GDPR (i.e., as an essential element of decision-making)
and also ex post under Article 22(3) as a safeguard for fair processing.63
Taking the view that Article 22(1) contains a general prohibition, the WP29
ensures that data subjects are afforded a high level of protection by excluding the
legitimate interests of the controllers as a basis for processing. This has disappointed
industry representatives, who advocate for the application of this ground of

but rather unilaterally imposed on the data subject by the controller’, Opinion 06/2014 on the
notion of legitimate interests of the data controller under Article 7 of Directive 95/46, WP217,
2014, 16 ff.
60
WP29, ‘Guidelines on Consent under Regulation 2016/679, WP259’ 19 ff.
61
Ibid.
62
WP29, ‘Guidelines, WP251rev.01’ 19 ff.
63
Ibid 15 ff.


processing in automated decision-making contexts.64 They claim, in particular, that limiting solely automated decision-making to consent and contractual necessity is
dysfunctional in private sectors where controllers are required to make a large
number of decisions.65
It is true that the position adopted by the WP29 excludes controllers’ legitimate
interests as a basis for processing. It is difficult to see, however, how the WP29
could have introduced alternative and more flexible standards under Article 22’s
current framework. As demonstrated, all three possible formulations of Article 22 –
as a prohibition, as a right or within the context of the derogations – rely on rigid
and fixed legal standards. This implies that, under the current regulatory frame-
work for solely automated decision-making, the ‘controllers’ legitimate interests’
ground of processing is banned for solely automated decision-making under
Article 22. Nothing prevents controllers, however, from relying on this ground of
processing in a decision-making context that is not solely automated, that is,
outside Article 22.
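
As a rough aid to the discussion above, the regime’s rigid structure can be expressed as a simple compliance check. This is a minimal sketch under the WP29’s general-prohibition reading, with the Article 22(3) safeguards and the Article 22(4) rules on special categories of data deliberately left out; the parameter names are assumptions for illustration only.

```python
# Minimal sketch of the Article 22 structure as read by the WP29: a general
# prohibition with three derogations, and no role for legitimate interests.
# Parameter names are illustrative; Article 22(3) safeguards and Article 22(4)
# are not modelled here.

def solely_automated_decision_permitted(significant_effects: bool,
                                        explicit_consent: bool = False,
                                        contractual_necessity: bool = False,
                                        authorised_by_law: bool = False) -> bool:
    if not significant_effects:
        # No legal or similarly significant effects: outside Article 22,
        # though the GDPR's general principles and rules still apply.
        return True
    # Note the deliberate absence of a 'legitimate_interests' parameter:
    # that ground cannot support solely automated decision-making here.
    return explicit_consent or contractual_necessity or authorised_by_law


# A significant solely automated decision resting only on the controller's
# legitimate interests is unlawful under this reading:
print(solely_automated_decision_permitted(significant_effects=True))  # False
```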

4.5 the right to an explanation and article 22


Articles 13(2)(f ) and 14(2)(g), on notification duties, and Article 15(1)(h), on the right
of access, impose information obligations on controllers engaging in automated
decision-making. Under these provisions, controllers have to inform the data subject
about the ‘existence of automated decision-making including profiling, referred to
in Article 22(1) and (4) and, at least in those cases, meaningful information about the
logic involved, as well as the significance and the envisaged consequences of such
processing for the data subject’.66
These provisions play an important role in ensuring data subjects’ effective
protection in solely automated decision-making processes. Knowing the existence
of automated decision-making referred to in Article 22(1) allows data subjects to
scrutinise the lawfulness of the processing.67 This is particularly important in cases
of contractual necessity, where consent is not required, and also of processing under
Article 22(4). Providing meaningful information on the logic involved and the

64
See CIPL, ‘Comments on the Article 29’ 9 ff.
65
Ibid.
66
Under Article 13(1), at the time controllers obtain the data from the data subject; under Article
14(3)(a), if they have not obtained the data from the data subject, within a month, at the time of
the first communication to the data subject or when the data are first disclosed to another
recipient. At any time under the right of access in Article 15. For a study on the effectiveness of
controller’s response to data access requests, see L’Hoiry and Norris, ‘The Honest Data
Protection Officer’s Guide to Enable Citizens to Exercise Their Subject Access Rights: Lessons
From a Ten-Country European Study’ (2015) 5 International Data Privacy Law (IDPL) 190 ff.
67
See Wachter, Mittelstadt, and Floridi (n 58) 83 ff.


significance and the consequences of such processing is also an essential requirement for accountability and transparency of algorithms.68
Articles 13(2)(f ), 14(2)(g) and 15(1)(h) impose transparency duties on controllers
carrying out ‘automated decision-making including profiling, referred to in
Article 22(1) and (4) and, at least in those cases . . .’.69 Therefore, the relationship
between Articles 13(2)(f ), 14(2)(g) and 15(1), on the one hand, and Article 22, on the
other hand, is determined by the phrase ‘at least in those cases’. The reference to
‘Article 22(4)’ in the former set of provisions is easy to interpret, as it clearly points to
automated decision-making for special categories of data under Article 22(4).70 The
reference to Article 22(1) may be more controversial, however, as it admits two
interpretations. This reference can be understood to exclusively refer to the case
in Article 22(1); it can also be interpreted to refer to Article 22(1) as a system of general
prohibition/derogations.
The first interpretation is problematic as it would prevent the application of
information rights in Articles 13(2)(f ), 14(2)(g) and 15(1)(h) to automated decision-
making processes based on contractual necessity or consent. Under this approach,
the applicability of these provisions in Articles 13‒15 would be strictly limited to
Article 22(1) GDPR (for example, should Article 22(1) be interpreted as a right rather
than a prohibition), excluding the cases in Articles 22(2)(a) and (c). In other words,
the protection afforded to solely automated decisions based on contractual necessity
and consent would be limited to the safeguards in Article 22(3). This interpretation is
untenable and can be challenged on teleological grounds: Article 22(1) GDPR is to
be interpreted as a prohibition, as discussed before and confirmed by the WP29;
furthermore, refusing to apply information rights under Articles 13(2)(f ), 14(2)(g) and
15(1)(h) to automated decisions in Article 22(2)(a) and (c) is likely to compromise
data subjects’ fundamental right to an effective remedy under Article 47 of the
Charter and Article 6 of the European Convention on Human Rights.71
The second interpretation relies on systemic grounds,72 according to which the
reference to ‘Article 22(1)’ in Articles 13(2)(f ), 14(2)(g) and 15(1)(h) can be understood
to refer to the general prohibition in Article 22(1), including the derogations in
Article 22(2) GDPR – that is, contractual necessity in point (a) and consent in
68
On accountability and ethics see Abrams, Abrams, Cullens, and Godstein, Artificial Intelli-
gence, Ethics and Enhanced Data Stewardship (2017) The Information Accountability Foun-
dation <www.privacyconference2017.org/eng/files/ai.pdf>.
69
Emphasis added.
70
Article 22(4) reads, ‘Decisions referred to in paragraph 2 shall not be based on special categories
of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and
suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are
in place’.
71
Referring to this dimension, Wachter, Mittelstadt, and Floridi (n 58) 80 ff.
72
This method of interpretation considers how the meaning of one provision relates to other
notions and provisions in the same text, and how it best makes sense in the structure and
general economy of the document; for a normative approach to the ECJ model of reasoning,
see Conway, The Limits of Legal Reasoning and the European Court of Justice (CUP 2012).


point (c). Unsurprisingly, this is the interpretation which is generally followed in practice. None of the relevant stakeholders question the applicability of information
rights to automated decision-making carried out in the context of Article 22(2).73
The interpretation of the phrase ‘meaningful information about the logic
involved, as well as the significance and the envisaged consequences’ in Articles
13(2)(f ), 14(2)(g) and 15(1)(h) has been controversial in the academic literature.
Wachter et al. have taken the view that the GDPR does not provide for a right to
explanation of how specific automated decisions on an individual are made, but
does provide for a more limited right to be informed on the general functionality of
an automated decision-making process.74 They claim, in particular, that Article
15 GDPR on the right of access does not require controllers to provide information
on the rationale and circumstances of a particular decision.75 By contrast, Selbst
et al. suggest that providing data subjects with ‘meaningful information’ does not
always require information on a specific decision to be provided. They argue that, in
many systems, a complete system-level explanation provides all the relevant infor-
mation needed to understand specific decisions.76 Also, Malgieri et al. propose a
legibility test to ensure that data controllers provide meaningful information about
the architecture and the implementation of the decision-making algorithm.77 They
argue that such a test would help meet data subjects’ information require-
ments, whilst allowing controllers to identify potential machine bias. Noticeably,
rather than focusing on transparency obligations from a conventional perspective,
these scholars underline the important role that controllers’ accountability plays
within the framework of the GDPR.78 They claim, in particular, that Articles
13(2)(f ), 14(2)(g) and 15(1)(h) advocate the controllers’ duty to audit decision-making
algorithms.79 Importantly, striking the right balance between the principles of transparency and accountability plays an important role in the design of sustainable automated decision-making systems.80

73
See for example, industry representatives, CIPL, ‘Comments on the Article 29’ 13 ff.; WP29,
‘Guidelines, WP251rev.01’ 24 ff.
74
See Wachter, Mittelstadt, and Floridi (n 58) 78, 89‒90 ff.; they base their interpretation on the
non-binding nature of Recital 71 (which refers to the right to obtain an explanation of the
decision reached), and a systemic analysis of Article 22 (which does not refer to such a right in
paragraph 3) and Articles 13‒15. Cf Goodman and Flaxman, ‘European Union Regulations on
Algorithmic Decision-Making and a “Right to Explanation”’ (2016) <https://arxiv.org/abs/1606
.08813>.
75
Ibid.
76
See Selbst and Powles, ‘Meaningful Information and the Right to Explanation’ (2017) 7
International Data Privacy Law (IDPL) 233 ff, discussing ‘determinism’ in machine learning
(239 ff ).
77
Malgieri and Comandé, ‘Why a Right to Legibility of Automated Decision-Making Exists in the
General Data Protection Regulation’ (2017) 7 International Data Privacy Law (IDPL) 243 ff.
78
Ibid 258 ff.
79
Ibid.
80
See ICO (n 3) 95 ff.; also Kroll, Huey, Barocas, Felten, Reidenberg, Robinson, and Yu,
‘Accountable Algorithms’ (2017) 165 University of Pennsylvania Law Review 633 ff.


In the revised guidelines on automated decision-making and profiling, WP29 has acknowledged that Article 15(1)(h) obliges the controller to provide information
‘about the envisaged consequences of the processing, rather than an explanation of
a particular decision’.81 This is likely to help controllers standardise the information
they provide under Articles 13(2)(f ), 14(2)(g) and 15(1)(h), reducing their information
costs when they manage large amounts of solely automated decisions. WP29 has also
clarified that the controller has to provide general information to the data subject on
the rationale and the factors relied upon in reaching the decision, including their
aggregate weighing.82 Moreover, WP29 has confirmed that this information does not
require controllers to disclose the ‘full algorithm’, which helps them meet their legal
obligations towards third parties (i.e., trade secrets, intellectual property, etc.).83
Two statements in the WP29 revised guidelines are particularly relevant: that the
information provided ‘has to be sufficiently comprehensible for the data subject to
understand the reasons for the decision’ and that the information provided has to be
‘useful for [the data subject] to challenge the decision’.84 Although the wording used,
‘sufficiently comprehensible’ and ‘useful’, is difficult to objectivise and may require
further interpretative guidance (by the European Data Protection Board or by the
ECJ in the context of a specific dispute), these statements provide an indication that
the relevant threshold is to be determined by reference to the data subject (rather
than to the controller). Moreover, these two statements can also be interpreted as
supporting the introduction of a purposive approach to controllers’ duties under
Articles 13(2)(f ), 14(2)(g) and 15(1)(h), according to which the key question is whether
the information provided enables an average data subject to understand the ‘how’ of
the decision and the ‘why’ of its effects on them, so that the data subject can exercise
their rights under Article 22(3), i.e., express their views and challenge the decision.
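
What ‘general information on the rationale and the factors relied upon, including their aggregate weighing’ might look like in practice can be sketched as follows. The notice text, the factors and the weights are invented for illustration and do not reflect a template required or endorsed by the WP29; the point is simply the contrast between a system-level explanation and disclosure of the ‘full algorithm’.

```python
# Hypothetical system-level explanation for a solely automated credit decision.
# Factor names, weights and wording are invented; only the structure matters:
# the controller discloses the main factors and their aggregate weighting,
# not the full model.

FACTORS = {"payment history": 0.5, "income stability": 0.3, "existing debt": -0.2}


def system_level_explanation() -> str:
    weighted = ", ".join(f"{name} (weight {w:+.1f})" for name, w in FACTORS.items())
    return (
        "Your application is assessed by solely automated means, including profiling. "
        f"The main factors and their aggregate weighting are: {weighted}. "
        "A low overall score leads to refusal. You may request human intervention, "
        "express your point of view and contest the decision."
    )


print(system_level_explanation())
```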
One last question concerns the nature of the protection afforded by the solely
automated decision-making regime, whether it be special or qualified. Under the
GDPR, this regime is primarily contained in: Article 22, including the safeguards in
Article 22(2)(b), (3) and (4); Articles 13(2)(f ), 14(2)(g) and 15(1)(h) on controllers’
transparency duties; and Article 35(3)(a) on controllers’ risk management duties,
which oblige them to conduct data protection impact assessments prior to the
processing.85

81
See WP29, ‘Guidelines, WP251rev.01’, 27 ff.; the revised Guidelines were adopted on 6 Febru-
ary 2018, whilst the draft version was adopted on 3 October 2017; noticeably, the draft version
barely elaborated on Articles 13(2)(f ), 14(2)(g) and 15(1)(h).
82
Ibid 25 and 27 ff.
83
Ibid 23.
84
Ibid 25 and 27.
85
Article 35(3)(a) reads, ‘A data protection impact assessment referred to in paragraph 1 shall in
particular be required in the case of: a systematic and extensive evaluation of personal aspects
relating to natural persons which is based on automated processing, including profiling, and on
which decisions are based that produce legal effects concerning the natural person or similarly
significantly affect the natural person’.


Rules that set common standards of protection are ordinarily classified as general
rules, whereas rules that do not fall within this category may operate as special
provisions (if they override general rules) or qualified rules (if they offer additional
safeguards to those in the general framework). This distinction is important because
special rules displace, in principle, the otherwise applicable general rules, whereas
qualified rules apply in a cumulative manner. For instance, the Brussels Ibis
Regulation and the Rome I Regulation offer some well-known examples of special
provisions for the protection of consumers in international disputes, which displace
the otherwise applicable general provisions for non-consumers.86 Under the GDPR,
however, there is no evidence that the regulator has intended to deliver protection
strictly relying upon the interplay between special and general provisions. For
example, assuming that the protection afforded to the categories of data referred to
in Article 9 is meant to be special, it has not prevented the WP29 from supporting
the cumulative application of the common grounds for processing to the special
categories of data; according to the WP29, this interpretation is tenable should it
ensure a higher level of protection to individuals, on a case-by-case basis.87
Article 22 on solely automated decision-making is often referred to as a qualified
provision.88 Certainly, it is difficult to categorise the protection afforded by Article
22 as special. Nothing in the GDPR suggests that this is the intention of the
legislator. Moreover, there is no such thing as a ‘general’ regime for automated
decisions outside Article 22. The GDPR does not specifically regulate automated
decisions falling outside Article 22.89 Like any other processing activity on personal
data, automated decision-making not meeting the requirements in Article 22(1) will
have to comply with the principles and rules of the GDPR.90

86
Regulation EU 1215/2012 (Brussels Ibis Regulation, OJ 2012 L 351/1) adopts a special regime
seeking to protect consumers in cross-border disputes (Article 15); this regime displaces the
general rules in Articles 4 and 7 for disputes between non-consumers. Also, Regulation EC 593/
2008 on the law applicable to contractual obligations (Rome I Regulation, OJ 2008 L 177/6)
introduces a special rule on the applicable law to consumer contracts in Article 6; this rule
states the applicability of the law of the country where the consumer has his habitual residence
(displacing the general rules in Articles 3 and 4, which point to the law freely chosen by the
parties or the law of the vendor). In practice, however, the operation of the special rules for
consumers may not always be consistent; see Rühl, ‘The Protection of Weaker Parties in the
Private International Law of the European Union: A Portrait of Inconsistency and Conceptual
Truancy’ (2014) 10 Journal of Private International Law (JPIL) 335 ff.
87
See WP29, WP217 (2014) 15 ff, which reads, ‘in conclusion, the Working Party considers that an
analysis has to be made on a case-by-case basis whether Article 8 DPD in itself provides for
stricter and sufficient conditions, or whether a cumulative application of both Article 8 and 7
DPD is required to ensure full protection of data subjects’.
88
See, for example, Mendoza and Bygrave (n 32) 11 ff.; also ICO (n 3) 21 ff (para 35).
89
These can be decisions: solely automated with trivial effects on the data subject; non-solely
automated with significant effects on the data subject; or non-solely automated with trivial
effects.
90
The GDPR sets the general framework for the processing of personal data; see ‘Explanatory
Memorandum of the Commission’s Proposal for a Regulation on the protection of individuals


This, however, has not prevented the WP29 from blurring the boundaries
between these two types of automated decision-making processes (i.e. within and
outside Article 22): by requiring controllers to comply with risk management duties
under Article 35(3)(a);91 and by recommending the application of notification rights,
under Articles 13(2)(f ) and 14(2)(g), to automated decision-making outside Article
22.92 To conclude, therefore, although these are positive proposals which help
provide higher levels of protection to individuals, more coordinated efforts with regard
to the development of these categories would provide greater clarity for the auto-
mated decision-making regime.

4.6 conclusion
This chapter illustrates the benefits of the joint intervention of the EU legislator and
the WP29 ‒ currently, the European Data Protection Board ‒ in protecting individ-
uals in a data-driven society. Together these two actors have contributed to modern-
ising the regime for solely automated decision-making under Article 22 GDPR. The
WP29 interpretative guidance on automated decision-making and profiling shows a
determined commitment to making the regime for solely automated decision-making more substantive (i.e., less formalistic). This is achieved by: implementing an interpretation of
the term ‘solely’ which does not exclude human nominal involvement (i.e., involve-
ment lacking the ability to influence or change the automated output); explicitly
acknowledging the need to enhance protection of vulnerable adults and children
under the ‘similarly significant effects’ test and the safeguards in Article 22(2)(b), (3)
and (4); and linking the data subject’s right to meaningful information to the right to
challenge a decision. The WP29 has also confirmed the strict and rigid nature of
Article 22, meaning that solely automated decision-making is limited to the data
subject’s explicit consent, contractual necessity, legal authorisation and the specific
requirements for specially protected data under paragraph (4). Outside these cat-
egories, the general prohibition in paragraph (1) makes solely automated decision-
making unlawful.
These developments represent progress towards the introduction of a sustained
and more advanced regime for solely automated decision-making. Compared to
Article 15 DPD, they improve legal certainty and provide data subjects with higher
levels of protection in solely automated decision-making processes. However, it has
to be noted that there is nothing intrinsically revolutionary about them. Although it
is clear that they provide new and more articulated mechanisms to address data
subjects’ needs for enhanced protection, they do so without altering the underlying

with regard to the processing of personal data and on the free movement of such data (General
Data Protection Regulation)’, COM(2012) 11 final, 2012/0011 (COD) 1 ff.
91
This results from the wording of Article 35(3)(a) which refers to ‘decisions’ (rather than to
‘solely’ automated decisions). WP29, ‘Guidelines, WP251rev.01’ 29 ff.
92
As a matter of good practice, ibid 25 ff.


regulatory paradigm, which they inherit from the Data Protection Directive. After
all, solely automated decision-making remains limited to specific types of decisions
and grounds for processing, and requires the adoption of safeguards.
The main question this raises is whether the higher standards of protection in
Article 22 GDPR, including controllers’ new transparency and accountability duties,
will allow data subjects to maintain adequate levels of autonomy and control in the
era of machine-learning algorithms and big data. This will have to be assessed
against the practice of solely automated decisions as it develops under the GDPR.
If the revised regime proves incapable of empowering individuals effectively, whilst
allowing the technological and innovative drive of the data-driven society, a more
ambitious regulatory intervention will be required.

5

Robot Machines and Civil Liability

Susana Navas

introduction
The legal consideration of a robot machine as a ‘product’ has led to the application
of civil liability rules for producers. Nevertheless, several aspects of the relevant European regulation suggest that this field needs to be reviewed with robotics in mind. The types of defect, the meaning of the term 'producer', the consumer expectations test and non-pecuniary damages are some of the aspects that
could give rise to future debate. The inadequacy of the current Directive 85/374/
EEC for regulating damages caused by robots, particularly those with self-learning
capability, is highlighted by the document ‘Follow up to the EU Parliament
Resolution of 16 February 2017 on Civil Law Rules on Robotics’. Other relevant
documents are the Report on “Liability for AI and other emerging digital technolo-
gies” prepared by the Expert Group on Liability and New Technologies, the “Report
on the safety and liability implications of Artificial Intelligence, the Internet of
Things and Robotics” [COM(2020) 64 final, 19.2.2020] and the White Paper “On
Artificial Intelligence – A European approach to excellence and trust” [COM(2020)
65 final, 19.2.2020].

5.1 robot machines and virtual robots


We used to imagine a robot,1 because of the stereotype presented in films, as a
‘machine’, with an anthropomorphic form (an android), giving the impression that it

1
As is known, the term ‘robot’ was created by Josef Čapek, who was born in the Czech Republic.
In 1920, Josef used it when speaking with his brother Karel, allowing him to make the term
known in a play called R.U.R. (Rossum’s Universal Robots). ‘Robot’ came from the Czech word
robota, meaning 'worker slave'. In addition, the word 'roboticist', referring to a person studying or building robots, was coined by Isaac Asimov in 1941 (Asimov, I, Robot, Gnome Press 1950).
Regarding the origin of the term ‘robot’, see Horáková and Keleman, ‘The Robot Story: Why


would act or at least seem to act autonomously and interact with human beings.2
However, robots are something more than this or, at least from a technological viewpoint, much more than the collective imagination takes them to be. Thus, depending on what is understood by the word 'robot', and how a robot is represented, particular rules will apply. From the legal perspective, therefore, not all cases relating to robots should be treated in the same manner.

5.1.1 Broad Notion of a Robot


A common ‘technological’ definition of robot that covers all situations is ‘a system
that is capable of perceiving the environment or context in which it is located, that
can process the information to plan a certain action and execute it’.3 This definition
includes both robot machines and artificial intelligence entities.4 The first group of
robots, that is, robot machines,5 encompasses, for instance, a mechanical arm that
collects pieces in an assembly line and is employed in the automotive industry, or a
machine acting autonomously for a specific purpose following the instructions given
by some software (e.g., the well-known Roomba vacuum cleaner). The second group of robots includes a range of cases. They have a common element: an algorithm written in binary code that can act in response to a pre-designed purpose
or that can decide autonomously. The decisions and corresponding actions cannot
be predicted by the human being or group of individuals who created the algo-
rithm.6 These autonomous systems are called ‘agents’,7 and they can communicate

Robots Were Born and How They Grew Up’ in Husbands, Holland, and Wheeler (eds), The
Mechanical Mind in History (MIT Press 2008) 307.
2
Automatically assigning physical features like those of a person or an animal to a robot machine is very common (Richards and Smart, 'How Should the Law Think about Robots?' in Calo, Froomkin, and Kerr (eds), Robot Law (Edward Elgar 2016) 6).
3
Calo, ‘Robotics and the Lessons of the Cyberlaw’ (2015) Cal L Rev 103, 513; Palmerini and
Bertolini, ‘Liability and Risk Management in Robotics’ in Schulze and Staudenmayer (eds),
Digital Revolution: Challenges for Contract Law in Practice (Nomos Verlag 2016) 235.
4
Artificial intelligence entities, known as electronic or autonomous agents, have raised interest-
ing legal questions concerning the conclusion of contracts by electronic means. I will not deal
with this topic in this chapter, but instead would refer the reader to my work: Navas and
Camacho (eds), Mercado digital. Reglas y principios jurídicos (Tirant Lo Blanch 2016) 99. Also
see Loos, ‘Machine-to-Machine Contracting in the Age of the Internet of Things’ in Schulze,
Staudenmayer, and Lohsse (eds), Contracts for the Supply of Digital Content. Regulatory
Challenges and Gaps (Nomos Verlag 2017) 59‒83.
5
Funkhouser, 'Paving the Road Ahead: Autonomous Vehicles, Products Liability and the Need for a New Approach' (2013) 1 Utah L Rev 437‒462.
6
European Commission, ‘Statement on Artificial Intelligence, Robotics and Autonomous
Systems’, European Group on Ethics in Science and New Technologies (March 2018),
available at <http://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf>. Date of
access: April 2020; Karnow, ‘The Application of Traditional Tort Theory to Embodied
Machine Intelligence’ in Calo, Froomkin, and Kerr (n 2) 55.
7
Stone and Veloso, ‘A Survey of Multiagent and Multirobot Systems’ in Balch and Parker (eds),
Robot Teams: From Diversity to Polymorphism (Taylor & Francis 2002) 37.


with each other in what is termed machine-to-machine communication (M2M).


They seem to possess ‘life’, like the renowned IBM supercomputer Dr Watson,
which can analyse a dizzyingly huge amount of data, almost unimaginable for the
human brain, establishing diagnoses related to cancer as well as suggesting the best
possible treatment with a degree of success comparable to that of an expert in the
field.8 Similar cases are robot advisers in the context of market investments.9 Since
artificial intelligence can be applied in all fields of knowledge,10 there are many
other examples in this group of robots, including drones11 and completely autono-
mous vehicles.12 They can respond to pre-designed software or they can ‘think’ for
themselves by processing information that they continuously gather from the envir-
onment, from M2M communication and from databases (self-learning capacity),13
thanks to the technology that is the basis of the internet of things.14 This sort of
connection between agents is called a ‘multi-agent system’ or ‘agent society’.15
There are thus three fundamental activities that a system should develop if it is to
be considered a robot. First, it should perceive – that is, it should gather information
about its context, being equipped with a sophisticated sensor system. The infor-
mation that is collected should be rapidly processed to prevent the system from
crashing. It must be noted that the machine usually possesses several sensors, each of which collects specific data that may conflict with, or even contradict, other information that is captured. Algorithms are in charge of reconciling all of this information and building a complete and precise picture that enables the machine to act efficiently and safely, minimizing the damage that could
ultimately occur. Second, the system should plan. When the algorithm processes
and analyses the environment, it creates a series of actions that are ordered to

8
Millar and Kerr, ‘Delegation, Relinquishment and Responsibility: The Prospect of Expert
Robots’ in Calo, Froomkin, and Kerr (n 2) 102; Brynjolfsson and McAfee, The Second Machine
Age. Work, Progress, and Prosperity in a Time of Brilliant Technologies, (WW Norton &
Company Ltd 2016) 24‒27, 50, 65, 92‒93, 192, 207, 255; Ford, The Rise of the Robots.
Technology and the Threat of Mass Unemployment (Oneworld 2015) 102‒106, 108, 153‒155;
Balkin, ‘The Path of Robotics Law’ (2015) Cal L Rev Circ 6, 45.
9
Saroni, FinTech Innovation: From Robo-Advisors to Goal Based Investing and Gamification
(Wiley 2016) 21.
10
The common core is the analysis of massive data (knowledge-based AI), and the obtaining of
smart data in order to suggest solutions or diagnoses given the purpose or purposes for which
these data are handled (Mayer-Schönberger and Cukier, Big data. La revolución de los datos
masivos (Turner Madrid 2013).
11
Perritt Jr and Sprague, ‘Drones’ (2015) Vand J Ent & Tech L 7(3), 673; Perritt Jr and Sprague,
‘Law Abiding Drones’ (2015) 16 Colum Sci & Tech L Rev 385; Ford (n 8) 122, 173.
12
Brynjolfsson and McAfee (n 8)14‒15, 19, 55, 80, 200, 206‒207, 219; Ford (n 8) 96, 175‒186;
Rifkin, La sociedad de coste marginal cero (Paidós 2014) 285‒286.
13
Ebers, ‘La utilización de agentes electrónicos inteligentes en el tráfico jurídico: ¿necesitamos
reglas especiales en el Derecho de la responsabilidad civil?’ (2016) InDret 5, <www.indret
.com>. Date of access: April 2020.
14
Navas, ‘El internet de las cosas’ in Navas and Camacho (n 4) 32.
15
Stone and Veloso (n 7) 37; Navas, ‘Agente electrónico e inteligencia ambiental’ in Navas and
Camacho (n 4) 91.


achieve specific purposes. To plan also means to have regard to the perceived
information in order to select actions or determine situations or behaviour that
should take place in the future.
In addition, the choice between different behaviours, and thus the planning of
future actions, should be made as quickly as possible to enable the system to
respond, for instance, in milliseconds to any external circumstance. Lastly, the
system must act, that is, perform the foreseen plan, for which the machine usually
has an electronic system different from the traditional mechanical and hydraulic
system that was previously employed. Actions and behaviours modify and transform
the environment in which the machine is located.16
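Purely by way of illustration, the perceive-plan-act cycle described above can be sketched as a minimal control loop. The Python sketch below is hypothetical: the names (Reading, fuse_readings, plan_action, control_loop) and the distance thresholds are invented for this example and do not correspond to any real robotic platform; it merely shows how conflicting sensor data might be reconciled before an action is selected.

# A minimal, hypothetical sketch of the perceive-plan-act cycle described above.
# All names (Reading, fuse_readings, plan_action, control_loop) are invented for
# illustration and do not refer to any real robotics framework.

from dataclasses import dataclass
from typing import List


@dataclass
class Reading:
    source: str        # which sensor produced the value
    obstacle_m: float  # estimated distance to the nearest obstacle, in metres


def fuse_readings(readings: List[Reading]) -> float:
    """Reconcile possibly conflicting sensor data into one conservative estimate."""
    # Taking the minimum reported distance is a cautious way of resolving
    # conflicts: the machine assumes the worst case reported by any sensor.
    return min(r.obstacle_m for r in readings)


def plan_action(obstacle_m: float) -> str:
    """Select the next behaviour on the basis of the perceived environment."""
    if obstacle_m < 0.5:
        return "stop"        # imminent collision: halt to minimize damage
    if obstacle_m < 2.0:
        return "slow_down"   # keep moving, but reduce the risk
    return "proceed"


def control_loop(sensor_batches: List[List[Reading]]) -> List[str]:
    """One perceive -> plan -> act pass for each batch of sensor readings."""
    actions = []
    for batch in sensor_batches:
        distance = fuse_readings(batch)  # perceive
        action = plan_action(distance)   # plan
        actions.append(action)           # 'act' is simulated by recording the command
    return actions


if __name__ == "__main__":
    demo = [
        [Reading("lidar", 3.2), Reading("ultrasound", 2.9)],
        [Reading("lidar", 1.4), Reading("ultrasound", 0.3)],  # conflicting data
    ]
    print(control_loop(demo))  # ['proceed', 'stop']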

5.1.2 Strict Notion of a Robot


A 'robot', strictly speaking, is only one that has self-learning capacity, so that the program does not merely apply a human-designed heuristic; instead, the machine creates its own heuristic frame of reference.17 These robots are known as 'smart robots' or 'expert robots'.18 According to this definition, a machine directed by a person using a remote control (as in the case of some drones or driverless cars, in which a human being must be present to carry out certain tasks or to take control of the vehicle in circumstances to which the vehicle is not equipped to respond adequately) cannot be considered a robot in the proper sense.19 A robot
can have different sizes, from a vehicle to a chip (a nanorobot),20 ranging through
any machine that possesses the three features described above. Thus, there are robot
machines and virtual robots. The former could present different degrees of mobility:
they could be completely autonomous (like assistive or social robots)21 or not so

16
Calo (n 3) 513; Funkhouser, ‘Paving the Road Ahead: Autonomous Vehicles, Products Liability
and the Need for a New Approach’ (2013) 1 UL Rev 437‒462.
17
Karnow (n 6) 55.
18
We can find great artificial intelligence systems in the field of music, where algorithms can
compose pieces emulating the style of Mozart or Chopin, or computational programs capable of
painting and drawing better than many artists and with a level of creativity even higher than that of
a human <www.robotart.org>; Schlackman, ‘The Next Rembrandt: Who Holds the Copyright
in Computer Generated Art’, Art Law Journal (22 April 2016) available at <https://alj.orangenius
.com/the-next-rembrandt-who-holds-the-copyright-in-computer-generated-art/>. Date of access:
April 2020), or designing buildings that astonish many famous architects, or producing journalis-
tic reports that would perplex many journalists, or programs that propose judgments and write
decisions for the greater delight of judges and tribunals. Some more examples are described by
Carr, Atrapados. Cómo las máquinas se apoderan de nuestras vidas (Taurus 2014) 15.
19
Funkhouser (n 16) 437‒462.
20
Nanotechnology or nanorobotics is an emerging technology that already has relevant applica-
tions in the domains of medicine, electronics and the building industry. Nevertheless, nanor-
obots have many more future applications such as in nutrition or oral hygiene (Ford (n 8) 235‒
245; Serena, La nanotecnología (CSIC Madrid 2010) 95.
21
Feil-Seifer and Matarić, ‘Defining Socially Assistive Robotics’ (2005) Proceedings of the 2005
IEEE, 9th International Conference on Rehabilitation Robotics, 28 June – 1 July, Chicago;


autonomous (e.g., a surgical arm). In general, they should present a minimum level of autonomy in responding to external stimuli. Hence, they should have a
certain degree of capacity to take decisions.22 We cannot regard cyborgs,23 robotic
prostheses that a person can carry (e.g., an exoskeleton)24 or other machines that are
controlled remotely, as robots. A 3D printer is not a robot either, although it employs
software. However, 4D printers, which are being researched in industry, could be classified as smart machines, since they permit materials or products to adapt continuously to the environment, redesigning themselves as they do so. This
kind of printer is nearer to the idea of a robot than a pure 3D printer.25
On the other hand, questions arise when an expert human being and an expert robot, after analysing the same situation or data, are not of the same opinion and reach opposite decisions. Which of them has analysed the matter more carefully, the robot or the human? If we decide to follow the decision of one of them and that decision turns out to be wrong, and if acting on it causes damage to third parties, who is to be regarded as liable?
If a smart robot is designed to serve pre-determined, specific purposes, it is called a 'closed robot'. If, by contrast, it is not limited in its purposes, so that it can change its behaviour, perform different tasks depending on the environment and take decisions that an individual might judge unpredictable, using 'open-source' code, the smart robot is called an 'open

Levy, Amor + Sexo con Robots (Contextos Paidós 2007) 133; Turkle, The Second Self: Com-
puters and the Human Spirit (Simon & Schuster 1984).
22
Funkhouser (n 16) 437‒462.
23
See Camacho, ‘La subjetividad ciborg’ in Navas (ed) Inteligencia artificial. Tecnología. Dere-
cho (Tirant Lo Blanch 2017) 231‒257; Navas and Camacho, El ciborg humano. Aspectos
jurídicos (Comares Granada 2018); Aguilar, Ontología Cyborg. El cuerpo en la nueva sociedad
tecnológica (Gedisa 2008) 13; Hughes, Citizen Cyborg: Why Democratic Societies Must
Respond to the Redesigned Human of the Future (Basic Books 2004) 3; Ramachandran, ‘Against
the Right to Bodily Integrity: Of Cyborgs and Human Rights’ (2009) 1(87) Denver U L Rev 17‒
20; Clark, Natural-Born Cyborgs. Minds, Technologies and the Future of Human Intelligence
(Oxford University Press 2003) 13; Zylinska, The Cyborg Experiments. The Extensions of the
Body in the Media Age (Continuum 2002) 15.
24
Donati et al., ‘Long-Term Training with a Brain-Machine Interface-Based Gait Protocol
Induces Partial Neurological Recovery in Paraplegic Patients’ <www.nature.com/scien
tificreports>. Date of access: April 2020.
25
Robert, ‘Impresoras 3D y 4D’ in Navas (ed) Inteligencia artificial. Tecnología. Derecho (Tirant
Lo Blanch 2017) 197‒230. Concerning views on 4D printing at the Self-Assembly Lab of the
Massachusetts Institute of Technology (MIT), www.selfassemblylab.net, see Tibbits, Self-
Assembly Lab: Experiments in Programming Matter (Routledge 2017) 29. In the field of
medicine, see Mitchell, Bio Printing: Techniques and Risks for Regenerative Medicine (Else-
vier2017) 3; Kalaskar (ed) 3D Printing in Medicine (Elsevier 2017) 43. In the domain of
architecture and engineering, see Casini, Smart Buildings: Advanced Materials and Nanotech-
nology to Improve Energy (Elsevier2016) 95; European Commission, ‘Identifying Current and
Future Application Areas, Existing Industrial Value Chains and Missing Competences in the
EU, in the Area of Additive Manufacturing (3D-printing)’ (DOI 10.2826/72202), Executive
Agency for Small and Medium-Sized Enterprises, (Brussels 2016) 34.


robot’.26 In this case, changes can be made to the system by third parties without
compromising its performance of tasks.
Robot machines and virtual robots can be either closed or open robots, although
the former are frequently closed robots (e.g., robots for industry), whereas the latter
are usually open robots (e.g., Dr Watson, Deep Blue27 or Google AlphaGo).28

5.1.3 European Notion of a Robot


The European notion of a robot machine seems to be defined by the attribution of
five features:29 (i) acquisition of autonomy through sensors or by exchanging data
with the environment (inter-connectivity) as well as the trading and analysis of such
data; (ii) ability to learn (self-learning) from experience and by interaction with other
robots (M2M); (iii) at least a minor physical presence, distinguishing it from a virtual robot;
(iv) adaptation of its behaviour and actions to the environment; and (v) absence of
biological life.
In accordance with this concept of a robot, we may differentiate three groups of
smart robots: (i) cyber-physical systems, (ii) autonomous systems, and (iii) smart
autonomous robots.
For EU policy makers, two criteria define a robot: first, the strict notion of a robot as outlined above, and second, being a robot machine to which the status of an electronic person, responsible for any damage it causes, could be assigned. However, the attribution of legal personality is a very controversial issue.

5.2 robots from a legal perspective

5.2.1 Current Legal Framework


Because of the diversity of types of robot, there is no unique legal framework for
them all. That is, an android does not merit the same legal consideration as a surgical arm, or as an operating system that can take decisions autonomously, such as a robot adviser or an electronic agent that can conclude contracts and choose its counterparty.
Rules concerning liability for damage caused by robots are related to the legal
understanding of them. On the one hand, it is important to pay attention to the fact
that most robots contain an operating system, a computer program. On the other
26
Calo, ‘Open Robotics’, <http://ssrn.com/abstract=1706293>. Date of access: April 2020; Cooper,
‘The Application of a ‘Sufficiently and Selectively Open License’ to Limit Liability and Ethical
Concerns Associated with Open Robotics’ in Calo, Froomkin, and Kerr (n 2) 166‒167.
27
<www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/>. Date of access: April 2020.
28
<https://deepmind.com/research/alphago/>. Date of access: April 2020.
29
Follow-up to the EU Parliament Resolution of 16 February 2017 on Civil Law Rules on
Robotics, 2015/2103 INL.


hand, we must take into account that robots are employed in the real world, interacting with people as assistant robots, nurse robots or drones, or, in general, as autonomous means of transportation. Thus, as well as the robot machine producer's liability, there is the robot owner's liability and the designer-engineer's liability. In studying these topics, it is important to deal with robot machines and virtual robots separately. Since virtual robots are computer programs, the regulations relating to computer programs should be applied to them. Robot machines can be regarded as a 'movable good', one of whose parts may be a computer program (e.g., drones or driverless cars). Notwithstanding this, when a robot is part of a movable or immovable good, it can be seen, in the traditional classification of goods, as an 'immovable good' by destination or by incorporation, depending on the particular case (e.g., surgical arms30 or automotive industry arms).

5.2.2 Regulation of the Design and Production of Robot Machines


The regulation of the design and production of robot machines through technical
standards is one of the areas in which the law can have an effect, by requiring certain
levels of safety and security to minimize the risks for the humans who handle these machines, especially when they are so-called collaborative robots. In these cases, the robot is not a mere tool or assistant of the individual, but collaborates with them, carrying out a certain task in the same way as two persons would, or even better. Safety requirements should be taken into consideration in the design
and subsequent production of the robot.31
A robot for industry is considered a 'machine'. Therefore, Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (known as the Machinery Directive)32 applies to it. This directive defines essential health and safety requirements of general application, supplemented by a number of more specific requirements for certain categories of machinery. Machinery must be designed and constructed so that it is fit for its function and can be operated, adjusted and maintained without putting persons at risk, not only when these operations are carried out under the expected conditions but also taking into account any reasonably foreseeable misuse thereof.33

30
Sankhla, ‘Robotic Surgery and Law in USA – A Critique’ <http://ssrn.com/abstract=2425046>.
Date of access: April 2020.
31
Commission staff working document, ‘Liability for emerging digital technologies’, SWD(2018)
137 final.
32
OJL 157/24, 9.6.2006. At the time of writing, the abovementioned directive is being reviewed
[Artificial Intelligence for Europe, SWD(2018) 137 final].
33
Smith, ‘Lawyers and Engineers Should Speak the Same Robot Language’ in Calo, Froomkin,
and Kerr (n 2) 78; Leenes and Lucivero, ‘Laws on Robots, Laws by Robots, Laws in Robots:
Regulating Robot Behavior by Design’ (2014) 6(2) Law, Innovation and Technology 193‒220.


In the international arena, there are the well-known ISO standards that, in the
field of industrial robots, are particularly taken into account by the EU and the
Member States. ISO 10218-1 and ISO 10218-2 have been supplemented by ISO/TS 15066:2016.34 Other relevant ISO standards are ISO 26262, concerning the functional safety of road vehicles, and ISO/IEC 15288, in relation to systems and software engineering. In relation to therapeutic or assistant robots (such as the well-known Pepper robot) that accompany minors during medical treatment, help disabled people with daily activities or assist elderly people in their homes, it is foreseeable that the human will have physical contact with the robot or that their home will need to have certain dimensions or meet other specific requirements. Certain security and safety standards
must therefore be established, as well as mechanisms that, in certain situations,
could automatically switch off the robotic system to prevent damage being caused.
The design should therefore emphasize the ability of the robot to comply with
certain legal and even social requirements.35 The document ‘Follow up to the EU
Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics’ recom-
mends that this type of robot (an assistant or collaborative robot) should be given
particular consideration and mentions their possible future regulation. For this
reason, specialized technical committees have been set up, such as the ISO/TC
299 Robotics Committee, which is exclusively dedicated to the design of rules
relating to robotics. In this regard ISO 13482:2014 should be taken into consideration.
Additionally, the context in which the robot performs its autonomous activity can
require it to respect certain legal rules that can, like technical norms, affect its
activity through the design of the artificial intelligence system embedded within it.
This is the case with driverless cars, which must pay particular attention to traffic and
safety rules as well as those concerning liability.36 Nowadays, researchers work with
algorithms that allow intelligent agents to recognize norms and respect them,
adapting to the uncertain and always changing context in which they interact.37
Because, in these cases, we are dealing with assistant rather than industrial robots,
from a legal point of view the producer must take other rules into account,
particularly Directive 2001/95/EC of the European Parliament and of the Council
of 3 December 2001, on general product safety,38 and Council Directive 85/374/EEC of
25 July 1985 on the approximation of the laws, regulations and administrative
provisions of the Member States concerning liability for defective products.39 The
latter directive will be discussed in Section 5.4.

34
<www.iso.org/obp/ui/#iso:std:iso:ts:15066:ed-1:v1:en>. Date of access: April 2020.
35
Wynsberghe, ‘Designing Robots for Care: Care Centered Value-Sensitive Design’ (2013) 19 Sci
Eng Ethics 407‒433; Leenes and Lucivero (n 33) 193‒220.
36
Castells, ‘Vehículos autónomos y semiautónomos’ in Navas (n 25) 101‒121.
37
Criado, Argente, Noriega, and Botti, ‘Reasoning about Norms under Uncertainty in Dynamic
Environments’ (2014) 5 International Journal of Approximate Reasoning 2049–2070; Navas,
‘Derecho e Inteligencia artificial desde el diseño. Aproximaciones’ in Navas (n 25) 23‒72.
38
OJL 11/4, 15.1.2002.
39
OJL 210, 7.8.1985.


Other interesting cases are rules concerning respect for, or adaptation to, the
environment through, for example, channelling or intelligent infrastructures that
take advantage of nanotechnology and 4D printing.40
Close to the domain of robotics are brain‒computer interfaces, which consist of
artificial systems that interact with the nervous system through neurophysiological
signals and are used, for instance, by people with disabilities during the execution of
certain motor activities.41 Cyborgs are one field in which these interfaces could have
full application.
It is important to bear in mind that a duty to inform, so that a person gives
informed consent to the implantation of the artificial system in question, is imposed
by national legal systems.
In short, if a robot or an autonomous artefact is to be put on the market, legal rules
can determine not just its corporeal structure but also its capabilities, through the
design of the artificial intelligence system itself. For this purpose, it is useful to incorporate sensors that receive information from the environment, so that the robot is able to adapt to changing circumstances.

5.3 the liability of the owner of a robot: some reflections
A core issue in robotics is the distribution of responsibility between humans and
robots or other intelligent machines when they cause harm to third parties.42 Although this issue is the subject of another chapter in this volume, I cannot
resist raising the question of the liability of the owner of a robot. Depending on the
degree of mobility or the decision-making autonomy of the robot, damage caused to
another person could be subject to various specific rules.43 In the case of an android,
it could be regarded as a minor and, consequently, the responsibility of the owner, so
would the liability be that of a parent or guardian, albeit by analogy? In the case of a
pet robot, then, would it be better to apply the strict liability for damages caused by
animals?44 Would the application of Directive 2001/95/EC, regarding general prod-
uct safety, be enough? Maybe a robot should be regarded as a tertium genus in the

40
The fact that the environment is relevant for the development of robotic capabilities has been
highlighted in the study of the iCube robot in which real situations have been recreated: Ribes,
Cerquides, Demiris, and López de Mántaras, ‘Active Learning of Object and Body Models
with Time Constraints on a Humanoid Robot’ (2016) IEEE Transactions on Autonomous
Mental Development <www.iiia.csic.es/~mantaras/TAMD.pdf>. Date of access: April 2020.
41
Camacho (n 23) 231‒257.
42
In the USA, see Balkin (n 8) 45.
43
Hubbard, ‘Sophisticated Robots: Balancing Liability, Regulation and Innovation’ (2015) 66 Fla
L Rev 1803, 1862‒1863; Richards and Smart (n 2) 6.
44
Kelley, Schaerer, Gomez, and Nicolescu, ‘Liability in Robotics: An International Perspective
on Robots as Animals’ (2010) 24(13) Advanced Robotics 1861‒1871, DOI: 10.1163/
016918610X527194.


same way as animals in some national legal systems such as those of Germany,
Switzerland or Austria. Where the robot machine is used by a supplier of services,
could one treat their liability for damages as vicarious liability in the same way as a
principal is liable for damage caused by assistants?45 In my view, this option suggests
that robot machines and employees have the same legal status, which is doubtful.
The fact that they perform similar jobs does not mean that they deserve equal legal
consideration.
While I do not believe a specific rule is needed to regulate liability in the case of
owning a robot, policy makers should amend civil codes to regulate civil liability for the
possession of potentially dangerous goods, including robots or smart artefacts.46
Whether this is considered on the basis of fault (with a possible presumption iuris
tantum of lack of diligence, as in cases concerning the responsibility of parents or
guardians for the acts of minors under their charge) or of strict liability (as in cases of
animals or the handling of potentially dangerous machines), obtaining insurance with
a minimum level of cover for the damage caused by the robot should be compulsory.
I do not agree with the idea suggested by some scholars that, although third parties
should be compensated by the owner, responsibility should be assigned to the machine
itself.47 In such a case, the machine would be deemed to be a child, that is, a human
person, or, at least, legal personality would be assigned to it. This is not yet the case,
although it could become the case in the future through rulings by national policy
makers.48 In my opinion, if treating a robot as a 'holder of rights and duties' makes any sense, it is only in that the robot can be 'the subject' to which the action causing the damage is 'attributed', whilst 'the subject' that is to be considered 'liable' is the human. It would thus be a (new) case of civil liability for someone else's act.

5.4 the producer's liability for damage caused by a robot machine: review
Concerns about the responsible handling of smart robots led the European Parliament to issue a draft report on 31 May 2016, proposing to the Commission that civil law rules be drafted.49 This proposal was followed by, on
the one hand, the Report with Recommendations to the Commission on Civil Law
Rules on Robotics50 and, on the other hand, the Follow up to the EU Parliament

45
Palmerini and Bertolini (n 3) 241.
46
Spindler, ‘Roboter, Automation, künstliche Intelligenz, selbst-steurende Kfz – Braucht das
Recht neue Haftungskategorien?’ (2015) 12 CR 775.
47
Beck, ‘Grundlegende Fragen zum rechtlichen Umgang mit der Robotik’ (2009) 6 JR 229–230;
Ebers (n 13) 8; Kersten, 'Menschen und Maschinen' (2015) 1 JZ 1‒8.
48
Loos (n 4) 59‒83.
49
2015/2103 INL.
50
27.01.2017, A8–0005/2017.


Resolution of 16 February 2017 on Civil Law Rules on Robotics. These two docu-
ments also focus on the need to regulate the civil liability for damage caused by robots.
The European Parliament's resolution of 12 February 2019 on a comprehensive European industrial policy on artificial intelligence and robotics (2018/2088(INI))51 and the Report on 'Liability for AI and other emerging digital technologies' prepared by the Expert Group on Liability and New Technologies, in which the need for a review of liability rules is highlighted, should also be taken into consideration.
Although compensation for damages caused by defects in robots and other intelli-
gent machines can be awarded according to national producer liability legislation,
classical issues regarding the application of this legislation to such ‘products’ will
arise when it comes to future reviews of this legislation.52 In fact, the inadequacy of
the current Directive 85/374/EEC for regulating damages caused by robots, particu-
larly those with self-learning capacity, is highlighted by the ‘Follow up’ document
mentioned above.53 Some topics for a possible future review of EU legislation on
producer liability are presented below.

5.4.1 Robot Machines As Products


A robot machine can be included in the definition of ‘product’. Therefore, the
producer of a robot can be regarded as liable for defects that cause damage to another.
For the purposes of Directive 85/374/EEC, Art 2 states that ‘product’ means ‘all
movables, with the exception of primary agricultural products and game, even
though incorporated into another movable or into an immovable’. According to
my explanation in Section 5.1 concerning the legal view of a robot machine, we can
affirm that robots can be legally regarded as products and that European Commu-
nity rules should be applied. Usually, a robot machine (a tangible good) incorpor-
ates software in a manner that makes it hard to distinguish the software from the
good, for instance, in cases where the software is necessary for the functioning of the
robot. In this case, it is generally accepted that the computer program becomes an
inseparable part of the robot in which it is incorporated. Hence, it must be treated as
a product falling within the scope of the directive, given the link between the robot
machine and the computer program.54

51
P8_TA-PROV(2019)0081.
52
Howells and Willet, ‘3D Printing: The Limits of Contract and Challenges for Tort’ in Schulze
and Staudenmayer (eds), Digital Revolution: Challenges for Contract Law in Practice (Nomos
Verlag 2016) 67; Solé, El concepto de defecto del producto en la responsabilidad civil del
fabricante (Tirant lo Blanch 1997) 563; Salvador and Ramos, ‘Defectos de productos’ in
Salvador and Gómez, Tratado de responsabilidad civil del fabricante (Thomson Civitas Cizur
Menor 2008) 135.
53
At the time of writing, the above-mentioned directive is being reviewed (Artificial Intelligence
for Europe, SWD(2018) 137 final).
54
Fairgrieve et al. in Machnikowski (ed) European Product Liability (Intersentia Cambridge
2016) 47.


Given that robots are becoming increasingly sophisticated, the ‘state of scientific
and technical knowledge existing at the time when he put the product (the robot)
into circulation’ is especially relevant for assessing the producer’s defence against
liability (Art 7(e) Directive 85/374/EEC). Software upgrades and updates call into question the application of the so-called 'development risks exception'.

5.4.2 Types of Defects


First of all, because robot machines are becoming increasingly sophisticated, particular attention should be paid to their design: the defects that render a robot 'defective'55 will more often be defects in design than defects in manufacturing.56 In turn, this degree of sophistication implies that there must be more precision in
the warnings, information and instructions that the producer must supply to the
purchaser of the robot; that is, there must be more information but also the information
must be more technical.57 Some sort of specific knowledge is even needed by the user
of the robot or intelligent machine, if they are to have a full understanding of the
information and instructions provided. The complexity of this information and these
instructions suggests that in future lack of information will become a more common
defect than it is today. Hence, defects in design and in instructions will be the kind of
defects that robots will frequently have, rather than defects in manufacturing.58
From this statement it follows that, if the producer is regarded as liable in any case
under the current legislation, their own investment in high technology could be
considerably reduced. In the search for a balance between investment in technological research and liability to third parties, the solution should not be to shield the manufacturer from liability for certain types of defect, as proposed by Ryan Calo.59 In my opinion,
a better solution would be to set the criteria for imposing civil liability on the producer
according to the type of defect. Consequently, strict liability would be the best rule
regarding manufacturing defects, whereas a presumption of fault iuris tantum would
be more appropriate for defects in design and in information/instructions. Notwith-
standing this, the proposal made by the European Parliament to the Commission for

55
As is well known, the criterion used by the Directive to define the ‘defectiveness’ of a product is
not a subjective criterion but an objective and normative one (Wuyts, ‘The Product Liability
Directive – More than Two Decades of Defective Products in Europe’ (2014) 5(1)JETL 12).
56
In contrast to the position in the USA (see § 2 Restatement Third of Torts: Product Liability),
the Directive does not distinguish between types of defect. However, in practice, courts in the
Member States differentiate between manufacturing defects, design defects and instruction
defects (Fairgrieve et al. in Machnikowski (ed), European Product Liability 53).
57
Spindler (n 46) 769; Castells (n 36) 115‒121.
58
Hubbard (n 43) 1821‒1823; Ebers, ‘Autonomes Fahren: Produkt- und Produzentenhaftung’ in
Oppermann and Stender-Vorwachs (eds), Autonomes Fahren (CH Beck 2017) 111‒112.
59
Calo, ‘Robotics & the Law: Liability for Personal Robots’ <http://ftp.documation.com/refer
ences/ABA10a/PDfs/2_1.pdf>. Date of access: April 2020.


the regulation of robots and the Report on Liability for Artificial Intelligence clearly
opt for the introduction of liability irrespective of fault on the part of the producer of
the robot in all cases concerning defects. In addition, the proposal states that the
owner of a robot should take out compulsory insurance for damage caused to another,
and requires the creation of a compensation fund that covers all damage that cannot
be covered by that insurance.60

5.4.3 Notion of Producer: The ‘Market Share Liability’ Rule


The definition of who should legally be considered the ‘producer’ deserves special
attention. According to Art 1 of Directive 85/374/EEC, the producer is to be regarded
as liable for the damage caused to third persons by a defect in their product. Some scholars argue (though without providing data to support this view) that if the producer is exclusively liable even when the defect is not, properly speaking, a manufacturing defect, and is also liable for a design defect when, for example, several individuals have worked on the product (e.g., the creator of the algorithm, the programmer, the designer and the manufacturer of a particular part) or a group or research team is involved,61 a certain reluctance to invest in the manufacture of robots or other intelligent machines could be justified.62
take into account the fact that most of the defects that might be found in robots or
other smart machines are defects in the design or conception of the ‘product’, it is
worth suggesting a broader definition of ‘producer’ that includes the engineer and/or
designer of the robot as long as they do not work for the manufacturer (that is, they
are not part of the structure of the manufacturer’s enterprise). As is known, the
designer of a product who is not the manufacturer or the repairer falls outside the
scope of the notion of producer. However, the designer could be held liable directly
as manufacturer of a component part of the robot for the damage caused. In any case,
an injured person can bring a direct civil claim for damages against the engineer or
the designer, according to the current national rules on civil liability, insofar as Art
13 Directive 85/374/EEC states that ‘this Directive shall not affect any rights which an
injured person may have according to the rules of the law of contractual or non-
contractual liability or a special liability system existing at the moment when this
Directive is notified'. In the NTF report, the 'designer' of the AI system could be considered a backend operator.
As I have already emphasized, it is commonplace to use open-source software in
the creation of a robot (an open robot) and, in this case, any person can introduce

60
'Follow-up to the EU Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics'.
61
Balkin (n 8) 45. This scholar raises the question, but he does not propose a concrete solution.
For the same view, see Beck (n 47) 227.
62
Hubbard (n 43) 1821‒1823.


changes or innovations or add specific standards to public protocols, and so on.63


The uncertainty concerning the person or persons who act affects the existence and
proof of the ‘causal relationship’ between the defect and the damage. Therefore,
although it can still be criticized,64 the market share liability rule should receive
paramount consideration.
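As a purely hypothetical illustration of how the market share liability rule would apportion an award where the actual producer of the defective product cannot be identified, consider the following sketch. The producer names, market shares and figures are invented; the function simply divides the award in proportion to each producer's share of the relevant market.

# Hypothetical illustration of the market share liability rule: where the producer
# of the defective product cannot be identified, each producer compensates the
# injured person in proportion to its share of the relevant market.
# The producer names, shares and award are invented.

def apportion_damages(total_damages: float, market_shares: dict) -> dict:
    """Split the award among producers according to their market shares."""
    total_share = sum(market_shares.values())
    return {
        producer: round(total_damages * share / total_share, 2)
        for producer, share in market_shares.items()
    }


if __name__ == "__main__":
    award = 100_000.0  # damages awarded to the injured person (EUR)
    shares = {"Producer A": 0.50, "Producer B": 0.30, "Producer C": 0.20}
    print(apportion_damages(award, shares))
    # {'Producer A': 50000.0, 'Producer B': 30000.0, 'Producer C': 20000.0}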
In 1999, aspects related to proof of damage, defect and causal relationship were
raised, among other issues, in the Green Paper submitted by the Commission on
liability for defective products.65 One of the proposals was the application of the
aforementioned rule of market share liability, with a view to a possible amendment
of Directive 85/374/EEC. The aspects considered included: (i) a legal presumption
of the causal relationship when the injured person proves the defect and the
damage; (ii) a legal presumption of the defect when the injured person proves the
existence of the damage; (iii) obliging the producer to provide all kinds of docu-
mentation and useful information so that the injured person can benefit from
specific elements to prove the facts (discovery rule); and (iv) requiring the producer
to pay the costs of experts, in order to lighten the burden of proof on the part of the
injured person, under certain conditions ‒ for example, the injured person could ask
the judge to order the producer to pay the necessary expenses for the victim to prove
his case, provided that the victim reimbursed the expenses (plus, possibly, interest) if
the claim was not successful.66 M2M communication can establish factual causation between the type of defect and the damage much more clearly, satisfying the criterion of objective imputation that judges must take into account. On the internet of things, intelligent machines communicate directly with the manufacturer, designer or programmer, indicating problems, deficiencies or defects. M2M communication is, in fact, already used by many enterprises.67 In any case, digitalization and the IoT allow the behaviour of things to be traced and all this information to be stored in what is called the 'black box'. Access to the black box by the injured person can ease the burden of proving the defect.
In these circumstances, and with significant cost savings, the agent causing the
damage can be fully identified, and this type of communication may lead to a

63
Calo, ‘Open Robotics’ <http://ssrn.com/abstract=1706293>. Date of access: April 2020; Cooper
(n 26) 166‒167.
64
Salvador, ‘Causalidad y responsabilidad (versión actualizada)’ (2002) InDret 3<www.indret
.com>. Date of access: April 2020; Luna, ‘Causalidad y su prueba. Prueba del defecto y del
daño’ in Salvador and Gómez (eds), Tratado de responsabilidad civil del fabricante (Thomson
Aranzadi Cizur Menor 2008) 471‒476; Ruda, ‘La responsabilidad por cuota de mercado a
juicio’ (2003) InDret 3, <www.indret.com>. Date of access: April 2020. All these authors quote
widely from the North American bibliography, which is the source of this approach.
65
COM(1999), 396 final.
66
The Third Report concerning Directive 85/374/EEC, of 2006, does not refer to these sugges-
tions [COM(2006) 496 final].
67
Bräutigam and Klindt, ‘Industrie 4.0, das Internet der Dinge und das Recht‘ (2015) NJW
1137‒1143; Grünwald and Nüssing, ‘Machine to Machine (M2M)-Kommunikation. Regulator-
ische Fragen bei der Kommunikation im Internet der Dinge‘ (2015) MMR 378‒383.


significant change in the current rules concerning the responsibility of the manu-
facturer. On the basis of expert systems, defects of any kind that appear can be fully
identified and corrected almost immediately, at least if the system is able to repair
itself, or the defective mechanism can be stopped, which can prevent or minimize
damage. The knowledge of the defect that is immediately acquired by the liable
person allows them to take urgent measures in this regard (for example, modifying
the software or warning the user about the possible risk of damage and the best steps
to take to avoid it). It is worth mentioning that questions raised about the responsi-
bility of the producer after the product is put into circulation, in relation to
identifying a defect that can cause damage, must be answered according to the
general rules of civil liability under domestic law.68
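The 'black box' tracing mentioned above can be pictured, in a deliberately simplified and hypothetical way, as an append-only event log whose entries are chained together so that later alteration can be detected. The sketch below is illustrative only: the class name, the recorded fields and the hash-chaining scheme are assumptions made for this example and do not describe any actual product, standard or legal requirement.

# A hypothetical, deliberately simplified sketch of 'black box' logging on a
# connected machine: sensor readings and decisions are appended to a log whose
# entries are hash-chained, so that an injured person (or a court-appointed
# expert) could later check that the record has not been altered. The field
# names and the chaining scheme are illustrative only.

import hashlib
import json
import time
from typing import List


class BlackBox:
    def __init__(self) -> None:
        self.entries: List[dict] = []

    def record(self, event: dict) -> None:
        """Append an event, chaining it to the previous entry via a hash."""
        prev_hash = self.entries[-1]["hash"] if self.entries else ""
        body = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self) -> bool:
        """Check that no recorded entry has been altered after the fact."""
        prev_hash = ""
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev_hash or recomputed != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True


if __name__ == "__main__":
    box = BlackBox()
    box.record({"sensor": "lidar", "obstacle_m": 0.3, "decision": "stop"})
    box.record({"component": "brake_controller", "status": "fault_detected"})
    print(box.verify())  # True as long as the log has not been tampered with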

5.4.4 The Consumer Expectations Test


The consumer expectations test should be considered in a future review of Directive
85/374/EEC (Art 6).69 In this respect, it would be advisable to take into account the
criteria that were proposed in relation to the Restatement (Third) of Torts in the
USA, and to apply the reasonable alternative design test70 instead of the consumer
expectations test. The reasonable alternative design test has been criticized on the
grounds that it favours the manufacturer (the entrepreneur) too much by imposing
excessive costs on the consumer (particularly relating to proof of the defect), since it
takes special note of the ‘risks–utility’ test. However, since communication between
intelligent machines is advancing, it might not be so unwise to take this test into
consideration, even if both tests are applied jointly for the purpose of determining
whether or not a smart machine is defective,71 or, above all, if we believe that defects lie more often in the design than in the manufacturing.
In addition, whether a reasonable alternative design exists or not is a question that
an algorithm will be able to answer – or can already answer – once the data to which
the M2M communication has given rise have been handled. There would be full
compliance, in these instances, with technological neutrality.72

68
Machnikowski (ed) European Product Liability (Intersentia 2016).
69
Regarding the meaning of this in relation to producers' civil liability, see Solé (n 52) 97‒102; Salvador and Ramos (n 52) 146‒152.
70
According to § 2(b): 'A product is defective (. . .) in design when the foreseeable risks of harm
posed by the product could have been reduced or avoided by the adoption of a reasonable
alternative design by the seller or other distributor, or predecessor in the commercial chain of
distribution, and the omission of the alternative design renders the product not reasonably
safe . . .' (§ 2 Rest. Third, Torts: Products Liability: Categories of Product Defect). This is
suggested by Hubbard (n 43) 1854‒1855.
71
Salvador and Ramos (n 52) 182‒184.
72
Navas (n 14) 58.


5.4.5 Inclusion of Non-pecuniary Damages


Non-pecuniary damage (pain and suffering and other non-material loss) has trad-
itionally been excluded from the protection extended by Directive 85/374/EEC, in
the sense that the existence and scope of the obligation to provide compensation is
determined exclusively by domestic regulations. First of all, we need to determine
the meaning of 'non-pecuniary damage'. According to some scholars, only damage that money cannot truly make good should be considered non-pecuniary damage: even though the injured person receives a sum of money, the utility that they had before the damage occurred is not restored. Hence, if compensation does restore that utility to the injured person, the damage caused should be treated as pecuniary damage.73
The intention of the policy makers who drafted the directive on behalf of the
European Community in 1985 was to exclude non-pecuniary damage from the scope
of the directive and to refer this to national legislation. There is also a substantive
reason, which is that Germany was against the regulation of non-pecuniary damage
at Community level because of the differences in criteria between Member States
and, in particular, in the criteria applied by the national courts regarding the
admission of compensation for such damage.74 Indeed, in the mid-1980s when
the directive was being drafted, while compensation for non-pecuniary damage
was granted quite freely (even lightly) in France and Spain, Germany did not allow
it and the situation in Italy was very restrictive.75 Today, since the reform to the BGB
(§ 253(2)) in relation to civil liability brought in by the German legislator in 2002, claims for compensation for non-pecuniary damage are admitted, in general, in cases of injury to the body, health, freedom or sexual self-determination, and also within the regime of strict liability.76 In that same year, a final section was added to § 8 of the Produkthaftungsgesetz of 15 December 1989,77 the legislation covering civil liability for defective products, by virtue of which the injured party may claim compensation for non-pecuniary damage only where it results from a bodily injury caused by a defect in the product. Thus, Directive 85/374/EEC should

73
Gómez, ‘Daño moral’ (1999) 1 InDret<www.indret.com>. Date of access: April 2020; Gómez,
‘Ámbito de protección de la responsabilidad de producto’ in Salvador and Gómez (eds),
Tratado de responsabilidad civil del fabricante (Thomson Aranzadi Cizur Menor, 2008) 662,
footnote 9.
74
This is highlighted by Marín, Daños por productos: estado de la cuestión (Tecnos 2001) 152;
Alcover, La responsabilidad civil del fabricante. Derecho comunitario y adaptación al Derecho
español (Civitas 1990) 80; Martín and Solé, ‘El daño moral’ in Cámara (ed), Derecho Privado
Europeo (Colex 2003) 859‒860.
75
See a comparative overview of non-pecuniary damages in Horton (ed), Damages for Non-
pecuniary Loss in a Comparative Perspective (Springer 2001) 279.
76
Magnus, ‘La reforma del derecho alemán de daños’ (2003) 4 InDret <www.indret.com>. Date
of access: April 2020.
77
<http://www.gesetze-im-internet.de/bundesrecht/prodhaftg/gesamt.pdf>. Date of access:
April 2020.


be amended to make sure that non-pecuniary damage falls within its scope of
protection.78 In fact, the European Parliament's Resolution of February 2017 with recommendations to the Commission on Civil Law Rules on Robotics warns that the rules on civil
liability should cover all possible damage caused by a robot, given, as has already
been indicated, that not all cases involving a robot fall within the scope of the
directive’s current wording.

5.5 conclusions
The internet of things, as well as robots and other intelligent machines, presents a
challenge to civil liability norms, giving rise to the need for an articulated system that
can respond to the new situations that could occur. It should not be forgotten that
permanent communication between intelligent machines, or systems that are
capable of repairing themselves, or expert robots that make decisions at critical
moments, can drastically reduce the number of accidents, with a consequent decrease in deaths and in bodily injuries with long-term consequences.
This may have a major economic impact, not just in the field of health.79 The
impact will be of particular importance in the insurance sector.80
Permanent communication between intelligent machines can allow machines
themselves to adapt constantly to new technical and scientific advances or to adapt
to their environment on the basis of the existing knowledge in a specific domain or
for a specific technique (e.g., the materials with which pipes are produced, in
relation to pipelines or other pieces of infrastructure). This will inevitably, and sooner rather than later, affect the rules on the civil liability of the producer and owner of a robot or intelligent machine. Robotics, then, offers a great opportunity to review and finally amend different aspects of the producer liability rules that, since 1999, have been left off the political agenda of the Community's public bodies.81 In any case, future 'personalized' information based on customer preferences, needs and capabilities, derived from the analysis of the massive data stored by the manufacturer, could make it possible to 'personalize' liability, avoiding a one-size-fits-all rule.

78
For more arguments and literature concerning this approach see: Navas, ‘Daño moral y
producto defectuoso. Estado de la cuestión legal y jurisprudencial en España’ (2016) 13 Revista
Crítica de Derecho privado (Uruguay) 525‒573.
79
In the case of autonomous vehicles, it is estimated that they can reduce fatalities by 90%.
This, in turn, can mean savings of billions of euros per year in medical care (Rifkin (n 12)
285‒287). In fact, robotics is a current matter on the agenda of the World Economic Forum
<www.weforum.org/es/agenda/archive/artificial-intelligence-and-robotics/>. Date of access:
April 2020.
80
In relation to driverless cars, see the considerations of some important insurance companies in
<www.driverless-technologies-insurance.com>. Date of access: April 2020.
81
At the time of writing, the abovementioned directive is being reviewed (Artificial Intelligence
for Europe, SWD(2018) 137 final).

6

Extra-Contractual Liability for Wrongs Committed by Autonomous Systems

Ruth Janal*
introduction
As robots and intangible autonomous systems increasingly interact with humans, we
wonder who should be held accountable when things go wrong. This chapter
examines the extra-contractual liability of users, keepers and operators for wrongs
committed by autonomous systems. It explores how the concept of ‘wrong’ can be
defined with respect to autonomous systems and what standard of care can reason-
ably be expected of them. The chapter also looks at existing accountability rules for
things and people in various legal orders and explains how they can be applied to
autonomous systems. From there, various approaches to a new liability regime are
explored. Neither product liability nor the granting of a legal persona to robots is an
adequate response to the current challenges. Rather, both the keeper and the
operator of the autonomous system should be held strictly liable for any wrong
committed, opening up the possibility of privileges being granted to the operators of
machine-learning systems that learn from data provided by the system’s users.

6.1 damage wrought by autonomous systems


The human world is increasingly influenced by autonomous systems. Robots help
us with an increasing number of everyday chores: rooms are cleaned by vacuum
robots, gardens are tended by intelligent irrigation systems and automatic lawn
mowers, and cars are equipped with piloting functions that can park the car or even
steer it in specific circumstances. But autonomous systems need not be embedded
in a machine to influence our lives: they also trade on our behalf on the stock
exchange, influence which news we get to see on social media, and recommend

* The author would like to thank Rebecca Sieber for her research assistance and for proofreading
this chapter.


which terms to search for in a search engine and whether to conclude a contract
with a customer.
The hope is that a more prominent use of autonomous systems will cause the
incidence of damage events and loss to fall,1 as artificial intelligence will outperform
humans in cognitive tasks. While this vision may or may not come true, those times
certainly have not yet arrived. Autonomous cars may crash,2 vacuum robots may eat
hair,3 and a whole spectrum of new categories of damage has arisen that were
previously unheard of: search engines that suggest defamatory search terms,4 adver-
tising networks that display adverts for high-paying jobs to men rather than to
women5 and image-recognition technology that categorizes persons of colour as
gorillas.6

6.1.1 Robots As Legal Persons


The need to assign responsibility and establish liability for robots is thus quite
obvious. Whenever an autonomous system ‘goes wrong’, we ask who might be held
liable for the ensuing damage. Academics7 and even the European Parliament8
have floated the idea that at some point in time, autonomous systems might be
assigned their own legal persona (e-person). This is certainly an interesting thought experiment, but at least for the time being it is not a feasible option.9 The idea behind
1
See for example Abbott, ‘The Reasonable Computer: Disrupting the Paradigm of Tort Liabil-
ity’ (2017) 86(1) George Washington Law Review 118 et seq.
2
Yadron and Tynan, ‘Tesla driver dies in first fatal crash while using autopilot mode’, 1 July 2016
<www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-
musk>.
3
McCurry, ‘South Korean woman’s hair ‘eaten’ by robot vacuum cleaner as she slept’, 9 Febru-
ary 2015 <www.theguardian.com/world/2015/feb/09/south-korean-womans-hair-eaten-by-robot-
vacuum-cleaner-as-she-slept>.
4
Bundesgerichtshof (BGH), 14.5.2013, BGHZ 197, 213 = NJW 2013, 2348.
5
Datta, Tschanz, and Datta, ‘Automated Experiments on Ad Privacy Settings’ (2015) 1 Proceed-
ings on Privacy Enhancing Technologies (PoPETs) 92.
6
Simonite, ‘When it comes to gorillas, google photos remains blind’, 11 January 2018 <www
.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind>.
7
Hilgendorf, ‘Können Roboter schuldhaft handeln?’ in Beck (ed) Jenseits von Mensch und
Maschine (Baden-Baden 2012) 127 et seq.; Beck, ‘The Problem of Ascribing Legal Responsi-
bility in the Case of Robotics’ (2016) AI & Society (AI & Soc) 473, 479 et seq.; Matthias,
Automaten als Träger von Rechten (Logos Verlag Berlin 2008) 244; Sartor, ‘Agents in Cyber
Law’ in ‘Proceedings of the Workshop on the Law of Electronic Agents’, CIRSFID (LEA02)
(Gevenini 2002) 7; Cahen, ‘Le droit des robots’ <www.murielle-cahen.com/publications/robot
.asp>; Lagasse,’ Faut-il un droit des robots?’ Revue de la gendarmerie nationale (CREOGN)
Note numéro 12, Juillet 2015; Bem, ‘Droit des robots: quel statut juridique pour les robots ?’ 2013
<www.legavox.fr>; Bensoussan, Droit des robots: science fiction ou anticipation? (Dalloz (D)
2015) 1640.
8
Delvaux, ‘Report with recommendations to the Commission on Civil Law Rules on Robotics’
(2015/2103(INL)) (European Parliament Committee on Legal Affairs), 27 January 2017.
9
Also sceptical Muller, ‘Opinion of the European Economic and Social Committee on Artifi-
cial Intelligence – The consequences of artificial intelligence on the (digital) single market,


legal persona is to separate the ownership from the management of an entity and
limit shareholder liability. To that effect, the legal entity is provided with its own
assets. However, currently it does not seem economically efficient to endow robots
with their own assets, at least if those assets are supposed to be sufficient to cover
potential losses caused by the system. The same purpose can be more easily
achieved simply by making insurance a requirement.10 Even if the idea of legal
personality for robots gains traction over time, it is difficult to imagine each and
every kind of autonomous system endowed with its own assets, such as bank
accounts for vacuum robots, intelligent irrigation systems and internet search
engines.11

6.1.2 The Players Involved in Autonomous Systems


Thus, we look to humans and companies as possible defendants of a claim for
damages. When autonomous systems cause harm, three parties quickly come to
mind as possible defendants: the user of the autonomous system, the system’s owner/
keeper and the producer of the system. I would like to introduce a fourth category,
for reasons that will be discussed in Section 6.3.4. I shall call this party the ‘operator’.
The operator is the person who is pulling the strings of the autonomous process.
This is the party that is responsible for running the autonomous system, that is, the
party which provides the data necessary to run the system, which oversees possible
machine-learning processes and which initiates the necessary update pushes for the
software. Often, this party will be identical with the manufacturer of the machine in
which the system is embedded (think, for example, of Tesla, which develops both
Tesla cars and the Tesla autopilot). But that may not always be the case (think of
computer hardware and operating systems which are typically produced by different
companies), and the party that has originally developed the autonomous system will
also not necessarily be the party that is subsequently running it.

production, consumption, employment and society’ (own-initiative opinion)’ (2017/C 288/01),


n 3.3.3; Nevejans, ‘Citizens’ Rights and Constitutional Affairs – Legal Affairs, European Civil
Law Rules in Robotics’. Study, European Union 2016 <www.europarl.europa.eu/RegData/
etudes/STUD/2016/571379/IPOL_STU(2016)571379_EN.pdf> 14 et seq.; Martin, ‘Taking the
High Road: The 4th Continuity: Personhood of Robots’ (2018) 9.1 ACM Inroads <https://
inroads.acm.org/article.cfm?aid=3177854#R2>; Armbrüster, ‘Automatisiertes Fahren – Paradig-
menwechsel im Straßenverkehrsrecht?’ (2017) Zeitschrift für Rechtspolitik (ZRP) 85; Lohmann,
‘Ein europäisches Roboterrecht – überfällig oder überflüssig?’ (2017) Zeitschrift für Rechtspo-
litik (ZRP) 171; Bensamoun and Loiseau, L’intelligence artificielle: faut-il légiférer? (Dalloz (D)
2017) 581, 582; Mendoza-Caminade, Le droit confronté à l’intelligence artificielle des robots: vers
l’émergence de nouveaux concepts juridiques?, (Dalloz (D) 2016) 445, 447 et seq.
10
Nevejans (n 9) 15; Keßler, ‘Intelligente Roboter – neue Technologien im Einsatz’ (2017)
MultiMedia und Recht (MMR) 593.
11
Zech, ‘Zivilrechtliche Haftung für den Einsatz von Robotern’ in Gless and Seelmann (eds),
Intelligente Agenten und das Recht (Nomos 2016) 203.


An example illustrates the various players. Let us assume that an autonomous car
is involved in a car accident, and the victim is seeking compensation. The victim
might sue the car manufacturer who produced the car and put the product into
circulation (Volkswagen). Another option is to make a claim against the operator of
the autonomous system that has continuously collected data from all the cars
equipped with the autonomous system and integrated this data in the updates which
it regularly sends to the cars (Waymo, the Google autonomous car company). And
obviously, a claim might also be made against the keeper/owner of the car or the
driver/passenger of the car.
Any of the four parties may bear responsibility for the accident by virtue of their
own wrongdoing: the manufacturer may have installed a defective sensor; the
operator may have installed an update which was not thoroughly tested; the keeper
may have ignored a notice to update the system; and the driver/passenger may have
ignored system warnings or other obvious signs that a sensor was dirty and thus not
operating properly. Apart from liability for any wrongdoing committed, some of
these parties might also be liable under strict liability rules.
This chapter takes a closer look at all but one of the parties named above,
addressing the liability of the user, the keeper and the operator of an autonomous
system. Product liability is considered by Susana Navas in Chapter 5. This chapter
looks at extra-contractual liability and is not concerned with contract law.
It is also the case that many losses are covered by insurance, and depending
on the applicable rules, any person who suffered a loss may have a direct
claim against the tortfeasor’s insurer. Again, this is not the subject of the present
chapter.

6.1.3 Existing Liability Regimes


Obviously, assigning the damage caused by a thing or a person to another person is
not a new concept in law. Law is best developed incrementally and it therefore
seems a good idea to first take a look at traditional notions of liability. In Europe,
civil liability has historically been fault based. Over time, other liability models have
developed that might provide guidance for autonomous systems liability. Here
liability for things comes to mind, since autonomous systems are often embedded
in a physical object. Autonomous systems also bear a resemblance to employees, as a
principal may choose to delegate a task either to a human employee or to an
autonomous system. Finally, the fact that autonomous systems may employ
machine learning and their actions cannot be entirely foreseen allows for a com-
parison with children. The chapter therefore next discusses the liability regimes for
fault, things, employees and minors using the examples of French, English and
German law. On this basis, a liability regime for autonomous systems is then
proposed.


6.2 traditional concepts of liability

6.2.1 Fault-Based Liability


Classic liability rules in Europe are fault based.12 The reasons are historic – civil
liability rules were derived from criminal law. Those origins are evident from the
terms used to describe extra-contractual liability rules: tort law, responsabilité délic-
tuelle, Deliktsrecht. Obviously, torts may be committed intentionally, but in prac-
tice, in the majority of cases the tortfeasor was acting negligently. The approaches
taken by each of the three legal orders are next briefly outlined.

6.2.1.1 England
Under English law, a person will be liable under the tort of negligence if they are
under a duty of care towards the eventual victim, if they have breached said duty and
if the breach has resulted in damage, proved on the balance of probabilities.
However, damages that could not reasonably have been foreseen are considered to
be too remote and will not be compensated. Damages claims regularly turn on the
question of whether a duty of care existed towards a particular person or groups of
persons. There are accepted categories of a duty of care in case law: direct bodily
harm, product liability, legal malpractice, and so on. Further categories may be
developed by the courts, which will take an incremental approach and consider
three – admittedly vague – elements: proximity, foreseeability, and whether it is fair,
just and reasonable to impose a duty.13 Existence of a duty of care will be particularly
scrutinized in cases of pure economic loss.
A breach of duty occurs when a party fails to live up to the standard that a
reasonable person in their position is expected to meet, with specific standards applying to professionals and to lay persons respectively. In particular, children are only
expected to meet the standard of a reasonable child of the same age.14 Mental
impairment, however, is not an accepted defence.15

6.2.1.2 France
Under Articles 1240 and 1241 of the French Civil Code, liability arises for any damage
caused by faute. The concept of faute is best described as behaviour which does not
meet the standard of a just and cautious person or a good professional. Minor age
12
Zweigert and Kötz, Einführung in die Rechtsvergleichung auf dem Gebiete des Privatrechts (3rd
edn, Mohr Siebeck Verlag 1996) 650.
13
Caparo Industries PLC v Dickman [1990] United Kingdom House of Lords (UKHL) 2.
14
Jackson v Murray and Another [2015] United Kingdom Supreme Court (UKSC) 5.
15
Van Dam, European Tort Law (2nd edn, Oxford University Press 2013) 276 with reference to
exceptions.


and mental impairment are not accepted defences. While pure economic loss is an
accepted head of damage, losses will only be compensated if the damage is direct,
certain and legitimate, which gives the courts some discretion to exclude rather
remote damages. It should also be noted that the significance of liability for negli-
gent acts in French law has dwindled in light of the wide-ranging liability imposed
for the acts of things and employees.

6.2.1.3 Germany
Under German law, a negligent act will generally only give rise to liability if one of
the rights named in § 823(1) BGB was infringed, namely life, health, property,
freedom, personality and commercial enterprise (leaving aside some other grounds
for negligence, such as breach of statutory duty, § 823(2) BGB). As § 823(1) does not
list a party’s wealth as a protected right, a negligent causation of purely economic
loss generally does not give rise to compensation. In cases of indirect losses, the
alleged tortfeasor will be held liable if they were under a duty of care to prevent the
damage by monitoring and controlling a particular source of damage, such as
hazardous objects or activities (Verkehrssicherungspflicht). The victim must prove
causation to the satisfaction of the court and, similar to English law, liability for
damages will be denied where the loss could not reasonably have been foreseen by
the tortfeasor. Neither minors nor mentally impaired persons are held liable if they
lack the appropriate comprehension of why their actions are wrong.16
2017 saw the introduction of new rules to the German Straßenverkehrsgesetz
(StVG; Road Traffic Act), adapting the law for the emerging functions of highly
automated driving. Under §§ 1a and 1b StVG, it is legal to operate a car with highly or fully automated driving functions as defined under the law. Drivers may switch
these cars into automated mode and turn their attention away from traffic, provided
that they remain sufficiently alert to immediately regain control whenever the
system asks them to do so or whenever it becomes obvious that the prerequisites
for the use of automated driving functions are no longer present. These duties of
care are generally in line with the above-mentioned safety duties for hazardous
items. However, the law is so vague that it fails to contribute to legal certainty.17
Furthermore, it is questionable whether current systems are able to recall drivers’
attention in time, given that humans need around 30 to 40 seconds to fully get ‘back
in the loop’18 ‒ i.e. to assess the vehicle’s situation and to respond accordingly. For

16
§§ 827, 828 BGB.
17
Schirmer, ‘Augen auf beim automatisierten Fahren! Die StVG-Novelle ist ein Montagsstück’
(2017) Neue Zeitschrift für Verkehrsrecht (NZV) 255.
18
Merat, Jamson, Lai, Daly, and Carsten, ‘Transition to Manual: Driver Behaviour when
Resuming Control from a Highly Automated Vehicle’ in Merat and de Waard (eds), Transpor-
tation Research Part F: Traffic Psychology and Behaviour, Volume 27, Part B (Elsevier 2014) 274
et seq.


the time being, any driver who averts their attention from traffic should therefore be
considered to have acted negligently.

6.2.2 Liability for Things


As we have seen, non-contractual liability in European legal systems is traditionally
linked to misconduct. However, the age of industrialization and the development of
ever more complex machines planted the seed for the idea that parties should be
responsible for the damage caused by goods under their responsibility and control.
The concept behind liability for things is that any person who expands their sphere
of action through the use of things and benefits from such use should also bear
responsibility for any risk attached (eius damnum, cuius commodum).19
As discussed below, this idea has gained a varying amount of traction in the legal
orders examined here. The discussion in this chapter is limited to liability for items
that can be held by private individuals, and will not address the liability for hazards
typically only employed by major companies (such as nuclear power plants).

6.2.2.1 France
French law is certainly quick to assign the damage caused by a thing to the thing’s
keeper (gardien). Art 1242(1) of the Civil Code declares that ‘A person is liable not
only for the damage which he caused by his own act, but also for that which is
caused . . . by things which he has in his keeping’. Originally, this sentence was only
intended and understood to be an introductory note to the liability rules in Art 1242
(2) and following (which provide for strict liability of the keepers of animals and
buildings).20 In the nineteenth century, when industrialization led to a rapid
increase in accidents and victims were often unable to prove faute on the part of
the owners of machines, the French Cour de Cassation started to use Art 1242(1) as
the foundation of a strict liability regime.21 Over time, Art 1242(1) has come to be
understood as a general rule providing for strict liability of the keeper of a good.22
Strict liability arises whenever there is an intervention d’une chose, meaning that
the respective thing must somehow be involved in the creation of damage. It is
irrelevant whether that involvement is physical or merely psychological, whether the
19
Jourdain, Les principes de la responsabilité civile (9th edn, Dalloz 2014) 96; van Dam (n 15) 299.
20
Boyer, Roland, and Starck, Obligations. 1. Responsabilité délictuelle (5th edn, Elitec 1996) 201;
Ferid and Sonnenberger, Das französische Zivilrecht (2nd part, 2nd edn, Verlag Recht und
Wirtschaft GmbH 1986) chap 20, n 301; Jourdain (n 19) 85 et seq.; Zweigert and Kötz
(n 12) 663 et seq.
21
Van Dam (n 15) 60.
22
Chambre des requêtes de la Cour de cassation (Cass. Req.) 19.1.1914, D. 1914, 1, 303; Cour de
Cassation, Chambres réunies (Cass. Ch. Réun.) 13.2.1930, DP 1930, 1, 57 note Ripert = S. 1930,
1, 121 note Esmein – arrêt Jand’heur; Albrecht, Die deliktische Haftung für fremdes Verhalten im
französischen und deutschen Recht (Mohr Siebeck 2013) 20.


harm is caused directly or indirectly, and whether the good is dangerous or generally
considered to be innocuous.23 Even if a person uses the object as an instrument to
create harm, that does not necessarily militate against the keeper’s responsibility.24
Instead, the French courts consider the particular role that the good has played in
the causation of damage. If the thing was in an orderly state at its orderly place, it
will be considered to have played a passive role (rôle purement passif). The thing will
then not be considered a major factor in the causation of harm and its keeper will
not be held liable for damages. On the other hand, if the thing was moving and thus
came into contact with the person harmed or the goods damaged, an active role in
the causation of damage will be presumed. It would then be up to the keeper to
exonerate himself by demonstrating contributory negligence of the victim25 or a case
of force majeure (events or effects that cannot be reasonably foreseen or
controlled).26
Special rules exist for specific items. Art 1243 provides a specific strict liability rule
for the keepers of animals, but case law does not distinguish between animals and
other things.27 The situation is different for cars, with the liability of the keeper of a
car subject to the so-called Loi Badinter.28 Compared to the strict liability regime
under Art 1242(1), the Loi Badinter restricts the defences available to the keeper. In
cases of personal injury, the defence of contributory negligence can only be raised
under very limited circumstances.29 The keeper is moreover barred from raising the
defence of force majeure.30
This encompassing liability regime obviously warrants a closer look at the defin-
ition of gardien (keeper). Any person who possesses usage, control and supervision of
the good (usage, direction et contrôle) is considered its keeper, regardless of whether
the power of disposal is due to law or fact.31 For example, if the item is stolen, the
thief will be considered its new keeper. The former keeper will no longer be liable
for any damage caused by the good, even if they did not keep the object in safe custody

23
Cass. Ch. Réun., DP 1930, 1, 57 note Ripert = S. 1930, 1, 121 note Esmein – arrêt Jand’heur;
Jourdain (n 19) 88.
24
Boyer, Roland, and Starck (n 20) 223 et seq.
25
Cour de cassation chambre civile (Cass. Civ.) 6.4.1987, D. 1988, 32 note Chabas; Assemblée
plénière de la Cour de cassation (Cass. Ass. Plén.) 14.4.2006, Bull. 2006, N 6, 12 = D. 2006,
1577; Jourdain (n 19) 96 et seq.
26
Cour de cassation chambre civile (Cass. Civ.) 2.7.1946, D. 1946, 392; Jourdain (n 19) 96 et seq.
27
Van Dam (n 15) 67.
28
Loi n. 85-677 du 5.7.1985.
29
Art 3 Loi Badinter; Cour de cassation chambre civile (Cass. Civ.) 20.7.1987, J.C.P. 1987, IV,
358–360; Cour de cassation chambre civile (Cass. Civ.) 8.11.1993, Bull. II no 316; Quézel-
Ambrunaz,’ Fault, Damage and the Equivalence Principle in French Law’ (2012) 3 Journal
of European Tort Law (JETL) 21, 29.
30
Art 2 Loi Badinter.
31
Cour de Cassation, Chambres réunies (Cass. Ch. Réun.) 2.12.1941, Bull. civ. N. 292, 523 – arrêt
Franck; Jourdain (n 19) 90 et seq.


and negligently facilitated the theft.32 The keeper will be liable for any damage that
has arisen irrespective of fault, mental sanity33 or a specific age. According to a
famous decision by the Cour de Cassation, even a small child falling off a swing with
a stick in his hand will be liable if the stick accidentally injures another child’s eye.34
Looking at these principles, one might be misled into thinking that the liability
for autonomous systems embedded in a physical object does not pose any problems
under French law. It seems a given that the object’s keeper is liable for any damage
caused by the object, unless the person harmed is principally responsible for the
damage or there is a case of force majeure. Quite surprisingly, however, several
French authors argue that the keeper should not be held liable for a robotic object
due to the keeper’s lack of control if the object is steered autonomously.35 It is also
important to note that software which is stored on a data carrier is not physical
enough to be considered a chose (thing).36 Arguably therefore, strict liability for
things under French law does not cover autonomous systems.

6.2.2.2 Germany
At first glance, the liability for things in German law follows a very different path
from the French law.

(a) strict liability for motor vehicles and luxury animals Strict liabil-
ity of the keeper of an item is only imposed upon the keepers of motor vehicles
(§ 7(1) StVG) and ‘luxury’ animals that do not serve an economic purpose for their
keeper (§ 833(1) BGB). Similar to French law, the keeper is considered to be the
person who benefits from the use of the good and who is able to control the object as
a source of risk.37 Contrary to French law, however, the keeper’s liability does not
end with the motor vehicle being stolen or misappropriated. Rather, keepers will
only be exonerated under § 7(3) StVG if they have not negligently facilitated the
misappropriation.38 The abstract risk of harm posed by motor vehicles and animals
alike provides justification for the keeper’s strict liability. As a consequence, the
32
Cass. Ch. Réun., Bull. civ. N. 292, 523 – arrêt Franck; Jourdain (n 19) 91.
33
Art 414 (3) C.Civ.
34
Assemblée plénière de la Cour de cassation (Cass. Ass. Plén.) 9.5.1984, Bull. 1984, ass. plén.
n = D. 1984, 525 note Chabas – arrêt Derguini.
35
Mendoza-Caminade, D. 2016, 445, 447; Bonnet, La Responsabilité du fait de l’intelligence
artificielle (Master de Droit privé general thesis, Université Paris 2 Panthéon-Assas 2015) 19 et
seq.; Lagasse (2015) 12 CREOGN 2.
36
Cour d’appel de Paris, Pôle 5 (CA Paris Pôle 5) 9.4.2014 note Loiseau, CCE. N 6. 2014, 54
(regarding Google AdWords).
37
For vehicles see Bundesgerichtshof (BGH) 22.3.1983, Neue Juristische Wochenschrift (NJW)
1983, 1493; BGH, 26.11.1996, NJW 1997, 660; for animals see BGH, 19.1.1988, NJW-RR 1988,
656; Spindler in Beck’scher Online-Kommentar BGB (44th edn, 2017) § 833 n 1 et seq.; Wagner
in Münchener Kommentar zum Bürgerlichen Recht (7th edn, CH Beck 2017) § 833 n 2.
38
For animal theft see Wilts, Beiheft Versicherungsrecht, Karlsruher Forum 1965, 1020.


keeper will only be held liable if that specific risk has contributed to the damage
(i.e., the loss can be attributed to the unpredictability of animal behaviour).39 In this
context, it is irrelevant whether a vehicle was steered by a human or an autonomous
system. Contributory negligence by the person harmed is a valid defence, as is force
majeure.40
There is no consensus on the required minimum age for the keeper’s liability.
While some authors argue that liability should depend upon the individual cogni-
tive ability of the minor (§ 828 BGB),41 others look to the capacity to contract
following §§ 104 et seq. BGB.42 When a parent entrusts a motor vehicle or animal to
their child, the parent is generally considered to be the keeper.43

(b) safety duties regarding hazardous objects While German statute recognizes a strict liability for things only in exceptional circumstances (§ 833(1)
BGB, § 7(1) StVG), the courts have found ways to also hold the keepers of other things
accountable under § 823(1) BGB. Again, the underlying idea is the principle eius
damnum, cuius commodum, meaning that whoever profits from the use of an object
should bear the associated risk. In the absence of rules stipulating strict liability, the
German courts have developed the theory of Haftung für eine Verkehrspflichtverlet-
zung, a liability based upon the negligent violation of safety duties under § 823(1)
BGB. Any person who bears responsibility for a hazardous object (Gefahrenquelle)
will be required to monitor the object’s status and activities and avert damage caused
to other parties.44 The obligation to take preventative action is subject to reasonability.
Naturally, this approach raises difficult questions. How does one determine whether
an item is dangerous? Who is required to undertake preventative measures? Which
measures are appropriate and when does the reasonability threshold kick in?
Several categories of dangerous objects have been identified by case law, ranging
from risks emanating from buildings and premises45 to storage obligations for

39
Bundesgerichtshof (BGH) 6.7.1976, Neue Juristische Wochenschrift (NJW) 1976, 2130, 2131;
BGH, 6.3.1990, Neue Juristische Wochenschrift – Rechtsprechungs-Report (NJW-RR)
1990, 791.
40
§ 254 BGB, §§ 9, 17(2), 7(2) StVG.
41
Hofmann, ‘Minderjährigkeit und Halterhaftung’ (1964) Neue Juristische Wochenschrift (NJW)
228 (232 et seq.); Deutsch, ‘Die Haftung des Tierhalters’ (1987) Juristische Schulung (JuS) 678;
Wagner (n 37) § 833 n 40; Staudinger in Schulze et al. (ed), BGB, 2017, § 833 n. 6.
42
Canaris, ‘Geschäfts- und Verschuldensfähigkeit bei Haftung aus culpa in contrahendo, Gefähr-
dung und Aufopferung’ (1964) Neue Juristische Wochenschrift (NJW) 1990 et seq.; Spindler
(n 37) § 833 n. 14; Teichmann in Jauernig, Kommentar zum BGB (16th edn, C.H.Beck, 2015)
§ 833 n. 3.
43
Bundesgerichtshof (BGH), 6.3.1990, Neue Juristische Wochenschrift – Rechtsprechungs-Report
(NJW-RR) 1990, 790; Wagner (n 37) § 833 n. 40.
44
Bundesgerichtshof (BGH) 8.2.1972, Neue Juristische Wochenschrift (NJW) 1972, 726; Wagner
(n 37) § 823 n. 406.
45
Förster, in Beck’scher Online-Kommentar BGB (44th edn, 2017) § 823 n. 442 et seq.; Wagner
(n 37) § 823 n. 599 et seq.


weapons46 and household chemicals47 to supervision obligations for washing machines and water supplies.48

6.2.2.3 England
English law takes a very restrictive approach to the keeper’s liability. According to
the Animals Act 1971 (s 2.1), the keeper of an animal may be held strictly liable, but
only if the animal is of a dangerous species (defined as a species not normally kept in
the British Isles and capable of causing serious damage if it is roaming free).
Apart from that, a strict liability for tangible items is unheard of for private
individuals. Even the keeper of a motor vehicle will not be held liable for damages caused by the car.49 Admittedly, the challenges posed by industrialization
did lead to the notorious precedent of Rylands v Fletcher, which held the keeper of
land strictly liable for hazardous substances stored on the ground.50 However,
subsequent decisions have watered down the rule in Rylands v Fletcher with the
result that it has become irrelevant.51
Occasionally, it is possible to identify tendencies in English case law to compen-
sate for the lack of strict liability rules.52 In Roberts v Ramsbottom, the High Court
held a driver liable for negligence after he rear-ended another car, even though the
driver’s steering ability was impaired due to a slight stroke. Neill LJ argued that since
the driver had kept his hands on the wheel, he was able to maintain some control,
albeit imperfect.53 The driver was then held liable as his driving was below the
required standard. Roberts v Ramsbottom came very close to imposing strict liability
on the driver of a car. However, the subsequent decision in Mansfield v Weetabix
(which was based on similar facts) emphasized that a driver will not be liable under
the tort of negligence if he is unaware of his illness and consequently fails to notice
the accidents caused by his actions. Leggatt LJ in the Court of Appeal convincingly
argued that a more stringent standard for the driver’s duty of care would amount to
nothing less than strict liability.54
Interestingly enough, the approach to strict liability for motor vehicles has
changed in light of automated driving. Under the Automated and Electric Vehicles
46
Bundesgerichtshof (BGH) 12.6.1990, NJW-RR 1991, 24 et seq.; Wagner (n 37) § 823 n. 689561.
47
Bundesgerichtshof (BGH) 12.3.1968, NJW 1968, 1183; BGH, 25.9.1990, NJW 1991, 502; Ober-
landesgericht Frankfurt (OLG Frankfurt) 30.5.2006, Straßenverkehrsrecht (SVR) 2006, 340
48
Oberlandesgericht Düsseldorf (OLG Düsseldorf ) 23.07.1974, NJW 1975, 171; Oberlandesger-
icht Hamm (OLG Hamm) 27.03.1984, NJW 1985, 332 et seq.; Oberlandesgericht Karlsruhe
(OLG Karlsruhe) 04.10.1990, Versicherungsrecht (VersR) 1992, 114; Oberlandesgericht Zwei-
brücken (OLG Zweibrücken) 10.04.2002 – 1 U 135/01 (juris).
49
Zweigert and Kötz (n 12) 672.
50
Rylands v Fletcher [1868] United Kingdom House of Lords (UKHL) 1.
51
Tofaris, Rylands v Fletcher Restricted Further [2013] CLJ 11 (14) with further references.
52
Zweigert and Kötz (n 12) 672.
53
Roberts v Ramsbottom [1980] 1 All ER 7 = 1 WLR 823.
54
Mansfield v Weetabix [1998] 1 WLR 1263.


Act 2018, the Secretary of State is to keep a list of automated vehicles which, under
defined circumstances, are capable of safely driving themselves.55 The Road Traffic
Act (s 143) makes it an offence to drive a car without insurance against third-party risks,
and under the new Act, insurers will be held liable for the damage caused by an
automated vehicle.56 In the absence of insurance, which may be the case for vehicles
owned by public bodies, the owner of the vehicle is liable for third-party damages.57
Note that insurance policies may exclude liability when the insured person has made
software alterations or has failed to install safety-critical software updates.58 The victim
would then have to sue the insured party for negligence.
In light of English law’s narrow approach to strict liability, any liability for the acts of
autonomous systems other than automated vehicles will need to be based upon the tort
of negligence. Some common-law scholars have argued that parallels can be drawn
between autonomous systems and animals.59 But courts are unlikely to follow that
suggestion in the near future, seeing that the strict liability for animals is based on
statute60 and that Parliament has acted to introduce liability only for automated vehicles.

6.2.3 Liability for Employees and Other Assistants


What sets robots apart from mere machines or other physical objects is that they
are not only of use for manual chores but are also capable of taking over cognitive
activities. The assignment of a task to an autonomous system seems similar to
the delegation of tasks to an employee or other assistant: watering the garden can
be assigned to a gardener or to an intelligent irrigation system; driving a vehicle
can be assigned to a driver or an autopilot. The following section considers liability
for employees and other assistants.

6.2.3.1 France: Strict Liability


French law provides in Art 1242(5) for the strict liability of the principal for any
unlawful acts of their employee (préposé). Not every person commissioned to

55
Automated and Electric Vehicles Act 2018, s 1.
56
Automated and Electric Vehicles Act 2018, s 2(1).
57
Automated and Electric Vehicles Act 2018, s 2(2).
58
Automated and Electric Vehicles Act 2018, s 4.
59
Kelley, Schaerer, Gomez, and Nicolescu, ‘Liability in Robotics: An International Perspective on
Robots as Animals’ (2010) 24 Advanced Robotics, 1864 et seq.; sceptical Asaro, ‘The Liability
Problem for Autonomous Artificial Agents’, Association for the Advancement of Artificial
Intelligence (2015) <http://peterasaro.org/writing/Asaro,%20Ethics%20Auto%20Agents,%20AAAI
.pdf>.
60
For the historic writ of scienter cf. Chapman, ‘Liability for Animals that cause Personal Injury:
Historical Origins & Strict Liability under the Animals Act 1971’ <http://1chancerylane.com/
barristers/matthew-chapman-qc/matthew-chapman-publications>.


undertake an act on behalf of another is considered to be a préposé. What is required is that the party commissioned to assist another is – however briefly – bound to
follow the instructions of another. Furthermore, the principal will only be held
liable for acts committed by the préposé within the functions for which the latter was
commissioned (dans les fonctions auxquelles ils les ont employé).61 The French
courts are quite lenient when considering this requirement. The principal is only
exonerated if the préposé has acted a) outside their functions and b) without
permission, provided that c) the purpose of the actions is not in any way connected
with their competences. Even random intentional torts committed by the préposé
are often considered to have occurred within the functions of the commission.62

6.2.3.2 Germany: Liability for Presumed Negligence


German law does not recognize the concept of vicarious liability. Rather, the
relevant provision of § 831(1) BGB holds the principal liable for their presumed fault
in selecting, controlling and supervising their employees or other agents. The
principal therefore may be exonerated, provided that they are able to prove their
diligence in selecting, supervising and controlling the employee. Parties will be
considered to be agents if they are integrated in the principal’s organizational sphere
and are subject to the principal’s instructions. As in French law, the principal will
only be held liable for acts committed by the agent within their function. Contrary
to French law, German courts have been reluctant to hold the principal liable for
torts committed on the occasion of the task commissioned, such as a theft facilitated
by access to the victim’s rooms (gelegentlich der Verrichtung).63 The principal’s
liability will only arise where there is an inner correlation between the tort commit-
ted and the task assigned (in Ausübung der Verrichtung). This restrictive case law has
rightly been criticized by legal scholars,64 who argue that it would be better to
consider whether the wrongful act was facilitated due to the assignation of the task.
When the German Civil Code was drafted, § 831 BGB was conceived as a liability
for presumed fault rather than strict liability in order to protect private households
and small businesses. Today, the rule is generally considered to be ill conceived.65

61
Assemblée Plénière de la Cour de cassation (Cass. Ass. Plén.) 19.05.1988, D.S. 1988, 513;
Jourdain (n 19) 109.
62
Cour de cassation, chambre criminelle (Cass. crim.) 23.11.1923, GP 1928, 2, 900; Paris 8.7.1954,
GP 1954, 2, 280; Cass. crim. 16.2.1965, GP 1965, 2, 24; Cass. crim. 23.11.1928, GP 1928, 2, 900;
Cass. crim. 5.11.1953, GP 1953, 2, 383; Cass. crim. 18.6.1979, DS 1980 IR 36 (Larroumet); Cass.
Ass. Plén. 19.5.1988, D.S. 1988, 513 (Larroumet); Jourdain (n 19) 111.
63
Bundesgerichtshof (BGH) 12.04.1951, BGHZ 1, 388 (390); BGH 14.02.1989, NJW-RR 1989,
723 (725).
64
Larenz and Canaris, Lehrbuch des Schuldrechts II/2 § 79 III 2 d, 480 https://doi.org/10.17104/
9783406731181-419; Medicus and Lorenz, Schuldrecht II (17th edn, CH Beck 2014) n 1347;
Wagner (n 37) § 831 n. 27.
65
See further Wagner (n 37) § 831 n. 1 et seq.; Zweigert and Kötz (n 12) 634 et seq.


The courts pursue two strategies to hold principals accountable beyond § 831 BGB.
First of all, they seek to extend the sphere of contractual liability to the pre-
contractual phase (culpa in contrahendo) and to third parties (by means of a legal
instrument called Vertrag mit Schutzwirkung zugunsten Dritter, or contracts with
protective effect to the benefit of third parties).66 As a consequence, § 278 BGB will
apply and the principal will be held vicariously liable for the acts of their agents. The
courts’ second measure is to extend the scope of § 823(1) BGB by introducing strict
duties of care for businesses that employ agents. Principals are held to intensive
duties of care in operational management and the production process. Among other
things, this has given rise to a relatively strict liability in the area of product liability,
replacing the application of § 831(1) BGB.67
Lastly, parties who seek to delegate their safety duties to independent third parties
are bound by case law to diligently choose the third party and to undertake spot tests
on the third party. Failure to do so will again lead to liability under § 823(1) BGB.68

6.2.3.3 England: Vicarious Liability


In English law, any principal is vicariously liable for torts committed by their
employees or similar agents. The test is two-fold. First, it needs to be established
whether there is a relationship between the parties giving rise to vicarious liability.
Originally, this relationship was termed a master‒servant relationship, but has in
recent years been described as a relationship of employment or akin to employment.
In Various Claimants v Catholic Child Welfare Society,69 Lord Phillips identified
several policy reasons for the vicarious liability of employers which may also give rise
to vicarious liability for non-employees: (1) the tort is committed as a result of activity
being undertaken by the wrongdoer on behalf of the ‘employer’, (2) the wrongdoer’s
activity is likely to be part of the business activity of the ‘employer’, (3) the ‘employer’
will have created the risk of the tort committed by the wrongdoer, (4) the control of
the ‘employer’ and (5) the likelihood of deeper pockets on the part of the ‘employer’.
Even if a relationship of employment or akin to employment exists, the courts will
only consider the principal’s liability to be ‘fair and just’ if there is a close connection
between the wrongdoer’s tort and the employment or other relationship giving rise

66
Zweigert and Kötz (n 12) 637 et seq.
67
Diederichsen, ‘Wohin treibt die Produzentenhaftung?’ (1978) Neue Juristische Wochenschrift
(NJW) 1287; Wagner (n 37) § 823 n. 778.
68
Bundesgerichtshof (BGH) 26.9.2006, Neue Juristische Wochenschrift (NJW) 2006, 3629; BGH
30.9.1986, Neue Juristische Wochenschrift – Rechtsprechungs-Report (NJW-RR) 1987, 147;
BGH 2.10.1984, NJW 1985, 271; BGH 12.3.2002, NJW-RR 02, 1057; BGH 12.6.2007, NJW 2007,
2550; Wagner (n 37) § 823 n. 464 et seq.
69
Various Claimants v Catholic Child Welfare Society [2012] United Kingdom Supreme Court
(UKSC) 56; Cox v Ministry of Justice [2016] UKSC 10, n. 15 et seq.; Bermingham and Brennan,
Tort Law (5th edn, OUP 2016) 240.


to vicarious liability.70 It seems that the UK Supreme Court has given both the
qualifying relationship and the close connection requirement a broader interpret-
ation in recent years, thus extending the principal’s liability.71
In addition to vicarious liability, liability for the conduct of an independent third
party (in particular independent contractors) may arise from the tort of negligence.
While the law of negligence is generally fault based, a person may be required to
procure the careful performance of work delegated to others in case of a ‘non-
delegable duty’. The wording notwithstanding, it is generally accepted that a non-
delegable duty may in fact be delegated, but doing so will give rise to a strict liability
on the part of the person delegating the task. The case law regarding non-delegable
duties is not particularly coherent,72 but it is possible to identify two broad categories
of non-delegable duties: first, where an independent contractor is commissioned to
perform a task which is inherently hazardous;73 and second, where there is an
antecedent relationship between the principal and the victim under which the
principal is under a duty to protect and care for the victim.74

6.2.4 Liability for Minors


There are striking similarities between machine-learning systems and minors.75 Minors
are constantly developing both their cognitive capabilities and their personality. Similar
to minors, the decision-making process of autonomous systems is pre-programmed to a
point, but may not yet be fully developed and fully predictable. As we have seen above,
the German and English systems adapt their liability rules, adjusting for the typically
limited care and insight that can be expected from minors. On the other hand, all the
legal orders examined in this contribution hold parents and other guardians liable for
misdeeds of minors under their protection. It is worth taking a closer look at these
liability schemes and considering whether any parallels might be drawn with them.

6.2.4.1 Liability for Minors under French Law


After the discussion of strict liability for the acts of things and strict liability for the
acts of employees, it should come as no surprise that French law also imposes a strict
70
Various Claimants v Catholic Child Welfare Society [2012] UKSC 56; Mohamud v WM
Morrison Supermarkets, [2016] UKSC 11.
71
Susan Cunningham-Hill and Karen Elder, Civil Litigation 2017–2018 (10th edn, OUP 2017)
n. 7.5.1.2.
72
For an overview see Woodland v Essex County Council [2013] United Kingdom Supreme
Court (UKSC) 66.
73
See Lord Sumption in Woodland v Essex County Council [2013] UKSC 66, n. 6: ‘Many of these
decisions are founded on arbitrary distinctions between ordinary and extraordinary hazards
which may be ripe for re-examination.’
74
Woodland v Essex County Council [2013] UKSC 66, n. 7, 23 et seq.
75
Pagallo, ‘Killers, Fridges, and Slaves: A Legal Journey in Robotics’ (2011) 26 AI & Society 352 et
seq.


liability on parents for the acts of their children in Art 1242(4) of the Civil Code.
Parents will be held liable irrespective of their diligence in supervising their child76
and even irrespective of faute on the part of the child.77 The courts have developed a
similar liability rule for institutions charged with organizing, controlling and
directing other persons’ conduct and based this rule on Art 1242 (1).78

6.2.4.2 Liability for Minors under English Law


In English law, parents and other supervisors may be held liable for the tortious acts
of minors only under the tort of negligence. Generally speaking, supervisors are
under a duty of care to prevent harm committed by a minor under their supervi-
sion.79 However, as the UK Supreme Court has pointed out, ‘The courts are also
anxious not to impose an impossibly high standard of care in an ordinary domestic
setting’.80 An illuminating example is the case Donaldson v McNiven, in which a
parent who gave an air gun to his 13-year-old child was not liable under negligence
for the harm committed with the gun, as he had instructed the child to only use the
gun in the cellar.81

6.2.4.3 Liability for Minors under German Law


Under German law, the liability for tortious acts of minors is based upon a presump-
tion of fault (§ 832 BGB). Parents or other parties that are under an obligation to
supervise minors are in principle liable for any loss caused by the wrongful act of a
minor. However, they can exonerate themselves by showing their diligence in
supervising the child or by showing that the damage would also have occurred
had they adequately supervised the minor. Parents are required to instruct, monitor
and control their children. The courts do, however, accept that children must be
given room to grow and develop their personality so that they may eventually mature
into responsible adults. Thus, parents are not required to constantly keep an eye on
their children, but – depending on their age – may allow them to roam free for
limited periods of time.82

76
Cour de Cassation, 2e Chambre civile (Cass. 2e Civ.) 19.2.1997, D. 1997, 265 note Jourdain.
77
Assemblée plénière de la Cour de Cassation (Cass. Ass. Plén.) 13.12.2002, D. 2003, 231 note
Jourdain.
78
Assemblée plénière de la Cour de Cassation (Cass. Ass. Plén.) 29.3.1991, D. 1991, 324 note
Larroumet; Cour de Cassation, 2e Chambre civile (Cass. 2e Civ.) 22.5.1995, D. 1996, 453 note Le
Bars/Buhler; Cass. 2e Civ. 12.12.2001, ETL 2002, 201.
79
Carmarthenshire County Council v Lewis (1956) Appeal Cases (AC) 549.
80
Woodland v Essex CC [2013] UKSC 66 [41].
81
Donaldson v McNiven [1952] 2 All England Law Reports (All ER) 691 (Court of Appeal) 692.
82
Bundesgerichtshof (BGH), 24.3.2009, Neue Juristische Wochenschrift (NJW) 2009, 1954; BGH,
15.11.2012, NJW 13, 1441.


6.3 perspective: liability for autonomous systems


What inferences can be drawn from the above overview for the liability for autono-
mous systems? The following section will suggest paths to pursue for users, keepers
and operators of autonomous systems. The difficulty, however, is that the more sophisticated and autonomous such systems become, the harder it is to determine whether their actions can be called ‘wrong’. The traditional grounds for liability discussed
above all presume that a person has acted inappropriately or that an object was
moving or not in a proper state. How does this translate to autonomous systems?

6.3.1 How to Define ‘Wrong’ in the Context of Autonomous Systems

6.3.1.1 Embedded Autonomous Systems


There is no doubt that an act by a robot that directly harms a person’s body or
property constitutes a ‘wrong’, unless there are valid defences (such as the preven-
tion of greater harm). Whether this allows for a weighing of life against life is a very
difficult discussion, and such trolley problems are intensely debated with respect to
autonomous vehicles.

6.3.1.2 Intangible Autonomous Systems

(a) four examples Determining a ‘wrong’ becomes much more difficult when
we look at intangible autonomous systems. Let us consider four examples:
(1) Consider autocomplete functions in search engines that suggest terms
which convey a false impression of a person. A famous example is the
case of the wife of the former President of Germany, Mrs Bettina Wulff.
In the year 2015, when the letters ‘bet’ were entered into the search form
at google.de, the search engine would suggest the search terms ‘Bettina
Wulff Escort Service’ and similar.83 Should this be considered a false
statement of fact and thus a wrong?84 Or should it be regarded as a
statement that significant numbers of people who started out by typing
‘bet’ ended up searching for ‘Bettina Wulff Escort Service’ – which
would be true?
83
Tota, ‘Dreiundvierzig Wortkombinationen weniger’, Frankfurter Allgemeine Zeitung (FAZ)
16 January 2015 <www.faz.net/aktuell/feuilleton/google-entfernt-ergaenzungen-bei-suche-
nach-bettina-wulff-13373712.html>.
84
For a discussion of case law see Karapapa and Borghi, ‘Search Engine Liability for Autocom-
plete Suggestions: Personality, Privacy and the Power of the Algorithm’ (2015) 23 International
Journal of Law and Information Technology 275 et seq.


(2) In some legal orders, such as in German law, derogatory terms and
insults give rise to civil liability. If that is the case, does image-recognition
software that labels a woman of colour ‘gorilla’85 commit a ‘wrong’?
(3) While it is well known that roughly 10 per cent of pneumonia patients
die from the disease, it is not easy to determine the risk in individual
cases. Since autonomous systems are often used for risk assessment,
suppose a system is developed to predict the probability of death for
patients with pneumonia so that high-risk patients are admitted to
hospital while low-risk patients are treated as outpatients.86 Suppose
further that in an individual instance, the algorithm does not suggest
inpatient admission and the patient dies. Should the patient’s relatives
be entitled to damages, even though a doctor could not have given a
clear recommendation in the individual case?
(4) An employer uses social media to advertise jobs in STEM fields.
Without any direction by the employer, the ads are shown on social
media to more men than women.87 Should female applicants be
entitled to compensation under equal opportunity laws, such as Art 18
of Directive 2006/54/EC88? Does it matter whether the cause for this
imbalance can be discovered? Should the argument that targeting
women is more expensive than targeting men be considered a valid
defence?
There are no easy answers to these questions. Some of the system results described
above may be due to user and/or programming decisions. Such a decision might be
to show ads to as many people as possible for a given price, irrespective of their
gender, in scenario (4). In scenario (1), a significant programming decision was to
include searches based upon autocomplete suggestions when counting the absolute
number of searches. Whenever someone entered the letters ‘bet’, searching for the
German word for bed (Bett), they would have stumbled upon the scandalous
content suggested by the autocomplete function. Their typical curiosity regarding
the scandalous content would have contributed to the popularity of the search term,
thus leading to a snowball effect of the autocomplete function. Other system results
may be based upon insufficient data, which helps explain why image-recognition

85
Cf. Simonite (n 6).
86
Caruana et al., ‘Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital
30-day Readmission’ (2015) Proceedings of the 21th ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining 1721 <http://people.dbmi.columbia.edu/noemie/
papers/15kdd.pdf>.
87
Lambrecht and Tucker, ‘Algorithmic Bias? An Empirical Study into Apparent Gender-Based
Discrimination in the Display of STEM Career Ads’, 15 October 2016 <https://papers.ssrn
.com/sol3/papers.cfm?abstract_id=2852260>.
88
Directive 2006/54/EC of the European Parliament and of the Council of 5 July 2006 on the
implementation of the principle of equal opportunities and equal treatment of men and
women in matters of employment and occupation (recast), OJ 2006 L 204/23.


software often fails to yield satisfying results for minorities (scenario (2)).89 Also, data
may be incomplete, which is why a neural network trained to predict the mortality risk of pneumonia patients classified asthma patients as low-risk patients
(scenario (3)). Interestingly enough, the data fed to the system supported this result.
However, the data did not reveal that patients with a history of asthma who develop
pneumonia are usually admitted directly to intensive care units and it is for this
reason that they rarely die of pneumonia.90
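The mechanism at work in scenario (3) may be illustrated with a deliberately simplified sketch in Python (the patient numbers are invented for illustration and are not the data used by Caruana et al.): because the historical records omit the intensive-care treatment that asthma patients routinely received, the recorded outcomes alone make asthma look like a marker of low risk.

# Hypothetical records of pneumonia patients: (group, died). The routine ICU
# admission of asthma patients is not recorded anywhere in these data.
records = (
    [("asthma", False)] * 95 + [("asthma", True)] * 5
    + [("no asthma", False)] * 880 + [("no asthma", True)] * 120
)

def observed_mortality(group):
    outcomes = [died for g, died in records if g == group]
    return sum(outcomes) / len(outcomes)

for group in ("asthma", "no asthma"):
    print(group, f"{observed_mortality(group):.1%}")
# asthma 5.0%, no asthma 12.0%: a model fitted to these records will classify
# asthma patients as low risk, precisely because the protective treatment that
# produced the low figure is invisible to it.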

(b) human standards and beyond So what standard of behaviour can reason-
ably be expected from an autonomous system? Surely, the behaviour of a reasonable
human should be the minimum standard to expect when an autonomous system is
allowed to ‘run free’. But as autonomous systems become more sophisticated and
outperform humans in specific tasks, the bar should be raised, and one might expect
at least an average performance level from an autonomous system – or should it be
expected to be state of the art? However, due to the often opaque nature of software, particularly machine-learning software, it is difficult to define either a state-of-the-art or an average performance: a system may work well in 95 per cent of all instances, yet fail entirely where certain minorities are concerned. Also,
irrespective of whether one applies an average standard or requires state of the art,
what is the relevant point in time? I suggest we require an average performance from
autonomous systems at the time the harm was done, as this would draw a parallel
with humans who are also expected to adopt evolving safety standards and are
judged by the standard of reasonable peers. The specific problem of machine
learning from user data will be considered in Section 6.3.4.4.
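The point about average performance can be made concrete with a short sketch (the accuracy figures are invented): a system that comfortably meets any aggregate benchmark may still fail badly for a small subgroup, and the aggregate number gives no hint of it.

# Invented evaluation results, broken down by subgroup.
results = {
    "majority group": {"correct": 9500, "total": 9700},
    "minority group": {"correct": 60, "total": 300},
}

overall = sum(r["correct"] for r in results.values()) / sum(r["total"] for r in results.values())
print(f"overall accuracy: {overall:.1%}")               # 95.6% - looks like solid average performance

for group, r in results.items():
    print(f"{group}: {r['correct'] / r['total']:.1%}")   # 97.9% versus 20.0%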

(c) lack of transparency Finally, the reasons for a specific result being
yielded by an autonomous system may lie in the dark – which is certainly an issue
for liability rules grounded in the principle of causation. As Caruana et al. note, ‘In
machine learning often a trade-off must be made between accuracy and intelligi-
bility.’91 When even experts fail to understand why an autonomous system makes a
specific recommendation (such as whether a pneumonia patient should be admitted
to hospital), how can a court decide whether this decision was correct?
I do not claim to have the definitive answers to these questions, but I certainly
believe that we need interdisciplinary research to address them. For the time being,
at least when a robot directly harms a person or property or when an autonomous

89
Cf. Buolamwini and Gebru, ‘Gender Shades: Intersectional Accuracy Disparities in Commer-
cial Gender Classification’ (2018) 81 Proceedings of Machine Learning Research 1–15 <http://
proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf>.
90
Caruana et al. (n 86).
91
Caruana et al. (n 86).


system does not manage to reach the human standard, this should be considered a ‘wrong’ committed by the system.

6.3.2 User of the Autonomous System


The first party to consider is the user of the autonomous system. Obviously, the
abuse of an autonomous system to intentionally harm another party would fall under
the relevant rules of intentional torts. There is no need to dwell on this any further.
But what about situations where the system does not function as intended by the
user? Setting aside differences in detail and dogmatics, the legal orders analysed
above agree that a person may be held liable for any indirect damage caused by their
negligent acts or omissions, provided that the damage caused was not too remote. As
a consequence, the users of an autonomous system may be held liable for the acts of
the system if they have breached a duty of care, particularly in operating and
supervising the autonomous system. The difficult part is defining the extent of such
a duty of care. The beauty of autonomous systems, after all, is that their actions are
autonomous and do not require constant supervision. Thus, a duty of care to
monitor the system should only apply where the user has reason to believe that
the system is not completely autonomous in particular circumstances. This may be
due to the product design and the instructions issued by the producer, due to
software alterations made by the user, due to system warning signals or due to faults
in the system’s performance which the user could and should have picked up on. It
is important to note that any user who employs an autonomous system that is only
partially autonomous must consider the time it will take for them to get ‘back in the
loop’ and take over from the system.
The suggested approach is more or less in line with the approach taken by
German legislators when they adapted the German Road Traffic Act to autonomous
vehicles.92 Under the act, vehicle manufacturers are required to make a declaration
in the system description that the autonomous system conforms with certain pre-
requisites.93 Drivers will not incur liability for using the system, provided that they
are ready to take back control when needed.94 It is unlikely that similar laws will be
enacted with respect to all kinds of robotics. Nonetheless, the guidance provided by
the system manufacturer will generally be a first reference point for any user.
If an autonomous system is programmed to learn from one individual user, users
should be liable under negligence if they ‘feed’ the system with wrongful behaviour.
An example of how this might happen is Microsoft’s chatbot Tay which gave racist
and sexist responses after interacting with Twitter users for only a day.95 Obviously, if
92
§ 1b Straßenverkehrsgesetz (StVG).
93
§ 1a StVG.
94
§ 1b StVG.
95
Vincent, ‘Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day’,
24 March 2016 <www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist>.


the user’s input is intentional, a damages claim can normally be based on inten-
tional torts. However, users should also be held under a duty of care to not lead
machine-learning systems astray by providing them with training data that mirrors
negligent behaviour.

6.3.3 Keeper of the Autonomous System


What about the keeper of the autonomous system, i.e. the person who benefits from
the use of the good and who has the authority to decide upon its use? The rise of the
sharing economy will contribute to situations in which the user of a robot or other
autonomous system is not identical with the keeper of that system. The following
section first argues that keepers should generally be liable for damages caused by
autonomous systems held by them, then proceeds to a discussion of the adequate
concept of liability.

6.3.3.1 Eius damnum, cuius commodum


As we have seen, the use of autonomous systems is quite similar to the delegation of
a task to an employee, the main difference being that humans and artificial intelli-
gence have very different strengths and weaknesses in the design and execution of
tasks. Autonomous systems may also be incorporated into things; thus it is possible to
draw inferences from the liability for things. Finally, parallels can be drawn with
minors in instances where an autonomous system is based on machine learning and
is thus in a developing stage of cognizance.
The conceptual idea behind liability for employees and other assistants is that
persons who extend their scope of action by delegating certain tasks should bear the
corresponding risk (eius damnum, cuius commodum).96 All three of the legal orders
examined follow this concept: French and English law by holding the principal vicari-
ously liable, German law by providing for strict duties of care and a presumption of
negligence. The concept is equally convincing when looking at autonomous systems:
anyone who delegates activities to such a system should bear the risk of the system
going awry and causing damage. There is no reason why a principal should be liable
for the faulty execution of a task delegated to an employee (such as a chauffeur or a
gardener), but should not be held liable if the same task was delegated to an autonomous
system (such as a smart irrigation system or an autonomous car).97 For autonomous
systems incorporated into a machine, the concept of principal’s liability is corroborated
by the fact that all three legal orders also require persons to monitor and bear the risk of
96
Cf. on this concept Oertel, Objektive Haftung in Europa (Mohr Siebeck 2010) 289 et seq.;
Tulibacka, Product Liability Law in Transition: A Central European Perspective (Ashgate
Publishing Ltd 2009).
97
Also advocating for vicarious liability: Chandra, ‘Liability Issues in Relation to Autonomous AI
Systems’, 29 September 2017 <https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3052154>.


things under their control – although admittedly to quite varying degrees. Finally, for
autonomous systems that are learning from data provided by their users, the keeper is in a
crucial position to steer the learning process by deciding who is allowed to use the system.

6.3.3.2 The Victim’s Perspective


Holding the keeper liable for the acts of the autonomous system also seems
reasonable because it provides for legal clarity on the part of the victim. Imagine
an intelligent irrigation system as a control system connected to sensors and irriga-
tion modules as well as to the internet. The control system contains stored data on
the water requirements of numerous plants. It uses the sensors installed in the
garden to evaluate information about solar radiation, temperature, humidity, wind
speed and ground conditions in the garden and obtains weather forecasts from the nearest weather station. From all these data, the optimum watering quantity is calculated, and the irrigation of the garden is controlled by instructions to the individual sprinkler modules. Operation is carried out via an app, which
also allows watering according to the operator’s specifications. The garden owner
can therefore avoid certain watering times or turn on the sprinkler system on hot
days to allow their children to play with water. Perhaps the system allows for
machine learning and learns to follow the garden owner’s irrigation pattern.
Ideally, the intelligent irrigation system supplies each plant with the optimum
amount of water and uses the precious resource water efficiently – even if the garden
owner is on a four-week holiday to New Zealand. But what if something goes wrong
and the system ends up flooding the neighbouring property instead? In this case, the
damage can be traced to various possible sources of error:98 defects in sensors, pipes
and sprinklers, defective software code, incorrect data supplied by the weather
station. Perhaps the garden owner’s child has inadvertently used the app or the
garden owner has failed to install a necessary security update, allowing an enterpris-
ing youth from the neighbourhood to test their hacking skills. It will be virtually
impossible for the neighbour to determine which of these risks is at the root of the
flooding. It is thus of fundamental interest to the injured party to claim compen-
sation from the keeper of the system ‒ in this case, the garden owner.
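A minimal sketch of the control logic of such a system (the function, parameter names and thresholds are invented for illustration) shows why the neighbour cannot tell the possible causes apart: a defective soil sensor, bad forecast data, a coding error or a stray manual override all pass through the same calculation and produce the same visible flooding.

def watering_minutes(soil_moisture, temperature_c, rain_forecast_mm, override=None):
    """Sprinkler run time per zone, computed from sensor and forecast inputs."""
    if override is not None:                  # e.g. the child playing with the app
        return override
    if rain_forecast_mm > 5:                  # depends on external weather data
        return 0
    deficit = max(0.0, 0.35 - soil_moisture)  # depends on the soil sensor
    heat_factor = 1.5 if temperature_c > 30 else 1.0
    return round(deficit * 100 * heat_factor)

# A sensor stuck at 'bone dry' and an accidental override look identical from
# the neighbour's side of the fence:
print(watering_minutes(soil_moisture=0.02, temperature_c=35, rain_forecast_mm=0))
print(watering_minutes(soil_moisture=0.30, temperature_c=22, rain_forecast_mm=0, override=240))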

6.3.3.3 Counterarguments
Four arguments militate against the keeper’s liability: (i) the development process of
artificial intelligence, (ii) the non-dangerous nature of AI, (iii) a perceived lack of
98
See the proposal for a ‘Robot Liability Matrix’ set out by Zornoza, Moreno, Guzmán,
Rodriguez, and Sánchez-Hermosilla, ‘Robots Liability: A Use Case and a Potential Solution’
in Dekoulis (ed) Robotics – Legal, Ethical and Socioeconomic Impacts (InTech 2017) 70 et seq.
<http://dx.doi.org/10.5772/intechopen.69888>.


control on the part of the keeper and (iv) an adjacent liability of the producer or
operator.

(a) machine learning and chilling effects In so far as the machine-learning process of some autonomous systems is reminiscent of children’s learning,
an argument might be made that the liability of children’s supervisors is a more
appropriate example for keeper’s liability. While French, English and German law
all acknowledge that parents may be responsible for the acts of their underage
children, only French law holds the parents strictly liable, whereas English and
German law do not. Both English and German courts limit the parents’ supervisory
duties to allow for the gradual development of minors into fully responsible adults.
One might argue that the same freedom should be granted to autonomous systems
with machine-learning capabilities, thus limiting the responsibility of the persons
employing these systems. However, there are inherent differences between minors
and artificial intelligence. Barring the introduction of legal persona for autonomous
systems (see Section 6.1.1), such a system will never develop into a person with full
responsibility and thus does not need leeway to experiment and develop its ‘person-
ality’.99 There is no point in time at which an autonomous system can be considered
‘fully developed’ and no longer under human control – nor should there be.
When it comes to product liability, scholars often argue that strict liability may
have a chilling effect on the beneficial development of AI.100 Might such chilling
effects also occur if the keeper of an autonomous system is held liable for the acts of
the system?101 I do not think so. There is certainly no empirical evidence that strict
liability for goods or employees has had a chilling effect where such a liability is
currently imposed: French citizens are not less likely to acquire objects due to the
French system’s strict liability; UK citizens do not keep more cars because there is
no strict liability. There is thus no reason to assume a chilling effect of strict liability
when it comes to the employment of autonomous systems. There may be areas
where the danger of chilling effects is real and where machine learning is so
beneficial to the human race that it is deemed desirable to limit the liability of
the developers and parties employing the respective systems. In those instances, a
social security compensation system replacing the principal’s liability might be an
adequate response (see Section 6.4), but drawing inferences from the liability for
minors is not helpful.

99
Cf. Zech (n 11) 195.
100
See for example Abbott, ‘The Reasonable Computer: Disrupting the Paradigm of Tort Liability’
(2017) 86 George Washington Law Review 1, 118 et seq, 121 et seq.; Asaro, ‘The Liability Problem for
Autonomous Artificial Agents’, Association for the Advancement of Artificial Intelligence, 2015
<http://peterasaro.org/writing/Asaro,%20Ethics%20Auto%20Agents,%20AAAI.pdf>.
101
Weber, ‘Liability in the Internet of Things’ (2017) 6 Journal of European Consumer and Market
Law (EuCML) 208.


(b) non-dangerous nature of autonomous systems Another counterargument is that autonomous systems are not inherently dangerous;102 rather, the hope is
that they will outperform humans and be safer in the long run. But as the above
analysis has shown, ‘danger’ is not the overriding factor for imposing strict liability.
A hamster is less dangerous than a cow or a washing machine, but German law
imposes strict liability on the keeper of the hamster, not on the keeper of the cow or
the washing machine (see Section 6.2.2.2). Most employees are diligent workers, and
employees may be more diligent and skilled than their principal, but nonetheless
English and French law hold any principal liable for the actions of their employees
(Sections 6.2.3.1 and 6.2.3.3). When physical objects play an active role in an
accident, the French courts do not concern themselves with whether the object
possessed an inherent danger; they are only interested in the object’s role in the
causation of loss (Section 6.2.2.1). As we have seen, the overriding argument is not
risk. The argument is that persons who extend their scope of action through people
and things should bear the corresponding risk.

(c) lack of control French authors have argued against the keeper’s liability
for robots due to the keeper’s inability to control an object steered by an autonomous
system.103 It is important to note that this argument was made in the context of Art
1242(1) of the Civil Code, where persons will be liable for the damage caused by an
object if they are the gardien, meaning they possess usage, control and supervision of
the good. I would question the argument that a completely autonomous system
evades human control,104 as the keeper of such a system will make the general
decision on whether or not to put the system into operation and on who may or may
not use the system. More importantly, if we look at the broader principle of liability
based upon a sphere of action, we find that parties are often held liable for actions
which they cannot entirely control or for situations where their attempt to exercise
control has failed, such as the actions of employees or animals. This is the very
essence of the principle of eius damnum, cuius commodum.

(d) sufficiency of product liability? The issue of control also raises another important question: if a robot is predominantly steered by an autonomous
system controlled by an operator, shouldn’t it be the software developer and/or the
operator that are held liable? In other words, if a Tesla car is predominantly steered
by its autopilot, isn’t it sufficient to hold Tesla liable or is there still a need for
accountability on the part of the vehicle’s keeper? I believe the latter to be true,

102
Weber (n 101) 208.
103
Bonnet (n 35) 19 et seq.; Lagasse (2015) 12 CREOGN 2.
104
Cf. Petit, ‘Law and Regulation of Artificial Intelligence and Robots – Conceptual Framework
and Normative Implications’, 9 March 2017 <https://papers.ssrn.com/sol3/papers.cfm?abstract_
id=2931339>.


particularly if one takes into account the victim’s perspective.105 For the injured
party, it may be very difficult to determine whether the accident is a consequence of
a human operating error, a malfunction of the autonomous system or a malfunction
of the mechanical parts of the car.106 Also, the developer of the software or the
operator of the system may be difficult to identify, may have their place of business
abroad or may be insolvent.107 Finally, it may be necessary to provide an incentive
for keepers to keep the system updated and ensure its proper use.108 In light of the
keepers’ decision to delegate tasks to the autonomous system, it does not seem
appropriate to exonerate them and instead ask the victim to pursue claims against
the producer or operator.

6.3.3.4 Strict Liability v Duty of Care


As the above overview of liability for things and employees has shown, there are two
separate approaches to holding the keeper liable for the wrongdoings of an autono-
mous system. One option is to hold the keeper strictly liable for any damage
incurred through the acts of the system. The other is for the keeper
to be placed under a duty of care to monitor the autonomous system and prevent any
damage resulting from its use. This latter approach has the benefit of flexibility,
allowing courts to weigh the benefits and risks associated with the particular system
and to adapt their case law to the developing stages of artificial intelligence.
Nonetheless, there are a number of arguments that point to strict liability as the
more appropriate liability regime for autonomous systems.
There are various reasons why it would be very difficult to define an appropriate
standard for the duty to monitor autonomous systems.109 First, the main reason for
using autonomous systems is that their work is – well – autonomous. What is the use
of an intelligent irrigation system, if its keeper is required to stand by and closely
monitor the system in order to avoid liability for the breach of a safety duty? The
purpose of autonomous systems is to dispense with the keepers’ presence, allowing
them to put their time to better use elsewhere.110 While providing for strict monitor-
ing obligations would defeat the point of autonomous systems, providing for more

105
Cf. Borghetti, ‘L’accident généré par l’intelligence artificielle autonome’, La semaine juridique
(December 2017) 27.
106
Günther and Böglmüller, ‘Künstliche Intelligenz und Roboter in der Arbeitswelt’ (2017)
Betriebs-Berater (BB) 53, 54 et seq.
107
Günther and Böglmüller (n 106).
108
Galasso and Luo, Punishing Robots: Issues in the Economics of Tort Liability and Innovation in
Artificial Intelligence, Economics of Artificial Intelligence (University of Chicago Press 2018) 6
<www.nber.org/chapters/c14035.pdf>.
109
For safety duties under current German law cf. Spindler, ‘Zukunft der Digitalisierung –
Datenwirtschaft in der Unternehmenspraxis’ (2018) Der Betrieb (DB) 41, 48.
110
Lohmann, ‘Roboter als Wundertüten – eine zivilrechtliche Haftungsanalyse’ (2017) Aktuelle
Juristische Praxis (AJP) 152, 159.


lenient standards (such as spot checks) would be insufficient to comply with the
principle eius damnum, cuius commodum.
Second, the cognitive abilities of humans do not align with those of autonomous systems.
Machine learning is based on the comparison of patterns that are not self-
explanatory to the human mind. Humans cannot always comprehend why an
algorithm shows ads for STEM jobs to more men than women111 or how algorithms
manage to make reliable assumptions regarding the sexual orientation of a person in
a portrait.112 Even if an autonomous system does not employ machine-learning
techniques, most consumers do not possess the requisite knowledge to monitor
the workings of the software – and even experts experience difficulties if the software
is not open source. As a consequence, any duty to monitor the systems would likely
be limited to obvious malfunctions and error messages. Such limited measures are
unable to prevent the autonomous system from causing harm in unexpected ways.
Third, the courts would have to painstakingly establish the duty of care in each
individual case, leading to uncertainty and a lack of legal clarity. A strict liability rule
therefore seems more efficient113 and will alert the keeper to the necessity of insuring
the corresponding risk.

6.3.3.5 Liability for the Specific Autonomy Risk


If strict liability for the keepers of autonomous systems and robots were to be
introduced, legal orders that do not possess a general regime of strict liability for
things would face a problem, such as German and English law. How does one
distinguish between objects for which strict liability arises and other objects for
which it does not? There are two ways to address the issue: distinguishing between
the physical risks posed by different robots (strict liability might arise for autonomous
cars but not for robot vacuums); or looking at the degree of autonomy with which
the object is endowed. The latter approach seems more convincing if one respects
the decision taken by these legal systems not to introduce a general strict liability for
dangerous objects.
The next question to tackle is whether the liability of the keeper should be limited
to the specific risk posed by the autonomous system or should also include the risk
posed by the physical object in which the autonomous system is embedded. From
the perspective of the person harmed, it is much more expedient if the keeper’s
111
Datta, Tschantz, and Datta (2015) 1 PoPETs 92.
112
Wang and Kosinski, ‘Deep neural networks are more accurate than humans at detecting sexual orientation from facial images’, 16 October 2017 <https://psyarxiv.com/hv28a>.
113
Spindler, ‘Zukunft der Digitalisierung – Datenwirtschaft in der Unternehmenspraxis’ (2018)
Der Betrieb (DB) 50; Bräutigam and Klindt, ‘Industrie 4.0, das Internet der Dinge und das
Recht’ (2015) Neue Juristische Wochenschrift (NJW) 1139; Lohmann (2017) ZRP 169; Groß and
Gressel, ‘Entpersonalisierte Arbeitsverhältnisse als rechtliche Herausforderung – Wenn
Roboter zu Kollegen und Vorgesetzten werden’ (2016) Neue Zeitschrift für Arbeitsrecht
(NZA) 996.


liability extends to all risks posed by the object ‒ both the specific autonomy risk and
the risk of mechanical defects. This will make it easier for victims to claim damages,
as they do not have to prove the exact cause of malfunction. Thus, in case of a
malfunctioning intelligent irrigation system, the neighbour whose garden has been
flooded would only have to prove the system’s overall malfunction, not the specific
cause of the defect (operating error or defective sensor). However, if the reason for
the keeper’s liability is the delegation of control to the autonomous system, then the
keeper should only bear this specific autonomy risk. I propose the following happy
medium: while the keeper only bears the risk associated with the autonomous
system, any defect is presumed to be caused by a malfunction of the control system.
Keepers get the option to exonerate themselves by proving that the damage was due
to a physical malfunction that could not have been prevented. Obviously, this
distinction need not be made by legal orders such as the French which employ a
general strict liability regime for things.
An operating error may be deemed to exist whenever the autonomous system has
not carried out an action that could reasonably be expected from it under the
circumstances. Whether the error is due to incorrect programming, is due to
training data that is not representative of real-world conditions or is an effect of
unforeseen machine learning should not be relevant. Reasonable expectations will
initially be modelled on the capabilities of humans, and should pose the minimum
level of performance. Since autonomous systems are expected to outperform
humans over time, the reasonable expectation could then be modified to an average
system at the time the damage occurred (see Section 6.3.1.2(b)).
Finally, whether the software controlling an object is incorporated in the object
itself or there is a control mechanism operated from somewhere in the cloud should
also be irrelevant.

6.3.3.6 Who Is the Keeper of an Autonomous System?


If the keeper is to be held strictly liable for the acts of autonomous systems, the
concept of ‘keeper’ needs to be defined. I propose a distinction between autono-
mous systems that control physical objects (autonomous cars and intelligent irriga-
tion systems) and intangible autonomous systems (electronic bidding agents, search
engines).

(a) autonomous systems embedded in physical objects If the autonomous system is controlling and steering a physical object, then the keeper of the
physical object should be regarded as the keeper of the autonomous system. Thus, it
is essential to determine who benefits from the use of the good and is able to
physically dispose of the object. It is important to note that the level of control over
robots may be lower than the level of control regarding other objects. This may be
due to unforeseen patterns of action that the individual user cannot understand and control for lack of expertise. It is also possible that third parties exert control over the
autonomous system, such as the operator of the system (Section 6.3.4.2) or hackers
manipulating it. Such an inability to technically control the autonomous system is
an inherent risk of the use of the system and should not exonerate the keeper.114 This
mirrors the prevalent position regarding employees and animals (see Sections 6.2.2
and 6.2.3). In any case, the keeper will always be able to control the object by cutting
off the power supply or confining the object physically.

(b) intangible autonomous systems Defining the keeper of an intangible autonomous system is more complicated. In the era of cloud computing, there is no
point in looking to the owner of the servers on which the system is running as the
keeper of the system. Instead, a parallel can be drawn with the principles of vicarious
liability, where we hold liable the persons who instruct, control and benefit from the
actions of the wrongdoer. Following this line, liability for intangible autonomous
systems can be assigned to the party who controls the system. The power of control
will normally lie with the developer or operator of the system, even if its functions
are made available to third parties (as is the case, for example, with search engines or
bidding agents). The keeper’s liability in these instances overlaps with the producer’s
and operator’s liability. However, in business models such as robotics as a service,
situations may arise in which the autonomous system functions as the ‘servant of two
masters’, and two distinct keepers may be identified. Such a joint and several liability
is well known from the liability for employees115 and things116 and should not pose
major challenges for the law.117

(c) mental capacity threshold A final point for consideration with respect to
the keeper’s liability is the mental capacity required to qualify as the keeper of an
autonomous system. As noted earlier, under French law even a small child will be
regarded as the keeper of an object that has caused harm, whereas German scholars
debate how to ascertain the minimum age for the keeper’s liability. The rise of smart
toys for children and care robots for the elderly shows that this is also a critical topic
114
Gless and Janal, ‘Hochautomatisiertes und autonomes Autofahren – Risiko und rechtliche
Verantwortung’ (2016) Juristische Rundschau (JR) 561.
115
Various Claimants v Catholic Child Welfare Society [2012] UKSC 56; regarding the respons-
abilité du fait d’autrui: Ferid and Sonnenberger (n 20) chap 2, n 226; regarding § 831 I BGB:
Bundesgerichtshof (BGH) 26.01.1995, Neue Juristische Wochenschrift – Rechtsprechungs-
Report (NJW-RR) 1995, 659 et seq.
116
Regarding § 7 I StVG cf. Bundesgerichtshof (BGH) 28.04.1954, NJW 1954, 1198; Deutsch,
‘Gefährdungshaftung – Tatbestand und Schutzbereich’ (1981) Juristische Schulung (JuS) 317,
323 et seq.; Walter in beck-online.Grosskommentar zum Zivilrecht, 1.11.2017, § 7 StVG n. 78;
regarding § 833 S. 1 BGB: Spickhoff in: beck-online.Grosskommentar zum Zivilrecht, 1.11.2017,
§ 833 n. 89; regarding the responsabilité du fait des choses: Ferid and Sonnenberger (n 20) chap
2, n 328.
117
Cf. Chandra, ‘Liability Issues in Relation to Autonomous AI Systems’ 29 September 2017
<https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3052154>.


for the law of autonomous systems. The matter is clearly linked with the question of
insurability, that is, whether a child or a person with mental impairment can weigh
the risks associated with the autonomous system and insure the risk accordingly. If
this is not the case, the person who provided the robot to the child or person
concerned should be considered its keeper.

6.3.4 The Operator’s Liability

6.3.4.1 Why There Is a Need for Operator Liability


While I argue for the strict liability of the keeper of an autonomous system, I have
also previously pointed out that in the context of autonomous systems we need to
look at a new player who may be liable for the damage in addition to, or as an alternative to, the system’s producer and keeper ‒ the operator. There are two reasons
for this: the changing technological landscape and the limitations of the current
liability regimes.
First, it is my belief that the more robotics influences everyday life, the more often
different legal entities will be responsible for the production of the physical object
on the one hand and its autonomous operation on the other. This has already
happened for computers and smartphones, and similar developments can be
expected for the internet of things.
Second, product liability rules as they currently stand have their shortcomings
when applied to autonomous systems. The EU Product Liability Directive, for
example, only applies to harm caused by products, that is, movable items.118 While
some argue that software should also be covered by the directive, this is certainly not
a given.119 To be compensated, the victim must further show that the product is
defective, in other words, that it does not provide the safety that a person is entitled to
expect.120 To prove the existence of a defect can be difficult enough if we are talking
about a complex physical object. It seems almost impossible for a victim to prove the
inadequacy of an autonomous system, as the data used to train the system and the
source code of the system will typically not be available to the public. Also, most of
the machine-learning algorithms currently employed do not issue explanations for
their decisions, making it even harder to comprehend them.
Under the directive, producers will not be liable if they manage to show that the
existence of the defect could not have been discovered at the time when the product

118
Art 2 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws,
regulations and administrative provisions of the Member States concerning liability for defect-
ive products, OJ 1985 L 210/29 et seq.
119
Weber, ‘Liability in the Internet of Things’ (2017) 6 Journal of European Consumer and Market
Law (EuCML) 207, 210.
120
Art 4, 6 Product Liability Directive.


was put into circulation.121 This may limit the liability for machine-learning systems
as well as autonomous systems that produce unexpected actions when they are
subsequently fed with obsolete or incomplete data or when they interact with other
autonomous systems.122 Finally, the EU Product Liability Directive only provides for
damages in the form of death, personal injury and damage to consumer property
other than the product itself. Product liability rules modelled on the directive may
therefore prove inadequate to address the problems posed by autonomous systems.
The European Commission has launched an evaluation of the Product Liability
Directive that will, among other topics, look into this matter.123

6.3.4.2 Who Is the Operator?


The person or company that is pulling the strings of the autonomous process should
be considered its operator. This is the party that is responsible for running the
autonomous system by contracting to receive the required data (for example, map
and weather data or feedback data on system performance), by optimizing, tweaking
and overseeing any possible machine-learning processes and by passing necessary
updates to the keepers of the system.
Often, the keeper and the operator of an autonomous system will be identical. But
especially with robots, keeper and operator may be entirely different parties.

6.3.4.3 Case Study: Google Autocomplete


Albeit couched in different terms, some courts have applied the idea of an operator’s
liability to Google’s autocomplete function. This was the case with the German Federal Court of Justice, which held that autocomplete suggestions with defamatory
content were an infringement of the claimant’s personality rights under § 823(1)
BGB. Also, the Supreme Court of South Australia held that Google was a publisher for
autocomplete suggestions under the strict publication rule,124 and the Hong Kong
Court of First Instance deemed a similar claim against Google to be a good arguable
case.125
However, in the Australian case, the court also found that the autocomplete words
did not give rise to defamatory imputations, as an ordinary person would understand
that the words ‘comprise a collection of words that have been entered by previous

121
Art 7(e) Product Liability Directive; cf. also Beck (n 7) 474; Lohmann, AJP 2017, 158.
122
Beck (n 7) 474.
123
<https://ec.europa.eu/growth/single-market/goods/free-movement-sectors/liability-defective-
products_en>.
124
Duffy v Google [2015] Supreme Court of South Australia (SASC) 170, n. 284.
125
Dr Yeung, Sau Shing Albert v Google [2014] Hong Kong Court of First Instance (HKCFI) 1404,
n. 103.


searchers when conducting searches’.126 Interestingly enough, France’s Cour de Cassation has also not held Google liable for its autocomplete function – despite
the prevalence of strict liability in French law. The Cour de Cassation’s main
argument was that there was no intention on the part of the company to give the
search term suggestions an independent meaning beyond their mere juxtaposition
and their sole function of helping their customers.127

6.3.4.4 Privileges
The Google autocomplete case shows that, arguably, privileges should be granted to
the operators of machine-learning systems that are fed by the system’s users.
Common law courts have highlighted that the innocent dissemination defence
may be available to the operator of a search engine.128 The German Federal Court
held that while Google is liable for an infringement of personality rights if defama-
tory search terms are suggested, courts must undertake a process of balancing rights,
taking into account the rights of the harmed individual, the protection of free speech
and the benefit derived from the suggestion of search terms. As a consequence, the
German Federal Court found that Google was only liable for damages after it had
been notified of the defamatory search terms and declined to act.129 The court
therefore introduced a principle similar to the ISP privileges under the EU
E-Commerce Directive130 or the US Digital Millennium Copyright Act.131 In Italy,
a court held that the caching privilege of the Italian implementation of the
E-Commerce Directive applied directly, thus exonerating the company running
the search engine.132
In my view, such a principle seems adequate in some instances, such as when
weighing personality rights and freedom of speech. Such privileges may not be
appropriate in other instances, for example, when autonomous car systems are fed
steering data from all drivers using the particular system without oversight. One must
also be careful not to assume that an algorithm is ‘neutral’ when processing user
behaviour. The functionality behind the autonomous system is often kept a trade
secret by the operator and not revealed, and operators follow their own optimization
goals (such as viewer engagement). An autocomplete function, for example, both

126
Duffy v Google [2015] Supreme Court of South Australia (SASC) 170, n. 375.
127
Cour de cassation, première chambre civile (Cass. 1re Civ.), 19.06.2013, Arrêt n 625, <https://
www.courdecassation.fr>.
128
Dr Yeung, Sau Shing Albert v Google [2014] HKCFI 1404, n. 120 et seq., Duffy v Google [2015]
SASC 170, n. 386
129
Bundesgerichtshof (BGH) 14.5.2013 (2013) Neue Juristische Wochenschrift (NJW) 2350.
130
Art 12 et seq. Directive 2000/31/EC of the European Parliament and of the Council of 8 June
2000 on certain legal aspects of information society services, in particular electronic commerce,
in the Internal Market, OJ 2000 L 178/1.
131
US Code § 512 – Limitations on liability relating to material online.
132
X c. Google, Tribunale Ordinario di Milano, 25.3.2013, N RG 2012/68306.


predicts what users would have searched for and draws their attention to previously
unthought-of searches.133 Autocomplete suggestions based on the latter will lead to a
snowball effect which perpetuates defamatory content. In the same vein, it has been
shown that the YouTube autoplay function tends to promote radical content.134
Thus, the granting of liability privileges needs careful consideration.
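The snowball dynamic can be made concrete with a schematic sketch (the query strings and counts are invented, and this is of course not the actual autocomplete implementation): once searches that were themselves prompted by a suggestion are fed back into the counter that ranks the suggestions, the suggested term reinforces itself.

from collections import Counter

# invented search counts for queries beginning with 'bet'
search_counts = Counter({"bett kaufen": 120, "bettina wulff escort": 125})

def suggest(prefix):
    candidates = {q: n for q, n in search_counts.items() if q.startswith(prefix)}
    return max(candidates, key=candidates.get) if candidates else None

for _ in range(1000):                    # users who type 'bet' and click out of curiosity
    search_counts[suggest("bet")] += 1   # the suggestion-driven search is counted as well

print(suggest("bet"), dict(search_counts))
# the narrow initial lead of the defamatory term has snowballed into an overwhelming one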

6.4 no-fault compensation schemes


As an alternative or complement to all the liability regimes discussed above, no-fault
schemes could be introduced, which would compensate a victim whenever an
autonomous system is involved in the creation of harm. Such schemes could be
funded through taxation or by contributions from keepers and operators alike.
Examples of no-fault schemes are the New Zealand Accident Compensation
Scheme, the German135 and French136 social security systems for accidents at work
and occupational diseases or the French rules regarding medical accidents.137
However, the political feasibility of such an approach is likely to be limited to areas
in which the use of autonomous systems is socially particularly desirable.138

6.5 conclusion
The transfer of cognitive activities from man to machine poses new challenges for
liability law. This chapter has explained why current product liability rules may not
be adequate to cover losses resulting from the acts of an autonomous system and
why, even with reforms, such rules might remain insufficient. The chapter focused
on three players that could also be held liable for damages caused by an autonomous
system:
(1) Any user of such a system may be held liable under negligence if they
failed to supervise the system despite having reason to believe that
the system is not fully autonomous in the particular circumstances.
(2) The keeper of the system (the party that instructs, controls and benefits
from the use of the system) should be held strictly liable for the damage
caused.
133
Karapapa and Borghi, ‘Search Engine Liability for Autocomplete Suggestions: Personality,
Privacy and the Power of the Algorithm’ (2015) 23 International Journal of Law and Information
Technology 264.
134
Tufekci, ‘YouTube, the Great Radicalizer’, 10 March 2018 <www.nytimes.com/2018/03/10/
opinion/sunday/youtube-politics-radical.html>.
135
Siebtes Buch Sozialgesetzbuch - Gesetzliche Unfallversicherung.
136
Art L461–1 Code de la sécurité sociale.
137
Art L1142–1 Code de la santé publique.
138
See, for bionic prosthetics, Bertolini and Palmerini, ‘Regulating Robotics: A Challenge for
Europe’ in Directorate General for Internal Policies (ed), Upcoming Issues of EU Law
v. 24.9.2014 144 et seq. <www.europarl.europa.eu>.


(3) The ‘operator’, i.e. the party that is responsible for running the autono-
mous system, making sure the system is provided with necessary data,
overseeing and tweaking the machine-learning process and installing
required updates in gadgets. In parallel with product liability, I propose
that the operator should be held strictly liable for the autonomous
system as well, but that privileges for machine learning based on user
data should be explored.

7

Control of Algorithms in Financial Markets

The Example of High-Frequency Trading

Gerald Spindler

introduction
High-frequency trading has become important on financial markets and is one of
the first areas in algorithmic trading to be intensely regulated. This chapter reviews
the EU approach to regulation of algorithmic trading, which can be taken as a
blueprint for other regulations on algorithms by focusing on organizational require-
ments such as pre- and post-trade controls and real-time monitoring.

7.1 algorithms and financial markets


Algorithms are widely used in business and industry as they offer manifold oppor-
tunities to rationalize and speed up decisions, and to substitute for workflows
previously operated by people, thus economising on costs. Financial institutions
in particular began very early on to exploit big data and algorithms, their trade in
intangible goods (financial products, money, etc.) making their business prone to
digitalisation. Today, algorithms can be found in every financial sector, for instance
in traditional banking, investment banking or the insurance industry. They are used
at every level, be it in financial trading on the stock markets, at the consumer level or even at the private level, as in crowdfunding. Among the many examples are
high-frequency trading and ‘robo advice’.
However, if algorithms are to replace human decisions, why should they be
subject to specific regulations that go beyond the regulation of human behaviour?
In this context, one would expect the same norms and standards to apply to both
algorithms and human beings. Thus, for example, discrimination should not be
dealt with differently, whether decisions are taken by machines or by human beings.
Hence, any regulation specifically aimed at algorithms needs to be justified by, and to take into account, the specific characteristics they display in contrast to human decisions and behaviour – as well as the peculiarities of financial markets.


More precisely, however, the regulation of algorithms in financial markets is not about controlling ‘algorithms’ as such. Mankind has been using algorithms as
methodological problem-solving mechanisms since learning to control fire – it is a
way of solving problems using a logical structure which is totally independent of the
use of artificial intelligence or robots. Rather, the ‘algorithms’ to be controlled
are those that have been implemented in machine-learning systems that may
learn and recode themselves (and their algorithms). These systems may result in
unforeseeable behaviour if machines can change their parameters (but not their
ultimate goals) – pointing to the fundamental problem, which applies to all legal
areas, not just financial markets, of whether these machines and their behaviour can
still be assigned to human beings. Machine algorithms speed up decisions so fast
that it is difficult to intervene in time in response to unexpected developments.
Moreover, even though they are able to learn, machine algorithms cannot recognize
unknown atypical cases.
Closely related to unforeseeable behaviour and to coding decisions is the asym-
metry between supervisory authorities and individuals. While human behaviour
may be easily observed, noted and documented, codes must be reviewed by experts.
This means decisions encoded in software are more difficult to control, especially when
machine learning leads to alterations and unexpected consequences. Moreover,
even the best algorithm produces false predictions and distorted results if the underlying data has not been collected correctly. Quality of data is therefore crucial when assessing an algorithm ‒ or, more precisely, the results the algorithm yields on the basis of these data.
Last but not least, the coding of algorithms may be biased if the software developer does not take certain risks into account. Human prejudices can be set in stone in code; unlike a human decision-maker, an algorithm cannot itself recognize that its results are distorted because not all relevant factors have been taken into account or because the factors have not been correctly weighted.
Financial markets are affected by these algorithm-specific problems in multiple
ways. Personal profiling may determine whether an individual obtains insurance or
a loan, so control of algorithms is as crucial as controlling financial institutions (and
their employees). Robots advise investors according to their (assumed) personal
profiles and risk preferences. In stock markets, algorithms in the form of electronic
agents decide on buying or selling stock in milliseconds according to the behaviour
of other (electronic) agents.
Whilst the control of algorithms in general is yet to be incorporated in financial
regulation (in contrast, for instance, to Art 22 of the GDPR concerning automated
decisions), the realization of systemic risks to the stability of financial markets has
already led to the regulation of high-frequency trading.
This chapter deals with the general approach to controlling algorithms enshrined
in high-frequency trading regulations, from national developments to the Europe-
wide regulation recently adopted by the EU. The chapter concludes by discussing whether these regulations can serve as a blueprint for the control of algorithms in financial markets more generally.

7.2 control of algorithms: high-frequency trading as a blueprint for regulation?
The evolution of high-frequency trading regulation can be traced back to the first stock market
crashes that resulted from algorithmic trading, which were followed by one of the
first sets of regulations in Europe, Germany’s High-Frequency Trading Act and its
specification by the German Supervisory Authority on Financial Markets. With the
adoption of MiFID II1 and the corresponding ESMA guidelines,2 these approaches
were extended to the European level and finally specified by the recent delegated
regulation 2017/589.3 At international level the International Organization of Secur-
ities Commissions (IOSCO) launched a consultation in 2011 on ‘Regulatory Issues
Raised by the Impact of Technological Changes on Market Integrity and Effi-
ciency’.4 However, this initiative has not been followed by development of any
general principles.

7.3 risks and impact of high-frequency trading on markets
High-frequency trading is the automated trading of shares and securities on stock
markets that takes place within milliseconds. Algorithms are used to buy and sell
shares and securities with a specific trading strategy that uses market indices, triggers
and signals. More sophisticated algorithms may learn how other market participants
act, developing and modifying their own trading strategy accordingly.5 High-
frequency trading reduces spreads and can improve liquidity on markets. However,
improvements in liquidity may also lead to higher volatility of markets. Markets can
also be manipulated by exploiting bugs in algorithms. Software may be stolen and
then misused or hacked.
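A deliberately crude sketch of such a signal-driven rule (the thresholds and the strategy itself are invented and not taken from any real trading system) illustrates what automated trading means in practice: the order decision follows mechanically from incoming market data, within microseconds and with no human in the loop.

def decide(best_bid, best_ask, short_term_avg):
    """Return an order decision derived from current quotes and a short-term average price."""
    mid = (best_bid + best_ask) / 2
    signal = (mid - short_term_avg) / short_term_avg
    if signal > 0.001:             # price ticking up faster than its recent average: buy
        return ("BUY", best_ask)
    if signal < -0.001:            # ticking down: sell
        return ("SELL", best_bid)
    return ("HOLD", None)

print(decide(best_bid=99.98, best_ask=100.02, short_term_avg=99.80))  # ('BUY', 100.02)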

1 Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on markets in financial instruments and amending Directive 2002/92/EC and Directive 2011/61/EU (recast), OJ of 12.6.2014, L 173/349.
2 ESMA, ‘Guidelines: Systems and controls in an automated trading environment for trading platforms, investment firms and competent authorities’, 24 February 2012 ESMA/2012/122 (EN).
3 Commission delegated Regulation (EU) 2017/589 of 19 July 2016 supplementing Directive 2014/65/EU of the European Parliament and of the Council with regard to regulatory technical standards specifying the organizational requirements of investment firms engaged in algorithmic trading, OJ 31.3.2017 L 87/417.
4 Technical Committee of the International Organization of Securities Commissions, CR 02/11 July 2011 available at <www.iosco.org/library/pubdocs/pdf/IOSCOPD354.pdf>.
5 For a thorough overview see Peter Gomber, Björn Arndt, Marco Lutat, and Tim Uhle, ‘High Frequency Trading’ (2011) <http://ssrn.com/abstract=1858626>.


High-frequency trading also carries risks. In what became known as the Flash
Crash of 6 May 2010, a wrongly coded algorithm led to a crash. Within minutes of
the sell program initiating the sale of a large block of E-mini contracts valued at
US$4.1 billion, other algorithms reacted similarly, leading to a rapid decline of the
E-minis.6
Thus, whilst high-frequency trading is just another phenomenon of systemic risks
in financial markets, it can result in a total crash. This problem can only be addressed
when markets are monitored and trade is interrupted. The monitoring takes place
on the basis of a system of indicators and warning signals (known as ‘circuit breakers’).
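To make this mechanism concrete, the following is a minimal sketch, in Python, of the kind of volatility interruption such warning systems rely on: if the price moves by more than a threshold within a short reference window, trading is halted. The threshold, window size and class names are assumptions chosen purely for illustration and do not reproduce any actual exchange parameter.

```python
from collections import deque

class VolatilityInterruption:
    """Illustrative circuit breaker: halts trading when the price moves more
    than an assumed threshold percentage within a short reference window."""

    def __init__(self, max_move_pct: float = 3.0, window_size: int = 100):
        self.max_move_pct = max_move_pct            # assumed threshold
        self.recent_prices = deque(maxlen=window_size)
        self.halted = False

    def on_price(self, price: float) -> None:
        self.recent_prices.append(price)
        reference = self.recent_prices[0]           # oldest price in the window
        move_pct = abs(price - reference) / reference * 100
        if move_pct > self.max_move_pct:
            self.halted = True                      # warning signal: interrupt trading

# Usage: feed a stream of trade prices; trading is halted on a sudden move.
breaker = VolatilityInterruption()
for p in [100.0, 100.2, 99.8, 95.0]:                # the last tick is a 5% drop
    breaker.on_price(p)
print(breaker.halted)                               # True
```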

7.4 the german high-frequency trading act


As a result of these experiences, and in order to avoid any financial market crises, the
German government adopted a strategy of regulating high-frequency trading which
led to the Act on High-Frequency Trading in 2013.7
The cornerstones of the Act are the introduction of a permit duty for own-account
traders (proprietary trading), organizational requirements for algorithmic trading and
enhanced transparency for market participants.
Under Sec 1 para 1a No 4d of the German Banking Supervisory Act (Kreditwe-
sengesetz), permits are required for
d) purchasing or selling of financial instruments on an own-account basis as a direct
or indirect participant in a domestic organised market or multilateral trading facility
by using a high-frequency algorithmic trading technique characterised by infra-
structures that intend to minimise latency, by systems that make the decision to
initiate, generate, route or execute an order without human intervention for
individual trades or orders and by high intra-day message rates in form of orders,
quotes or cancellations, without necessarily providing services for others (propri-
etary trading);

One of the crucial elements of high-frequency trading is the minimising of laten-
cies. The Act references what can be observed in practice as high-frequency trading –
in particular with regard to the rapidly evolving technologies used.8 The German
Supervisory Authority specifies this element by requiring a short distance between
6 Report of the staffs of the CFTC and SEC to the Joint Advisory Committee on emerging regulatory issues, ‘Findings regarding the market events of May 6’, (30 September 2010) <www.sec.gov/news/studies/2010/marketevents-report.pdf>; see also Kirilenko, Kyle, Samadi, and Tuzun, ‘The Flash Crash: High-Frequency Trading in an Electronic Market’ (6 January 2017) Journal of Finance <https://ssrn.com/abstract=1686004> or <http://dx.doi.org/10.2139/ssrn.1686004>.
7 (2013) I Bundesgesetzblatt 1162.
8 See also Jaskulla, ‘Das deutsche Hochfrequenzhandelsgesetz – eine Herausforderung für Handelsteilnehmer, Börsen und Multilaterale Handelssysteme’ (2013) Bank- und Kapitalmarktrecht (BKR) 221, 228; Schultheiß, ‘Die Neuerungen im Hochfrequenzhandel’ (2013) Wertpapiermitteilungen (WM) 596; Kobbach, ‘Regulierung des algorithmischen Handels durch das neue Hochfrequenzhandelsgesetz: Praktische Auswirkungen und offene rechtliche Fragen’ (2013) Bank- und Kapitalmarktrecht (BKR) 233, 235.


the computer on which the algorithms are running and the systems which match
the incoming order, including a minimum speed of 10 GB per second.9 Every
market participant who fulfils these criteria has to apply for a permit, even if they are
based outside Germany.10 As a consequence, every trader with a high-frequency
algorithm has to comply with the requirements laid down in the German Securities
Trading Act (Wertpapierhandelsgesetz) as well as the banking law (Kreditwesenge-
setz). Moreover,11 traders are subject to solvency supervision.
The Act is not limited to high-frequency trading but also encompasses the more
generic ‘algorithmic trading’ (Sec 80 para 2 s 1 Securities Trading Act), which
refers to a computer program that automatically defines parameters for orders such
as price, time for buying or selling, or quantity of an order.12 The requirements
established by the German Securities Trading Act are thus applicable to all kinds of
algorithm-based trading, whether it is on the trader’s own account or for clients,
whether on stock markets or over the counter. However – and in contrast to high-
frequency trading – only market participants based in Germany are covered, not
foreign market participants.13
Based on the European Securities and Markets Authority (ESMA) 2012 guide-
lines,14 the German Supervisory Authority issued a circular15 in 2013 specifying the
requirements for algorithmic trading. According to Sec 80 para 2:
(2) An investment services enterprise must additionally comply with the provisions
stipulated in this subsection if it conducts trading in financial instruments in such a
way that a computer algorithm automatically determines individual parameters of
orders, unless the system involved is used only for the purpose of routing orders to
one or more trading venues or for the confirmation of orders (algorithmic trading).
Parameters of orders within the meaning of sentence 1 include, in particular,
decisions on whether to initiate the order, on the timing, price or quantity of the
order, or on how to manage the order after its submission with limited or no human
intervention. An investment services enterprise that conducts algorithmic trading
must have in place effective systems and risk controls to ensure that

9 Rundschreiben 6/2013 ‘Anforderungen an Systeme und Kontrollen für den Algorithmushandel von Instituten’, 18 December 2013 (hereafter ‘Circular’); see also Kobbach (n 8).
10 Kindermann and Coridaß, ‘Der rechtliche Rahmen des algorithmischen Handels inklusive des Hochfrequenzhandels’ (2014) Zeitschrift für Bankrecht und Bankwirtschaft 178, 180.
11 For more details about the debate concerning the regulation (in particular, whether it should rather be part of the stock exchange acts) see Jaskulla (n 8).
12 Note that under the German Act high-frequency trading is not identical with algorithmic trading; see Jaskulla (n 8) 230; Schultheiß (n 8); Kobbach (n 8) 237.
13 Kindermann and Coridaß (n 10) 181.
14 ESMA Guidelines (n 2).
15 These circulars are not legally binding; however, as they specify the supervisory practice, most market participants regard them as de facto binding rules, like the ESMA guidelines.


1. its trading systems are resilient, have sufficient capacity and are subject to
appropriate trading thresholds and limits;
2. the routing of erroneous orders or the functioning of the system in a way that
may create or contribute to a disorderly market are prevented;
3. its trading systems cannot be used for any purpose that is contrary to European
or national rules against market abuse or to the rules of the trading venue to
which it is connected.
An investment services enterprise that conducts algorithmic trading must also have
in place effective business continuity arrangements to deal with unforeseen failures
of its trading systems and must ensure that its systems are fully tested and properly
monitored.
Thus, algorithmic traders must implement an appropriately resourced risk-
management system that follows the prescribed three-step order control system,
depending on the complexity of the algorithms they have implemented.16
According to Sec 80 (3) of the German Securities Trading Act an algorithmic trader
must document how they comply with these management requirements and keep the
relevant records for at least five years. Supervisory authorities may inspect those
records. Furthermore, high-frequency algorithmic traders are required to record every
order, including cancelled orders, executed orders, and market prices on exchanges
and trading platforms (Sec 80 (3) sent. 2 German Securities Trading Act). Thus, every
modification of any computer algorithm used for trading purposes must also be
documented. The trader has to provide evidence of changes of algorithms; if strategies
for algorithms are changed, or algorithms are used in new markets or platforms, the
German Supervisory Authority classifies them as ‘new products’ that require a com-
plete risk assessment according to the provisions on risk management.17
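A minimal sketch of what such record-keeping can look like in practice is given below, assuming a simple append-only log file; the field names, storage format and retention calculation are illustrative assumptions rather than the statutory requirements themselves. The point is that every order event and every algorithm modification is written down, tagged with the algorithm identifier and version, and kept available for inspection.

```python
import json
import time

AUDIT_LOG = "trading_audit.log"   # assumed storage; real systems use durable, tamper-evident stores

def record_event(event_type: str, algo_id: str, algo_version: str, payload: dict) -> None:
    """Append one auditable event (order, cancellation, execution, algorithm change)."""
    entry = {
        "timestamp": time.time(),
        "event_type": event_type,      # e.g. "order", "cancellation", "execution", "algo_change"
        "algo_id": algo_id,            # links the event to the algorithm that produced it
        "algo_version": algo_version,  # a changed strategy counts as a 'new product'
        "payload": payload,
        "retain_until": time.time() + 5 * 365 * 24 * 3600,  # records kept for at least five years
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: an order placed and later cancelled by (assumed) algorithm "mm-01", version "2.3".
record_event("order", "mm-01", "2.3", {"isin": "DE0001234567", "side": "buy", "qty": 100, "limit": 101.5})
record_event("cancellation", "mm-01", "2.3", {"order_ref": "ord-1"})
```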
Employees of the trader have to be able to understand and to control the algorithm
in a timely manner. The German Supervisory Authority goes beyond ESMA requirements18 to
demand that the trader’s operators can be reached at any time by operators of the
market exchange or platform.19
Moreover, the algorithm has to provide adequate limits for trade. The German
Supervisory Authority has laid down very detailed requirements in this area: the limits
for contracting parties and the issuer, and market prices have to be settled before each
transaction. Liquidity must be ensured at all times and be monitored in real time.20
The trader and in particular the risk managers must be able to intervene directly and
independently of their trading departments.21 Algorithms must be designed in such a
way that every order and transaction can be identified and linked to the particular
algorithm that has executed the order.

16 Circular (n 9) No 36, 39.
17 Circular (n 9) No 26.
18 Cf. ESMA Guidelines (n 2) No 2.2. g.
19 Circular (n 9) No 23.
20 Circular (n 9) No 39.
21 Circular (n 9) No 41.


Algorithmic traders must ensure that their algorithms are not misused for the
purposes of market manipulation and that they comply with market-specific rules. In
order to comply with these provisions traders therefore have to implement systems
that monitor the behaviour of algorithms, including automated warning systems.22
The entire board of directors of the investment firm is obliged to assess market
manipulation risks and define a strategy, or at least to explain why the firm is not prone to
such risks.23 Systems must be designed in such a way that they allow for real-time
monitoring, which means that controls have to take place within a reasonable time
span.24 Operators of monitoring systems have to be independent of those who are
operating the algorithm trading system.
Traders must provide emergency systems that are able to cope effectively with
unforeseen difficulties in their main systems. These emergency systems must be kept
up to date and provide risk-appropriate actions in emergency cases.
Furthermore, traders must safeguard the continuous operation of their systems.
All systems have to be checked thoroughly under stress conditions before going
live.25 Moreover, algorithmic traders must assess the risks resulting from trade on
individual trading platforms and check their risk management.26
The third cornerstone of high-frequency trading regulation is the transparency
obligation codified in Sec 16 (2) No 3 of the German Stock Exchange Act (Börsen-
gesetz). Market participants have to flag the fact that algorithms are being used for
trading, but only those with direct access to the platform are obliged to do so. If
market participants allow their clients direct access to the platform and those clients
use algorithms, they must ensure that clients will cooperate to flag algorithms in use.27

7.5 regulation on the european level

7.5.1 MiFID II
MiFID II more or less parallels the German approach by introducing a permit duty
for high-frequency trading, but not for algorithmic trading in general, for which it
22 Circular (n 9) No 55.
23 Circular (n 9) No 60.
24 Circular (n 9) No 71; see also Kindermann and Coridaß (n 10) 183.
25 Circular (n 9) No 15.
26 Circular (n 9) No 4.4.
27 Hessisches Ministerium für Wirtschaft, Energie, Verkehr und Landesentwicklung, ‘Guidelines for adherence to the requirement of the labelling of trading algorithms’ (§ 16 sub-para 2 no 3 Stock Exchange Act (Börsengesetz), § 33 sub-para 1a Securities Trading Act (Wertpapierhandelsgesetz), § 72a Exchange Rules for the Frankfurter Wertpapierboerse (Börsenordnung für die Frankfurter Wertpapierbörse), § 17a Exchange Rules for Eurex Deutschland and Eurex Zurich (Börsenordnung für die Eurex Deutschland und die Eurex Zürich) as of 22 September 2014 No 7 available at <https://wirtschaft.hessen.de/sites/default/files/media/hmwvl/guidelines_to_the_adherence_to_the_requirement_of_the_labelling_of_trading_algorithms_14–09-22-neu.pdf>.


provides a set of specific requirements. The parallels between the two sets of
regulations are quite obvious, given that the German legislator wanted to adopt
the European proposals at an early stage (even though MiFID II was only adopted in
2014). The evolution of discussion of algorithmic and high-frequency trading can be
seen in the definitions adopted by MiFID II. Art 4 (39) states that:
‘algorithmic trading’ means trading in financial instruments where a computer
algorithm automatically determines individual parameters of orders such as
whether to initiate the order, the timing, price or quantity of the order or how to
manage the order after its submission, with limited or no human intervention, and
does not include any system that is only used for the purpose of routing orders to
one or more trading venues or for the processing of orders involving no determin-
ation of any trading parameters or for the confirmation of orders or the post-trade
processing of executed transactions.

Thus, MiFID II follows grosso modo the approach taken by both the German act
and ESMA28 in excluding algorithms that only forward orders (or route them). The
algorithmic trading covered by MiFID has to be related to trading in a narrow sense,
acting on the market platform. Interestingly (and in contrast to Art 22 GDPR),
MiFID also covers systems that still allow for human decisions (based, however,
on algorithms). Moreover, MiFID does not distinguish between traditional software
and machine-learning software.
Regarding high-frequency trading, MiFID II also follows the lead of the German
act and ESMA by defining high-frequency trading as (Art 4 (40)):
. . .an algorithmic trading technique characterised by:
(a) infrastructure intended to minimise network and other types of latencies,
including at least one of the following facilities for algorithmic order entry:
co-location, proximity hosting or high-speed direct electronic access
(b) system-determination of order initiation, generation, routing or execution
without human intervention for individual trades or orders; and
(c) high message intraday rates which constitute orders, quotes or cancellations.
Thus, the minimization of latencies, in particular through co-location, proximity hosting or high-speed
direct electronic access, is decisive. Moreover (and in contrast to the more generic term
‘algorithmic trading’), ‘high-frequency trading’ is restricted to fully automated
trading without any human intervention.
The basic requirements for algorithmic trading are laid down in Art 17 MiFID II,
which, however, leaves the bulk of specifications to ESMA and then to the Com-
mission as a delegated act.29 Thus, Art 17 requires an investment firm in general to
‘have in place effective systems and risk controls suitable to the business it operates
to ensure that its trading systems are resilient and have sufficient capacity, are subject

28 ESMA (n 2).
29 See Section 7.5.2.


to appropriate trading thresholds and limits and prevent the sending of erroneous
orders or the systems otherwise functioning in a way that may create or contribute to
a disorderly market’. Market manipulation is also banned. Art 17 emphasizes the
capacity of investment firms to cope with unexpected events and failures of the
algorithms. The supervisory authorities are explicitly entitled, according to Art 17 (2),
to obtain a description ‘of the nature of its algorithmic trading strategies, details of
the trading parameters or limits to which the system is subject, the key compliance
and risk controls that it has in place to ensure the conditions laid down in paragraph
1 are satisfied and details of the testing of its systems’. Hence, investment firms
cannot refer in their descriptions to any kind of trade secrets or intellectual property
concerning the algorithms used.
Furthermore, Art 17 (3) requires an investment firm that engages in algorithmic
trading to pursue a market making strategy to ‘take into account the liquidity, scale
and nature of the specific market and the characteristics of the instrument traded
when complying with its obligations as per lit a-c’. Special attention is paid to
conformance with the framework of the trading venue.
Somewhat surprisingly, neither Art 17 nor the rest of MiFID II contains specific
provisions on high-frequency trading as opposed to generic ones for algorithmic
trading – even though the Recitals (No 61 and subsequent) explicitly mention the
specific risks of high-frequency trading. Recital 62 alone requires that ‘in order to
ensure orderly and fair trading conditions, it is essential to require trading venues to
provide such co-location services on a non-discriminatory, fair and transparent basis’.
However, these principles are not reflected in the provisions of Article 17 of MiFID
II (or anywhere else). Recital 64, which emphasizes the need for robust measures ‘in
place to ensure that algorithmic trading or high-frequency algorithmic trading
techniques do not create a disorderly market and cannot be used for abusive
purposes’, does not distinguish between different types of algorithmic trading. The
same is true of the requirement for tests and resilient systems including ‘circuit
breakers . . . on trading venues to temporarily halt trading or constrain it if there are
sudden unexpected price movements’.
Enforcement and supervision are enhanced by the requirement to flag all orders
generated by algorithmic trading (Recital 67), enabling supervisory authorities to
more precisely relate events to certain algorithms which may lead to distortion of
markets.
Finally, Art 48 (6) MiFID II requires trading venues and platforms to provide for
controls on algorithmic trading, including circuit breaker facilities, in order to avoid
‘flash crashes’.30 In particular, regulated markets have to provide testing environ-
ments for algorithmic traders. Concerning the control of algorithms, Art 48 MiFID
II requires market operators to ‘manage any disorderly trading conditions which do

30 See also ESMA, ‘Automated Trading Guidelines, ESMA peer review among National Competent Authorities’, 18 March 2015, ESMA/2015/592.


arise from such algorithmic trading systems, including systems to limit the ratio of
unexecuted orders to transactions that may be entered into the system by a member
or participant, to be able to slow down the flow of orders if there is a risk of its system
capacity being reached and to limit and enforce the minimum tick size that may be
executed on the market’. Hence, market operators have to ensure that they are able
to take algorithms out of the market, notwithstanding the ‘kill functionalities’ which
are in the hands of the investment firms. This is also stressed by Recital 157.
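As an illustration of one of these venue-side controls, the sketch below shows how a market operator might track a member’s ratio of unexecuted orders to transactions and throttle that member once the ratio becomes excessive. The ratio limit and the class interface are assumed for the example; actual venues calibrate such parameters themselves.

```python
class VenueFlowControl:
    """Illustrative venue-side control: tracks a member's ratio of unexecuted
    orders to executed transactions and throttles the member when the ratio
    exceeds an assumed limit."""

    def __init__(self, max_order_to_trade_ratio: float = 50.0):
        self.max_ratio = max_order_to_trade_ratio   # assumed figure, set by the venue in practice
        self.orders = 0
        self.trades = 0

    def on_order(self) -> bool:
        """Returns True if the order may enter the book, False if the member is throttled."""
        self.orders += 1
        ratio = self.orders / max(self.trades, 1)
        return ratio <= self.max_ratio

    def on_trade(self) -> None:
        self.trades += 1

# Usage: many orders without any executions eventually trigger the throttle.
fc = VenueFlowControl()
for _ in range(60):
    fc.on_order()          # 60 orders, no trades yet
print(fc.on_order())       # False: ratio above the assumed limit, member is throttled
```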

7.5.2 Delegated Act: The Regulation of the European Union


The final part of the regulation of algorithmic trading and high-frequency trading
consists of Commission Regulation 2017/589 specifying the organizational require-
ments for algorithmic trading that are laid down in Art 17 (7) (a) and (d) of MiFID
II.31 The Regulation continues the approach already taken in the ESMA guide-
lines32 and the German Act on High-Frequency Trading, together with the usual
precautions concerning IT security and avoidance of security breaches that are
addressed in Art 18.33 The adoption of the approaches taken by ESMA and the
German Supervisory Authority has led to the withdrawal of the Authority’s circular
concerning algorithmic trading.34
As a general principle, Art 1 of Regulation 2017/589 requires an investment firm to
‘monitor its trading systems and trading algorithms through a clear and formalised
governance arrangement’, using the same patterns as for traditional organizational
requirements, such as:
(a) clear lines of accountability, including procedures to approve the development,
deployment and subsequent updates of trading algorithms and to solve problems
identified when monitoring trading algorithms;
...
(c) a separation of tasks and responsibilities of trading desks on the one hand and
supporting functions, including risk control and compliance functions, on the
other, to ensure that unauthorised trading activity cannot be concealed.

Further, Art 2 of Regulation 2017/589 obliges the investment firm to employ com-
pliance staff who have a general knowledge of the algorithms and are in continuous
contact with those who operate the algorithms and who have detailed technical know-
ledge. Closely related to the description of compliance staff – and self-evident ‒ are
the requirements that technical staff should understand the algorithm and be able to
manage, monitor and test it (Art 3 (1)). Whereas Art 4 obviously allows for
31 See n 3.
32 ESMA (n 2).
33 Such as penetration tests, simulation of cyber-attacks, identification of users of the system, etc.
34 See Notification of the German Supervisory Authority of 18 December 2017, Gz: BA 54-FR 2210-2017/0010.


outsourcing software and hardware by stating that the investment firm remains fully
responsible, it is unclear whether the investment firm can also outsource staff for
managing and controlling the algorithms that are in place. ESMA had already set
out detailed provisions for the governance of algorithms, starting with development
and/or purchase of software (including outsourcing) and its subsequent mainten-
ance and control.35
Regulation 2017/589 also structures the deployment of an algorithm, requiring the
system to be tested in accordance with its specific market – and also in case of
‘substantial updates’ (Art 5 (1)). For algorithms which execute orders, specific
obligations are set out in Art 5 (2–5): the senior management of the investment firm
must designate a person to be responsible for the deployment or update (Art 5 (2)). In
particular, Art 5 (4) requires that the algorithm:
(a) does not behave in an unintended manner;
(b) complies with the investment firm’s obligations under this Regulation;
(c) complies with the rules and systems of the trading venues accessed by the
investment firm;
(d) does not contribute to disorderly trading conditions, continues to work effectively
in stressed market conditions and, where necessary under those conditions, allows
for the switching off of the algorithmic trading system or trading algorithm.

With regard to artificial intelligence or machine learning, Art 5 (4) (a) could raise
new questions, as these algorithms and their behaviour are not completely predict-
able. However, it is unlikely that the Commission really wanted to ban semi-
autonomous electronic agents from markets as long as their general behaviour can
be predicted.
The Commission specifies the necessary testing further in Art 6 and Art 7, again
following the ESMA guidelines, which required testing in a live environment before
going online:36 the investment firm must check the algorithm in respect of its
conformance with the requirements of the market venue, in particular the inter-
action with market venue software and the processing of data flows. Moreover, tests
have to be undertaken ‘in an environment that is separated from its production
environment and that is used specifically for the testing and development of
algorithmic trading systems and trading algorithms’ (Art 7 (1)).
As well as design and testing, in Art 8 the Commission obliges investment firms to
set limits on
(a) the number of financial instruments being traded;
(b) the price, value and numbers of orders;
(c) the strategy positions and
(d) the number of trading venues to which orders are sent.

35 ESMA (n 2) No 2.2. a.
36 ESMA (n 2) No 2.2. d.


Thus the Commission continues the approaches already chosen by ESMA. More-
over, the algorithm is not allowed to change these parameters; thus, Art 8 sets limits
to semi-autonomous systems as well.
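The idea that the limits of Art 8 are fixed outside the algorithm and cannot be altered by it can be pictured with the following sketch, in which the limits are held in a frozen (read-only) configuration object; all concrete figures are invented for the illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # frozen: the trading algorithm cannot mutate these limits at runtime
class TradingLimits:
    max_instruments: int         # (a) number of financial instruments being traded
    max_order_value: float       # (b) price/value limit per order
    max_open_orders: int         # (b) number of orders
    max_strategy_position: int   # (c) strategy positions
    max_venues: int              # (d) number of trading venues to which orders are sent

LIMITS = TradingLimits(
    max_instruments=20,
    max_order_value=250_000.0,
    max_open_orders=500,
    max_strategy_position=10_000,
    max_venues=3,
)
# Any attempt by the algorithm to change a limit raises an error:
# LIMITS.max_order_value = 10**9   -> dataclasses.FrozenInstanceError
```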
The testing is not restricted to the initial deployment. An important part of the
duty to validate the algorithms annually is the required stress test (Art 10), in
particular the resilience of the system in case of increased order flows or market
stresses. The Commission requires that these stress tests should encompass
(a) running high messaging volume tests using the highest number of messages
received and sent by the investment firm during the previous six months,
multiplied by two;
(b) running high trade volume tests, using the highest volume of trading reached
by the investment firm during the previous six months, multiplied by two.
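The arithmetic behind these Art 10 stress-test targets is simple and can be shown in a few lines; the historical peak figures used in the example are, of course, assumed.

```python
def stress_test_targets(peak_messages_6m: int, peak_trades_6m: int) -> dict:
    """Stress-test targets in the style of Art 10: twice the highest message and
    trade volumes observed over the previous six months (inputs are assumed data)."""
    return {
        "message_volume_target": 2 * peak_messages_6m,
        "trade_volume_target": 2 * peak_trades_6m,
    }

# Example with assumed historical peaks:
print(stress_test_targets(peak_messages_6m=1_200_000, peak_trades_6m=85_000))
# {'message_volume_target': 2400000, 'trade_volume_target': 170000}
```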

Another important element which was previously not specifically required is the ‘kill
functionality’ in Art 12 (1), which allows the investment firm to immediately cancel
unexecuted orders in emergency cases. Moreover, Art 12 (3) requires that the
investment firm can identify every trading algorithm and trader related to the
emergency case. The importance of this is illustrated by the additional requirement
that the compliance staff must be in constant contact with those who can ‘kill’ the
algorithm (Art 2 (2) of Regulation 2017/589).
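A minimal sketch of such a ‘kill functionality’ might look as follows: all unexecuted orders are cancelled at once, and each cancelled order carries the identifiers of the trading algorithm and the trader concerned, so that the emergency can be traced afterwards. The class and field names are illustrative assumptions.

```python
class KillSwitch:
    """Illustrative 'kill functionality': cancels all unexecuted orders in an
    emergency and records which algorithm and trader each order belongs to."""

    def __init__(self):
        self.open_orders = []   # each order: order_id, algo_id, trader_id

    def submit(self, order_id: str, algo_id: str, trader_id: str) -> None:
        self.open_orders.append({"order_id": order_id, "algo_id": algo_id, "trader_id": trader_id})

    def kill(self) -> list:
        """Immediately cancel every unexecuted order and return an audit trail
        identifying the trading algorithms and traders related to the emergency."""
        cancelled = list(self.open_orders)
        self.open_orders.clear()
        return cancelled

# Usage:
ks = KillSwitch()
ks.submit("ord-1", algo_id="mm-01", trader_id="T-17")
print(ks.kill())   # [{'order_id': 'ord-1', 'algo_id': 'mm-01', 'trader_id': 'T-17'}]
```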
Like the first approaches taken by the German Supervisory Authority and ESMA,
the Regulation lays stress on automated surveillance systems to detect market
manipulation and obliges investment firms to constantly monitor all trading activ-
ities (Art 13). ESMA had already demanded that traders should be able to automatic-
ally block orders that do not match fixed prices and quantities.37 In particular, the
investment firm must review its surveillance system each year and adapt it to
changes in the regulations (Art 13 (6)). The Commission Regulation even prescribes
detailed conditions for the system concerning time granularity, and capacity to
document and analyze order and transaction data ex post in a low-latency trading
environment (Art 13 (7)).
Like ESMA, the Commission Regulation is also concerned about continuation of
business in cases of disruption caused by incidents. Thus Art 14 explicitly requires
‘business continuity arrangements’ which should take into account different ‘pos-
sible adverse scenarios relating to the operation of the algorithmic trading systems,
including the unavailability of systems, staff, work space, external suppliers or data
centres or loss or alteration of critical data and documents’. Like ESMA, the
Commission even requires investment firms (among other organizational proced-
ures, such as shutting down the running algorithm) to provide for ‘(c) procedures for
relocating the trading system to a back-up site and operating the trading system from

37 ESMA (n 2) No 4.2.


that site, where having such a site is appropriate to the nature, scale and complexity
of the algorithmic trading activities of the investment firm’.
Applying to all investment firms – and not only algorithmic traders – are the
provisions for pre-trade control on order entry in Art 15 Regulation 2017/589. The
Regulation requires investment firms to implement price collars with automatic
blocking of mismatching orders, maximum order values and maximum message
limits – thus obviously seeking to ban any market manipulation attempts. Moreover,
investment firms must control the number of times an algorithm has been used,
disabling it after a certain number of executions; it can only be re-enabled by a
human decision of the competent officer (Art 15 (3)). Investment firms must also set
market and credit limits that are based, among other criteria, on ‘the length of time
the investment firm has been engaged in algorithmic trading’ (Art 15 (4)).
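These pre-trade controls lend themselves to a short illustrative sketch: each order passes a price collar, a maximum order value, a message limit and a usage counter that disables the algorithm after a set number of uses until a human re-enables it. All thresholds and names are assumptions for the example, not figures from the Regulation.

```python
def pre_trade_check(order: dict, reference_price: float, state: dict,
                    collar_pct: float = 5.0, max_order_value: float = 500_000.0,
                    max_messages_per_sec: int = 1_000, max_uses: int = 10_000) -> bool:
    """Illustrative pre-trade controls: price collar, maximum order value,
    message limit and a usage counter; all thresholds are assumed figures."""
    if state.get("disabled"):
        return False                               # re-enabling requires a human decision
    if abs(order["price"] - reference_price) / reference_price * 100 > collar_pct:
        return False                               # price collar: block mismatching orders
    if order["price"] * order["qty"] > max_order_value:
        return False                               # maximum order value
    if state.get("messages_this_second", 0) >= max_messages_per_sec:
        return False                               # maximum message limit
    state["uses"] = state.get("uses", 0) + 1
    if state["uses"] >= max_uses:
        state["disabled"] = True                   # disable after a set number of uses
    return True

# Example: a single order checked against an assumed reference price of 100.
state = {"messages_this_second": 12}
print(pre_trade_check({"price": 101.0, "qty": 50}, reference_price=100.0, state=state))  # True
```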
One subject that has been intensively debated is now codified in Art 16 (1)
Regulation 2017/589, which requires an investment firm to monitor in real time
‘all algorithmic trading activity that takes place under its trading code, including that
of its clients, for signs of disorderly trading, including trading across markets, asset
classes, or products, in cases where the firm or its clients engage in such activities’.
This real-time monitoring task is assigned to the risk management department of the
investment firm and must be carried out independently of the trading staff (Art 16
(2)). The monitoring staff should be accessible to other market participants and
supervisory authorities. Moreover, Art 16 (5) requires real-time alerts to unexpected
trading activities undertaken by means of an algorithm within 5 seconds of the
relevant event. The investment firm is then obliged to take action, and in particular
to withdraw the order. However, the ‘killing functionality’ is not mentioned in Art
16 (5).
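A sketch of this real-time monitoring duty might look as follows; the event structure and the predicate for ‘unexpected’ activity are assumptions, the only figure taken from the Regulation being the 5-second alert window.

```python
import time

ALERT_LATENCY_SECONDS = 5   # Art 16 (5): alert within 5 seconds of the relevant event

def monitor(events, is_unexpected) -> list:
    """Illustrative real-time monitor run by the risk management function:
    scans incoming trading events and raises an alert for unexpected activity,
    checking that the alert falls within the 5-second budget."""
    alerts = []
    for event in events:
        if is_unexpected(event):
            alert_time = time.time()
            within_budget = (alert_time - event["timestamp"]) <= ALERT_LATENCY_SECONDS
            alerts.append({"event": event, "alert_time": alert_time, "within_5s": within_budget})
            # follow-up action (e.g. withdrawing the order) is then taken by the firm
    return alerts

# Example with an assumed 'unexpected' predicate: unusually large order quantities.
events = [{"timestamp": time.time(), "qty": 10}, {"timestamp": time.time(), "qty": 900_000}]
print(monitor(events, is_unexpected=lambda e: e["qty"] > 100_000))
```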
In the post-trade phase, the investment firm must also control market and
credit risk limits, including, in case of alerts, the
shutdown of an algorithm (Art 17 (1)). However, as Recital 59 of MiFID II already
clarifies, the mere use of algorithms in the post-trade phase does not constitute
relevant algorithmic trading.

7.6 outlook: high-frequency trading as a blueprint?


Even though regulation of high-frequency trading and of algorithmic trading in
general seems to be highly specific for financial markets, there are some general
lessons to be learnt for the regulation of algorithms. The regulation of algorithms in
financial markets has concentrated to date on a set of organizational requirements,
beginning with the design of algorithms, testing, real-time monitoring including
‘killing functionalities’ and ending up with post-trade controls and monitoring.
Given the fact that machine learning or artificial intelligence leads to unpredictable
behaviour, such instruments become even more important, as even the most
advanced testing cannot anticipate all events or contingencies. Hence, a procedural
approach that adopts all organizational requirements provided by the regulation of
high-frequency trading might be a crucial element in the regulation of modern
algorithms – without, however, replacing other standards such as non-
discrimination, etc. It will be essential for empirical evidence and experiences of
supervisory authorities to be followed up in order to assess the appropriateness of the
regulations recently adopted.

8

Creativity of Algorithms and Copyright Law

Susana Navas

introduction
The possible emulation of human creativity by various models of artificial intelli-
gence systems is discussed in this chapter. In some instances, the degree of original-
ity of creations using algorithms may surprise even human beings themselves. For
this reason, copyright protection of ‘works’ created by autonomous systems is
proposed, which would take account of both the fundamental contributions of
computer science researchers and the investment in human and economic
resources that give rise to these ‘works’.

8.1 creativity

8.1.1 Definition: Types of Creativity


Creativity is the capacity to generate new and valuable ideas or artefacts.1 The
process of creating new and valuable ideas requires two elements: i) information
converted into knowledge and ii) time.
The broader the information that is converted, the greater the possibility of new
and surprising ideas appearing. Information involves in-depth knowledge in one or
more fields.2 Having expertise in these fields is fundamental. Time means

1 Boden, ‘Computer Models of Creativity’ (2009) 30(3) AI Mag 23.
2 ‘Probably the new thoughts that originate in mind are not completely new, because they have their seeds in representations that already are in the mind. To put it differently, the germ of our culture, all our knowledge and our experience, is behind each creative idea. The greater the knowledge and the experience, the greater the possibility of finding an unthinkable relation that leads to a creative idea. If we understand creativity like the result of establishing new relations between pieces of knowledge that we already have, then the more previous knowledge one has the more capacity to be creative’ Boden, Artificial Intelligence and Natural Man (2nd edn, Basic Books 1987) 75.


perseverance, and hours of practice, study and tests, in which the so-called slow
brain3 can process ideas that arise as progress is made in research, or in a new artistic,
architectural, gastronomical or musical style, to give some examples.
According to renowned scholar Margaret A Boden, three types of creativity arise
successively.4 The first, ‘combinational creativity’, consists of a new combination of
familiar ideas through the association of ideas that were not previously related, or
through analogous reasoning. These two mechanisms may result in the creation of
complex conceptual structures, and therefore could be called creative. It could be
said that this class of creativity is a natural property of the human mind, and
functions through associations, images, symbols and analogies that vary according
to the society and culture in which the person grows and is formed. Whatever the
influence, this type of creativity is the easiest for human beings to use. In this sense,
everyone, whether disabled or not (although not someone who is very seriously
disabled) possesses at least a minimum level of creativity. This is a basic sort of
creativity ‒ or creativity in its pure state (natural creativity),5 which does not mean
that its results must always and in all cases be protected by the law. It is a more
limited and poorer type of creativity than those described below, since much of the
information on which the ideas are based or the analogies are made comes from the
context or from the tacit knowledge acquired in the medium in which the person
lives, and not from in-depth knowledge of one or more matters or areas of know-
ledge. In many cases, the result of this creative combination does not pass beyond
the stage of mere occurrence, and no creation worthy of legal protection is formed.6
Many poetic images do not extend beyond this level of creativity.
The second model is ‘exploratory creativity’, which consists of exploring a style of
thought or a conceptual space belonging to the person defining it, using a set of
productive ideas (‘generative ideas’) that may be explicit but may also be totally or
partially implicit.7 In this type of creativity, the limits of the conceptual scheme are
explored, and small changes or alterations that do not necessarily modify its basic
initial rules are even introduced. The result of this exploration, insofar as it is
sufficiently original, may be protected by law. This is a creativity that could be
classified as ‘professional’ as opposed to ‘natural’.
The third model, according to Boden, is ‘transformational creativity’; in this
model, the conceptual space for the style of the thought itself is transformed when
one or more of the elements defining it are altered, resulting in new ideas that could
not possibly have been generated before. These ideas are not only valuable and new,
but also surprising, shocking, counterintuitive and a break with the status quo or

3 Kahneman, Pensar rápido, pensar despacio (Barcelona 2013) 48‒50.
4 Boden (n 1).
5 Boden, ‘Creativity and Artificial Intelligence’ (1998) 103(1) Artif Intell 347‒356.
6 Navas, ‘Creation and Witticism in the User-Generated Online Digital Content’ (2015‒2016) 36 Actas de Derecho Industrial 403‒415.
7 Boden (n 1); Collins and Evans, Rethinking Expertise (The University of Chicago Press 2007).


with some of the ideas commonly accepted by the social, artistic, legal or economic
sector in which the person works.8 It takes years, therefore, for these ideas to be
recognised and studied and for people (including other experts) to become accus-
tomed to this new form of thinking in the area involved. This is the only type of
‘professional’ creativity to provide ideas that are different from previous ones, not
only to their authors but to anyone else. It is different from the other two types of
creativity, which generate ideas (or artefacts) that are mostly new for their creator but
not for humanity, since either the idea already exists, or another person has had the
same idea or created the same artefact without the two creators knowing each
other.9 It is this difference that provides a ‘creative height’ worthy of legal protection.
‘Transformational creativity’ can only be a product of the mind, of the effort of
this person and no other. The personal imprint of the creator is fundamental. On
the other hand, in the case of ‘combinational’ and ‘exploratory’ creativity, the idea
(or the new artefact) may be created by another person, meaning the persona of the
creator is fungible, so that his or her imprint is neither determinant nor fundamental
for the ‘creative’ result. This does not prevent the appreciation that the author’s
imprint is stronger in the second type of creativity than in the first.

8.1.2 The Relationship between Creativity and Algorithms


It is precisely because ‘combinational creativity’ is natural to humans that it is the
most difficult (though not impossible) for an artificial intelligence system to emu-
late, in that it seeks to reproduce the processing of ideas that takes place in the
human brain, imitating the anatomy and physiology of the human nervous system.10
The tacit knowledge that comes from context encompasses a whole series of human
nuances, expressions, customs or habits that are difficult to interpret using traditional
true/false, assertion/negation computational logic. Expert systems have been
developed, however, to work with ‘fuzzy logic’, enabling reasoning with vagueness,
ambiguities or assertions that can have a number of interpretations, in a similar way
to how the human brain works.11 Likewise, neural network systems that seek to

8 Boden (n 1).
9 Boden differentiates between psychological creativity (‘P-creativity’) and historical creativity (‘H-creativity’). In the former, the creativity takes the person who produced the idea as the reference, even if other people already had the same idea previously. In the latter, as well as being P-creative, the idea is H-creative in the sense that nobody has had this idea before (see Boden (n 1)).
10 However, it is thought that Kurzweil is very close to doing this: How to Create a Mind. The Secret of Human Thought Revealed (Penguin Books 2013).
11 Schorlemmer, Confalonieri, and Plaza, ‘The Yoneda Path to Buddhist Monk Blend’ <www.iiia.csic.es/es/publications/yoneda-path-buddhist-monk-blend>. Date of access: April 2020; Benítez, Escudero, Kanaan, and Masip, Inteligencia artificial avanzada (UOC Barcelona 2013) 10.


imitate the way the human brain functions12 have been developed, consisting of a
large number of very simple components that work together. A fundamental feature
of this type of network is its ability to learn and improve its behaviour through
training and experience. Computer algorithms that combine ideas giving rise to
improbable – but not impossible – ideas owe much to progress in both fuzzy
computational logic and neuronal connections. Of the two working methods
normally used by computer scientists, the bottom-up method, which focuses on
solutions, seems more popular than the top-down method of concentrating on the
problems.13 There are artificial intelligence models based on the association of ideas,
others that handle analogies in both fixed and flexible structures, and models that
centre on induction, which is crucial for artistic and scientific creativity, taking into
account case-based knowledge and reasoning, as well as theoretical models
that suggest new questions and new approaches to answering these questions
(explanation-based learning).14
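The difference between the classic true/false logic mentioned above and ‘fuzzy’ reasoning can be pictured with a tiny sketch: instead of an assertion being simply true or false, it holds to a degree between 0 and 1. The example and its cut-off values are invented purely for illustration.

```python
def membership_warm(temperature_c: float) -> float:
    """Degree (0..1) to which a temperature counts as 'warm': a graded truth
    value instead of the classic true/false of binary logic (assumed cut-offs)."""
    if temperature_c <= 10:
        return 0.0
    if temperature_c >= 25:
        return 1.0
    return (temperature_c - 10) / 15   # linear ramp between 10 and 25 degrees C

for t in (5, 15, 20, 30):
    print(t, round(membership_warm(t), 2))
# 5 0.0 / 15 0.33 / 20 0.67 / 30 1.0
```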
The other two types of creativity are easier, in that they use a set of rules that can
be specified sufficiently well for them to be converted into binary code, that is,
translated into an algorithm in computer language that, by transforming the rules of
the conceptual framework, could lead to results that are comparable or even
superior to those of the most competent professionals. The music of Mozart is
usually quoted as an example of ‘exploratory creativity’; in exploring the inherent
possibilities of the musical genres of his epoch, Mozart generally introduced rela-
tively superficial changes that did not involve a fundamental transformation.
Another case is AARON, the program created by Harold Cohen, which has created
drawings and paintings that are exhibited in the world’s leading art galleries.15
The use of genetic algorithms is fundamental to transformational creativity,
meaning that the rules of the conceptual space or scheme of thought change
themselves.16 Thus, the random and sudden changes in the algorithm rules are
similar to the mutations or crossings that occur in biology, giving rise to ‘surprises’
and a constant and automatic evolution of the computer program, the result of
which is highly creative. This type of creativity requires the human being to possess
not only a profound knowledge of their area but also a great deal of knowledge of

12 Barrow, ‘Connectionism and Neural Networks’ in Boden (ed), Artificial Intelligence (2nd edn, Oxford University Press 1996) 135‒155.
13 Galanter, ‘What Is Generative Art? A Complexity Theory As a Context for Art Theory’, ga2003_paper.pdf. Date of access: April 2020.
14 Boden, ‘Creativity’ in Boden (ed) Artificial Intelligence (2nd edn, Oxford University Press 1996) 272‒277.
15 Boden (n 1). Ramalho, ‘Will Robots Rule the (Artistic) World? A Proposed Model for the Legal Status of Creation by Artificial Intelligence Systems’ (13 June 2007). Available at SSRN: https://ssrn.com/abstract=2987757. Date of access: April 2020.
16 Boden (n 1); Karnow, ‘The Application of Traditional Tort Theory to Embodied Machine Intelligence’ in Calo, Froomkin, and Kerr (eds) Robot Law (Edward Elgar 2016) 56‒58; Boden (n 14) 286‒289.


artificial intelligence ‒ or to be able to work with an expert in artificial intelligence
to produce results that transform the previous ones and provide novelty and
originality.17
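For readers unfamiliar with the mechanics alluded to here, the following toy sketch shows the basic loop of a genetic algorithm: candidate ‘rule sets’ are mutated and recombined at random, and the fittest survive into the next generation. The numeric genome and the hand-picked fitness target are assumptions made only for the illustration; generative systems of the kind discussed in the text evolve far richer structures such as drawing rules or musical phrases.

```python
import random

random.seed(42)

TARGET = [0.2, 0.8, 0.5, 0.1]    # assumed 'aesthetic' target standing in for a fitness measure

def fitness(genome):
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.3):
    """Random, sudden changes to the rules, analogous to biological mutation."""
    return [g + random.gauss(0, 0.2) if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Recombination of two parent rule sets, analogous to biological crossing."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.random() for _ in range(4)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)      # keep the fittest candidates
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(10)]

print([round(g, 2) for g in max(population, key=fitness)])   # best evolved 'rule set'
```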
One example of this type of creativity is that offered by Christopher Longuet-
Higgins, who, at the end of the 1980s, presented a series of computer programming
object rules that interpreted a piano being played in certain styles (legato, staccato,
piano, forte, sforzando, crescendo, rallentando and rubato). He worked with two
piano compositions by Chopin, and used these rules to discover counterintuitive
results, for example, that a crescendo is not uniform but exponential, and that a
uniform crescendo does not sound like a crescendo but as though the volume of a
radio is being turned up.18 His creative efforts have served as the basis for writing
programs that improvise jazz of a quality equal to, or even better than, that of a
musician.19 Another example of creative transformation is the atonal music of
Schoenberg, which uses the 12 notes of the chromatic scale rather than just seven.
The discoveries by Kekulé on the benzene ring and the sculptural works of William
Latham, which offer new visual styles from the use of genetic algorithms,20 also fall
within the transformational creativity category. The artificial intelligence experi-
ments of David Cope are the most widely studied in the field of music.21

8.1.3 Categories of Computational Art


The development of computer programs that behave in the same way as a person
creating ideas (or artefacts) is known as ‘computational creativity’.22 In this case, the
software is not merely a tool used by the creator for the better understanding or the
perfecting of their work, but is a collaborator, as a human could be.23 Computational
creativity involves the system ‘creating’ works itself without any human involvement,
except at the time of developing the algorithm, resulting in originality comparable to,
or even better than, that of a person. Autonomous artificial agents write poems, design
objects, draw, paint or compose music as a human being would.24
17 Boden (n 1).
18 Longuet-Higgins, ‘Artificial Intelligence and Musical Cognition’, in Boden, Bundy, and Needham (eds), ‘Special Issue on Artificial Intelligence and the Mind: New Breakthroughs or Dead Ends?’ (1994) 349 Philosophical Transactions of the Royal Society of London, Series A 103‒113.
19 de Mántaras, ‘Computational Creativity’ (2013) 189(764) Arbor a082. <http://dx.doi.org/10.3989/arbor.2013.764n6005>.
20 Boden (n 1).
21 Da Silva, ‘David Cope and Experiments in Musical Intelligence’ <www.spectrumpress.com>. Date of access: April 2020.
22 de Mántaras (n 19); Galanter, ‘Thoughts on Computational Creativity’, Dagstuhl Seminar Proceedings 09291. Computational Creativity: An Interdisciplinary Approach <http://drops.dagstuhl.de/opus/volltexte/2009/2193>. Date of access: April 2020.
23 McCormack and D’Inverno (eds) Computers and Creativity: A Roadmap (Springer 2012).
24 de Mántaras (n 19).


The art thus created is known as ‘generative art’.25 It features randomness in its
composition, evolution and constant change in a complex or even chaotic environ-
ment created exclusively by the software.26 Two examples of this type of art are, in
the visual arts, the AARON program and, in music, the EMI program used by
David Cope.
When the program produces results (‘works’) that cannot even be imagined by the
person who commissioned the development of the program or who used a program
that had already been created, this is usually called ‘evolutionary art’. Examples of
this type of art are the works of Karl Sims and William Latham. Karl Sims uses a
computer program that produces graphical images (12 at a time) that are radically
different from those produced randomly without favouring one style over another.
This remains the decision of the computer itself. This is ‘transformational creativity’
using a genetic algorithm. William Latham also uses a genetic algorithm to produce
sculptures that he is unable to imagine himself.
However, if the program is designed to interact with the medium and, in
particular, to take external human behaviour into account, the result is ‘interactive
art’.27 Here, the audience may influence the behaviour of the software up to a
certain point, but this does not always occur. Indeed, the software may interpret this
external factor in a way that differs from the audience’s intention and gives rise to
unusual and surprising artistic results. This type of art is similar to multi-media work
but does not fully correspond to it.28 In 2007, an art gallery in Washington, DC used
a computer program written by Ernest Edmond to interact with works by Mark
Rothko, Clyfford Still and Kenneth Noland to commemorate the 50th anniversary
of the ‘color field’ painters.

8.2 creation by algorithms and copyright


In certain cases, the degree of originality of a creation by an algorithm may even
surprise humans. Copyright issues therefore arise in respect of the ‘works’ created by
an autonomous system, including the legal protection of the investment that has
been made in resources to prepare an expert system that can produce ‘works’ with a
given ‘height of creativity’.

25 Following the classification given by Boden (n 1). A wider taxonomy of generative art can be found in Boden and Edmond, ‘What Is Generative Art?’ (2009) 20(12) Digital Creativity 21‒46.
26 A wide definition of generative art is offered by Galanter: ‘Generative art refers to any art practice where the artist cedes control to a system that operates with a degree of relative autonomy, and contributes to or results in a completed work of art. Systems may include natural language instructions, biological or chemical processes, computer programs, machines, self-organizing materials, mathematical operations, and other procedural inventions’ (Galanter n 22).
27 Boden (n 1).
28 Esteve, La Obra Multimedia en la Legislación Española (Aranzadi Cizur Menor 1997) 29‒35.


8.2.1 A Work Produced by an Algorithm as an Original ‘Work’


Generative art, with all its variations, produces works that can, without doubt, be
considered ‘original’, possessing ‘creativity’ and, in many cases, ‘novelty’. As is well
known, the legislation on authors’ rights requires ‘originality’ before an intellectual
product can be protected. This is the minimum requirement for creativity,29 which
can be increased in law by admitting, first, the protection of works derived from
other pre-existing work(s) (Art 2.3 Berne Convention for the Protection of Literary
and Artistic Works, 9 September 1886)30 and, second, the protection of collections of
works by others (Art 2.5 Berne Convention). The interpretation of creativity may be
subjective or objective:31 subjectively, an ‘original’ creation represents a given
subject, showing its personal imprint; objectively, a certain degree of objective
‘novelty’ is required. Since the author creates, not ex nihilo, as some copyright
regulations appear to contemplate, but on the basis of pre-existing works, on the
basis of a common cultural ‘acquis’,32 this can lead to authors making small
modifications to pre-existing works and attempting to pass them off as their own
‘original’ work when in fact they are no more than an unconscious appropriation of
another person’s work.33 Although both conceptions of the requirement for original-
ity present difficulties, both national and European legislators have inclined towards
the subjective. This is justified, according to some scholars,34 by the fact that the law
protects, as an ‘original creation’, collections or compilations of the works of others
in which the author’s personal imprint is their ‘selection’ and ‘arrangement’ of the
materials (Art 2.5 Berne Convention). In this case, there is no obvious objective
novelty, but the work is nevertheless classified as a ‘new work’. Thus, it is argued,
what the legislator requires is a ‘minimum of creative effort’, which is represented in
the ‘selection’ of the content and in the ‘structuring, arranging or layout of it’ (‘the
author’s own intellectual creation’).35 These activities will carry the personal imprint

29 Yu, ‘The Machine Author: What Level of Copyright Protection Is Appropriate for Fully Independent Computer-Generated Works?’ (2017) 165 U Pa L Rev 1241; Yanisky-Ravid and Velez-Hernandez, ‘Copyrightability of Artworks Produced by Creative Robots and the Concept of Originality: The Formality-Objective Model’, available at SSRN: <https://ssrn.com/abstract=2943778>. Date of access: April 2020.
30 <www.wipo.int/treaties/es/text.jsp?file_id=283698>. Date of access: April 2020. Referred to as the ‘Berne Convention’ from now on.
31 Perry and Margoni, ‘From Music Tracks to Google Maps: Who Owns Computer-Generated Works?’ Paper 27, Law Publications (2010) <http://ir.lib.uwo.ca/lawpub/27>. Date of access: April 2020; Yanisky-Ravid and Velez-Hernandez (n 29).
32 Rahmantian, Copyright and Creativity. The Making of Property Rights in Creative Works (Edward Elgar 2011).
33 Marco, ‘La formación del concepto de derecho de autor y la originalidad de su objeto’ in Macías and Hernández (eds), El derecho de autor y las nuevas tecnologías. Reflexiones sobre la reciente reforma de la Ley de Propiedad Intelectual (La Ley 2008); Rahmantian (n 32).
34 Marco (n 33); Yanisky-Ravid and Velez-Hernandez (n 29).
35 Margoni, ‘The Harmonisation of EU Copyright Law: The Originality Standard’, available at SSRN: <https://ssrn.com/abstract=2802327>. Date of access: April 2020.

of their author and, as a result, attract legal protection. This minimum effort must be
beyond the ordinary, the routine or the obvious. Thus, the chronological or alpha-
betical ordering or the putting together of other people’s works without any coher-
ence would not give rise to a ‘new work’ and, consequently, would lack protection.
In this sense, the salient factor determining whether the originality requirement is
met is the creation process, and its result is less important. Indeed, compared to the
‘classic’ (traditional) model still present in copyright legislation in Europe and the
United States, in which the author creates from nothing, from their own inspiration
and alone,36 progress in artificial intelligence, the new technologies and the Internet
provide a much more dynamic model in which the author can hold a dialogue with
the public about their work, interacting with them and with their colleagues. The model of authorship forged on the Internet (the ‘network of networks’) therefore puts the accent not so much on the author as on the process of creating the work. Technology allows the
work to be in permanent evolution: the creative process does not end, but is always
actively improving, transforming or perfecting the work.37
In the classic approach described, the work resulting from an algorithm with
learning capacity that can evolve and generate original works unimaginable to the
human being who wrote the algorithm could not be considered a work protectable
by copyright. The ‘intelligent’ imprint of the algorithm is not comparable with the
creative effort of a physical person, however minimal this may be. However, if the
emphasis is placed on the creative process itself rather than on the result, it can be
seen that in certain types of algorithms, above all the genetic ones, the process of
creating the work is similar to the creation process that only a few human creators
can carry out. This is the case with transformational creativity, a fundamental
element in evolutionary art. In our opinion, similar considerations could apply with
regard to exploratory creativity. It is these two types of human creativity that are the
simplest for artificial agents to imitate, in so far as it is possible to emulate the
functioning of the human brain when working with a scheme of predefined rules.
On the other hand, it is most difficult to emulate the working of the brain in natural
creativity because of the sheer quantity of nuances, ambiguities, generalisations and
non-professional tacit knowledge that are involved. This natural creativity is within
the reach of any physical person who, with a minimum of creative effort, can
produce a work deserving legal protection under intellectual property legislation.
When the work is created by an algorithm,38 the creation process is very similar to

36 Grimmelmann, ‘Copyright for Literate Robots’ (2016) 101 Iowa L Rev 657: ‘Copyright’s ideal of romantic readership involves humans writing for other humans . . . Copyright ignores robots. . .’.
37 For a proposal for a new model of copyright based on new technologies and the Internet, see Navas, ‘Dominio público, diseminación online de las obras del ingenio y cesiones “creative commons” (Necesidad de un nuevo modelo de propiedad intelectual)’ (2011‒2012) 32 Actas de Derecho Industrial 239‒262.
38 The ideas or principles involved in the algorithm, the computational logic and the programming language are not protected by copyright. Only the expression of the computer program is

the one the human brain would use in the case of transformational and exploratory
creativity, which might argue for copyright protection for a result produced by an
algorithm. The personal imprint of the author as a physical person is emulated
almost perfectly by the algorithm or is even superior to what they might achieve.39
On the other hand, works in which the process of creation is based on natural
creativity, a field in which artificial intelligence is still far from emulating the human
brain, would remain outside the copyright protection regime.
Therefore, a result of any of the three types of creativity described above will be protectable by copyright if it is made by a physical person and there is a minimum of creative effort, while a work that is the product of the ‘imagination’ of an algorithm will only be protected by copyright if its creative process replicates almost identically the creative process of a human. This can occur more often in trans-
formational creativity and evolutionary art than in exploratory and merely generative
art and, to a much lesser extent, where the creativity can be classified as ‘natural’, in
purely combinatory processes. In these cases, the algorithms faithfully follow instruc-
tions, having very little, if any, learning capacity, and acting mechanically without
introducing changes. Where the work created by the algorithm can be protected,
the originality must have a component of novelty that is not required for works of
human creativity. In fact, the issue of whether or not works created by algorithms
have legal protection brings into question whether the minimum creative effort
criterion for the protection of works of human intellect should be revised, the
threshold raised and a creative height required that seems to have disappeared
(the objective approach).40 As part of this creative height, the element of novelty
must still be taken into consideration. Machines can certainly contribute to
improved self-observation and self-knowledge for human beings, allowing them to
see the intellectual potential that is all too frequently wasted.
A challenging question must therefore be answered. Under the current copyright
model, the term ‘work’ can only apply to the work of a physical person, not to that of
a machine or an animal, even if they are ‘creative’, so ‘work’ may not be appropriate
for objects created by an algorithm. Other terms, such as ‘result’, could form the basis of an independent concept in intellectual property legislation, requiring its own definition and differentiation.

protected, as recalled by Recital 11 and Art 1.2 of Directive 2009/24/EC of the European Parliament and Council, 23 April 2009, on the legal protection of computer programs (codified version), OJ L 111, 5.5.2009, 16‒22.
39 In fact, the popularisation of culture and art has, through the use of technology and publication on the Internet, reached levels that are almost unimaginable, with mere popular occurrences being considered as brilliant ideas and as works that make artificial agents with creative capacity appear much more intelligent than perhaps they are and, above all, appear more (even much more) intelligent than many humans. At least the cognitive biases of people will not appear here (for more on this perspective, see Navas, ‘Creation and Witticism in the User-Generated Online Digital Content’ (2015‒2016) 36 Actas de Derecho Industrial 403‒415).
40 Yanisky-Ravid and Velez-Hernandez (n 29).

8.2.2 Authorship: Ownership and Exercise of Rights


If, as we have seen, copyright is based on the authorship of a physical person,41 a
creation arising from the spontaneity of an animal,42 a machine or an algorithm
remains outside the field of application of intellectual property law, becoming
material belonging in the public domain (Art 18 Berne Convention). However, if
the work is created with the ‘help’ of a computer program it will be protected by
legislation insofar as the human presence is not eliminated.43
Although an author/physical person is the basis, copyright itself recognises that, in
certain cases, it could be ‘presumed’ that an author is a legal person. In Spanish,
French and Italian law, this occurs with so-called joint works44 whose authorship
can be attributed to a physical or ‘legal’ person or, in the case of the writing of
computer programs, where the author may be a ‘group of individuals’ or even an
organisation;45 if the authoring is carried out within the framework of an employ-
ment relationship, the holder of the rights in the program is the company.46 In the
United States there is the ‘work made for hire’ doctrine (§ 201(b) US Copyright

41 As an example, in Spain, Art 5.1 Ley de propiedad intelectual (BOE 22 April 1996) states that the author may only be a ‘natural person’; § 2(2) of the Urheberrechtsgesetz in Germany <www.gesetze-im-internet.de/urhg/inhalts_bersicht.html> (Date of access: April 2020) considers that only works that consist of ‘persönliche geistige Schöpfungen’ can be considered objects of protection; Art L 111-1 of the French Code de la propriété intellectuelle <www.legifrance.gouv.fr/affichCode.do?cidTexte=LEGITEXT000006069414> (Date of access: April 2020) alludes to ‘ouvrages de l’esprit’, which implies that these are created by man. In the same sense, see the wording of s 9 UK Copyright, Designs and Patents Act (1988) <www.legislation.gov.uk/ukpga/1988/2> (Date of access: April 2020). Likewise, see s 2(1) Irish Copyright and Related Rights Act (2000) <www.irishstatutebook.ie/eli/2000/act/28/enacted/en/html>. Date of access: April 2020.
42 Neuberger, ‘Computer Ownership Is not Monkey Business: Wikimedia and Slater Fight over Selfie Photographs’ (2014) 20(5) IP Litigator 33; Ricketson, ‘The Need for Human Authorship – Australian Developments: Telstra Corp Ltd v Phone Directories Co Pty Ltd (Case Comment)’ (2012) 34(1) EIPR 54: ‘the need for author to be human is a longstanding assumption’; McCutcheon, ‘Curing the Authorless Void: Protecting Computer-generated Works Following ICETV and Phone Directories’ (2013) 37 Melbourne University Law Review 46; Ramalho (n 15).
43 Yanisky-Ravid and Velez-Hernandez (n 29); Hertzmann, ‘Can Computers Create Art?’, available at: arXiv:1801.04486v6 [cs.AI], 8 May 2018. Date of access: April 2020.
44 Art L 113-2 Code de la propriété intellectuelle; Art 8 Ley de propiedad intelectual; Art 7 Legge di protezione del diritto d’autore e di altri diritti connessi al suo esercizio 23 April 1941 <www.interlex.it/testi/l41_633.htm#6>. Date of access: April 2020; Art 19 Código de derechos de autor y derechos conexos in Portugal <www.wipo.int/wipolex/es/text.jsp?file_id=198457>. Date of access: April 2020.
45 Art 2.1 Directive 2009/24/EC of the European Parliament and Council, 23 April 2009, on the legal protection of computer programs. This is specifically admitted in the LPI for Spain, Art 97.
46 Art 2.3 Directive 2009/24/EC of the European Parliament and Council, 23 April 2009, on the legal protection of computer programs; Art L 113-10 Code de la propriété intellectuelle; § 69b Urheberrechtsgesetz (Germany).

Act)47 under which the employer of the person who carries out the work is
considered to be the author, which thus differentiates between the ‘author in fact’
and the ‘author in law’. The author in law is the owner of the rights to exploitation
and exercises them in relation to the work produced by the author in fact.48
The same situation occurs in the case of the production of audio-visual works, for
which rights are transmitted by the director of the work to its producer (Art 15.2
Berne Convention).49 Similarly, the rights to anonymous and pseudonymous works
may be exercised by a legal person (Art 15 Berne Convention),50 as can the so-called
right of dissemination as suggested by Ana Ramalho.51
Thus, whether by legal fiction or by presumption, copyright recognises exceptions
to the rule that only a physical person, and only an author in fact, can own the rights
to a work.
From this, we may start from the premise that only works created by an algorithm in a process that emulates the creative process of a human brain can be protected, which, according to computer scientists, applies especially in cases of exploratory and transformational creativity. In these cases, a legal fiction is established under which the ‘author in law’ is the individual or organisation that commissioned the algorithm in question, or that used an algorithm previously created for other purposes but that ended up producing the ‘original’ work. Such an organisation or individual will be the owner of both the moral rights and the rights of economic exploitation
(the same rights as would be held in any other case in which the author in fact was
an individual). The author in fact would be the ‘robot machine’.52
There are already legal systems – all of them within the common-law legal
tradition – that have admitted such an interpretation: the Copyright, Designs and Patents Act (1988) in the UK, section 9 paragraph 3; the New Zealand Copyright Act (1994), paragraphs 2 and 5;53 the Irish Copyright and Related Rights Act (2000), Part I, section 2 and Chapter 2, paragraph 21; and the South Africa Copyright
Act (1978), No 98.54
These laws define computer-generated works as works ‘generated by a computer
in circumstances such that there is no human author’, where ‘the person by
whom the arrangements necessary for the creation of the work are undertaken’ is

47 The text can be consulted at <www.copyright.gov/title17/title17.pdf>. Date of access: April 2020.
48 Lee, ‘Digital Originality’ (2012) 14(4) Vanderbilt J Ent and Tech Law 919.
49 Arts 88‒89 Ley de propiedad intelectual (Spain); § 89 Urheberrechtsgesetz (Germany).
50 Art L 113-6 Code de la propriété intellectuelle (France), art 6 Ley de propiedad intelectual (Spain); § 10 Urheberrechtsgesetz (Germany).
51 Ramalho (n 15).
52 Samuelson, ‘Allocating Ownership Rights in Computer-Generated Works’ (1985) 47 U Pitt L R 1185, 1224; Yu (n 29).
53 <http://legislation.govt.nz/act/public/1994/0143/105.0/DLM345634.html>. Date of access: April 2020.
54 <www.nlsa.ac.za/downloads/Copyright%20Act.pdf>. Date of access: April 2020.

considered to be the author owning the rights in the work that was created entirely
by the computer program.55 However, it is not clear that these provisions admit, without further ado, the protection of works created autonomously without human involvement. The expression ‘arrangements necessary for’ does not necessarily mean that such works are contemplated by the rule. The relationship between these ‘arrangements’ and the final result is not easily understood, nor is it clear whether these arrangements must be made by a human or whether it is sufficient that they are made by an expert system. That is, it is not clear whether there must be a person guiding the ‘arrangements’ in the creative process; if there must, this would not match the definition given of ‘computer-generated works’.56
If it is not admitted that the author in fact is an expert system, given the premise
on which authors’ rights are based, as mentioned earlier, there is always the
possibility that the result of the ‘spontaneity’ of an algorithm without any human
presence passes to the public domain.57 Yet another possibility could be some right
sui generis to ensure compensation for anyone who invests human and economic
resources in creating an expert system or intelligent agent, so that a user who
acquires the system to create a work must pay.58 In fact, there could be a regulation
system similar to that for databases,59 independently of whether there are authors’
rights in the result of the creativity of the algorithm or any of its parts.
The payment of this compensation for the economic and human investment
made could be carried out electronically using automated systems, to avoid discour-
aging investment in creating, in innovating or in technological progress in general.60

8.3 conclusion: challenges for copyright


Computing is changing our reality, in both social and legal ways, as we increasingly
automate physical and intellectual tasks. Among the latter, creation may occupy an
outstanding place, which suggests a challenge to the current copyright model.
For computer-generated works to be recognised at the European level as protectable ‘works’ will require significant modifications to national legislation through the
55 Lee (n 48); Schafer, Komuves, Niebla, and Diver, ‘A Fourth Law of Robotics? Copyright and the Law and Ethics of Machine Co-production’ (2015) 23 Artif Intell Law 217‒240.
56 Lambert, ‘Computer Generated Works and Copyright: Selfies, Traps, Robots, AI and Machine Learning’ (2017) 39(1) EIPR 39. McCutcheon (n 42) criticises the expression ‘arrangements’ in these standards.
57 Proposal by Perry and Margoni for legislation on authors’ rights in Canada (‘From music tracks to Google maps: Who owns computer-generated works?’ Paper 27, Law Publications (2010). <http://ir.lib.uwo.ca/lawpub/27>. Date of access: April 2020).
58 Ramalho (n 15).
59 Directive 96/9/EC of the European Parliament and Council, 11 March 1996, on the legal protection of databases (OJ L 77, 27.3.1996, 20–28). McCutcheon (n 42) argues against this solution and in favour of recognising, in Australian law, work created by an algorithm as original and attributing the author’s rights to an individual or organisation.
60 This risk for authors’ rights is described by Samuelson (n 52).

incorporation of a directive that will regulate those rights.61 If such a directive is considered, a number of issues will have to be decided: whether a degree of originality superior to that required of human works must be demanded or, as an additional requirement, ‘novelty’; who is the author ‘in law’ and/or the owner of
the rights to exploit the work thus created; the situation of co-authorship when one
of the co-authors is an expert system or when the result is the product of the
interaction of various algorithms of diverse origin and ownership; the duration of
the copyright period (70, 50 or 25 years, or the term in force at the time); and
whether a right is also (or only) attributed sui generis to the subject (individual or
organisation) who invested human and economic resources in developing the
algorithm that generated the ‘original work’.62 Other possible alternatives that may
arise are i) that the result of the algorithm is not protectable by the exercise of
authors’ rights so that it passes into the public domain and can be used by anyone,
which could be regarded as a legal solution that discourages investment in research,
technology and innovation, or ii) that a law on authors’ rights is drafted independ-
ently that avoids the use of expressions referring to human creation and alludes not
to ‘work’ but to ‘result’ or ‘material’, and not to ‘creation’ but to ‘production’, and
so on.
The recognition of works created by algorithms that possess the necessary origin-
ality to be protected certainly represents a serious challenge to the model of ‘author’
and of ‘protected work’ that we have inherited. Because of this, the proposal made
here will require intense debate in academic, commercial and other forums.
The European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics63 admits the possibility of elaborating criteria for an ‘own intellectual creation’ applicable to copyrightable works produced by computers or robots. The European Parliament also suggested a specific legal status for robots, conferring on them an ‘electronic personality’. This personality, created by law for the purposes of liability rules, may well apply to the field of intellectual property
rights.64

61 The directive is the legislative technique that the EU legislator has used in harmonising the copyright rules to which we refer.
62 From the day on which the legal personality of intelligent robots is admitted (Chopra and White, ‘Artificial agents – Personhood in law and philosophy’, <www.sci.brooklyn.cuny.edu/~schopra/agentlawsub.pdf> (Date of access: April 2020); Wettig and Zehendner, ‘A Legal Analysis of Human and Electronic Agents’ (2004) 12 Artif Intell Law 111‒135), there will be no legal or theoretical problem in attributing to them the condition of authors, not only ‘in fact’ but also ‘in law’, exercising the rights through other legally designated subjects who may be individuals or organisations (the person who commissioned the program and/or invested economic resources in its preparation or even an unconnected third party).
63 Follow up to the EU Parliament Resolution of 16 February 2017 on Civil Law Rules on Robotics, 2015/2103 INL.
64 Ramalho (n 15).

It must not be forgotten that technology and the binary code entered the world of
authors some time ago, and their presence is emphasised further in the regulation of
digital rights management65 and technological protection measures,66 which have had to become ‘intelligent’ rights and measures, incorporating ideas and legal concepts from law and computer science and, especially, from artificial intelligence. We can thus
allude, at least among legal scholars and the creators of algorithms, to ‘computa-
tional copyright law’.67 Perhaps it is time, as proposed by Gervais, to think of a new
Berne Convention.68
For now, a work made by an algorithm, the ‘Portrait of Edmond de Belamy’, has for the first time been auctioned by Christie’s in New York (23‒25 October 2018).69

65 These could consist of algorithms that autonomously determine whether or not what is used is legal, as well as using computational logic to represent the standards relating to copyright.
66 The leading technology at the time ‒ the blockchain ‒ must be taken into account (Navas, ‘User-Generated Online Digital Content as a Test for the EU Legislation on Contracts for the Supply of Digital Content’ in Schulze, Staudemeyer, and Lohsse (eds), Contracts for the Supply of Digital Content: Regulatory Challenges and Gaps (Nomos Verlag 2017) 229‒255).
67 Schafer, Komuves, Niebla, and Diver (n 55).
68 Gervais, (Re)structuring Copyright. A Comprehensive Path to International Copyright Reform (Edward Elgar Publishing 2017).
69 <www.christies.com/features/A-collaboration-between-two-artists-one-human-one-a-machine-9332-1.aspx>. Date of access: April 2020.

9

“Wake Neutrality” of Artificial Intelligence Devices

Brian Subirana, Renwick Bivings, and Sanjay Sarma

introduction
This chapter introduces the notion of “wake neutrality” of artificial intelligence
devices and reviews its implications for wake-word approaches in open conversational
commerce (OCC) devices such as Amazon’s Alexa, Google Home and Apple’s Siri.
Examples illustrate how neutrality requirements such as explainability, auditability,
quality, configurability, institutionalization, and non-discrimination may impact the
various layers of a complete artificial intelligence architecture stack. The legal
programming implications of these requirements for algorithmic law enforcement
are also analysed. The chapter concludes with a discussion of the possible role of
standards bodies in setting a neutral, secure and open legal programming voice name
system (VNS) for human-to-AI interactions to include an “emotional firewall.”
I don’t need a girlfriend. My conversational device gives me everything I need and more.
(MIT student, summer 2017, two weeks after the first conversations with Amazon’s Alexa)

9.1 wake neutrality and artificial intelligence


Suppose you want to order an ice cream cake from Toscanini’s Gelateria for pick-up
in 30 minutes on your way home from work. You can dial the phone number with
any phone or check their web page with any browser, but if you use your smart speaker or car audio controls, things are a bit more cumbersome, since each conversational commerce hardware manufacturer has its own way of getting you started and there is no central and standard1 repository of “wake words.” In some cases, it

1 By “standard” we mean that from the users’ point of view the way to engage (the dial pad in the case of a phone call) is the same and is unrelated to the infrastructure choices made by the different parties involved (make of phone, network provider). On the web it is also standard since the different browsers work the same way and, again, the functionality is mostly unrelated to the type of computer you are using or the ISP provider you have.

may even be impossible. For example, as of mid-2019 Siri still wouldn’t “talk” to Spotify. You asked it, “Play Bruce Springsteen by Spotify” and it politely responded, “I can only talk to Apple Music” ‒ where you don’t have an account. Not only is there a lack of interoperability but, even if your request is routed, the service is inconsistent
across devices. For example, Google Home and Amazon don’t understand the same
flavor of English and won’t take you to the same service even if you ask for the
same thing.
To distinguish these two behaviors, making explicit the difference between the phone network and the AI wake examples above, we introduce the notion of a “Wake Neutrality Market” in the following definition:

Wake Neutrality Market

We say that a market has “Wake Neutrality” if there are standard ways to activate
services that don’t favor a particular supplier. This includes:
1. Product Wake Neutrality: The same products can be consumed independently of
the market operator chosen.
2. Naming Wake Neutrality: Operator switching costs are not a function of the
number of products consumed. In particular, products have the same names
regardless of the market operator chosen.
3. Intelligence Wake Neutrality: Operators don’t use intelligence derived from wake
requests to give an unfair advantage to a particular product supplier.
4. Net Wake Neutrality: Market operators cannot lower the quality of service of a
given supplier to favor another one.

9.1.1 Product and Name Wake Neutrality of Smart Speakers


In the case of smart speakers’ skills, the market has neither of the first two properties
of Wake Neutrality, because the skills available in market operators such as Google
Home or Alexa are different, and, most importantly, because they are named
differently even for similarly behaved skills. In contrast, the calling Phone Network
has both properties because all destinations can be reached using the same numbers.
Switching phones has some cost but only to learn basic functionalities like how to
“Dial” a number. Once dialing has been learned, phone device switching costs are
independent of the numbers dialed because they remain unchanged.
Telecommunication calling has the first two properties above because Phone
numbers follow the North American Numbering Plan (NANP) so that when
switching the brand of your mobile phone, there is only a small fixed cost to learn
the calling app of the new phone independently of how many calls are made. In contrast, the Voice Skills market is not yet a Wake Neutrality market because changing smart-speaker supplier requires learning an entirely new language and even different names and functionality for equivalent skills.
To prevent market dominance and ensure Wake Neutrality, regulation can help. For example, EU regulators announced an antitrust investigation into Apple in connection with the music and Spotify example mentioned above.2 Apple’s response was to “comply”, enabling Spotify on Siri in September 2019. However, iOS 13, released that month, introduced “voice control,” a proprietary way to interact with some features of Apple devices using voice. This means Apple can infer, among other things, what you listen to on any music platform and, more broadly, your mood, based on the tone of your voice as you switch applications or respond to specific messages.3 EU regulators’ initiatives may have made the market more product wake neutral at the expense of intelligence wake neutrality.

9.1.2 Intelligence Wake Neutrality of Smart Speakers


The smart skills market also has the potential of not having intelligence wake neutrality, because AI interactions during wake provide an extraordinary amount of information that can give an advantage to the market operator’s own skill-development efforts. Recorded voice-based interactions with conversational devices can provide a
wealth of data useful for customizing and personalizing various facets of daily life.4
However, conversational devices such as the Amazon Echo, the Microsoft Cortana,
Samsung’s Bixby, the Google Home and the Apple HomePod also open the door to
unprecedented levels of personal information acquisition being used by artificial
intelligence agents empowered with unexplainable deep-learning algorithms. At
least a subsection of this data might disclose an individual’s mood, personality,
gender, race or other information5 that could open the door to non-neutral, or in
some cases possibly discriminatory responses by the conversational Internet.6 Today
we can track people’s locations, vital signs, visual appearance in public spaces,
digital transactions, page views, emails and social media account activity. This
is just the beginning. Recent research conducted at MIT suggests we will soon
be able to see through walls using wifi,7 extract sound from potato chip bags using

2 Toplensky, “Brussels poised to probe Apple over Spotify’s fees complaint. EU to launch formal competition inquiry as music streaming battle escalates” The Financial Times (5 May 2019).
3 Recordings are stored as stated in: <https://support.apple.com/en-us/HT210657>. Apple gives you an id which is different than your personal id within Apple so that advertising requests are not sent to you based on your voice profile.
4 Borden and Armstrong, “Tiny Sensors, Huge Consequences: Unregulated Inferences from Big Data Create Ethical and Legal Dilemmas for Businesses and Consumers” (2016) 12(3) SciTech Lawyer 28‒30.
5 Conrad and Branting, “Introduction to the Special Issue on Legal Text Analytics” (2018) 26(2) Artificial Intelligence and Law 99‒102 <https://doi.org/10.1007/s10506–018-9227-z>. Springer Netherlands.
6 Peppet, “Regulating the Internet of Things: First Steps Toward Managing Discrimination, Privacy, Security and Consent” (2014) 93(1) Texas Law Review 85–178.
7 Adib and Katabi, “See Through Walls with WiFi!” (2013) 43.4 ACM.

high-speed cameras,8 or even determine whether a person has been diligent about
car maintenance using a mobile phone’s built-in microphone.9 Recent research on
AI and medicine suggests that these devices may soon even be able to predict certain
medical conditions and even anticipate suicide attempts better than humans can.10
Should they be allowed to do so? Under what conditions? The unexplainability of
such algorithms prevents formal scrutiny by smart law-enforcement agents and poses
many as yet unresolved security11 and legal issues.12
We feel that the lack of standard interoperability in conversational commerce
may lower the adoption rate of simple voice interactions and eventually affect the well-being of the industry more broadly. Behind standardization choices, much is at stake in terms of how our future societies will evolve, as various government agencies have agreed.13

9.1.3 Wake Neutrality Legal Compliance: Open versus Closed Approaches


This chapter explores regulatory options to achieve Wake Neutrality in AI devices by
standardizing the initial steps of the human-to-machine interaction during wake,
focusing on smart speakers in the context of conversational commerce. Voice is
complicated to regulate because it is ambiguous, prone to errors, neither race nor
gender neutral, and because it reveals significant amounts of information14 about the
person through its tone, choice of words and semantic constructs.
A key concern for us is how artificial intelligence applications can balance the
benefits of the technology while enforcing human rights and certain basic laws
including those related to privacy, consumer protection, IPR and contracting. In the
long run things may get even more complicated as we develop models of the human
brain that can accurately reproduce a given person’s response to a situation by
digitally reproducing the activity of every single neuron in that person’s brain. This

8 Davis, Rubinstein, Wadhwa, Mysore, Durand, and Freeman, “The Visual Microphone: Passive Recovery of Sound from Video” (2014) 33(4) ACM Trans Graph.
9 Siegel, Bhattacharyya, Kumar, and Sarma, “Air Filter Particulate Loading Detection Using Smartphone Audio and Optimized Ensemble Classification” (2017) 66 Engineering Applications of Artificial Intelligence 104‒112.
10 Loh, “Medicine and the Rise of the Robots: A Qualitative Review of Recent Advances of Artificial Intelligence in Health” (2018) BMJ Leader.
11 Brundage, Avin, Clark, Toner, Eckersley, Garfinkel, and Anderson, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” (2018) arXiv preprint arXiv:1802.07228.
12 Stern, “Introduction: Artificial Intelligence, Technology, and the Law” (2018) 68(supplement 1) University of Toronto Law Journal 1‒11.
13 Cath, Wachter, Mittelstadt, Taddeo, and Floridi, “Artificial Intelligence and the ‘Good Society’: The US, EU, and UK Approach” (2018) 24(2) Science and Engineering Ethics 505‒528.
14 Peppet (n 6).

could, for example, mean AI modeling humans to the extent of pushing sales by
manipulating customer desires, and perhaps even changing personalities at the
software’s behest. In the short term, this is an unattainable goal, but we certainly
live in an era where the amount of personally identifiable information that can be
recorded is increasing, opening up unprecedented opportunities to design eco-
nomic markets and innovation policies.15,16
This chapter also examines how to algorithmically enforce wake neutrality in the behavior of these powerful new AI technologies, including avoiding bias toward certain groups of humans and types of behaviors17 and preventing the unintended emergence of isolated platforms that limit the potential of these technologies.18
Without legally enforceable neutrality rules we cannot ensure that AI devices do
not distort competition beyond what we would consider fair.19,20,21,22 From a legal
point of view, computers can no longer be seen as simple communication tools for
message transmission in commerce. Instead, they are powerful AI legal program-
ming23 agents with human-like personalities that operate in Internet of Things (IoT)
environments, engaging with humans using natural-language open conversational
commerce (OCC) and initiating transactions that generate agreements with third
parties through automated contracts.

9.1.3.1 Closed-Garden Solutions


This creates a legal conundrum that may be addressed in two ways. The first
approach, currently the dominant one, is a “closed legal garden” solution such as
the one championed by the first versions of conversational commerce devices
including Amazon Echo, Google Home, Microsoft Cortana or Apple HomePod.

15 Milgrom and Tadelis, “How Artificial Intelligence and Machine Learning Can Impact Market Design” (2018) National Bureau of Economic Research, No w24282.
16 Agrawal, Gans, and Goldfarb, “The Economics of Artificial Intelligence” McKinsey Quarterly, April 2018.
17 Fessler, “Amazon Alexa Is Now Feminist and Is Sorry If That Upsets You” Quartz at Work, 17 January 2018 <https://qz.com/work/1180607/amazons-alexa-is-now-a-feminist-and-shes-sorry-if-that-upsets-you>.
18 Smith, “Siri Can Finally Control Streaming Apps like Spotify in iOS 12,” 7 June 2018 <https://bgr.com/2018/06/07/ios-12-features-siri-shortcuts-streaming-apps-spotify>.
19 Khan, “Amazon’s Antitrust Paradox” (2016) 126 Yale Law Journal 710.
20 Frieden, “The Internet of Platforms and Two-Sided Markets: Legal and Regulatory Implications for Competition and Consumers” (October 2017). SSRN: <https://ssrn.com/abstract=3051766 or http://dx.doi.org/10.2139/ssrn.3051766>.
21 Hovenkamp, “Whatever Did Happen to the Antitrust Movement?” (24 August 2018) Notre Dame Law Review, forthcoming; U of Penn, Inst for Law & Econ Research Paper No 18-7. Available at SSRN: <https://ssrn.com/abstract=3097452 or http://dx.doi.org/10.2139/ssrn.3097452>.
22 Parsheera, Shah, and Bose, “Competition Issues in India’s Online Economy” No 17/194. 2017. National Institute of Public Finance and Policy, New Delhi, Working paper No 194.
23 Subirana and Bain, “Legal programming” (2006) 49.9 Communications of the ACM 57‒62.

By 2020, over 50 percent of Americans were using voice search once a day.24 The
growth of conversational commerce devices is unprecedented, doubling that of
mobile phones and expected to reach 50 percent of US homes by 2020. In this
approach, the conversational commerce agents are a simple channel to a traditional sandboxed webpage or mobile application, inheriting the legal framework of the channelled service. Therefore, these interactions are not truly open because they are mediated by a third party. Legal terms and conditions are established when the human user configures the system, and changes to the contract are made with this third-party sandboxed service (or “garden owner”).
This first closed-garden approach raises important issues in terms of law enforce-
ment since it is unclear how automated legal enforcement is to be performed. For
example, the service owner can extract undesirable personally identifiable infor-
mation (PII) from speech, including gender, race and mood. Serious legal hurdles
are also encountered when generic conversational devices are embedded in public
settings. For example, current proprietary devices, such as Amazon’s Echo, require
users to agree to relevant terms of use when downloading a “skill,” which is an app
that runs on top of the Alexa platform. What happens when such skills are embed-
ded in the cloud of a public, generic-use platform? More generally, how should we
deal with the fact that in a solely voice-based interaction with a conversational
device, there may be no point at which a user agrees to any terms whatsoever? It
seems likely that the current model of having users check a box or otherwise
physically agree to lengthy terms of service will lose applicability in a voice-based
environment. There is also the issue of both user and device authentication. What
does it mean to log in via voice in a public setting? Conversely, how can an
individual know that the device they are talking to is really what it claims to be?
While the effectiveness of biometric identification by voice will probably increase
rapidly in the coming years, users’ need to authenticate devices will still present
problems likely to fall within the purview of the law.
Finally, conversational devices also present unique security threat models that
pose new legal questions. What happens if a malicious actor uses a particular
individual’s recorded voice to authenticate themselves improperly? More subtly, a
malicious actor may intercept interactions between two machines regarding a
certain user’s sensitive information, which poses the question of exactly who is liable
when devices are in public spaces. Current voice recognition technologies also
utilize certain strategies in parsing verbal terms that are subject to change. A user
request might be parsed improperly, leading to unintended results and damages. In
a voice-only space, providing secondary authorization to certain requests could
prove burdensome, but the lack of such a process would clearly raise legal issues

24 Jeffs (2018), “OK Google, Siri, Alexa, Cortana; Can You Tell Me Some Stats on Voice Search?” Branded3 <www.branded3.com/blog/google-voice-search-stats-growth-trends> Accessed June 2019.

as well. Even if these issues are solved, closed-garden solutions fragment the market
and prevent a single, unified user experience.

9.1.3.2 Open Conversational Commerce Approaches


A second approach, and the one we will focus on in this chapter, is an open
conversational commerce AI/IoT environment where anyone can direct any device
to their desired AI handler by simply prompting the device with an appropriate and
standard25 “wake word” (or another comparable action including EEG recorded
thoughts and hand gestures), so that associated software agents can “wander around”
on their own as if they were delegated human butlers. This second approach is more
challenging since it includes public settings and requires industry collaboration – it
inherits all of the issues of the first approach, while raising several additional ones in
relation to the legality and law enforcement of such transactions.26
In closed-garden solutions, automated enforcement is ensured by a given legal
entity, the service operator, responsible for checking the end-to-end operation of a
given service. This is also the case in several multilateral relationships (e.g., Google
and associated major travel websites such as Booking.com). One could argue that
even systems that appear completely open are, when it comes to legal compliance,
closed-garden solutions. For example, in bitcoin and Ethereum,27 the contracts are
exposed openly, and law enforcement is done via automated rules predefined by the
parties within a very well-established set of possible rules. However, many block-
chains have central software development teams that decide how the software will
achieve legal compliance. What is worrying about many existing cryptocurrency
solutions is that, in the event of major system breaches, law enforcement by the
courts may be impossible given that control of the system is in the hands of a few
people who may remain completely anonymous or may be in a different jurisdiction
than the one of interest. In some cases, there may be no known human behind such
instances, or even no human at all: this could be the case if a computer virus is
allowed to earn cryptocurrencies by modifying music and selling it in the open
market as may be possible with initiatives such as the Open Music Initiative.28

25 Subirana, Taylor, Cantwell, Jacobs, Hunt, Warner, Stine, Graman, Stine, and Sarma, “Time to Talk: The Future of Brands Is Conversational,” MIT Auto-ID Laboratory Memo, January 2018. Available at <www.researchgate.net/publication/328733947_Time_to_talk_The_Future_for_Brands_is_Conversational>. 10.13140/RG.2.2.10490.75208.
26 Helbing, “Societal, Economic, Ethical and Legal Challenges of the Digital Revolution: From Big Data to Deep Learning, Artificial Intelligence, and Manipulative Technologies” Towards Digital Enlightenment (Springer 2018) 47‒72.
27 Crosby et al., “Blockchain Technology: Beyond Bitcoin” (2016) 2 Applied Innovation 6‒10.
28 De León and Avi, “The Impact of Digital Innovation and Blockchain on the Music Industry”, Inter-American Development Bank (2017).

9.1.4 A Voice Name System for Wake Neutrality


The research described in this chapter is part of an effort to establish a voice name system
(VNS) standard for conversational commerce, paving the way for a more general AI
standard to include other sensory inputs such as vision, Internet of Things devices,
social media, mechanical activation and neural activation. Current research into
possible specifications in this area has started with an open conversational commerce
standard that is based on the VNS.29 A first version of the VNS system was demonstrated
at the Hello World inaugural session of the newly established Stephen A. Schwarzman
College of Computing at MIT.30 The VNS architecture routes smart-speaker voice
requests to third-party services in a neutral way (unlike Siri, Echo or Google Home). It
behaves similarly to the domain name system (DNS) on the web, the North American
Numbering Plan (NANP) on the phone, and the GS1 barcode standard. Key issues our
architecture addresses include: the collection of voice samples to design “wake
engines”; the creation of an emotional firewall to prevent leaking PII; the extension
to other interaction modes such as EEGs or Vision; and prevention of phishing attacks.
We demonstrated progress towards VNS on Internet browsers and Android, showing
that any device with a browser and a microphone can benefit from AI interactions.
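By way of illustration only, the following minimal Python sketch shows one way a neutral, DNS-like wake lookup of the kind just described could be organized: a shared registry maps standard wake words to handler endpoints, so that any microphone-equipped device resolves the same wake word to the same supplier-chosen service. The names used here (WakeRegistry, WakeRecord, the "toscanini" wake word and the example.com endpoint) are our own hypothetical assumptions and are not part of the MIT VNS specification.

    # Hypothetical sketch of a DNS-like wake-word lookup. The class names,
    # wake words and endpoints are illustrative only and do not reflect the
    # actual MIT VNS design.
    from dataclasses import dataclass
    from typing import Dict, Optional

    @dataclass
    class WakeRecord:
        wake_word: str   # standard name spoken by the user, e.g. "toscanini"
        endpoint: str    # URL of the third-party handler chosen by the supplier
        mode: str        # privacy mode negotiated at wake time (see Section 9.2.1.1)

    class WakeRegistry:
        """A neutral registry: every operator resolves the same wake word to the
        same endpoint, regardless of the device brand performing the lookup."""

        def __init__(self) -> None:
            self._records: Dict[str, WakeRecord] = {}

        def register(self, record: WakeRecord) -> None:
            self._records[record.wake_word.lower()] = record

        def resolve(self, spoken_wake_word: str) -> Optional[WakeRecord]:
            # Resolution depends only on the wake word, never on the device
            # manufacturer, which is what product and naming neutrality require.
            return self._records.get(spoken_wake_word.strip().lower())

    # Usage: any device with a microphone performs the same lookup.
    registry = WakeRegistry()
    registry.register(WakeRecord("toscanini", "https://example.com/order", "speech_incognito"))
    record = registry.resolve("Toscanini")
    if record is not None:
        print(f"Routing request to {record.endpoint} in {record.mode} mode")

In such a scheme, the registry itself would have to be run by a neutral body of the kind discussed in Section 9.2.1.2, much as ICANN coordinates the DNS or the NANP administrator coordinates telephone numbering.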
The rest of this chapter is organized as follows. In Section 9.2 we establish six
requirements to achieve wake neutrality: configurability, institutionalization, non-
discrimination, explainability, auditability, and error management. They are natur-
ally grouped into those that help achieve wake neutrality, the first three, and those
that enforce it, the latter three. Section 9.3 reviews previous research on Net
Neutrality and how it relates to Wake Neutrality of AI Devices. Section 9.4 analyzes
the legal programming implications of the desired requirements, and Section 9.5
looks at the relationship between contracting and wake neutrality. In Section 9.6 we
provide examples of how the various layers of an AI architecture stack may be related
to wake neutrality. The chapter concludes in Section 9.7 with a discussion on future
research, including the possible role of standard bodies in setting legal programming
policies for human-to-AI interactions.

9.2 six requirements for wake neutrality of ai devices in occ
In this section, we establish a set of requirements for an open conversational
commerce standard architecture that enables wake neutrality. The key question in
29 Subirana et al., “The MIT Voice Name System (VNS)”, MIT Auto-ID Laboratory Memo (2019).
30 Gaidis, Subirana, Sarma, Cantwel, Stine, OliveiraSoens, Tarragó, Hueto, Rajasekaran and Armengol, “A Secure Voice Name (VNS) System to Wake Smart Devices”. June 2019. DOI: 10.13140/RG.2.2.12884.65921. Conference: MIT Stephen A. Schwarzman College of Computing Research and Computing Launch Postdoc and Student Poster Session (Hello World, Hello MIT).

table 9.1 Six legal requirements to achieve and enforce wake neutrality

               Achieve wake neutrality      Enforce wake neutrality
   What/How    Configurability              Explainability
   Who         Institutionalization         Auditability
   Why         Non-discrimination           Error compliance

this section is whether AI devices, including conversational commerce speakers, can have wake neutrality in a similar way to phone numbers, email addresses or URLs, and if so, how? Returning to the example in our introduction, when you pick up the phone you can dial Toscanini’s Gelateria from anywhere in the world. You could also
send it an email or check its website. Email, phone numbers, and URLs are gender,
power and wealth neutral, and more importantly, have no lock-in effect if transfer-
ability is ensured (such as the number portability that is now standard in most parts
of the world). More controversial is whether the quality of the service should be
equal in all circumstances – one could even argue it should be universal and
recognized as a fundamental human right. In the next section, we will turn to one
aspect that has received considerable policy attention over the years and that has
been termed “net neutrality,” which concerns whether the Internet should be
agnostic to the content it transports – for example, whether paid-subscription video
should have a bigger share of bandwidth than free, or even illegal content. We will
review the history of “net neutrality” since it can serve as an illustration for what
needs to be done to achieve net or general wake neutrality.
We divide the legal requirements for wake neutrality into those concerned with achieving it and those concerned with enforcing it. For each of these two categories, we identify the what/how, the who and the why. This yields six requirements as outlined in
in Table 9.1. The What is related to the technology used, the Who is related to the
market agents that participate and the Why has to do with the legal rights to be
valued and protected. Loosely speaking, the What is mostly related to Intelligence
Wake Neutrality, the Who to Name Wake Neutrality and the Why to Product Wake
Neutrality.
Let’s now review each of the six in turn, starting with the three related to
achieving Wake Neutrality.

9.2.1 Requirements to Achieve Wake Neutrality

9.2.1.1 Configurability
Open conversational commerce, in the foreseeable future, will depend on imperfect
speech-to-text and speech-to-personality inferences which make neutrality more
intricate than is the case in simple web browsing or phone dialing. While speech-

recognition algorithms are becoming increasingly sophisticated,31 significant accuracy issues still exist, especially in noisy environments,32 meaning that there is still a
relatively high risk associated with the accuracy of speech detection devices. Existing
research has proposed potential legal solutions to the problem of securing privacy in
the age of mass data collection, distinguishing PII from other types of anonymized
and aggregated non-PII data. However, some of this research has assumed
that collected data is in fact anonymous, or that, even if there is a possibility of de-anonymization, the benefits currently outweigh the risks.33 Increasingly powerful
big data-based inferences34 could blur or altogether demolish any clear delineation
between what is and is not PII.35 Certain legal frameworks might be extendable to
meet the challenge of a PII-less world, but in general, laws in the USA and
elsewhere are noticeably behind the pace of technological change.36
Voice-based interactions provide a goldmine of data for machine-learning algo-
rithms to draw inferences on the personality, tastes and needs of an individual.37
This could lead some organizations to act on these inferences in ways that might not
be in the best interest of the individual, such as advertising addictive products to
individuals with user profiles deemed addiction-prone, or increasing health insurance premiums at the
first sign of dementia.38 As IoT adoption takes off, users will increasingly expect
device- and session-agnostic experiences, such as those currently enabled on the
Internet via cookies or through incognito or private browsing.39 Could a similar
concept be applied to conversational interactions with multiple devices?
Recorded voice data can also include information about the state of mind of
individuals. It has previously been observed that speech patterns and other acoustic
information can be highly relevant to ascertaining the mental and emotional states
of patients for clinical diagnostic purposes.40 What happens when devices are able to

31 Hinton et al., “Deep Neural Networks for Acoustic Modeling in Speech Recognition: The Shared Views of Four Research Groups” (2012) 29(6) IEEE Signal Processing Magazine 82‒97.
32 Shen, Hung, and Lee, “Robust Entropy-Based Endpoint Detection for Speech Recognition in Noisy Environments” (1998) 98 ICSLP paper 0232.
33 Asay, “Consumer Information Privacy and the Problem(s) of Third-Party Disclosures” (2013) 11(5) Northwestern Journal of Technology and Intellectual Property 358.
34 Hu, “Big Data Blacklisting” (2015) 67(5) Florida Law Review 1735‒1810.
35 Taddeo and Floridi, “Regulate Artificial Intelligence to Avert Cyber Arms Race” (2018) 556(7701) Nature 296‒298.
36 Paez and La Marca, “The Internet of Things: Emerging Legal Issues for Businesses” (2016) 43(1) Northern Kentucky Law Review 29‒72.
37 Webb, Pazzani, and Billsus, “Machine Learning for User Modeling” (2001) 11(1) User Modeling and User-Adapted Interaction 19‒29.
38 Witten et al., Data Mining: Practical Machine Learning Tools and Techniques (Morgan Kaufmann 2016).
39 Jansen et al., “Defining a Session on Web Search Engines” (2007) 58(6) Journal of the American Society for Information Science & Technology 862‒871. EBSCOhost, doi:10.1002/asi.20564.
40 Murray, Pouget, and Silva, “Reflections of Depression in Acoustic Measures of the Patient’s Speech” (2001) 66(1) Journal of Affective Disorders 59‒69.

similarly diagnose the mental states of users? Neutrality here will probably need to
include some mechanism for decoupling the emotive information contained in
speech from the content of the speech itself, which will present issues from a
semantic interpretation perspective. On the other hand, well-intentioned services
may use these powerful health inferences to provide valuable alerts to users and early
preventive treatment options that could result in great cost savings and significant
health improvements over time.
It is evident, therefore, that voice as a widespread medium for interactions with
devices connected to the Internet impacts neutrality in several dimensions such as
accuracy, personally identifiable information (PII), machine-learning effort, cross-
device identification, and semantic interpretation.
The above discussion implies that in order to effect neutrality, there must be a
way to configure the speech-to-text and speech-to-personality algorithms so that users
can decide which PII is shared and, most importantly, whether they want some form
of feedback to be able to interrupt the sending of data to the wrong service in case
there are inaccuracies in any of the conversions.
Thus, a requirement for wake neutrality is that of configurability. Open
systems must have a way to set options so that neutrality is tailored to various modes
based on the particular privacy preferences of the user and service. Some examples
could be:

• Speech incognito mode: The handling service only receives the translated text. No voice information is passed along.
• Native speech mode: The handling service receives the full speech via an encrypted point-to-point connection.
• Emotional mode: Speech is converted to text together with basic sentiment-analysis information.
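To make the configurability requirement more concrete, the following minimal Python sketch shows how a device-side policy might filter what leaves the device under each of these modes. The WakePolicy and Utterance structures, field names and mode labels are our own illustration and do not describe any existing product or standard.

from dataclasses import dataclass
from typing import Optional

# Hypothetical wake-neutrality configuration: which parts of an utterance
# may be forwarded to the handling service under each user-selected mode.
@dataclass
class WakePolicy:
    mode: str  # "incognito", "native" or "emotional"

@dataclass
class Utterance:
    text: str                  # speech-to-text output
    audio: bytes               # raw recording
    sentiment: Optional[str]   # output of a basic sentiment classifier

def filter_for_service(utterance: Utterance, policy: WakePolicy) -> dict:
    """Return only the fields the active policy allows to leave the device."""
    if policy.mode == "incognito":
        # Speech incognito mode: translated text only, no voice information.
        return {"text": utterance.text}
    if policy.mode == "native":
        # Native speech mode: full speech, to be sent over an encrypted channel.
        return {"text": utterance.text, "audio": utterance.audio}
    if policy.mode == "emotional":
        # Emotional mode: text plus basic sentiment information, no raw audio.
        return {"text": utterance.text, "sentiment": utterance.sentiment}
    raise ValueError(f"Unknown wake policy mode: {policy.mode}")

if __name__ == "__main__":
    u = Utterance(text="add milk to my shopping list", audio=b"...", sentiment="neutral")
    print(filter_for_service(u, WakePolicy(mode="incognito")))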
Although privacy and security are perhaps the most important things to configure,
there are many other aspects of voice devices that may benefit from some form of
configuration standardization. For example: how is sound level defined? Are there
reserved words to turn on IoT devices such as a light bulb? Are there industry-specific
commands like “shopping list” or “checkout”?

9.2.1.2 Institutionalization
Who should establish the configuration options to be implemented? For an
open approach to flourish, we feel an international standards body is needed to
facilitate setting the standards that ensure wake neutrality. There are many options,
including adoption by existing bodies such as GS1, the W3C or the IETF. Whatever
the choice, some organization must take charge of setting the standards to be
implemented. In the foreseeable future, AI may continue to progress, creating new
options and challenges for such an institution.


9.2.1.3 Non-discrimination
Perhaps the most important legal right to preserve is the development of a
non-discriminatory market for wake services. This means that the service offered is not biased in
any way and that no particular supplier or customer receives preferential treatment.
In addition, in the case of conversational commerce, provisions need to be made for
special cases, such as users who are mute, blind, or deaf.

9.2.2 Requirements to Enforce Wake Neutrality

9.2.2.1 Explainability and Auditability


Existing conversational commerce systems require users to first set up an account and
subsequently operate under very strict closed-garden rules so that the service operator
can establish who is legally responsible at any point in time. Since this would not be
possible in an open conversational solution, the first two requirements to ensure AI wake
neutrality must be explainability and auditability: to be able to identify who is respon-
sible for the software agent that handles a given conversation; and to be able to audit
whoever has made this decision. This type of explainability already happens on the web
because domain name services are associated with legal entities across the globe.
Auditability is standard practice in data protection legislation and is trivial in the case
of bandwidth net neutrality. As in many existing devices, explainability in OCC may
take the form of audio or visual feedback so that the user knows how the device is
responding to wake cues. Explainability of the rationale for AI decisions, beyond
contracting records and sufficient to ensure the removal of all biases, is still not possible,
since we do not yet understand how deep learning works and have no way of formally
verifying large code bases. We do not know how to get rid of these biases, but we do know
they exist.41

9.2.2.2 Error Compliance


All of the above legal requirements are mostly unrelated to the fact that speech
recognition is prone to errors, most notably false positives. The devices may “think”
we need their attention when we don’t. Worse still, the devices may send our voice
commands to the wrong handler because they hear a different wake word than
the one intended. To establish a legally secure playing field, some policies must be
established to manage error routing in an open environment. On the web, close
matches like Amzon and Amaz0n (zero instead of the letter o) are not allowed to

41
Hacker, “Teaching Fairness to Artificial Intelligence: Existing and Novel Strategies against
Algorithmic Discrimination under EU Law” (2018) 55 Common Market Law Review 1143‒1186.


be registered, to prevent malicious phishing practices. With voice, one could similarly
prevent close-sounding names from being registered. However, there is no clear way to set
phonetic boundaries, and some errors seem unavoidable, especially in noisy environ-
ments. Establishing such boundaries may prove increasingly difficult to manage as
AI algorithms develop personalized user voice profiles.
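As a rough illustration of how such phonetic boundaries might be policed, the following Python sketch rejects candidate wake words that fall within an edit-distance threshold of an existing registration. A real system would use an acoustic or phonetic similarity measure rather than spelling distance, and the registry contents and threshold shown here are arbitrary assumptions.

# Hypothetical wake-word registry check: reject names that are too close to an
# existing registration. Plain edit distance on lower-cased spellings is a crude
# stand-in for a real phonetic or acoustic similarity measure.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def can_register(candidate: str, registry: set, threshold: int = 2) -> bool:
    """Allow registration only if no existing wake word is within the threshold."""
    c = candidate.lower()
    return all(edit_distance(c, existing.lower()) > threshold for existing in registry)

if __name__ == "__main__":
    registered = {"amazon", "alexa"}
    print(can_register("amazin", registered))    # False: too close to "amazon"
    print(can_register("harmonia", registered))  # True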

9.3 net neutrality and wake neutrality


Net neutrality has received considerable attention for its implications in Internet-
related commerce.42 Here we propose the establishment of an automated law-
enforcement framework that implements an open AI canon determining how law
enforcement of AI wake neutrality should be ensured – in a way that is analogous to
net neutrality practices in basic telecommunications services as we will review next.
In these services, very simple bandwidth meters are designed to find breaches in the
basic requirement of net neutrality (bandwidth) and fines can be directed to the
small number of providers, making automated law enforcement very simple. In
conversational commerce, a requirement for net wake neutrality is that the quality
of the service is independent of the user and the software agent. This requirement is
simple to validate but equally important for ensuring neutrality on the Internet.
While the term “net neutrality,” as it was originally crafted, applied solely to
internet service providers (ISPs), its more modern usage encompasses a much
broader range of concepts. Next we explain the forces that led to the current state
of thinking, provide a brief history of the relevant legislation and suggest implica-
tions for a future trajectory for Net Wake Neutrality in OCC.
The Communications Act of 1934, which established the Federal Communi-
cations Commission (FCC),43 consists in its amended form of seven major sections,
the first of which, entitled “General Provisions,” reads:
For the purpose of regulating interstate and foreign commerce in communication
by wire and radio so as to make available, so far as possible, to all the people of the
United States a rapid, efficient, nationwide, and worldwide wire and radio commu-
nication service with adequate facilities at reasonable charges, for the purpose of the
national defense, and for the purpose of securing a more effective execution of this
policy by centralizing authority theretofore granted by law to several agencies and
by granting additional authority with respect to interstate and foreign commerce in
wire and radio communication, there is hereby created a commission to be known
as the “Federal Communications Commission,” which shall be constituted as
hereinafter provided, and which shall execute and enforce the provisions of this Act.
(The Communications Act of 1934)

42
Wu, “Network Neutrality, Broadband Discrimination” (2003) 2 Journal on Telecommunications
& High Technology Law 141.
43
Coase, “The Federal Communications Commission” (1959) 2 The Journal of Law & Econom-
ics 1–40.


The Communications Act would later empower the US government to regulate
new technologies, such as television, mobile phones, the Internet and conversa-
tional IoT devices.44 In addition, because the Act allowed for the regulation of
certain commercial corporations deemed common carriers, opponents regarded it
as government overreach.45 Despite periods of fierce debate, the Act was not
seriously changed until 62 years later, with the passing of the Telecommunications
Act of 1996.46 Although the term “common carrier” originally referred to organiza-
tions engaged in the transport of people or goods,47 it can have different connota-
tions in other contexts. In the USA, the term is generally used to refer to
telecommunications service providers, especially those that fall under Title II of
the Act, as well as public utility providers in certain cases. The FCC expanded the
term to include ISPs in June 2015,48 a major step forward in the enforcement of net
neutrality.
One of the main benefits of the creation of the FCC49 was that it codified a set
of explainable standards for common carriers and ISPs, and in doing so, created a set
of expectations by which the public could then judge specific companies and
products. However, while the Act helped promote the explainable component of
net neutrality, there was still work to be done on other fronts, especially that of
fairness.
The Civil Rights Act of 1964 was the culmination of the historic civil rights
movement50 and its many defining moments, such as Rosa Parks’ famous refusal
to give up her seat on a segregated bus.51 The Civil Rights Act, in effect, made it illegal
to discriminate based on race, color, religion, sex or national origin.52 Arguments in
favor of the Civil Rights Act largely centered around the idea that all citizens should
be afforded equal access to facilities open to the public, with President John
F Kennedy, a major proponent of the civil rights movement, calling in a speech
on 11 June 1963, for legislation “giving all Americans the right to be served in
facilities which are open to the public ‒ hotels, restaurants, theaters, retail stores,

44
See the Communications Act of 1934, 47 USC § 151 et seq. and Coase (n 43).
45
Nichols, “Redefining Common Carrier: The FCC’s Attempt at Deregulation by Redefinition”
(1987) 3 Duke Law Journal 501‒520.
46
Levi, “Not with a Bang but a Whimper: Broadcast License Renewal and the Telecommuni-
cations Act of 1996” (1996) 29 Connecticut Law Review 243.
47
Holmes, “Common Carriers and the Common Law” (1879) 13(4) American Law Review 609‒
631.
48
Gioia, “FCC Jurisdiction over ISPS in Protocol-Specific Bandwidth Throttling” (2009) 15(2)
Michigan Telecommunications and Technology Law Review 517‒542.
49
Brown, “Revisiting the Telecommunications Act of 1996” (2018) 51(1) PS: Political Science &
Politics 129‒132. doi:10.1017/S1049096517002001
50
Klarman, From Jim Crow to Civil Rights: The Supreme Court and the Struggle for Racial
Equality (Oxford University Press 2004).
51
Parks and Haskins, Rosa Parks: My Story (Dial Books 1992).
52
Civil Rights Act of 1964, Title VII, Equal Employment Opportunities (1964).


and similar establishments.”53 One of the key differences between the Act of
1964 and previous civil rights legislation was that after its passing, the Supreme
Court ruled in the landmark case of Heart of Atlanta Motel v United States that the
law applied not only to the public sector but also to the private sector, on the
grounds that Congress has the power to regulate commerce between the States.54
The passing of civil rights legislation put in place a set of standards by which to
judge future developments, allowing us to pass some judgment on how fair, or how
net neutral, subsequent developments in conversational IoT are. The idea that ISPs
should not be able to throttle speeds to certain users over others is broadly based on
the notion of fairness and equal access to common goods, as well as legal recourse in
cases in which such expectations are not met. Net neutrality is a way of positively
promoting the fundamentally democratic and decentralized nature of the Internet.
While the term “net neutrality” was first coined by Professor Tim Wu at the
beginning of the twenty-first century,55 many of the fundamental ideas associated with
it were already being debated in the 1800s, with some legal scholars asking whether
telegrams sent and received by two individuals in the same state, but routed through
another state, would be designated “interstate commerce” (as seen in the Civil Rights
Act of 1964, this designation can be crucial for federal regulation).56 More recent ideas
relating to antitrust and monopoly law57 are being developed as part of efforts to afford
consumers more access to ideas and creative works.58 There has been extensive debate
as to when, where, and how to apply net neutrality, and in certain cases this debate has
led to actual changes in FCC policy.59 In one prominent complaint filed with the
FCC against Comcast, the company was alleged to have been throttling its high-
speed Internet service for users of the file-sharing protocol BitTorrent.60
There is, however, still no truly agreed upon definition of net neutrality.61
Narrowly defined as it applies to Internet access, a working definition might

53
Berg, “Equal Employment Opportunity under the Civil Rights Act of 1964” (1964) 31 Brook
Law Rev 62.
54
McClain, “Involuntary Servitude, Public Accommodations Laws, and the Legacy of Heart of
Atlanta Motel, Inc v United States” (2011).
55
Wu, “Network Neutrality, Broadband Discrimination” (2003) 2 Journal on Telecommunications
& High Technology Law 141.
56
Harris, “Is a Telegram which Originates and Terminates at Points within the Same State but
which Passes in Transit Outside of that State an Interstate Transaction?” (1916‒1917) 4(1)
Virginia Law Review 35‒52.
57
Schwartz, “Antitrust and the FCC: The Problem of Network Dominance” (1959) 107(6)
University of Pennsylvania Law Review 753‒795.
58
Lessig, The Future of Ideas: The Fate of the Commons in a Connected World (Vintage 2002).
59
Browni, “Broadband Privacy within Network Neutrality: The FCC’s Application & Expansion
of the CPN Rules” (2017) 11(1) University of St Thomas Journal of Law and Public Policy
(Minnesota) 45‒62.
60
Reicher, “Redefining Net Neutrality after Comcast v FCC” (2011) 26(1) Berkeley Technology
Law Journal 733‒764.
61
Krämer, Wiewiorra, and Weinhardt, “Net Neutrality: A Progress Report” (2013) 37(9) Telecom-
munications Policy 794‒813.


look like this:

Narrowly Defined Net Neutrality

Internet access providers may not alter the service, whether by throttling speeds or blocking access altogether, based on the user, the content being viewed, or the owner of such content.

Next we outline reasonable boundaries for the spirit of net neutrality, which we analyze
in terms of its application to conversational commerce and IoT.62 Net Neutrality in
Spirit (NNiS) is a set of loosely defined conventions that expand upon narrowly defined
net neutrality via the concepts that underpin the legislation outlined above, namely the
Communications Act of 1934, the Telecommunications Act of 1996, common carrier
designations, and the Civil Rights Act of 1964. The real thrust of net neutrality is its
application in spirit.63 Examples of NNiS include so-called open Internet initiatives that
go beyond the original notion of net neutrality in an effort to promote open standards,
transparency, lack of censorship, and low barriers to entry.64 Many of their core
proponents regard these initiatives as an attempt to decentralize the power inherent in
technology and data, and as similar to open-source software, at least in their core mission.65

In general, NNiS might apply to any good or service that has become so ubiquitous
or necessary to daily life that access is viewed as nearly or wholly a common good.66
For example, Title IX legislation has sought to rectify gender discrimination at
federally funded schools,67 while the Americans with Disabilities Act was passed
to prohibit discrimination based on disability, requiring businesses and organizations
to provide accommodations enabling individuals to participate in regular employ-
ment and education.68 Similarly, many take the view that ISPs should not be
liable for the presence of illegal content online, although actual legal opinions have

62
Kim, “Securing the Internet of Things via Locally Centralized, Globally Distributed Authenti-
cation and Authorization,” PhD Thesis, University of California at Berkeley 2017.
63
Klein, “Data Caps: Creating Artificial Scarcity as a Way around Network Neutrality” (2014)
31(1) Santa Clara High Technology Law Journal 139‒162.
64
Meinrath and Pickard, “Transcending Net Neutrality: Ten Steps toward an Open Internet”
(2008) 12(6) Education Week Commentary 1‒12.
65
Thierer, “Are ‘Dumb Pipe’ Mandates Smart Public Policy? Vertical Integration, Net Neutral-
ity, and the Network Layers Model” in Lenard and May (eds), Net Neutrality or Net Neutering:
Should Broadband Internet Services Be Regulated (Springer US 2006) 73‒108.
66
Hartmann, “A Right to Free Internet: On Internet Access and Social Rights” (2013) 13(2)
Journal of High Technology Law 297‒429.
67
Heckman, “Women & (and) Athletics: A Twenty Year Retrospective on Title IX” (1992) 9(1)
University of Miami Entertainment and Sports Law Review 1‒64.
68
Acemoglu and Angrist, “Consequences of Employment Protection? The Case of the Ameri-
cans with Disabilities Act” (2001) 109(5) Journal of Political Economy 915‒957.


differed depending on the region.69 In the recent case Packingham v the State of
North Carolina, the Supreme Court ruled in favor of the right of a convicted sex
offender to use Facebook to make innocuous posts, even though the court also
upheld North Carolina’s right to prohibit registered sex offenders from using social
media to make any attempt to contact minors.70 Here, the case hinged on whether
prohibiting access to ubiquitous social media like Facebook infringed on funda-
mental rights to free speech and access to public spaces, and at least in this case, the
Supreme Court took this view.
In the narrow definition introduced above, net neutrality may not strictly apply
to Wake Neutrality in conversational commerce and AI in general. Even if it were
to be applied, the architecture may be set up in such a way that businesses wish to
incentivize more consumer use, choosing to reward data creators instead of
seeking to throttle heavy data users.71 Broadly defined and in spirit, however,
net neutrality could present a set of expected norms within this space, the
breaching of which might trespass on what is considered fair and right in the
public mind. Users of a unified IoT ecosystem may, for instance, expect that
information provided after requests is provided truthfully and equally to all parties,
even if the presentation of information, such as search results, is ultimately
protected under the First Amendment rights of the company.72 Companies could
theoretically benefit from creating customized IoT experiences that lead different
users to believe different things, but customers in such a scenario might expect
that, unless customization is explicitly requested, information should be provided
in equal, straightforward ways.73
This section has presented six requirements for wake word neutrality, which lead
us to suggest the definition of net wake neutrality given in the box at the end of this section. In this definition, we
have used “wake” rather than “wake word” in order to include other forms of
recalling agents that may be based on sign language or IoT sensing. Even though
our focus is on conversational commerce using speech, the discussion above would
be very similar for other forms of AI software agent awakening (vision, EEG,
presence based). For the sake of simplicity, we will continue to center our descrip-
tions on the voice-use case but with the understanding that our ambitions are non-
discriminatory in the broadest sense possible.

69
Kleinschmidt, “An International Comparison of ISP’s Liabilities for Unlawful Third Party
Content” (2010) 18(4) International Journal of Law and Information Technology 332‒355.
70
See the Packingham v North Carolina (2017) case, accessed June 2019, in <www.supremecourt
.gov/opinions/16pdf/15-1194_08l1.pdf>.
71
Ganti, Ye, and Hui Lei, “Mobile Crowdsensing: Current State and Future Challenges” (2011)
49(11) IEEE Communications Magazine 32‒39.
72
Volokh and Falk, “Google: First Amendment Protection for Search Engine Search Results”
(2012) 8(4) Journal of Law, Economics & Policy 883‒900.
73
Hall, “Standing the Test of Time: Likelihood of Confusion in Multi Time Machine v
Amazon” (2016) 31 Berkeley Technology Law Journal Annual Review 815‒850.


(Artificial Intelligence) Net Wake Neutrality


Conversational commerce service providers will make a configurable and explainable
effort to direct a user’s request to its intended AI software agent without altering the
service, whether by throttling speeds or blocking access altogether based on the user, the
agent being requested, or the owner of such agent.

9.4 legal programming enablers of wake neutrality


Automated law enforcement can be used in many fields where sensors and com-
puters record individuals’ activity, and in particular for wake-word neutrality in
OCC. While in the past, law enforcement was carried out by a human officer,
automated systems should provide for an efficient, meticulous, and tireless enforce-
ment of many laws. Algorithmic enforcement promises rapid dispatch of penalties
and offers financial incentives to law-enforcement agencies, governments, or other
organizations. In this section we first examine the potential scope of automated law
enforcement, considering how the concept can be carefully implemented and
properly constrained through the legal programming74 of automated contracts and
shared ledgers. Second, smart contracts in public open commerce can support the
performance of contracts, and reduce costs of negotiation, verification, and enforce-
ment by turning legal obligations into self-executing transactions. Third, automated
law enforcement can also help eliminate the over-regulation currently found in
many legal systems. Precise, rigorous and concise legal norms should be rationalized
to the minimum needed, eliminating the contradictions and internal inconsisten-
cies that inhibit semantic validation of automated contracts, and algorithmic tech-
nology can provide invaluable assistance with this task.
Our basic proposal for wake neutrality is to develop a decentralized DNS-type
service, which we call voice name system (VNS) and which handles wake words
very much as the Internet handles domain names. A register of words can be created
so that the software agent handler for a given wake word is unambiguously associated
with one owner, which in some ways resembles how email, the web, or even the
phone network work. A dial-pad-type mechanism would need to be added to handle
wake-word requests securely. Some smart speakers already have a simple mechanism
for this, operated by tapping the device.
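The following minimal Python sketch illustrates the kind of lookup such a voice name system might perform, resolving a wake word to its handling agent and responsible owner much as DNS resolves a domain name. The record fields, class names and endpoints are hypothetical illustrations, not part of any published VNS specification.

from dataclasses import dataclass

# Hypothetical VNS record: like a DNS entry, each wake word resolves to exactly
# one handling agent endpoint and one legally responsible owner.
@dataclass(frozen=True)
class VNSRecord:
    wake_word: str
    owner: str             # legal entity responsible for the agent
    handler_endpoint: str  # where the device forwards the request

class VoiceNameSystem:
    def __init__(self):
        self._records = {}

    def register(self, record: VNSRecord) -> None:
        # One wake word, one owner: duplicate registrations are refused.
        if record.wake_word.lower() in self._records:
            raise ValueError(f"'{record.wake_word}' is already registered")
        self._records[record.wake_word.lower()] = record

    def resolve(self, wake_word: str) -> VNSRecord:
        # Resolution is the voice analogue of a DNS lookup.
        return self._records[wake_word.lower()]

if __name__ == "__main__":
    vns = VoiceNameSystem()
    vns.register(VNSRecord("examplecoffee", "Example Coffee Ltd", "https://agents.example/coffee"))
    print(vns.resolve("ExampleCoffee").owner)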
Unfortunately, while this basic proposal may allocate wake words correctly, it
would eventually need to address many more aspects of wake neutrality, such as
the PII and accuracy issues inherent in speech (as discussed in Section 9.2).
A smart contract infrastructure could be built to address these issues and to

74
Subirana and Bain, Legal Programming: Designing Legally Compliant RFID and Software
Agent Architectures for Retail Processes and Beyond (Springer-Science 2005).


facilitate algorithmic law enforcement of a wake-word standard that addresses the
six requirements we have discussed.75,76,77 The remainder of this section intro-
duces various automatic contracting approaches that could extend the
basic DNS-type solution described above. While a complete solution is
outside the scope of this research, the ideas presented here illustrate a number
of options.
Ever since the emergence of bitcoin, developed by an unknown individual or group
using the pseudonym Satoshi Nakamoto,78 the potential for blockchain
technologies that underpin cryptocurrencies such as bitcoin to greatly affect society
has been discussed. A blockchain79 can be described as a distributed ledger that is
continuously created in an ever-growing chain of transactions that is very difficult,
though not impossible, to forge, serving as a self-referencing proof of all authentic
transactions that occurred on that particular blockchain. One of the most important
innovations of these technologies lies in the removal of the need for any centralized
third-party authentication, but this is not a requirement. The ledger is theoretically
unchangeable through any mechanism other than authentic transactions; therefore,
no outside verification is required and algorithmic law enforcement is ensured as
long as some basic hypotheses are met: in bitcoin, for example, that at least 50 percent of
mining power is in the hands of trusted entities and that software engineering does not
introduce malicious or undesired code. The use of such technologies therefore
has the potential to fundamentally change the nature of the relationship between
two parties in a transaction, such as between an individual consumer and a
company.
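The self-referencing property described above can be illustrated with a few lines of Python: each block commits to the hash of its predecessor, so altering an earlier transaction breaks verification of every later block. This is a toy illustration of the general idea only, not of bitcoin's actual data structures or consensus rules.

import hashlib
import json

# Minimal hash-chained ledger: each block commits to the previous block's hash,
# so rewriting any earlier transaction invalidates every block after it.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

if __name__ == "__main__":
    ledger = []
    append_block(ledger, [{"wake_word": "examplecoffee", "action": "registered"}])
    append_block(ledger, [{"wake_word": "examplecoffee", "action": "wake event"}])
    print(verify(ledger))                        # True
    ledger[0]["transactions"][0]["action"] = "forged"
    print(verify(ledger))                        # False: the chain no longer verifies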
For example, where once a retailer might require proof of identity via a passport,
users of bitcoin might expect to remain anonymous throughout the transaction
process, operating on the assumption that both sides accept the legitimacy of an
exchange of a cryptocurrency. Moreover, the use of blockchain has moved beyond
the scope of cryptocurrencies, with the development of Ethereum and other newer,
more programmable technologies illustrating the potential impact on the legal field
as well through what have been termed smart or self-executing contracts. In general,
as individual expectations of interactions and transactions with other parties change,
the landscape of what is considered fair and reasonable shifts to meet this new

75
Wright and De Filippi, “Decentralized Blockchain Technology and the Rise of Lex Crypto-
graphia” (10 March 2015). Available at SSRN: <https://ssrn.com/abstract=2580664>.
76
Mik, “Smart Contracts: Terminology, Technical Limitations and Real World Complexity”
(2017) 9(2) Law, Innovation and Technology 269‒300.
77
Eskandari, Clark, Barrera, and Stobert (2018), “A First Look at the Usability of Bitcoin Key
Management” arXiv preprint arXiv:1802.04351.
78
Nakamoto Satoshi, “Bitcoin: A Peer-to-Peer Electronic Cash System” (2008) <https://bitcoin
.org/bitcoin.pdf>.
79
Yli-Huumo, Ko, Choi, Park, and Smolander, “Where Is Current Research on Blockchain
Technology? – A Systematic Review” (2016) 11(10) PLoS ONE e0163477. <https://doi.org/10
.1371/journal.pone.0163477>.


reality. Blockchain could therefore form a major part of a backbone architecture for
the formulation of voice net neutrality in conversational AI/IoT networks, which
could impact the perceived fairness, configurability, and explainability of future
voice-based IoT devices.
One possible way of automating contracts is to associate each conversational
commerce device with a validation server operated by a standards body or, by
default, its parent according to the VNS hierarchy. Such a validation server could
have a default wake-word policy (or issue its own) and report it in the form of a URL
and a public hash key. Each validation server would issue its own tokens and register
wake-word requests on its own ledger or a blockchain of choice, as set by the policy
in operation. These ledgers could then be algorithmically audited for legal compli-
ance. The validation servers could issue their own tokens and store them as
validation server hash keys at given intervals. The tokens could then be incorporated
in the legal programming of algorithmic law-enforcement agents. These agents
could signal exceptions and issue fines. Token depletion and PII removal could
be recorded on the same blockchains.
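A minimal sketch of this validation-server idea follows: a hypothetical server publishes its wake-word policy as a URL plus a hash, issues tokens, and appends each wake-word request to its own ledger so that it can later be audited. The class, ledger format and token scheme are illustrative assumptions rather than a proposed standard.

import hashlib
import time
import uuid

# Hypothetical validation server: publishes a wake-word policy (URL + hash),
# issues tokens, and records each wake-word request on an auditable ledger.
class ValidationServer:
    def __init__(self, policy_url: str, policy_text: str):
        self.policy_url = policy_url
        self.policy_hash = hashlib.sha256(policy_text.encode()).hexdigest()
        self.ledger = []  # append-only list of request records

    def issue_token(self) -> str:
        return uuid.uuid4().hex

    def record_request(self, wake_word: str, device_id: str, token: str) -> dict:
        entry = {
            "timestamp": time.time(),
            "wake_word": wake_word,
            "device_id": device_id,
            "token": token,
            "policy_hash": self.policy_hash,
        }
        self.ledger.append(entry)
        return entry

    def audit(self) -> list:
        # An enforcement agent could scan the ledger for policy breaches here.
        return list(self.ledger)

if __name__ == "__main__":
    server = ValidationServer("https://vns.example/policy", "default wake-word policy v1")
    token = server.issue_token()
    server.record_request("examplecoffee", "kitchen-speaker-01", token)
    print(len(server.audit()), "ledger entries")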
Note that such ledgers could include the associated conversational commerce
devices and could be run either by the device itself or by participating in other public
blockchains such as bitcoin. These registries could selectively include the ori-
ginal sound files and some description of the processed output and this infor-
mation would be paired with the destination agent signature and any additional
user information such as IoT data, PIN verification or DUO security authoriza-
tion. The architecture could combine these different signatures to enable algo-
rithmic law enforcement of AI agent allocation by checking pair integrity. While
records and extra signature options are not so relevant for a wake standard, they
may become essential for automated contracting. If the DNS association implies
that miners are not anonymous, and perhaps even that they are trustable, the
participating blockchains could incorporate novel mining approaches where
mining rewards are based on time spent talking to the device, proof of talk or
proof of charity (certainly more environmentally friendly than bitcoin’s resource-
intensive proof of work).
Each device could store in its server’s blockchain a smart contract policy that
specifies the algorithmic law-enforcement guarantees that are in place and how the
six requirements of wake neutrality are met. The policy could include procedures
for users to report exceptions and for the involvement of the courts. Most import-
antly, it should describe how speech recordings are used and the various ways of
interacting with the device available to (a) the user, (b) third-party standards bodies
and (c) government law-enforcement services. A challenge service could be
included for algorithmic legal enforcement by standards bodies and government
agencies. The service would take speech recordings and provide access to the parsed
output. There could also be an algorithmic auditing policy for standards bodies and
government agencies.
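Purely by way of illustration, such a per-device policy might be serialized along the following lines; every field name and value below is our own example rather than part of any existing smart-contract standard.

# Hypothetical per-device wake-neutrality policy, as it might be stored in a
# smart contract: how each of the six requirements is met, how speech
# recordings are used, and how exceptions can be reported or challenged.
DEVICE_WAKE_POLICY = {
    "requirements": {
        "configurability": "incognito/native/emotional modes selectable by the user",
        "institutionalization": "policy options follow a published standards-body profile",
        "non_discrimination": "no supplier or customer receives preferential routing",
        "explainability": "audio/visual feedback identifies the handling agent and its owner",
        "auditability": "wake events are appended to the validation server's ledger",
        "error_compliance": "close-sounding wake words are rejected; misroutes are logged",
    },
    "speech_recordings": {
        "retention_days": 30,
        "access": ["user", "third-party standards body", "government law enforcement"],
    },
    "exception_reporting": "users may flag misrouted wake events for review by the courts",
    "challenge_service": "accepts a speech recording and returns the parsed routing output",
}

def describe(policy: dict) -> None:
    # Print a human-readable summary of how the six requirements are met.
    for requirement, how in policy["requirements"].items():
        print(f"{requirement}: {how}")

if __name__ == "__main__":
    describe(DEVICE_WAKE_POLICY)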


9.5 balancing wake neutrality with automated contracting
This section reviews the key challenges of automated law enforcement in balancing
agent-contracting legal constraints80 with the six requirements of wake neutrality.
Contract law is a matter for national jurisdictions and has not yet been harmonized
at an international level. Here we follow previous work on legal programming81 and
adapt it to wake-neutrality law enforcement. At least 13 core contract principles are
relevant to the interaction between humans and open AI software agents: consent,
offer and acceptance; intention; evidence in writing; signatures; capacity; object and
cause; consideration; mistake; misrepresentation and good faith;82 incorporation of
terms;83 invitations to treat; time and place; and absent parties. Once an AI conver-
sational agent starts representing a human and it embarks on automated contracting,
the following issues therefore need to be resolved:
(a) Agent-based contract formation and validity
• Capacity. Do agents have sufficient capacity to enter into a contract based on the wake-word command alone, given that it can be recorded?
• Consent. Can agents provide consent, either their own or that of the agent user?
• Agent failures, errors and the legal apportionment of risk. What happens when an agent purchases the wrong product because speech recognition failed, or the system crashes?
(b) Practical issues
• Procedures. Can agents distinguish invitations, offers and acceptances based on the conversation with the user?
• Evidence. Can the requirements for “in writing” be met? How can evidence be obtained and maintained of an agent-formed contract through voice when it could be produced by a malicious recording? Can it be combined with other biometric information?
• Terms. Can we ensure that all terms are properly incorporated into a contract if the user has not even listened to them? Should we produce standard contract terms? Can the user have or be deemed

80
Jacobowitz and Ortiz, “Happy Birthday Siri! Dialing in Legal Ethics for Artificial Intelligence,
Smart Phones, and Real Time Lawyers” (2018) Texas A&M University Journal of Property Law,
University of Miami Legal Studies Research Paper No 18-2. Available at SSRN: <https://ssrn
.com/abstract=3097985>.
81
Subirana and Bain (n 74).
82
Sartor and Cevenini, “Agents in Cyberlaw”. Proceedings of the Workshop on the Law of
Electronic Agents (LEA02) 2002.
83
Thus, for example in e-commerce, the importance in web pages of including any contracting
conditions, either directly on the “Accept” page, or by a visible and easily accessible link.


to have knowledge of the terms? Where is the line between advertising and contract terms?
• Signatures. Can an agent provide a digital signature with binding effect? Given that voice can be played over a recording, how should agreement be registered?
• Consumer rights. How can information, transparency and consent requirements be complied with when using agents?
(c) Contract formation:
• What is included in/excluded from the terms? In conversational commerce, one possible alternative is to develop standard terms to avoid users having to agree to a text they will never read.
• Previous representations/declarations. Are there any that are binding?
• Evidence. Is confirmation of contract formation required, for example by a mobile app?
These issues could be determined in traditional ways or via blockchain ledgers
with associated enforcement rules as those in public registers, recording of decision-
taking and parameterization.84 The identification, registration and certification of
“intelligent agents” can be addressed by a process similar to corporate registration,
using public-ledger solutions that link the computer agent to the associated person.
This would amount to granting legal identity to software agents.85 Assets and
liabilities of software agents are also conceivable since they can become miners of
cryptocurrency and accumulate assets.
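A minimal sketch of such a register follows; the certificate scheme and field names are hypothetical and are meant only to illustrate how a public-ledger-style entry could tie an agent's key to an answerable legal person, much as a corporate register ties a company number to its directors.

import hashlib
from dataclasses import dataclass

# Hypothetical agent register: like a corporate registry, each software agent
# is certified by linking its public key to an associated legal person.
@dataclass(frozen=True)
class AgentRegistration:
    agent_public_key: str
    associated_person: str   # natural or legal person answerable for the agent
    jurisdiction: str

class AgentRegister:
    def __init__(self):
        self._entries = {}

    def register(self, entry: AgentRegistration) -> str:
        # The certificate identifier is a hash of the registration itself, so it
        # can be checked against a public-ledger copy of the same record.
        cert_id = hashlib.sha256(
            f"{entry.agent_public_key}|{entry.associated_person}|{entry.jurisdiction}".encode()
        ).hexdigest()
        self._entries[cert_id] = entry
        return cert_id

    def lookup(self, cert_id: str) -> AgentRegistration:
        return self._entries[cert_id]

if __name__ == "__main__":
    register = AgentRegister()
    cert = register.register(AgentRegistration("0xABC123", "Example Retail GmbH", "DE"))
    print(register.lookup(cert).associated_person)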
In the absence of immediate legal solutions to some of these problems, it may be
possible to enhance the validity of any agent-based contract by adding technical
features to software agents, as suggested in the following list:
(a) The identity of the user/principal together with some “wake infor-
mation” (or at least, an indication that the software agent is a device
and not a person) could be included in the coding. This may run
into problems of privacy (a user who doesn’t want to disclose their
identity) which can be solved by a neutral indication that the software
agent is only an electronic device or via some form of zero-
knowledge proof.
(b) The nature of the user/principal could also be incorporated into the
code: nature of user = consumer/business. This would provide the
counterpart with some idea of its obligations, and the possibility of

84
Karnow, “Liability for Distributed Artificial Intelligences” (1996) Berkeley Technology Law
Journal 147‒204.
85
Ibid.


excluding certain consumer-initialized agents if it only contracts (by
law or by corporate policy) with businesses.
(c) Negotiation protocols should enable websites and services agents in
general to communicate regarding which party is making the offer, as
well as the acceptance and the acknowledgment required by the
combination of the relevant national legislation. This may be addressed
if agents have an established wake protocol for talking to the responsible
user, although such a protocol may be enacted in such a way that it
protects the privacy of the user.
(d) Run-time errors and other unexpected events or states (e.g., after third-
party intervention) should be able to generate a “freeze/refer or report
back to user before proceeding” procedure to reduce certain liabilities
in the event of non-correctable mistakes. Variable parameters would
give the agent greater autonomy and could widen as the agent learns.
(e) Voice communications could be confirmed by the smart speaker upon
request, be registered on a public ledger or sent via mobile app, email
or SMS directly to users to provide further evidence of transactions,
either encrypted (for security) or not.
(f ) Agents should include functionalities for creating, transmitting and
storing electronic evidence of voice transactions. For adaptive/
advanced agents or special voice services, initial parameterization
should be stored as evidence of user intent, especially in the case of
mistakes or unexpected learning processes.
(g) Security features including alternative biometric checking should be
incorporated to minimize the risk of contracting after third-party inter-
vention (viruses, etc.) or system failure (power surges, etc.).
(h) Agents should withhold from contracting when in doubt, retracting to
collect additional information, especially regarding terms of sale (exclu-
sions of liability, etc.), with a fallback procedure that allows the agent to
report back to the user. Most importantly, there should be a reciprocal
wake standard so that agents can also wake the smart speaker to collect
additional information.
(i) Agents should include programming to send an acknowledgment of
receipt back to (consumer) users as soon as possible.
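The following Python sketch illustrates how a few of these features, namely items (b), (h) and (i), might appear in an agent's contracting loop; the class and field names are illustrative assumptions, not an implementation of any existing agent framework.

from dataclasses import dataclass, field

# Hypothetical contracting guard for a conversational agent, sketching items
# (b), (h) and (i) above: declare the nature of the principal, withhold and
# report back when in doubt, and acknowledge receipt to consumer users.
@dataclass
class Principal:
    nature: str  # "consumer" or "business"

@dataclass
class Offer:
    description: str
    terms_complete: bool  # are liability exclusions, price and delivery all present?

@dataclass
class ContractingAgent:
    principal: Principal
    log: list = field(default_factory=list)

    def attempt_contract(self, offer: Offer) -> str:
        # (h) Withhold from contracting when in doubt, retract and report back.
        if not offer.terms_complete:
            self.log.append(f"frozen: incomplete terms for '{offer.description}'")
            return "referred back to user"
        # (b) The nature of the principal travels with the contract record, so the
        # counterpart knows whether consumer-protection obligations apply.
        record = {"offer": offer.description, "principal": self.principal.nature}
        self.log.append(f"contract formed: {record}")
        # (i) Acknowledgment of receipt for consumer users.
        if self.principal.nature == "consumer":
            self.log.append("acknowledgment of receipt sent to user")
        return "contract formed"

if __name__ == "__main__":
    agent = ContractingAgent(Principal(nature="consumer"))
    print(agent.attempt_contract(Offer("1 kg of coffee at EUR 12", terms_complete=True)))
    print(agent.attempt_contract(Offer("mystery subscription", terms_complete=False)))
    print(*agent.log, sep="\n")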

Table 9.2 presents a sample of legal and technical issues for electronic contracting.
We suggest that if these technical contracting processes can be completed and
modeled so that they become universal for the majority of conversational commerce
contracting, we may be able to create a legal architecture that can be applied to the
technical processes of AI agent contracting. This legal modeling in turn would
enable software developers to legalize their technical models – thus creating a
framework for compliant contracting-agent engineering in open conversational

commerce.

table 9.2 Legal risks of AI agent-contracting processes

Principal process | Wake neutrality concerns raised | Additional processes for compliance and/or certainty
Agent determines a need to purchase specific item | Would the user agree to this action? | Ledger registration of original agent programming/parameters (trigger events, contract conditions)
Agent searches the network for various stores selling relevant products in agreement with user wake? | Advertising or offer. Information requirements (consumer contracting issue) | Identification of data messages as advertisements or offers. Forwarding of obligatory information to users’ traceability ledger
Agent negotiates with store(s) for the quantity, price and other terms of sale | Identification of parties – agent user identified as consumer. Capacity of agent to negotiate. Good faith and withdrawal from negotiation | Registration of negotiation steps (assistance to determine true intent). Session control and processes for system failures. Well-defined negotiation protocols
Agent concludes purchase agreement | Capacity and consent registry and impact if mistake on wake. Incorporation of all terms | Certification of agent’s authority to conclude contracts. Process for retrieving and storing terms. Process for error correction and confirmation. Process for acknowledgment of receipt
Agent provides delivery and payment details | Identification of parties and use of PII information for possible anonymized payment | Digital signatures for payments (e.g. SET protocol). Reference to user for PIN
Agent records transaction including wake recording | Storage of evidence | Register of processes (but security level? – e.g. encryption for integrity and confidentiality)

Further work, however, is needed in both legal and technical domains
in relation to agent-based contracting in public open conversational commerce
settings that are enhanced with AI agents. Specific areas requiring attention include
the expression of contractual preferences through speech associated with automated
negotiation (including the ability of computing languages to capture and express
personal contracting preference and the degree of granularity that may be achieved),


enhancing the legal validity of agent-based digital voice signatures, and attribution
regimes for contracts that are not directly supervised by the human user of the agent.

9.6 implications of wake neutrality for the ai architecture stack
The discussion on wake neutrality so far has been somewhat unrelated to the various
AI architecture layers. This section considers the implications of wake neutrality in
OCC for a simple OCC architecture stack composed of four layers:86 sensor stream,
cognitive core, brain OS, and expression. Some examples are provided of how
architectural choices in each layer may impact how wake neutrality is approached
in a given AI system.

9.6.1 Wake Neutrality and the Sensor Stream


A key component not just of conversational IoT, but also of the greater universe of
big-data algorithm-based technologies, is the way in which learned models are
created and utilized to more accurately and effectively achieve a given task, such
as searching for a wake word while continuously listening to ambient sound. An
algorithm or set of algorithms that has been modified by feeding it specific data, for
example a user’s preferences, is referred to as a learned model. Because input data is
theoretically different for each user, the learned model applicable to that user
should be unique. This has legal implications for net neutrality on conversational
devices, especially if the aim is to achieve a generic public-access platform. If a
specific user’s learned models, or more plainly, preference and privacy settings, are
stored on one company’s cloud rather than another’s, how can the user seamlessly
move from one device to another, assuming different companies are involved? Even
if this problem were solved, what about the possibility that certain conversational
devices may utilize algorithms fundamentally incompatible with others? Before a
truly net-neutral conversational ecosystem can be built, the problems associated with
transferring learned models will need to be addressed.

A key and possibly intractable
problem is that of explaining to the average user in an easily understandable way
how machine learning-based algorithms actually work. One reason for this problem
lies in the big-data processes by which such algorithms are reinforced. While the
creator of an algorithm may start with some basic assumptions, as the algorithm is
fed data, it can cluster around some nodes more than others, sometimes in unpre-
dictable ways. By the time an algorithm is sophisticated enough to solve a given
problem, it may be so “messy” as to render reverse engineering, and therefore a

86
The four layers are inspired by the MIT CBMM model of the brain: <https://cbmm.mit.edu/
research/modules>.


simple explanation, essentially impossible. However, if such explanations are impossible,
users may be unable to understand what is happening in any given interaction
with the technology. When human beings speak to each other, they can empathize
with others as well as guess what any given person may be likely to do at any given
time. The average human being is quite adept at this process, but the catch is that it
only applies to the cognition of other human beings, or at the very least, other
animals. If we are unable to predict what an AI agent might be “thinking,” this could
lead to a series of problems with voice-based technologies, and perhaps first and
foremost an inability to trust our interactions with them. More generally, the
inability to easily explain how conversational technologies work presents a hurdle
to creating truly net-neutral systems.
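The transfer problem described at the start of this subsection could in principle be eased by a portable export of a user's learned preferences and privacy settings that another platform could import. The JSON format sketched below is purely hypothetical and is not offered by any current platform; it only illustrates what a device-agnostic model export might look like.

import json

# Hypothetical portable export of a learned user model: preferences and privacy
# settings serialized to a neutral format that another platform could import.
def export_user_model(preferences: dict, privacy_settings: dict) -> str:
    return json.dumps(
        {"version": 1, "preferences": preferences, "privacy": privacy_settings},
        indent=2,
        sort_keys=True,
    )

def import_user_model(blob: str) -> dict:
    model = json.loads(blob)
    if model.get("version") != 1:
        raise ValueError("unsupported model version")
    return model

if __name__ == "__main__":
    exported = export_user_model(
        {"preferred_brands": ["fair-trade coffee"], "wake_word": "examplecoffee"},
        {"mode": "incognito", "retention_days": 0},
    )
    print(import_user_model(exported)["preferences"]["wake_word"])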

9.6.2 Wake Neutrality and the Cognitive Core

9.6.2.1 Psychological Well-Being and Personality Anonymization


Research dating back a century sought to connect voice and speech patterns to
clinical states of mind for the purposes of diagnosis. “The patients speak in a low
voice, slowly, hesitatingly, monotonously, sometimes stuttering, whispering, try
several times before they bring out a word, become mute in the middle of a
sentence. They become silent, monosyllabic, can no longer converse.”87 More
recent research has concluded that “clinical impressions are substantially related to
acoustic parameters” and that “acoustic measures of the patient’s speech may
provide objective procedures to aid in the evaluation of depression.”88 If states of
mind are discernible through patterns of speech, this could have far-reaching
implications for voice-based IoT interactions. For example, if a company has data
on the speech patterns of stressed people who are more likely to make impulse
purchases, should they be able to adjust their offerings to match customers in that
mindset? On the other hand, the ability to act on patterns of speech showing
elevated levels of stress could be crucial in times of need, such as a medical
emergency or for a victim of crime.89 More discussion is needed to form a
consensus on how to decouple actionable voice data from voice as the transac-
tional medium, as well as which elements of speech data and metadata should be
treated as PII.

87
Kraepelin, “Manic Depressive Insanity and Paranoia” (1921) 53(4) The Journal of Nervous and
Mental Disease 350.
88
Alpert, Pouget, and Silva, “Reflections of Depression in Acoustic Measures of the Patient’s
Speech” (2001) 66(1) Journal of Affective Disorders 59‒69.
89
Li et al., “Smart Community: An Internet of Things Application” (2011) 49(11) IEEE Communi-
cations Magazine 68‒75.


9.6.2.2 PII-Neutral Navigation


Privacy by design (PbD), a concept put forward by the Federal Trade Commission
in a 2012 report,90 recommends that companies mitigate privacy risks by building
protections into their products, services and organizations from the ground up, but
an asymmetric risk/reward relationship between company and consumer means that
even an extremely small data abuse or breach rate could lead to intolerable conse-
quences for society as a whole.91 Certain legal frameworks might be extendable to
meet these impending challenges, but in general, laws in the USA and elsewhere
are noticeably lagging behind the pace of change.92 The questions of what, if
anything, PII will mean in the future, as well as how to provide proper protections,
deserve more attention going forward.
Another option we propose is a VNS-neutral system that requires an “emotional
firewall” based on independent third parties whose only role is to convert
voice into neutral text.93 This would create a legal and technical divide facilitating
PII anonymization. Your recorded voice in a conversational device may give away
your mood,94 personality,95 gender96 or ethnicity,97 opening the door for non-
neutral responses by the conversational Internet. An important distinction to draw
here is between the information we do and do not intend to communicate via the
voice. Using the analogy of a keyboard, while we clearly intend to allow the
computer to take in the information we input through keyboard strokes, if ancillary
information about us as individuals could be inferred from our keyboard stroke
patterns ‒ for example, the fact of being left-handed ‒ we might collectively decide
that the computer should not automatically take in or utilize this information unless
explicitly permitted to do so. Similarly, a distinction exists between the content of
our speech and the ancillary information that is appended to that speech, such as
through acoustic patterns. Ultimately, consumer trust in conversation-based transac-
tions will depend on the ability to decouple voice as the medium of consent from
voice metadata, such as patterns that show higher levels of stress.
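A minimal sketch of such an emotional firewall follows. The transcription function is a stand-in for a real speech-to-text engine, and the metadata fields are invented for illustration; the point is simply that only the content of the speech crosses the firewall, while the ancillary acoustic inferences are dropped.

# Hypothetical "emotional firewall": an independent intermediary converts voice
# to neutral text and strips acoustic metadata (stress, mood, gender cues)
# before anything reaches the commercial handling service.
def transcribe(audio: bytes) -> dict:
    # Stand-in for a real speech-to-text engine that also emits acoustic metadata.
    return {
        "text": "I would like to refill my prescription",
        "metadata": {"stress_level": "elevated", "estimated_mood": "anxious"},
    }

def emotional_firewall(audio: bytes) -> dict:
    result = transcribe(audio)
    # Only the content of the speech crosses the firewall; the ancillary acoustic
    # inferences are dropped rather than forwarded.
    return {"text": result["text"]}

if __name__ == "__main__":
    print(emotional_firewall(b"raw audio bytes"))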

90
Rubinstein, “Regulating Privacy by Design” (2011) 26(3) Berkeley Technology Law Journal
1409‒1456.
91
Lenard and Rubin, “Big Data, Privacy and the Familiar Solutions” (2015) 11(1) Journal of Law,
Economics & Policy 1‒32.
92
Paez and La Marca, “The Internet of Things: Emerging Legal Issues for Businesses” (2016) 43
(1) Northern Kentucky Law Review 29‒72.
93
Subirana et al., “The MIT Voice Name System (VNS)” MIT Auto-ID Laboratory
Memo (2019).
94
Gobl and Ní Chasaide, “The Role of Voice Quality in Communicating Emotion, Mood and
Attitude” (2003) 40(1) Speech Communication 189‒212.
95
Gobl and Ní Chasaide (n 94).
96
Xue, An, and Fucci, “Effects of Race and Sex on Acoustic Features of Voice Analysis” (2000) 91
(3) Perceptual and Motor Skills 951‒958.
97
McComb et al., “Elephants can Determine Ethnicity, Gender, and Age from Acoustic Cues in
Human Voices” (2014) 111(14) Proceedings of the National Academy of Sciences 5433‒5438.


The rise of conversational commerce may lead to new issues of potential discrim-
ination. A ubiquitous sensor-based IoT infrastructure, coupled with powerful big
data-crunching algorithms, might produce unexpected inferences about individual
consumers, leading to unintended, yet nevertheless discriminatory, decisions.98
Should a business have the right to deny entry to an individual who is deemed to
have a cold based on their recorded voice patterns? Can an insurance adjuster deny
a claim based on voice-based transactional histories showing a pattern of lies, even if
the claimant has indeed lied? The potential for discriminatory blacklisting practices,
albeit unintentional, that designate certain individuals or groups as guilty before the
fact will likely only increase as big data-based inferences become increasingly
powerful.99

9.6.2.3 Forgetting Behavioral History


Smart contracts are a powerful method by which consumers may eventually reclaim
ownership of their data,100 but current technologies are greatly limited by an
inability to retain transactional privacy, since under current protocols all relevant
actions are necessarily recorded and distributed across the entire network.101 This is a
crucial hurdle to be overcome before consumers can be expected to trust such
systems with their personal information, especially that pertaining to medical history
and other highly sensitive data. In the EU, landmark rulings on “the right to be
forgotten”102 have sought to establish that users have a reasonable expectation of
being able to erase their online presence, especially given the Ebbinghaus forgetting
curve,103,104 and courts have at times recognized damages stemming from an inabil-
ity to remove unwanted content from the Internet. Courts have even on occasion
required companies to delete the content in question themselves. What does this
mean for conversational IoT?

98
Peppet (n 16).
99
Hu (n 34).
100
Zyskind and Oz, “Decentralizing Privacy: Using Blockchain to Protect Personal Data” Security
and Privacy Workshops (SPW), 2015, IEEE.
101
Kosba et al., “Hawk: The Blockchain Model of Cryptography and Privacy-Preserving Smart
Contracts” (2016) IEEE Symposium on Security and Privacy (SP).
102
Villaronga, Fosch, Kieseberg, and Li, “Humans Forget, Machines Remember: Artificial Intelli-
gence and the Right to be Forgotten” (2018) 34(2) Computer Law & Security Review 304‒313.
103
Subirana, Bagiat, and Sarma, “On the Forgetting of College Academics: At Ebbinghaus
Speed?” Center for Brains, Minds and Machines (CBMM Memo No 068), 2017 <http://hdl
.handle.net/1721.1/110349>.
104
Cano-Córdoba, Sanjay, and Subirana, “Theory of Intelligence with Forgetting: Mathematical
Theorems Explaining Human Universal Forgetting using ‘Forgetting Neural Networks’”
Center for Brains, Minds and Machines (CBMM Memo No 071), 2017 <https://dspace.mit
.edu/handle/1721.1/113608>.


9.6.3 Wake Neutrality and the Brain Operating System

9.6.3.1 Open and Neutral Navigation with Associated User Information


In addition to handing voice commands to a given AI agent, to make a truly open
and neutral choice, automated law enforcement105 should also ensure the transfer of
relevant information to the agent. Users of conversational IoT devices may come to
view their local IoT environment, both in and outside the home, as, like the
Internet, the network on top of which they expect companies to compete, and not
as a means of competition itself.106 In other words, they will view the IoT as similar
to the Internet in its entirety, and not as a specific website or app. It is likely that
certain platforms or service providers will come to dominate the IoT landscape, as is
the case with the Internet, regardless of whether such trends lead to increased or
decreased competition.107 For their part, consumers will probably expect companies
not to erect barriers that make it difficult to navigate and share their data across
different platforms. Obvious barriers, such as withholding access to a user’s purchase
history, for example, might be seen as clearly trespassing on notions of fairness.108
However, as interactions are increasingly tailored via a user’s past interactions, more
subtle barriers may present themselves. For instance, if a user found, after switching
doctors and in effect platforms, that their data could not be transferred, they might
simply decide not to switch to avoid the issue. A truly net-neutral conversational
architecture would therefore require that such data be configurable and explainable,
i.e., the consumer should be able to understand what data they actually hold across
different spaces. Another expectation that users are likely to have of a voice-based
IoT infrastructure is the freedom to move data from one device or platform to the
next. In other words, as they mostly do with current smartphones, laptops and other
popular devices, users will expect their data to be “device agnostic.” Legislators
across the globe have signaled the wish to open up competition by forcing companies
to share the data held in their closed-garden information silos. Germany is leading on legislation
that will enable cross-service information transfer in a format that can be shared
across different services.

105
Petit, “Artificial Intelligence and Automated Law Enforcement: A Review Paper” (21 March
2018). Available at SSRN: <https://ssrn.com/abstract=3145133> or <http://dx.doi.org/10.2139/
ssrn.3145133>.
106
Khan et al., “Future Internet: The Internet of Things Architecture, Possible Applications and
Key Challenges” 10th International Conference on Frontiers of Information Technology
(FIT), IEEE, 2012.
107
Haucap and Heimeshoff, “Google, Facebook, Amazon, eBay: Is the Internet Driving Compe-
tition or Market Monopolization?” (2014) 11(1‒2) International Economics and Economic Policy
49‒61.
108
Etzioni and Etzioni, “Incorporating Ethics into Artificial Intelligence” (2017) 21(4) The Journal
of Ethics 403‒418.


Returning to the example of health care, a public-ledger architecture could be
created so that the blockchain hash key of a patient’s history is kept in an open ledger
and used to verify effective transfer of a full patient history. Several mechanisms
could be included in the architecture to verify that the hash key effectively corres-
ponds to the full history, such as a system whereby hash keys of the different
components of a user’s history are appended to the various elements of the health
chain of patient interaction (e.g., x-ray equipment, doctor’s diagnosis and prescrip-
tion, pharmacy invoice). This effectively opens up the cookie system that currently
operates on the web: when a device visits a site, the site stores an identifier, known
as a cookie, on that device. If you visit the same site 20 days from now, its servers
can build a trace which, through the use of advertising networks such as Google’s,
can effectively follow users as they move from one site to another.
One way of neutralizing the system would be to open it up through an automated
contract system that operates through a ledger so that users have control over how
the information is used.109
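
To make the verification step more concrete, the following minimal Python sketch (an illustration under simplified assumptions, not the architecture proposed here; the record fields and function names are hypothetical) shows how hash keys of the individual elements of a health chain could be published to an open ledger and later used to check that a transferred patient history is complete and unaltered.

```python
import hashlib
import json

def record_hash(record):
    """Return a SHA-256 hash of a canonically serialised record."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical elements of the health chain of patient interaction.
history = [
    {"type": "x-ray", "facility": "Clinic A", "finding": "no fracture"},
    {"type": "diagnosis", "doctor": "Dr B", "text": "mild sprain"},
    {"type": "prescription", "doctor": "Dr B", "drug": "ibuprofen"},
    {"type": "pharmacy_invoice", "pharmacy": "Pharmacy C", "amount": 9.50},
]

# Open ledger: only the hash keys are published, not the records themselves.
open_ledger = [record_hash(r) for r in history]

def verify_transfer(received_history, ledger):
    """Check that a transferred history matches the published hash keys."""
    return [record_hash(r) for r in received_history] == ledger

print(verify_transfer(history, open_ledger))       # True: full history received
print(verify_transfer(history[:-1], open_ledger))  # False: invoice missing
```

In this simplified picture only the hash keys appear on the public ledger; the underlying records remain with the parties, which is what preserves transactional privacy.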

9.6.3.2 Human Rights Neutrality


To prevent discriminatory retaliation, including that based on personality traits and
disorders, automated AI neutralization algorithms need to be developed in such a
way that legal enforcement can be carried out while not compromising human
rights. Emerging IoT technologies have the potential to greatly affect the healthcare
industry and its basic legal principles.110 To date, little attention has been paid to
data privacy and security issues (especially in the context of open health care) that
are relevant to the voice-based conversations poised to take up an increasingly large
share of total interactions with IoT devices.111 What happens when a swimmer asks
their device for the closest pool, and incidentally, the device happens to capture that
the swimmer has a cold, based on their voice patterns? Should the local pool be
allowed to bar entry to the swimmer based on that interaction? While the impact of
specific devices on consumer privacy has been analyzed,112 the ramifications of a
conversation-based architecture have thus far been largely overlooked. Users of the
future might once again opt for increased utility over privacy, as they have done in

109
Mytis-Gkometh, Drosatos, Efraimidis, and Kaldoudi, “Notarization of Knowledge Retrieval
from Biomedical Repositories Using Blockchain Technology” in Maglaveras, Chouvarda, and
de Carvalho (eds), Precision Medicine Powered by Health and Connected Health. IFMBE
Proceedings, vol 66 (Springer 2018) 69‒73.
110
Kester, “Demystifying the Internet of Things: Industry Impact, Standardization Problems, and
Legal Considerations” (2016) 8(1) Elon Law Review 205‒228.
111
Bajarin, “The Voice-First User Interface Has Gone Mainstream” Recode. 7 June 2016.
Web. 14 July 2017.
112
Brown, “The Fitbit Fault Line: Two Proposals to Protect Health and Fitness Data at Work”
(2016) 16(1) Yale Journal of Health Policy, Law and Ethics 1‒50.


the past,113,114 but the uniquely intimate nature of voice-based data presents new
challenges requiring innovative solutions.
In the extreme case, the offer of private browsing options, or incognito mode (or a
TOR browser), as popularized by the Google Chrome browser, has been an
interesting solution to the customer desire to have different levels of privacy for
different browsing sessions.115 Users sometimes want the convenience and efficiency
of having their online forms automatically filled or their passwords automatically
remembered that browser-based cookies used to track and store customer-specific
information can provide, but on other occasions they may want to remain com-
pletely anonymous, even if that means they have to re-enter login information
manually each time. They do not want to be presented with a trade-off decision,
but instead want the ability to choose what type of session they begin each time, and
“incognito mode” allows for this. A similar customer desire may present itself when
interacting conversationally with IoT devices.116 As previously discussed, the inher-
ent traits of an individual, such as biological sex, age or ethnicity117,118,119 may be
inferable through the voice alone by use of sophisticated algorithms. Furthermore,
voice-based interactions with certain platforms may be stored in the cloud, meaning
that an individual interacting with a new device for the first time might be recog-
nized by acoustic patterns unique to their voice.120 Individuals may wish to turn off
the ability of devices to utilize these inferences at their pleasure.121 One person may
wish to have their preferences automatically inferred when asking about local
restaurants, while another may want these settings turned off when requesting
updates on the latest news. A privacy-conscious user may wish to have this option
always off, even if that leads to poor accuracy in recommended options and other
core features of IoT devices. Overall, many or most customers may ultimately
choose not to opt out or turn off such settings, choosing convenience over privacy,122
but a lack of any ability to conversationally browse in a way similar to browser-based

113
Bailey, “Seduction by Technology: Why Consumers Opt out of Privacy by Buying into the
Internet of Things” (2016) 94(5) Texas Law Review 1023‒1054.
114
Bajarin (n 111).
115
Said et al., “Forensic Analysis of Private Browsing Artifacts” International Conference on
Innovations in Information Technology (IIT). IEEE, 2011.
116
Apthorpe, Reisman, and Feamster, “A Smart Home Is No Castle: Privacy Vulnerabilities of
Encrypted IoT Traffic”. arXiv preprint arXiv:1705.06805 (2017).
117
Gobl and Ailbhe (n 94).
118
Xue, An, and Fucci (n 96).
119
McComb et al. (n 97).
120
Rozeha et al., “Security System Using Biometric Technology: Design and Implementation of
Voice Recognition System (VRS).” International Conference on Computer and Communi-
cation Engineering, ICCCE, 2008.
121
Tene and Polonetsky, “Big Data for All: Privacy and User Control in the Age of Analytics”
(2012) 11 Northwestern Journal of Technology and Intellectual Property xxvii.
122
Bailey (n 113).


incognito modes may present a challenge on the road to achieving voice net
neutrality.
In the context of conversational commerce, disputes between companies and
consumers may arise due to the impracticality of creating and communicating
voice-based terms of service. Can users really be expected to fully agree to
transaction-specific terms via a voice prompt when many users even now fail to
adequately read text-based online terms of service?123 Blockchain technology may
offer a novel method by which consumers can privately enforce their rights with
regard to any such disputes without the need to seek recourse from the relevant
company or a third-party entity.124 Although still largely in the conceptual stage,
novel approaches like this are needed to bridge the trust gap between voice-based
conversational commerce and more traditional browser-based transactions. One way
to address this issue is to develop AI-based algorithms that extract selective infor-
mation and prove125 that it is correct in a legally binding way, without ever
disclosing PII that may compromise human rights.
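
The reference point here is zero-knowledge proof systems; a full construction is beyond a short example, but the following simplified sketch (hypothetical field names, and a salted hash commitment standing in for a genuine zero-knowledge proof) conveys the idea of disclosing only the dispute-relevant fact while committing to the full record, so that it can be verified later without routine disclosure of PII.

```python
import hashlib
import json
import secrets

def commit(record):
    """Commit to the full record; returns (commitment, salt).

    Only the commitment is shared. The salt stays with the consumer and can
    be revealed later (e.g. to a dispute-resolution body) to open the
    commitment. Note: this is a plain salted hash commitment, not a
    zero-knowledge proof.
    """
    salt = secrets.token_hex(16)
    payload = (salt + json.dumps(record, sort_keys=True)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest(), salt

# Hypothetical voice-transaction record containing PII.
record = {
    "customer": "Jane Doe",
    "voice_profile_id": "vp-4711",
    "order": "contact lenses",
    "agreed_price_eur": 29.90,
}

commitment, salt = commit(record)

# Selective disclosure: only the dispute-relevant fact plus the commitment
# leave the consumer's sphere; name and voice profile are never revealed.
disclosure = {"agreed_price_eur": record["agreed_price_eur"],
              "commitment": commitment}

def open_commitment(full_record, salt, commitment):
    """Later verification step, once disclosure becomes legally necessary."""
    payload = (salt + json.dumps(full_record, sort_keys=True)).encode("utf-8")
    return hashlib.sha256(payload).hexdigest() == commitment

print(open_commitment(record, salt, commitment))  # True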

9.6.4 Wake Neutrality and the Expression Layer

9.6.4.1 Voice Synthetization


Another potentially interesting development may lie in the way conversational
devices adapt to individuals, changing their voice to match the tastes, age, mood
or other aspect of the user. In many cases, the general public may find such a
capability desirable. For instance, we may find it appropriate for the same machine’s
voice to change when approached by a child as opposed to a law-enforcement
official. We may also want the machine’s voice to change its affect depending on
our emotional state, providing relief when we feel down or motivation when it’s time
to work. However, the principles of net neutrality require that such a capability be
configurable, meaning that such changes would only happen with a user’s permis-
sion. The reason for this requirement is that, without our permission, devices could
change their voice in ways intended to manipulate users without their knowledge or
consent. There is ample evidence that people are susceptible to such persuasion,
especially if they are under duress. Furthermore, as these technologies become more
advanced, their apparent authority may become more prominent. This would only

123
Obar and Oeldorf-Hirsch, “The Biggest Lie on the Internet: Ignoring the Privacy Policies and
Terms of Service Policies of Social Networking Services” (2018) Information, Communication
& Society https://doi.org/10.1080/1369118X.2018.1486870
124
Koulu, “Blockchains and Online Dispute Resolution: Smart Contracts As an Alternative to
Enforcement” (2016) 13(1) SCRIPTed: A Journal of Law, Technology and Society 40‒69.
125
Goldwasser, Micali, and Rackoff, “The Knowledge Complexity of Interactive Proof Systems”
(1989) 18(1) SIAM Journal on Computing 186‒208.


heighten the persuasiveness of machine voices, increasing the need for such cap-
abilities to be configured at the behest of the user.

9.6.4.2 Standardization of Interactions


With the advent of modern technology has come a myriad of new copyright and
trademark challenges.126 A simple question as to whether or not the design of a
touchscreen button is patentable can lead to cascading and expensive lawsuits. Part
of the problem is that as these technologies and devices cater ever more effectively to
our wants and needs, their designs become so good as to be arguably obvious. When
a certain design or structure is deemed to be a product of common sense, it can be
hard to patent, absent other factors. This trend is not likely to decline in the realm of
voice-based interactions. When a user says, “Alexa,” “Hey Siri,” or “OK Google,” we
might agree that this activation greeting only applies to each respective company’s
devices.
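
A contrasting, more neutral arrangement – in the spirit of the Voice Name System discussed in this chapter, though the following is only an illustrative sketch with hypothetical bindings, not a specification – would let users bind any wake word to any agent through a configurable registry.

```python
# Hypothetical, user-editable wake-word registry: any wake word can be
# bound to any conversational agent instead of being vendor-locked.
wake_registry = {
    "alexa": "amazon-assistant",
    "hey siri": "apple-assistant",
    "ok google": "google-assistant",
    "computer": "open-source-assistant",  # a binding chosen by the user
}

def dispatch(utterance):
    """Route an utterance to the agent bound to its wake word, if any."""
    text = utterance.lower()
    for wake_word, agent in wake_registry.items():
        if text.startswith(wake_word):
            return agent
    return None  # no wake word detected: nothing is recorded or forwarded

print(dispatch("Computer, what's the weather?"))  # 'open-source-assistant'

# Wake neutrality in this picture: the user may rebind a wake word at will.
wake_registry["computer"] = "local-offline-assistant"
```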

9.7 conclusion and future research


This chapter argues first and foremost that wake neutrality is an important concept
for legal compliance of artificial intelligence devices. We have also argued that
simple forms of wake neutrality, such as the use of a Voice Name System with wake
words, can be implemented if there is a broad consensus. We feel this can only
happen with leadership by an international standards body facilitating industry-wide
peloton127 consensus.128 Moving beyond this simple form, emotional firewalls and
smart contract technologies may serve as a foundation for algorithmic legal compli-
ance of broader versions of wake neutrality and facilitate its enforcement. In general,
AI devices may be awakened by many signals including IoT or EEG signals, which
is why we are working on what we call common wake constructs (CWCs) which
implicitly generalize the notion of wake neutrality beyond wake words. Here we
have briefly illustrated the relevance of wake neutrality across the full
architecture stack.

Much research remains to be done by academics, policy experts, device manu-
facturers, platform developers and other relevant stakeholders before effective
automated legal compliance with wake-word neutrality can be ensured. All-
encompassing AI wake neutrality is a complex problem because the many factors
involved are individually desirable but may conflict with one another.
126
Kaplan, “Copyright and the Internet” (2003) 22(1) Temple Environmental Law & Technology
Journal 1‒14.
127
Subirana, “Back to the Future. Anticipating Regulatory Hurdles within IoT Pelotons” in The
American Bar Association (ed) The Internet of Things (2019).
128
Sarma et al., “Realizing the Internet of Things: A Framework for Collective Action” World
Economic Report 2019.


The biggest challenges are privacy and security considerations, together with the
unexplainability, unreliability and power requirements of current neural network
speech-recognition systems. A proposal we are exploring is for an independent third
party that implements a token-activated cognitive emotional firewall distributed
system acting like a human-within-the-machine to observe the system holistically.
A legal programming approach to the VNS may facilitate legal enforcement of wake
neutrality through partial algorithmic enforcement. A simple form of emotional
firewall, offering a weak incognito-type mode, is a third-party service translating
speech into text without revealing anything else from the voice signal (such as
gender, accent or mood).
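
The interface of such a firewall can be sketched in a few lines (purely illustrative; transcribe() and forward_to_agent() are assumed placeholder back ends, not existing APIs): only the transcript ever crosses the firewall, while the raw voice signal – and the paralinguistic cues it carries – is dropped.

```python
def transcribe(audio_bytes):
    """Placeholder for a speech-to-text engine (an assumed back end,
    e.g. an on-premise ASR model; not a real API)."""
    raise NotImplementedError

def forward_to_agent(text):
    """Placeholder for the downstream conversational agent."""
    raise NotImplementedError

def emotional_firewall(audio_bytes):
    """Weak incognito mode: only the transcript crosses the firewall.

    The raw voice signal – and with it cues such as gender, accent or
    mood – is neither stored nor passed on to the downstream agent.
    """
    text = transcribe(audio_bytes)
    return forward_to_agent(text)  # the agent never receives the audio
```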
While this chapter has focused mainly on wake neutrality in conversational
commerce within smart speakers and IoT systems, the more general problem goes
to the heart of how human beings hope to interact with machines,129 the solution to
which will surely continue to be a hotly debated topic for the foreseeable future.

129
Subirana, Sarma, Rice, and Cottrill, “Can Your Supply Chain Hear Me Now?” MIT Sloan
Management Review. Frontiers Blog, 7 May 2018. Available at: <https://sloanreview.mit.edu/
article/can-your-supply-chain-hear-me-now>.

10

The (Envisaged) Legal Framework for Commercialisation of Digital Data within the EU

Data Protection Law and Data Economic Law As a Conflicted Basis for Algorithm-Based Products and Services

Björn Steinrötter*

introduction
Nowadays everything revolves around digital data. They are, however, difficult to
capture in legal terms due to their great variety. They may be either valuable goods or
completely useless. They may be regarded as syntactic or semantic. However, it is the
particularly sensitive data protected by data protection law that are highly valuable and
interesting for data-trading, big-data and artificial-intelligence applications in the
European data market. The European legislator appears to favour both a high level
of protection of personal data, including the principle of ‘data minimisation’, and a
free flow of data. The GDPR includes some free-flow elements, but especially
legislation on trading and usage of non-personal data is currently under discussion.
The European legislator faces key challenges regarding the (partly) conflicting object-
ives reflected in data protection law and data economic law. This contribution assesses
the current state of legal discussions and legislative initiatives at the European level.
Key Words: data protection, data producer’s right, access rights, data portability, free
flow of data, digital single market strategy, data ownership, data holder, GDPR,
privacy

10.1 the link between data and algorithms


In practice, algorithms cannot work without data; conversely, without algorithms it
would not be possible to ‘understand’ many of the unstructured masses of data, more
precisely, to discern the meaning of the information, the micro-content, which
digital data ‘carries’. Moreover, algorithms increasingly provide the nexus between
data and big-data applications (and hence, inevitably between data and ‘artificial

* I would like to thank Dr Marc Stauch, MA (Oxon) for thorough proofreading and valuable,
wise comments. All remaining mistakes and shortcomings are, of course, my own.


intelligence’,1 including machine learning).2 At the same time, the quality of the
data to be processed3 is crucial, not only for the processing speed but also for
the accuracy of results. Even the smartest algorithm does not deliver usable results
if the underlying data (structure) quality is poor. Against this background it is
unsurprising that phrases such as ‘data is the new oil’ or the ‘new gold’ of the digital
economy are commonplace, notwithstanding the fact that there is comparatively
little information value in such statements taken in isolation.4
It is certainly true that digital data, very often machine generated and in raw form,
are precious assets in this day and age. The law, in turn, must respond to this
development. When it comes to areas of law that concern data specifically, a kind of
dual standard is apparent in respect of data protection and data economic law. This
chapter will show that when it comes to commercialisation, in particular trading and
movement (= free flow) of data as a factual prerequisite for algorithm-based applica-
tions, the interplay of these two tracks is not completely harmonious.5
Data protection law is well known in European legal systems. This was already
true prior to the directly applicable General Data Protection Regulation (GDPR),6
as most of the European states and the EU itself7 have a strong tradition of data
protection.8 Hitherto this area of law could be said to have been an ‘only child’. Data
protection law now seems set to acquire a ‘legal sibling’ in the shape of data
economic law,9 which covers such issues as ‘data ownership’, ‘data producer rights’

1
Cf. regarding the link between big data and artificial intelligence Fink, ‘Big Data and Artificial
Intelligence’ (2017) 9 Zeitschrift für Geistiges Eigentum/Intellectual Property Journal (ZGE/IPJ)
288.
2
Expert Opinion of the German Association for the Protection of Intellectual Property (GRUR)
on the European Commission Communication ‘Building a European Data Economy’, 3 April
2017 (hereafter cited as GRUR; available at <https://tinyurl.com/llpygqh>) 5: ‘. . . algorithms
often do the major part of the work.’
3
For instance: are they already sorted or still raw?
4
Without algorithms the masses of data would not be very helpful. If data are the oil of the
economy, algorithms are the engines; see Pleier, ‘Big Data und Digitalisierung: Warum
Algorithmen so entscheidend sind’, [https://tinyurl.com/yd4wr3xo]; cf. (to get a first impression
and in general) the several contributions in: Harvard Business Manager 4/2014, Big Data;
instructive regarding big data: Sugimoto, Ekbia, and Mattioli (eds), Big Data Is Not a Monolith
(MIT Press 2016).
5
See Becker, ‘Reconciliating Data Privacy and Trade in Data – A Right to Data-Avoiding
Products’ (2017) 9 ZGE/IPJ 371.
6
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on
the protection of natural persons with regard to the processing of personal data and on the free
movement of such data, and repealing Directive 95/46/EC (General Data Protection Regula-
tion) OJ 2016 L 119, 4.5.2016, p. 1.
7
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the
protection of individuals with regard to the processing of personal data and on the free
movement of such data, OJ 1995 L 281, p. 31.
8
Cf. Becker, ‘Rights in Data – Industry 4.0 and the IP Rights of the Future’ (2017) 9 ZGE/IPJ
253, 258.
9
Cf. also Berger, ‘Property Rights to Personal Data? – An Exploration of Commercial Data Law’
(2017) 9 ZGE/IPJ 340 using the term Commercial Data Law including private law issues as well


or ‘data access rights’.10 As (especially raw,11 machine-generated12 and non-
personal)13 data as such14 are not protected by existing (European) IP rights or
property rights, there seems at least prima facie to be a gap in protection.
The rest of this chapter is organised as follows. First, in Section 10.2, the central
term ‘digital data’ will be explored. As the dual legal data system can be paradigmat-
ically traced within the legal framework of European legal initiatives, the current
state of data economy law (Section 10.3) and data protection law (Section 10.4) at
European level is then analyzed – though only with reference to potential commer-
cialisation effects. The conflict between these tracks will be illustrated in Section
10.5. The two final sections, 10.6 and 10.7, attempt to harmonise the inconsistencies
and explore the outlook with respect to possible future regulatory starting points.

10.2 definition of digital data


A text dealing with data must necessarily define the term ‘data’ so that we can, for
example, distinguish uses of data within the scope of protection of a given rule from
those outside it. This is even more important in view of the fact that the topics

as the data protection law and subsequently determining ‘a dichotomy in legal terms between
private commercial data law and public data protection law’.
10
Cf. to the development of the discussion regarding an IP right in data: Becker (n 8) 253 ff; see
also Fezer, ‘Data Ownership of the People. An Intrinsic Intellectual Property Law Sui Generis
Regarding People’s Behaviour-generated Informational Data’ (2017) 9 ZGE/IPJ 356; Spindler,
‘Data and Property Rights’ (2017) 9 ZGE/IPJ 399; Wiebe, ‘A New European Data Producers’
Right for the Digital Economy?’ (2017) 9 ZGE/IPJ 394; in respect of personal data Buchner, ‘Is
there a Right to One’s Own Personal Data?’ (2017) 9 ZGE/IPJ 416; Specht, ‘Property Rights
Concerning Personal Data’ (2017) 9 ZGE/IPJ 411; constitutive in respect of the recent discus-
sion in Germany and beyond regarding the syntactical level of information: Zech, ‘Information
als Schutzgegenstand’, 2012.
11
The term ‘raw data’ describes unsorted data.
12
I.e., automatically generated without the active intervention of a human being.
13
If the data concerned are not of a personal nature, they are not even protected by data
protection law.
14
Of course, there is an existing mosaic-like protection that covers data as such in an indirect way.
For example, the sui generis right of the Directive 96/9/EC of the European Parliament and of
the Council of 11 March 1996 on the legal protection of databases, OJ 1996 L 77, p. 20 applies
under certain conditions. The same holds true for the law of trade secrets, see the Directive
(EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection
of undisclosed know-how and business information (trade secrets) against their unlawful
acquisition, use and disclosure, OJ 2016 L 157, 15.6.2016, p. 1, which needed to be transposed
into Member State law by June 2018. Competition Law instruments might be helpful in some
cases, too. Furthermore, national private laws, e.g. tort law, could provide a kind of ‘reflexive’
protection. Christians and Liepin, ‘The Consequences of Digitalization for German Civil Law
from the National Legislator's Point of View’ (2017) 9 ZGE/IPJ 331, 336; Becker (n 8) 253 et
seq., also emphasising the supposed shortcomings of this patchwork-protection de lege lata;
continuative Steinrötter, ‘Vermeintliche Ausschließlichkeitsrechte an binären Codes’, Multi-
Media und Recht (MMR) (2017) 731, 733 ff.


digitalisation, big data, algorithms and so on are replete with buzzwords, often
without it being clear what those words really mean.
According to ISO/IEC 2382:2015, IT Vocabulary, 2121272, data are a ‘reinterpre-
table representation of information in a formalized manner suitable for communi-
cation, interpretation, or processing’. That means, the term ‘data’ is not congruent
with the term ‘information’. It is only a ‘representation’ of the latter and the
‘formalized manner’ is in the present context – digital data – the binary coding.15
Hence, data as such concern the syntactic level, whereas information means
the semantic, content-related level (indeed according to another view the term
‘information’ could be divided into syntactic, semantic and even structural16
components).17 Both syntactic and semantic levels potentially come into question
as an economic good and a legal object. Keeping to the former approximation of the
concept of data, data are merely a ‘carrier’ of the information.
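
A small, purely illustrative example may help to fix the distinction: the same piece of information can be carried by quite different digital data, i.e. by different binary codings.

```python
# The same information – a temperature of 21 °C – represented by
# different digital data (different byte sequences, i.e. binary codings).
as_text = "21".encode("utf-8")                 # b'21'  (0x32 0x31)
as_int  = (21).to_bytes(1, byteorder="big")    # b'\x15'
as_json = b'{"temperature_celsius": 21}'

# Syntactic level: the data differ.
print(as_text == as_int)                                              # False

# Semantic level: once interpreted, the information is identical.
print(int(as_text.decode("utf-8")) == int.from_bytes(as_int, "big"))  # True
```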

10.3 data economic law

10.3.1 Brief Description and Rationale


The aim of a ‘data economic law’, which has been evolving slowly but steadily
within academic discourse as well as legislative initiatives, is to improve the com-
mercialisation of data, and above all to promote data trading18 required for big-data
applications. Thus, whereas restrictions on the free movement of data (such as
certain requirements imposed by public authorities on the location of data for
storage or processing purposes) could constrain the development of the data econ-
omy,19 a data economic law potentially supports the free flow of data.
The discussion to date has concentrated on non-personal data and/or even data as
such, meaning the syntactic level.20 One benefit of these limitations seems to be
that data protection law cannot, a priori, counteract innovative proposals relating in
particular to exclusive rights or access concepts. This is evident in respect of
15
Zech (2017) 9 ZGE/IPJ 317, 322: ‘information coded to be machine readable’.
16
Referring to the structure of a physical carrier of data such as a USB stick.
17
Continuative Zech (n 10) 13 et seq., in particular 35 et seq.; summarised by Zech, ‘Data as a
Tradable Commodity’, in de Franceschi (ed), European Contract Law and the Digital Single
Market (Intersentia 2016) 51, 53 et seq.
18
Commission Staff Working Document on the free flow of data and emerging issues of the
European data economy accompanying the document Communication Building a European
data economy, 10.01.2017, SWD (2017) 2 final 13: ‘For centuries, information has been traded.
However, with the availability of information stored in a digital form, data trading has
drastically increased. Examples of well-developed markets for non-personal data are the markets
for financial or commodities market data’; cf. also Zech (n 17) 51, 57 et seq.
19
Cf. Communication from the Commission to the European Parliament, the Council, the
European Economic and Social Committee and the Committee of the Regions ‘Building a
European Data Economy’, 10.01.2017, COM(2017) 9 final 3.
20
See, however, the reflections regarding ‘personal data ownership’ under Section 10.4.3.


non-personal data (on a semantic level). At this point, though, it is appropriate to
address a more general issue that poses a more fundamental challenge to legislative
efforts to free up and incentivise greater data access and sharing for economic
purposes. This is the problematic relationship between non-personal data, whose
free sharing is seen as economically and publicly desirable, and personal data, whose
use in the EU is subject to existing stringent rules (now found in the GDPR) aimed
at protecting the data subject’s privacy.21 Indeed, it is not even clear if the distinction
between personal and non-personal data is still possible, taking into account tech-
nology leading to easier identifiability (cf. Art 4 No 1 GDPR). Even anonymisation is
arguably not a completely safe tool for preventing the application of data protection
law, since de-anonymisation is becoming more and more possible in the light of
technological development.22 In cases of mixed data sets, data economic law would
apply to the non-personal part and the GDPR to the personal part – a very complex
application of the law. Therefore, the demarcation problems between
personal and non-personal data could cause legal uncertainty, in particular if the
distinction is used to delineate legal fields from each other.
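
A toy example (entirely fabricated data, for illustration only) shows why removing direct identifiers does not by itself guarantee anonymity: records can often be re-identified by linking the remaining quasi-identifiers with publicly available auxiliary data.

```python
# "Anonymised" records: direct identifiers removed, quasi-identifiers kept.
anonymised = [
    {"zip": "10115", "birth_year": 1983, "sex": "f", "diagnosis": "asthma"},
    {"zip": "10117", "birth_year": 1990, "sex": "m", "diagnosis": "diabetes"},
]

# Publicly available auxiliary data (e.g. a customer or voter register).
auxiliary = [
    {"name": "Jane Doe", "zip": "10115", "birth_year": 1983, "sex": "f"},
    {"name": "John Roe", "zip": "10117", "birth_year": 1990, "sex": "m"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, aux_rows):
    """Link records on quasi-identifiers; a unique match re-identifies a person."""
    results = []
    for a in anon_rows:
        matches = [x for x in aux_rows
                   if all(x[k] == a[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:
            results.append((matches[0]["name"], a["diagnosis"]))
    return results

print(reidentify(anonymised, auxiliary))
# [('Jane Doe', 'asthma'), ('John Roe', 'diabetes')]
```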
In addition, at least prima facie it seems that existing property/protective rights and
data protection law cannot cover syntactic information since the term ‘personal data’
is a semantic one. Of course, there is an existing mosaic-like protection that covers
data as such in an indirect way.23
Assuming for now that this limitation – implying a sharp distinction between the
two forms of data – is a tenable one, there is, however, a tension that must be
resolved.24 On the one hand, it seems necessary to facilitate access to and incentivise
the sharing of – non-personal25 – data26 in order to achieve (more) innovation and
avoid lock-in effects.27 On the other hand, data generators (or possibly diverging data
holders) have a legitimate interest in protecting their investments and assets (trade
secrets or other confidential data).

21
Cf. Steinrötter, ‘Feuertaufe für die EU-Datenschutz-Grundverordnung – und das Kartellrecht
steht Pate’ (2018) 2 Zeitschrift für Europäisches Wirtschafts- und Steuerrecht (EWS) 61 (III.4.).
22
Cf. EAID, Statement of 23/11/2017, p. 2.
23
N 14.
24
Spindler (n 10).
25
Where data are personal, the protection of the data subject regarding privacy aspects (with the
consequence of the application of the GDPR) needs to be added. Certainly, personal data
could be anonymised and would then be considered as non-personal data.
26
Nowadays such relevant data are regularly machine generated. Of course, the manual collec-
tion/curation of data takes more effort and would possibly be just as worthy of protection as
machine-generated data, if at all.
27
Lock-in effects can be described as conditions in which a strong market participant (here:
having a monopoly-like position because of the factual access to data plus having the technical
means to protect the data from access by third parties, which leads to factual exclusivity) is
capable of making it at least very difficult for its contractual partners to switch to another
supplier/provider.

Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:36:27, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.011
274 Björn Steinrötter

Unsurprisingly, given the economic and social importance of the issue, the matter
has attracted the regulatory interest of the European Union. Indeed, a national
approach would not be convincing, as data transactions typically do not stop at
national borders. For every transborder data flow, conflict-of-law rules would deter-
mine which country’s national law regime applied.28 This would further increase
complexity.29 An uncoordinated approach risks the creation of a fragmented system
that is the opposite of what is needed in the internal market.30 Therefore, a
European approach seems indicated.

10.3.2 The Free Flow of Data Initiative of the European Commission


As a part31 of its Digital Single Market Strategy from 201532 the Commission
published a paper at the beginning of 2017 titled ‘Building A European Data
Economy’,33 accompanied by a working document.34 These two documents consti-
tute the Free Flow of Data Initiative.
The Commission states that there is a lack of a comprehensive policy framework
concerning raw machine-generated data that do not qualify as personal data.35 This

28
Steinrötter (n 14) 731, 735.
29
Of course, if a European Directive came into play, there would still be a need to find the
applicable national law. However, from a practical point of view, this issue is softened. If an EU
Regulation came into force, that problem would be resolved to a large extent. It is questionable
whether the EU would have the competencies to introduce a data ownership or similar legal
concepts (cf. Art 345 TFEU).
30
COM(2017) 9 final 11.
31
See also the Industry Package, consisting of Communication from the Commission to the
European Parliament, the Council, the European Economic and Social Committee and the
Committee of the Regions, ‘European Cloud Initiative – Building a competitive data and
knowledge economy in Europe’, COM(2016) 178 final; Communication from the Commis-
sion to the European Parliament, the Council, the European Economic and Social Commit-
tee and the Committee of the Regions,’ ICT Standardisation Priorities for the Digital Single
Market,’ COM(2016) 176 final; Communication from the Commission to the European
Parliament, the Council, the European Economic and Social Committee and the Committee
of the Regions, ‘Digitising European Industry – Reaping the full benefits of a Digital Single
Market’, COM(2016) 180 final; Commission Staff Working Document, ‘Advancing the Inter-
net of Things in Europe’, SWD(2016) 110 final.
32
Communication from the Commission to the European Parliament, the Council, the Euro-
pean Economic and Social Committee and the Committee of the Regions, ‘A Digital Single
Market Strategy for Europe’, COM(2015) 192 final; see already about one year earlier: Com-
munication from the Commission to the European Parliament, the Council, the European
Economic and Social Committee and the Committee of the Regions, ‘Towards a thriving data-
driven economy’, COM(2014) 442 final.
33
Communication from the Commission to the European Parliament, the Council, the Euro-
pean Economic and Social Committee and the Committee of the Regions, ‘Building a
European Data Economy’, COM(2017) 9 final; Commission Staff Working Document on
the free flow of data and emerging issues of the European data economy, SWD(2017) 2 final.
34
SWD(2017) 2 final.
35
COM(2017) 9 final 10.


is of concern in a context where, as increasingly happens, enterprises (users) that
wish to collect, store and utilise relevant data rely on software from external service
providers or manufacturers to do so.
In such cases, the service providers and manufacturers are often the de facto
‘holders’ of the data generated by their processes or machines, whereas the users
have no direct access, even though they may be the owner of the machine.36 The
use of specific technical methods (e.g., encryption or proprietary formats) may
further strengthen the market positions of the service providers/manufacturers.
Technical protection leads to factual exclusivity.37
When it comes to economic innovation and tradability, contractual solutions are
generally perceived as the optimal approach.38 However, this could be problematic if
the negotiating power of the parties is unequal,39 when unfair standard contract terms
could easily be imposed.40 In the market for data-gathering and data-management
software systems, this appears to be a significant risk indeed. This is not only due to the
relative size of the entities (the leading service providers are often very large enter-
prises), but above all because of the disparity in the parties’ technical knowledge about
how the relevant systems, which are generally highly complex, operate.
Therefore, custom-fit regulatory measures appear (at first glance) appropriate to
guarantee innovation-friendly and fair results and to support access by new market
participants and avoid lock-in constellations.41 Objectives of a possible future EU
framework are, according to the Commission, improved access to (anonymous)
machine-generated data, data-sharing incentives, the protection of investments and
(confidential) data as assets, and the prevention of lock-in effects.42
Against this background, the Commission has set two priorities. First, abolition of
unjustified data location restrictions that risk fragmenting the market, and reducing
both the quality of service for users and the competitiveness of data service pro-
viders.43 Second, the availability and use of data, the fostering of new business
models and the creation of data analytics should be improved.44 An important
element here is access to data.

10.3.2.1 Data Location Restrictions


Some Member State legislators have laid down local storing and/or processing
requirements (legal rules or administrative guidelines) for financial service
36
Ibid.
37
Zech (n 17) 51, 53.
38
COM(2017) 9 final 10; cf. also the indirect protection of other legal fields at footnote 11.
39
The weaker party is not necessarily a consumer.
40
See Section 10.3.3.
41
COM(2017) 9 final 10.
42
SWD(2017) 2 final 30.
43
COM(2017) 9 final 3.
44
Ibid 4.


providers, or have implemented professional secrecy regulations.45 The global trend
also appears to be towards more data localisation as national solutions are often safer,
in terms of protecting data from attack/misuse, than cross-border ones.46
The Commission, in contrast, emphasises the ‘principle of free movement of data
within the EU’ as an important guideline resulting from the free movement of
services (Art 56 TFEU), the free establishment rules (Art 49 TFEU) and the
respective secondary legislation.47 In other words, free movement of data exists as
a result of EU primary law.48
From an economic point of view data location restrictions prevent efficient
processing and storage in states with low energy or wage costs and hence cost-
effective cloud servers.49 It is probably difficult to justify data location restrictions, at
least where non-personal data are concerned.

10.3.2.2 Availability and Use of Data


As previously stated, (machine-generated) data are increasingly needed as the key
component for the functioning of algorithms, which in turn are the basis of
numerous innovative products or services in fields such as health care, transport,
production, energy markets and smart living. Hence, from an economic point of
view market participants ideally need extensive access to data.50 Currently, (factual)
data holders such as the generators of the data usually keep and analyse it them-
selves.51 In some instances, as noted earlier, it may be their software service providers
who obtain de facto control. In either case, third-party access for re-use is the
exception rather than the rule. The much-cited calls for data trading and data
market places are at present mainly just theory. One of the reasons may be that
companies fear losing their competitive advantages if they grant access to their data.

45
COM(2017) 9 final 5 et seq.
46
Ibid 6.
47
Ibid 7.
48
Interestingly, as discussed further below, this is explicitly provided for by law regarding – (of all
things) – personal data Art 16(1) TFEU, Art 1(3) GDPR. The GDPR contains several opening
clauses, of course, which could be used by Member States to implement data location
restrictions.
49
Centrum für europäische Politik (cep), cep Policy Brief No 33 (2017) 3.
50
Regarding the re-use of data held by the public sector see Directive 2003/98/EC of the
European Parliament and of the Council of 17 November 2003 on the re-use of public sector
information, OJ 2003 L 345, p. 90, revised by Directive 2013/37/EU of the European Parliament
and of the Council of 26 June 2013 amending Directive 2003/98/EC on the re-use of public
sector information, OJ 2013 L 175, p. 1.
51
COM(2017) 9 final 8 et seq.: ‘[. . .] access and transfer in relation to the raw data [. . .] are
therefore central to the emergence of a data economy [. . .]’; see also the several European Data
Market Study Reports [http://datalandscape.eu/study-reports]; cf. the ‘Report of the high-level
conference Building the European Data Economy’ [https://ec.europa.eu/digital-single-market/
en/news/report-high-level-conference-building-european-data-economy].


Another reason is that it seems difficult to quantify the data’s monetary value.52
Companies may also be wary of the severe administrative fines pursuant to Art
83 GDPR in case of infringements of rules on personal data protection. As discussed
further below, this reflects the challenge in confidently demarcating non-personal
(anonymous) data – the focus of the EU’s data commerce initiatives – from
identifiable personal data. This is true also for much machine-generated data,53
where the differentiation of whether data is personal or not has become more and
more difficult in practice.54
Leaving the last point aside for now, in its proposals the Commission identifies a
number of possible options for addressing the issue of data access. These are
presented and (in parts) assessed below.

lower intervention levels55 The lowest intervention level would consist of
the Commission considering issuing guidance on how data control rights should be
addressed in contracts between data management system providers and users, taking
into account existing (EU) legislation.
It is also worth considering whether the persistent identification of data sources
could sustainably increase trust in a data system. This could be achieved by defining
reliable and possibly standardised protocols for such an identification.

access rights The improvement of access to data is one way of maximising the
value of data in society.56 Possible access rights address the (factual) data holder
(e.g., manufacturers or service providers).
Whatever the case may be, it is important that access rights are designed with a
sense of proportion,57 as the incentive for data generation could be reduced if
generated data became more or less freely available.

existing access rights as a potential role model? First of all, in cases of
‘general interest’, public authorities could be granted data access, for example, real-
time data could be obtained from cars to improve traffic management.58 Conversely,
authorities may have to allow access to their data in ‘general interest’ or ‘economic
necessity’ cases, that is, beyond existing freedom of information requests.59 Access
rights for scientific purposes could arguably be granted only to public institutions
such as universities and not to commercial research organisations – otherwise almost
52
Cf. COM(2017) 9 final 10.
53
Specht (n 10).
54
GRUR 4.
55
COM(2017) 9 final 12 et seq.
56
SWD(2017) 2 final 47.
57
GRUR 3.
58
COM(2017) 9 final 12.
59
In Germany in particular: Informationsfreiheitsgesetz (IFG), Umweltinformationsgesetz (UIG).


all data analysis would be called ‘research’.60 Here, in contrast with existing open
data initiatives within the EU, the institutions or persons involved decide on their
own to make data available; they are normally not legally bound to do so.
Apart from these higher-level aims, it would be worth considering data access in
return for remuneration61 (full or partial; perhaps after anonymisation).62 The
development of FRAND63 terms such as are found in competition policies on
standard-essential patents64 seems conceivable.65 When it comes to standards
resulting from technology under patent, the patent holder is often required to
licence the use of relevant information,66 and this could serve as a model to a
certain extent, notwithstanding the fact that it is difficult to implement licensing
terms which meet the requirements of being fair, reasonable and non-discrimin-
atory.67 In certain cases it might also be possible to draw upon the ‘essential facility
doctrine’68 from competition law (giving companies access to other companies’
infrastructural facilities where they are essential for participation in a downstream
market).69 Whether access should be free of charge or chargeable could also depend
on the respective sector or the data-producing costs of the parties involved. At the
same time, competition law approaches are certainly incapable of addressing all
cases70 where data are withheld at the expense of the public interest.71 Accordingly,

60
Zech (n 15) 317, 326.
61
COM(2017) 9 final 13.
62
A ‘potential benefit’ is seen insofar by GRUR 3.
63
Fair, reasonable and non-discriminatory.
64
European Court of Justice (ECJ), 16.7.2015, case 170/13 (Huawei/ZTE), ECLI:EU:C:2015:477;
cf. most recently High Court of Justice, Chancery Division, Patents Court, [2017] EWHC 3083
(Pat), Case No: HP-2014-000005; from legal scholarly literature see Colangelo and Torti,
Filling Huawei’s gaps: The recent German case law on Standard Essential Patents: (2017)
European Competition Law Review (ECLR) 538; Cross and Strath, Computer and Telecommu-
nications Law Review (CTLR) 2017, 112; Henningsen, ‘Injunctions for Standard Essential
Patents under FRAND Commitment: A Balanced, Royalty-Oriented Approach’ (2016) Inter-
national Review of Intellectual Property and Competition Law (IIC) 438.
65
COM(2017) 9 final 13; SWD(2017) 2 final 37. The payment of a reasonable and proportionate
fee is due in the motor vehicle sector, too (Art 7(1) Regulation No 715/2007).
66
Standard essential patents (SEP); SWD(2017) 2 final 38.
67
Cf. Mariniello, ‘Fair, Reasonable and Non-discriminitory (FRAND) Terms: A Challenge for
Competition Authorities’ (2011) 7 Journal of Competition Law & Economics 523.
68
See the obligations to licence the use of commercially-held information provided by the Cases:
ECJ, 6.4.1995, joined cases 241/91 and 242/91 (RTE and ITP/Commission), ECLI:EU:
C:1995:98; ECJ, 12.2.2004, case 218/01 (Henkel KGaA), ECLI:EU:C:2004:88; European Gen-
eral Court, 17.9.2007, case 201/04 (Microsoft Corp/Commission), ECLI:EU:T:2007:289; ECJ,
Huawei/ZTE (n 64).
69
Cf. Section 19(2) of German Act against Restraints of Competition (GWB).
70
It is not even clear if data could be an essential facility; see the detailed review of whether EU
competition law applies in principle to the data economy, Drexl, ‘Designing Competitive
Markets for Industrial Data – Between Propertisation and Access’ (2017) 8 Journal of Intellectual
Property, Information Technology and Electronic Commerce Law (JIPITEC) 257.
71
GRUR 3; Max Planck Institute for Innovation and Competition, Position Statement of 26 April
2017 on the European Commission’s ‘Public consultation on Building the European Data


while further access rights seem worth considering, competition law could to a
certain extent definitely serve as a template.72
More generally, an obligation to grant access to data in certain specific contexts is
certainly well known in European law (for example, Art 6–9 Regulation 715/2007/
EC,73 Art 35–36 Directive 2015/2366/EU,74 Art 27, 30 Regulation 2006/1907/EC,75
Art 30, 32 Directive 2009/72/EC,76 Recital 11 Directive 2010/40/EU77 and Art 9 Regu-
lation 2019/1150/EU).78 These instruments address the importance of data sharing in
the public interest in the widest sense in respect of such matters as access to vehicle
repair and maintenance information, access to payment systems, maintaining public
safety in relation to dangerous chemicals, etc.

Economy (hereafter cited as “MPI”)’ 12; Spindler (n 10) 399, 404; Zech (n 15) 317, 328; see also
Podszun, ‘Competition and Data’ (2017) 9 ZGE/IPJ 406.
72
GRUR 3.
73
Regulation (EC) No 715/2007 of the European Parliament and of the Council of 20 June
2007 on type approval of motor vehicles with respect to emissions from light passenger and
commercial vehicles (Euro 5 and Euro 6) and on access to vehicle repair and maintenance
information, OJ 2007 L 171, p. 1; amended by Regulation (EU) No 459/2012 of 29 May
2012 amending Regulation (EC) No 715/2007 of the European Parliament and of the Council
and Commission Regulation (EC) No 692/2008 as regards emissions from light passenger and
commercial vehicles (Euro 6), OJ 2012 L 142, p. 16; cf. also Art 12(2) Regulation (EU) 2015/758
of the European Parliament and of the Council of 29 April 2015 concerning type-approval
requirements for the deployment of the eCall in-vehicle system based on the 112 service and
amending Directive 2007/46/EC, OJ 2015 L 123, p. 77.
74
Directive (EU) 2015/2366 of the European Parliament and of the Council of 25 November
2015 on payment services in the internal market, amending Directives 2002/65/EC, 2009/110/
EC and 2013/36/EU and Regulation (EU) No 1093/2010, and repealing Directive 2007/64/EC,
OJ 2015 L 337, p. 35.
75
Regulation (EC) No 1907/2006 of the European Parliament and of the Council of 18 December
2006 concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals
(REACH), establishing a European Chemicals Agency, amending Directive 1999/45/EC and
repealing Council Regulation (EEC) No 793/93 and Commission Regulation (EC) No 1488/
94 as well as Council Directive 76/769/EEC and Commission Directives 91/155/EEC, 93/67/
EEC, 93/105/EC and 2000/21/EC, OJ 2007 L 136, p. 3; cf. also Commission Implementing
Regulation (EU) 2016/9 of 5 January 2016 on joint submission of data and data-sharing in
accordance with Regulation (EC) No 1907/2006 of the European Parliament and of the
Council concerning the Registration, Evaluation, Authorisation and Restriction of Chemicals
(REACH), OJ 2016 L 3, p. 41.
76
Directive 2009/72/EC of the European Parliament and of the Council of 13 July 2009 concern-
ing common rules for the internal market in electricity and repealing Directive 2003/54/EC, OJ
2009 L 211, 55; cf. Proposal for a Directive of the European Parliament and of the Council on
common rules for the internal market in electricity (recast), COM(2016) 864 final/2; cf. also
Regulation (EC) No 1099/2008 of the European Parliament and of the Council of 22 October
2008 on energy statistics, OJ 2008 L 304, p. 1.
77
Directive 2010/40/EU of the European Parliament and of the Council of 7 July 2010 on the
framework for the deployment of Intelligent Transport Systems in the field of road transport
and for interfaces with other modes of transport, OJ 2010 L 207, p. 1.
78
Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on
promoting fairness and transparency for business users of online intermediation services, OJ
2019 L 186, 57.


As the data market is heterogeneous, an overarching data access regime would not
meet the needs of different sectors (connected cars, mechanical engineering, smart
grids, smart homes, medical and health care sectors, agriculture etc.). If
access provisions are to be granted at all, it therefore seems preferable to create
different ones.79

data portability Portability might be one possible solution to data access rights.80 Data portability means the ability of users to transfer certain data from
one data platform or management system to another without any problems, mean-
ing – in particular – subject to no or only low switching costs.81 By imposing an
obligation on the platform/system provider to facilitate such transfers (where desired
by the user) the aim is to attack lock-in effects and thereby lower entry barriers to the
data-(driven) economy.82 Indeed, data portability, closely linked with questions of
interoperability83 of data, seems to be the current approach of the European
legislator. Thus, as will be outlined below, under current rules and regarding
personal data Art 20 GDPR provides a ‘right to data portability’.84 Further, concern-
ing non-personal data there does exist a new regulation (see below) that should –
pursuant to its Art 685 – guarantee at least a minimum level of data portability.
However, data portability rights are not – or at any rate not principally – designed
to create a data market by, for example, improving tradability and giving access to
third parties. The focus is rather on promoting accessibility and usability of data for
the user’s own purposes.86 Portability, nevertheless, supports data movements de
facto. And a certain pro-competitive character of portability solutions cannot be
denied.87
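To make the idea of transferring data 'without any problems' more tangible, the following minimal sketch (in Python, with invented field names and an invented file layout – neither Art 20 GDPR nor Regulation (EU) 2018/1807 prescribes any particular format) shows what an export in a structured, commonly used and machine-readable format might look like, so that a receiving platform can import it with little or no switching cost.

import json

# Minimal sketch of data portability: a provider exports a user's records in a
# structured, commonly used and machine-readable format (here JSON), and a
# competing platform imports them. Field names and format are illustrative
# assumptions, not prescribed by any of the instruments discussed in the text.
user_records = [
    {"timestamp": "2019-03-01T10:15:00Z", "sensor": "thermostat", "value": 21.5},
    {"timestamp": "2019-03-01T10:30:00Z", "sensor": "thermostat", "value": 21.7},
]

def export_portable(records, path):
    """Write the user's records to an open, machine-readable export file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump({"format_version": "1.0", "records": records}, f, indent=2)

def import_portable(path):
    """Read an export file produced by another provider."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)["records"]

export_portable(user_records, "export.json")
assert import_portable("export.json") == user_records  # round-trip without loss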

regulation on a framework for the free flow of non-personal data in the eu Building on the policy initiatives discussed above, in September
2017 the Commission published a proposal for a Regulation on a framework for
the free flow of non-personal data in the EU,88 resulting in the Regulation (EU)
79
Cf. COM(2017) 9 final 13; affirmative Becker (n 8) 253, 257; GRUR 3.
80
MPI 11.
81
SWD(2017) 2 final 47.
82
Cf. COM(2017) 9 final 15.
83
Cf. Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007
establishing an Infrastructure for Spatial Information in the European Community (INSPIRE),
OJ 2007 L 108, p. 1, applying to certain data of the public sector.
84
See Section 10.4.2.
85
See Section 10.3.2.2.
86
Zech (n 15) 317, 320 et seq.
87
MPI 11.
88
COM(2017) 495 final (hereinafter referred to as ‘the proposal’); Commission Staff Working
Document, ‘Impact Assessment’, SWD(2017) 304 final, Part 1/2; Commission Staff Working
Document, ‘Impact Assessment’, Annexes to the Impact Assessment, SWD(2017) 304 final, Part
2/2; Commission Staff Working Document, ‘Executive Summary of the Impact Assessment’,


2018/1807,89 legally grounded in the competence provision of Art 114 TFEU. The
concrete objectives of this act, which in general terms aims for a more competitive
and integrated internal market for data storage and other processing services and
activities, are:
• Improving the mobility of non-personal data across borders in the single market, which is limited today in many Member States by localisation restrictions or legal uncertainty in the market;
• Ensuring that the powers of competent authorities to request and receive access to data for regulatory control purposes, such as for inspection and audit, remain unaffected; and
• Making it easier for professional users of data storage or other processing services to switch service providers and to port data, while not creating an excessive burden on service providers or distorting the market.90
More specifically, pursuant to Art 1, issues to be addressed include ‘data localisation
requirements, the availability of data to competent authorities and the porting of
data for professional users’. The scope is restricted to the processing of electronic
data other than personal data pursuant to Art 4(1) GDPR91 with a specific territorial
link to the EU (Art 2). This should avoid overlap with the GDPR. In case of conflict,
the GDPR prevails (Art 2 para 2 Regulation [EU] 2018/1807). Cloud computing, big-
data applications, artificial intelligence and the internet of things are the most
relevant applications.92

data localisation restrictions From a quantitative point of view the reduction of data localisation restrictions arguably predominates (Art 4 and many recitals).
In this regard, a number of criticisms may be made. To begin with, it is not easy to
understand why data localisation restrictions should be limited to non-personal data
(already due to the limited scope of the regulation), since in the light of the freedom
to provide services such restrictions seem even more problematic. This is even more
true since localisation obligations are based on the assumption that the data in
question are particularly sensitive ([normally] meaning personal) ones.93 In add-
ition, data localisation restrictions do not always differ between personal and non-
personal data.94 Whereas the draft did not suggest how these ‘mixed’ restriction rules
are to be dealt with, Art 2 para 2 Regulation (EU) 2018/1807 sets out that in ‘the case of

SWD(2017) 305 final; DG CNECT, Opinion, Ref. Ares(2017)4184873-25/08/2017; EU COM


Fact Sheet, MEMO/17/3191; Council of the European Union, Interinstitutional File 2017/0228
(COD), 15724/1/17 REV 1.
89
OJ L 303, p. 59.
90
COM(2017) 495 final 2.
91
Art 3 No 1.
92
Europäische Akademie für Informationsfreiheit und Datenschutz (EAID), Statement 23/11/
2017, p. 1.
93
Deutscher Anwaltsverein (DAV), Statement 4/2018, January 2018, p. 5.
94
Statement Bundessteuerberaterkammer, 24/11/2017, p. 2.


a data set composed of both personal and non-personal data, this Regulation applies
to the non-personal data part of the data set. Where personal and non-personal data in
a data set are inextricably linked, this Regulation shall not prejudice the application of
[GDPR].’ It is unclear if a substantial area of application remains for non-personal
data.95
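How quickly a 'mixed' data set resists a clean split can be illustrated with a short sketch (the connected-car fields below are invented for illustration): readings that look non-personal in isolation remain linkable to an identifiable driver, which is the 'inextricably linked' scenario Art 2 para 2 Regulation (EU) 2018/1807 addresses.

# Hypothetical connected-car record; the field names are invented for illustration.
record = {
    "vin": "WVWZZZ1JZXW000001",    # vehicle identification number, points to an identifiable keeper
    "gps_trace": [(52.52, 13.40)],  # location data relating to the driver
    "engine_temp_c": 92.4,          # physical measurement
    "battery_voltage": 12.6,        # physical measurement
}

# Assumption for the sketch: these fields identify a natural person.
PERSONAL_FIELDS = {"vin", "gps_trace"}

def split_dataset(rec):
    """Naively split a mixed record into a 'personal' and a 'non-personal' part."""
    personal = {k: v for k, v in rec.items() if k in PERSONAL_FIELDS}
    non_personal = {k: v for k, v in rec.items() if k not in PERSONAL_FIELDS}
    return personal, non_personal

personal_part, non_personal_part = split_dataset(record)
# Even the 'non-personal' part can usually be re-linked to the driver through
# join keys, timestamps or the record's provenance, so the split is formal only.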

data porting However, what is even more interesting here is the provision
dealing with ‘porting of data’. As discussed earlier, to port data freely – to transfer
data smoothly between systems/platforms offered by different providers – is to be
‘a key facilitator of [informed] user choice [enabling] easy comparisons of the
individual components of various data storage or other processing services and
effective competition in markets for data storage or other processing services’.96
Art 6 reads as follows:
1. The Commission shall encourage and facilitate the development of self-
regulatory codes of conduct at Union level (‘codes of conduct’), in order to
contribute to a competitive data economy, based on the principles of transpar-
ency and interoperability and taking due account of open standards, covering,
inter alia, the following aspects:
(a) best practices for facilitating the switching of service providers and the
porting of data in a structured, commonly used and machine-readable
format including open standard formats where required or requested by
the service provider receiving the data;
(b) minimum information requirements to ensure that professional users are
provided, before a contract for data processing is concluded, with suffi-
ciently detailed, clear and transparent information regarding the pro-
cesses, technical requirements, timeframes and charges that apply in
case a professional user wants to switch to another service provider or port
data back to its own IT systems;
(c) approaches to certification schemes that facilitate the comparison of data
processing products and services for professional users, taking into account
established national or international norms, to facilitate the comparability
of those products and services. Such approaches may include, inter alia,
quality management, information security management, business con-
tinuity management and environmental management;
(d) communication roadmaps taking a multi-disciplinary approach to raise
awareness of the codes of conduct among relevant stakeholders.
2. The Commission shall ensure that the codes of conduct are developed in
close cooperation with all relevant stakeholders, including associations of
SMEs and start-ups, users and cloud service providers.

95
Sceptical with a view to the draft’s ‘low impact’ on the one hand and ‘substantial additional
costs’ on the other hand: German ‘Bundesrat’, BR-Drucks. 678/1/17, p. 2.
96
Recital 20 first sentence, Recital 21 first sentence of the proposal.


3. The Commission shall encourage service providers to complete the development of the codes of conduct by 29 November 2019 and to effectively
implement them by 29 May 2020.

The aim of this provision is to eliminate private restrictions, such as ‘legal, contract-
ual and technical issues hindering or preventing users of data processing services
from porting their data from one service provider to another or back to their own
information technology (IT) systems, not least upon termination of their contract
with a service provider’ (recital 5).
The Commission has chosen a self-regulation approach as, in its judgement, this
would not disturb the innovation process and rather bears in mind ‘the experience
and expertise of the providers and professional users of data storage or other
processing services’.97
Nonetheless, there are arguably problems with this aspect of the regulation, too.
First of all, Art 6 only encourages the switching of providers and porting of data,
without any obligation to do so and without precise specifications. The provision
only implies soft law (codes of conduct). However, this is precisely what might be
the best option at the moment, as it is not quite clear whether the EU legislator has a
complete overview of the data-(driven) market (who can say that they have the
overview in toto?). It is quite a tempting idea to start with soft law, to continue with
the analysis of the market and to reserve the creation of ‘hard law’ for later on.98
Then again, it is suboptimal99 to have two different portability provisions100 – Art
6 Regulation (EU) 2018/1807 and Art 20 GDPR.101
The restriction of the personal scope – only professional users are captured by Art 6 – further limits the act's practical relevance. More generally, this aspect of the
regulation is directed at cases where data users make use of third-party provider
systems to manage their data – here, as noted, it may encourage a certain freeing up
of the data (by strengthening the user’s position against the provider); by contrast, it
leaves untouched other cases where organisations – including many larger ones –
entrust their data to their own competent IT specialists. Here, other mechanisms
appear necessary to encourage data sharing by such holders, namely by addressing
commercial factors that currently tell against the granting of third-party access.

97
Recital 21 second sentence of the proposal.
98
See Art 6(3) [and Recital 21 third sentence] of the proposal: ‘The Commission shall review the
development and effective implementation of such codes of conduct and the effective provi-
sion of information by providers no later than two years after the start of application of this
Regulation.’ See also Art: 9 (1): ‘No later than [5 years after the date mentioned in Article 10(2)],
the Commission shall carry out a review of this Regulation and present a report on the main
findings to the European Parliament, the Council and the European Economic and Social
Committee.’
99
EAID, Statement of 23/11/2017, p. 3.
100
Therefore, DAV Statement 4/2018, January 2018, p. 8 proposes to wait how Art 20 GDPR will
work in practice and then address the respective issue concerning non-personal data.
101
Section 10.4.2.


A potential step in this direction may be to incentivise sharing by granting property-style rights in data, which the holders may then trade.

data producer’s right – or another kind of exclusive right in data In recent years, there has been a debate within the legal scholarly
community (including the German one) as to whether non-personal (in particular
machine-generated and/or industrial) data and/or information at the syntactic
level102 (data as such) could be the subject of an exclusive right, a kind of a ‘data
ownership’ or ‘data property’ (Dateneigentum) or a kind of right in rem.103 The
Commission took up this debate in its 2017 policy initiative documents, in which it
considered arguments for and against creating such a right.104

background to the discussion A fundamental aspect of this approach is that data nowadays have a value and that it seems prima facie reasonable and fair to
support the utilisation of data for individuals and at the same time to help to unlock
in-house data for other market participants and the general public.105 An exclusive
data right could provide incentives. The proponents of this approach stress the
clarification of the legal situation regarding the handling of (non-personal) data,106
since an exclusive data right would be mandatory law and would potentially have an
inter omnes effect. Otherwise, if the uses of the data were left to parties themselves to
regulate by contract, it could be ineffective since the stronger party could simply opt
out of restrictive provisions.107
The aim of such a right, resulting in a comprehensive allocation of data, could be,
inter alia, the improvement of tradability of those data as an economic good,
102
Sceptical in this respect, Wiebe (n 10) 394, 396; see also SWD(2017) 2 final 36: ‘. . . the claim is made that in many scenarios the value intrinsic to the data is minimal and critically depends on the capacity to make sense of the data (the algorithm). The more the competitive advantage results from that capacity, the less important it is to control (and restrict) access to data.’
103
Just see Becker, ‘Rechte an Industrial Data und die DSM-Strategie‘, GRUR Newsletter 01/2016,
7; Kerber, ‘A New (Intellectual) Property Right for Non-personal Data? An Economic Analysis’
(2016) Gewerblicher Rechtsschutz und Urheberrecht international (GRUR Int) 989; Kerber,
‘Governance of data: Exclusive property vs. Access’ (2016) IIC 759; Specht, ‘Auss-
chließlichkeitsrechte an Daten – Notwendigkeit, Schutzumfang, Alternativen’ (2016) Com-
puter und Recht (CR) 288; Wiebe, ‘Protection of industrial data – a new property right for the
digital economy?’ (2016) GRUR Int 877; Zech (n 17) 51, 74 et seq.; Zech (n 15) 317; Zech,
‘“Industrie 4.0” – Rechtsrahmen für eine Datenwirtschaft im digitalen Binnenmarkt’ (2015)
GRUR 1151; Zech, ‘Daten als Wirtschaftsgut - Überlegungen zu einem “Recht des Datener-
zeugers”’ (2015) CR 137; Zech, ‘Information as Property’ (2015) 6 JIPITEC 192; Hürlimann and
Zech, ‘Rechte an Daten’ (2016) sui generis 89; cf. also Gärtner and Brimsted, ‘Let’s Talk about
Data Ownership’ (2017) European Intellectual Property Review (EIPR) 461.
104
COM(2017) 9 final 13; see the assessment regarding the Commission’s text by Wiebe (2017) 9
ZGE/IPJ 394, 395 et seq.; Zech (n 15) 317, 318 et seq.
105
This direction is (at first) followed by COM(2017) 9 final 13.
106
Cf. SWD(2017) 2 final 34.
107
Zech (n 15) 317, 327.


promoting transfer to those market participants who would most benefit from using
them.108 It would imply (alongside a set of defensive rights)109 the exclusive right to
utilise data and to license their usage.110 This would potentially cover the whole
data-related value chain, including the copying, curation and analysis of data.111

problems and uncertainties Granting property rights in data would treat them as an object of legal protection akin to other intangible assets protected by
IP rights. However, in contrast to the latter, it is conspicuous that there is no
correlation between production of data and a specific performance such as a
personal intellectual creation, new and commercially usable technical inventions,
etc.112 This in itself is not a reason to refuse the assignment of an exclusive data right.
Incentive effects (with respect to quantity and quality of data production) and/or
economic considerations could also conceivably justify such a right113 – at least in
the case of market failure otherwise.114
However, other difficulties remain. An overarching data right would be trouble-
some since data production and stakeholder interests differ considerably across
sectors.115 Sector-specific116 rights would need to be sharply delineated, which would not
be easy either.117 The application of the law could then be overlaid by a significant
‘preliminary’ legal examination of the relevant sector.
The assignment of an exclusive right in data could also lead to further substantial
problems. The first question is: What is a valid use case demonstrating the need for

108
SWD(2017) 2 final 33, 36; cf. Zech (n 17) 51, 77 et seq.
109
It would also be conceivable to create a data producer’s right as a set of defensive rights in
favour of the factual data holder, assuming the factual assignment is balanced and fair; in this
direction arguably Kerber (2016) GRUR Int 989, 998: ‘Our negative result in regard to
protecting data through an exclusive property right does not imply that the data of data holders
should not be protected against a wide array of behaviour that endangers and impedes the
holding, use, and trade of these data. [. . .] In that respect, we could also talk about ‘rights on
data’ and ‘ownership’ of data, which however would not encompass an exclusive right on these
data (as physical property or traditional IPRs). Therefore the possession and use of data can be
protected without the necessity of introducing exclusive property.’
110
SWD(2017) 2 final 33.
111
Zech (n 15) 317, 318.
112
Becker (n 8) 253, 256 who emphasises the correspondence between this specific performance
and the exclusive allocation of rights in use. However, this aspect is rather coloured by
continental, Hegelian droit d'auteur type arguments for IP. It is less pronounced in Anglo-
American law, where the main rationale has always been to reward the effort – whether creative
in a higher sense or not – that went into producing a given work.
113
Spindler (n 10) 399, 401, who at the same time clarifies that licence contracts remain necessary
either way; Zech (n 17) 51, 77; with respect to personal data: Specht (n 10) 411, 412.
114
MPI 5, 8.
115
GRUR 2.
116
One might think of the areas of autonomous transportation, industrial systems, personal
systems, medical fields etc.
117
Nevertheless, in favour of such an approach GRUR 2.


such a right?118 In this regard, there has been little evidence of a market need for
such a data right hitherto.119 The fact that presently not enough data are freely shared (to assist big-data applications, etc.), as holders prefer to hoard them for their own use, does not in itself demonstrate a market need for a data property right. If the exclusivity were not only factual but also legal, this could further increase such hoarding tendencies.
Another (and arguably the most controversial)120 issue is: who would be the original
holder of the exclusive data right? The Commission defines the ‘data producer’ as
‘the owner or long-term user [. . .] of the device’.121 Neither the assignment to the
‘data producer’ nor the Commission’s definition of the term seem compelling.
Another possibility would be the ‘economic beneficiary’. However, the identity of
the latter is not always clear. For example, is it the developer (who bears the development costs), the producer (who bears the manufacturing costs) or the user of a device (who bears the maintenance costs)?122 Moreover, the scope of protection (possibly a limitation of
the allocated uses to commercial uses)123 and the limitations and exceptions (maybe
an obligation to share data to a certain extent in order to achieve welfare-enhancing
effects with, for example, scientists performing research)124 need to be carefully
outlined,125 a significant undertaking. Additionally, the right balance and relation
must be struck between the data exclusive right on the one hand and other rights,126
such as those resulting from data protection law, copyright law, patent law, database
law, the law of know-how protection (regarding trade secrets), competition law,
private127 and even criminal law on the other hand. It would seem convincing to
classify the data producer’s right (as considered here) as a supplementary right in
relation to (most of )128 the aforementioned legal fields. However, the question arises

118
Cf. ibid.
119
Becker (n 8) 253, 255 et seq.: ‘. . . companies control their data via technical means so
extensively that legal protection is not a pressing issue for them’; ‘for companies with adequate
IT-security, exclusive rights only become relevant for outgoing data’; ‘. . . especially . . . when
data is exchanged with business partners; or if public availability of data is necessary, as the case
with most internet services’.
120
Zech (n 15) 317, 324; cf. also Becker (n 8) 253, 255.
121
COM(2017) 9 final 13; sceptical Wiebe (n 10) 394, 395.
122
See also the scepticism expressed by GRUR 2 regarding the possibility of an adequate personal allocation.
123
This would be the approach of Zech (n 15) 317, 318 et seq.
124
SWD(2017) 2 final 35.
125
Easily comprehensible would be a limitation in favour of the data producer to fulfil legal
obligations such as monitoring products on the market (product safety and security). The same
applies to the free use for certain authorities regarding public welfare functions, for instance;
Zech (n 15) 317, 325 et seq.
126
See the overview given by SWD(2017) 2 final 19 et seq.
127
In particular, tort law. In Germany it is debated whether the integrity of data should be directly
protected by tort law (section 823(1) German Civil Code (BGB)). Renowned professors have
already spoken in favour of such an approach: Spindler in Beck’scher Online-Groß-Kommentar
(BeckOGK) zum Bürgerlichen Gesetzbuch, § 823 Rn 182 ff; Wagner in Münchener Kommentar
zum Bürgerlichen Gesetzbuch (MüKO BGB), Band VI, 7. Auflage, 2017, § 823 Rn 296.
128
Contract Law would be superseded, of course.


as to whether an exclusive data right is really needed, considering the indirect protection already provided by these fields.129
Furthermore, as an exclusive right would even de iure prevent others from
accessing and using the data in question,130 it seems doubtful that regulatory
interference in the freedom of competition, the freedom to copy and the freedom
of information could be justified.131 Finally, an exclusive data right risks preventing
future business and development opportunities (risk of dysregulation).
It is therefore arguably welcome that in the end, the Commission has taken such
a right off the agenda by not including it in its Regulation (EU) 2018/1807,132 opting
instead to concentrate on the possibility of access solutions133 existing alongside an
adjusted data contract law regime.

10.3.3 Non-personal Data Contract Law


A ‘contractual approach’, necessarily considering the parties’ unequal bargaining
power, appears more expedient than the ‘exclusive right solution’.134 Currently,
access to and use of data are first and foremost the subject of contracts.135 From
the perspective of contract law it is irrelevant whether exclusive rights exist,136 as the
contractual subject matter need not be an existing right. Contracts can rather
originally create rights in factual positions (regarding access, transfer, usage
etc.).137 This could be the starting point for any ‘soft’ regulation measures.138
129
Steinrötter (n 14) 731, 733; see also n 11.
130
Of course, the objective of such a right would be to achieve the opposite effect, namely the
disclosure of data because of the exclusive right. It appears very doubtful whether this would de
facto be the case; likewise Christians and Liepin (n 14) 331, 337: risk of ‘inappropriate monop-
olization of data by means of civil law’; MPI 6.
131
Cf. Berger (n 9) 340, 346.
132
Besides, the same holds true for the German legislator; see [www.justiz.nrw.de/JM/schwerpunkte/digitaler_neustart/index.php].
133
See above Section 10.3.2.2; already before favouring an access solution: GRUR 3.
134
Concordant Drexl (2017) 8 JIPITEC 257, 291; GRUR 2 et seq.; MPI 9, 12; also sceptical about
exclusive rights in data OECD, ‘Data-Driven Innovation – Big Data for Growth and Well-
Being’ (OECD Publishing 2015) 195 et seq.
135
COM(2017) 228 final, under 3.2; SWD(2017) 2 final 16; cf. Berger (n 9) 340: ‘data contract law
lies at the heart of commercial data law.’
136
Specht (n 10) 411, 414.
137
Zech (n 17) 51, 59.
138
At European level, some legal acts warrant closer consideration at the contract law level. This
applies, for example, to the Directive 2005/29/EC of the European Parliament and of the
Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the
internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC
and 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/
2004 of the European Parliament and of the Council (‘Unfair Commercial Practices Directive’),
OJ 2005 L 149, p. 22, that protects consumers against, inter alia, a trader’s omission to inform a
consumer that its data will be used for commercial purposes. This could be classified as a
misleading omission of material information, SWD(2017) 2 final 21. In addition, the Council
Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts, OJ 1993 L 95, p. 29


There are, however, also some problems with taking a contractual approach.
Contracts only have inter partes effect, which provides less legal certainty regarding
legal transactions and fails to address structural disparities between the potential
parties. If an exclusive use is desired in order to create or maintain data markets, and
‘property rights’ in data are rejected, secrecy – secured by technical means – remains
the only option.139 Drafting contracts that contain such factual exclusivity is rather
complex and therefore costly. Moreover, consumers often possess neither the equip-
ment nor the know-how to deal with these technical matters, nor do they have the
market power to safeguard their interests contractually, for example, regarding
connected cars.140 Rather, the previous de facto holder will ‘use standard contract
terms formulated in its own interest’.141
Furthermore, it is unclear what type of contract is relevant (e.g., whether it is a
contract of sale or for services, etc.) until a special data contract law regime is created
as a standard.142 This is perhaps more of an issue for continental codified systems of
law; practical problems can arise regarding the review of the terms and conditions.
The Directive on certain aspects concerning contracts for the supply of digital
content and digital services143 is intended to bridge this gap. However, the directive
only applies to B2C contracts and explicitly refuses to stipulate a type of contract.144
One factual problem seems to be that current contractual practice tends to limit
onward re-use of data.145 Parties are mostly not entitled to use the data for any
purpose other than fulfilling the relevant contract, such as for their own purposes or
transfer to third parties.146
To reduce the imbalance in parties’ bargaining power, while maintaining a
contractual freedom-based approach to data access, certain default rules could be
considered, perhaps coupled with unfairness controls regarding contractual data
clauses and/or a set of standard contract terms.147

provides a certain protection, whereas it deserves attention that some Member States apply its
provisions or its ‘spirit’ also to b2b-constellations, SWD(2017) 2 final 21. In connection with data as
such the Directive 2011/83/EU of the European Parliament and of the Council of 25 October 2011
on consumer rights, amending Council Directive 93/13/EEC and Directive 1999/44/EC of the
European Parliament and of the Council and repealing Council Directive 85/577/EEC and
Directive 97/7/EC of the European Parliament and of the Council, OJ 2011 L 304, p. 64 may
become relevant. The same will hold true regarding the final version of the Directive on certain
aspects concerning contracts for the supply of digital content, COM(2015) 634 final.
139
Zech (n 17) 51, 60: ‘factual exclusivity – that is secrecy – is difficult to trade’.
140
Cf. Zech (n 17) 51, 60.
141
MPI 7.
142
Specht (n 10) 411, 414.
143
OJL 136, p. 1.
144
Zech (n 17) 51, 61.
145
SWD(2017) 2 final 16 refers to Clark, ‘Legal Study on Ownership and Access to Data’ (2016) 79
(available at [https://tinyurl.com/y8w478m6]).
146
SWD(2017) 2 final 16.
147
See GRUR 4 calling for an introduction of an unfairness control in b2b constellations; MPI 7;
cf. also Zech (n 15) 317, 327; cf. also Spindler (n 10) 399, 402 et seq.


10.4 data protection law

10.4.1 Brief Description and Rationale


Although the – infelicitously chosen – expression ‘data protection law’ suggests that
certain data are protected (cf. also Art 1(2) GDPR), this field of law ultimately aims at
protecting the data subject, namely each person’s right to privacy and self-
determination. Personal data are those relating to an identified or identifiable
natural person (Art 4(1) GDPR). With the definition of data already discussed148 in
mind, ‘personal information’ would be the more appropriate term, since the seman-
tic level is addressed here.149
Pursuant to Art 1(3) GDPR the ‘free movement of personal data within the Union
shall be neither restricted nor prohibited for reasons connected with the protection
of natural persons with regard to the processing of personal data’.150 As this makes
clear, even with regard to personal data,151 there does exist a principle of free
movement of data within the EU. However, these assertions exist mainly on paper,
since the key feature of the GDPR – the principle ‘prohibition unless permission’,152
including its arguably main permission criterion, ‘privacy consent’153 – follows a
strong prima facie data protection model.

10.4.2 Personal Data Movement and Trading


As argued throughout this chapter, efficient data commercialisation calls for a far-
reaching possibility of data being moved or traded. In this regard, it is clear that
148
See Section 10.2.
149
Strictly speaking, the term ‘(Information) Privacy Law’ seems at first glance therefore more
appropriate. However, there is a distinction between privacy and data protection at the consti-
tutional level in Europe, making it appropriate to separate the two terms; see Kokott and Sobotta,
‘The Distinction between Privacy and Data Protection’ (2017) 3(4) International Data Privacy
Law 222. Nowadays it makes sense, moreover, to separate these terms, since privacy law is
associated with the US approach, whereas data protection law describes the European approach.
150
See also Art 1 para 2 of the proposal for a Regulation concerning the respect for private life and
the protection of personal data in electronic communications and repealing Directive 2002/58/
EC (Regulation on Privacy and Electronic Communications), COM(2017) 10: ‘This Regulation
ensures free movement of electronic communications data and electronic communications
services within the Union, which shall be neither restricted nor prohibited for reasons related
to the respect for the private life and communications of natural and legal persons and the
protection of natural persons with regard to the processing of personal data.’; this could be
interpreted as an indication for the development of a nascent data economic law, see Steinrötter,
§ 5 ‘ePrivacy’ in Specht and Mantz (eds), Handbuch Europäisches und deutsches Datenschutz-
recht (2019) 129, 131.
151
Communication from the Commission to the European Parliament, the Council, the Euro-
pean Economic and Social Committee and the Committee of the Regions on the Mid-Term
Review on the implementation of the Digital Single Market Strategy – A Connected Digital
Single Market for All, COM(2017) 228 final, under 3.2.
152
Art 6 GDPR.
153
Art 7 GDPR.


personal data are a potential commercial asset, namely a tradable good. Indeed, they
may often constitute the most valuable kind of asset (e.g., enabling businesses to
target their customers with advertising based on a detailed knowledge of their
individual interests and assets).154 However, the strict requirements of data protection law lead parts of the legal literature to conclude that at present this potential is not being exploited to the full.155 In the light of existing data protection law, the
value of personal data is actually lower than it would be otherwise, as its use exposes
holders to significant regulatory costs and/or risks. To this extent, data protection and
commercialisation of data may be seen as contradictory objectives. On the one
hand, Art 20 GDPR may assist the movement of data to a certain extent by
safeguarding data portability.156 One of the few real innovations within the new data protection act, Art 20 GDPR reads as follows:
1. The data subject shall have the right to receive the personal data concerning
him or her, which he or she has provided to a controller, in a structured,
commonly used and machine-readable format and have the right to transmit
those data to another controller without hindrance from the controller to
which the personal data have been provided, where:
(a) the processing is based on consent pursuant to point (a) of Article 6(1) or
point (a) of Article 9(2) or on a contract pursuant to point (b) of Article
6(1); and
(b) the processing is carried out by automated means.
2. In exercising his or her right to data portability pursuant to paragraph 1, the
data subject shall have the right to have the personal data transmitted directly
from one controller to another, where technically feasible.
3. The exercise of the right referred to in paragraph 1 of this Article shall be
without prejudice to Article 17. That right shall not apply to processing
necessary for the performance of a task carried out in the public interest or
in the exercise of official authority vested in the controller.
4. The right referred to in paragraph 1 shall not adversely affect the rights and
freedoms of others.
In general, portability means the ‘ability to move, copy or transfer something’.157 The
rationale of Art 20 GDPR is to avoid lock-in effects and to improve the process of
switching from one service provider to another.158 It has more of a competition

154
Becker (n 8) 253, 259.
155
Cf. Berger (n 5) Abstract.
156
SWD(2017) 2 final 20.
157
SWD(2017) 2 final 46.
158
Hennemann, ‘Datenportabilität’ (2017) Privacy in Germany (PinG) 5; cf. also the ‘switching
mechanisms’ of Art 9 of Directive 2005/29/EC, Art 30 Universal Service Directive 2002/22/EC
and Art 13 No 2 lit. c, Art 16(4) lit. b of the proposal for a Directive on contracts for the supply of
digital content, COM(2015) 634 final.


law159 than a data protection law background,160 although the data subject’s right to data protection is preserved, at least indirectly, through the better data sovereignty that Art 20 GDPR brings.
Looking at the provision in more detail, Art 20 GDPR implies two components:
first, the transmission of personal data from one controller to another (if technically
possible); and second, the receipt of the data from the controller. A trouble spot seems to be the interpretation of the phrase ‘personal data [. . .] which he or she has provided to a controller’, since this determines which data are eligible to be ported. It
is clear that data ‘actively and knowingly’ provided by the data subject is encom-
passed.161 The same arguably holds true regarding data provided automatically as a
result of the subject’s use of a device or a service.162 In contrast, data created by the
controller on the basis of data that were provided by the data subject appear to fall
outside the scope of Art 20 GDPR.163
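The resulting three-way distinction – data actively provided, data observed through use of a device or service, and data derived by the controller – can be summarised in a small sketch; the classification below simply restates the reading set out above and is not an authoritative interpretation of Art 20 GDPR.

# Illustrative mapping of data categories to the (assumed) scope of Art 20 GDPR,
# following the reading set out in the text; not an authoritative classification.
PORTABILITY_SCOPE = {
    "provided": {"in_scope": True,  "example": "name and e-mail entered in a web form"},
    "observed": {"in_scope": True,  "example": "raw activity data logged by a fitness tracker"},
    "derived":  {"in_scope": False, "example": "credit score computed by the controller"},
}

def arguably_portable(category: str) -> bool:
    """Return whether data of this category would arguably be covered by Art 20 GDPR."""
    return PORTABILITY_SCOPE[category]["in_scope"]

assert arguably_portable("observed") and not arguably_portable("derived")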
In fact, even before the applicability of the GDPR, the data subject enjoyed a
well-known ‘right of data access’ (under Art 12 Directive 95/46/EC). This right, now
re-enacted in Art 15 GDPR, ‘supports’ the newly created data portability right. Art
15 GDPR reads as follows:
1. The data subject shall have the right to obtain from the controller confirm-
ation as to whether or not personal data concerning him or her are being
processed, and, where that is the case, access to the personal data and the
following information:
(a) the purposes of the processing;
(b) the categories of personal data concerned;
(c) the recipients or categories of recipient to whom the personal data have
been or will be disclosed, in particular recipients in third countries or
international organisations;
(d) where possible, the envisaged period for which the personal data will be
stored, or, if not possible, the criteria used to determine that period;
(e) the existence of the right to request from the controller rectification or
erasure of personal data or restriction of processing of personal data
concerning the data subject or to object to such processing;
(f ) the right to lodge a complaint with a supervisory authority;
(g) where the personal data are not collected from the data subject, any
available information as to their source;
(h) the existence of automated decision-making, including profiling, referred
to in Article 22(1) and (4) and, at least in those cases, meaningful

159
Paal and Pauly (eds) Datenschutz-Grundverordnung Bundesdatenschutzgesetz (2nd edn, Beck
2018) Art 20 para 6.
160
Hennemann (n 158) 5, 6.
161
SWD(2017) 2 final 46.
162
Ibid.
163
Ibid.


information about the logic involved, as well as the significance and the
envisaged consequences of such processing for the data subject.
2. Where personal data are transferred to a third country or to an international
organisation, the data subject shall have the right to be informed of the
appropriate safeguards pursuant to Article 46 relating to the transfer.
3. The controller shall provide a copy of the personal data undergoing process-
ing. For any further copies requested by the data subject, the controller may
charge a reasonable fee based on administrative costs. Where the data subject
makes the request by electronic means, and unless otherwise requested by the
data subject, the information shall be provided in a commonly used
electronic form.
4. The right to obtain a copy referred to in paragraph 3 shall not adversely affect
the rights and freedoms of others.
A seemingly simple solution, to avoid the application of these requirements, would
be for the data controller to anonymise the personal data. More generally, this might
open the door to the application of data economic law. However, the current law
places high demands on the process of anonymising (and de-anonymisation seems
to be possible quite often). Moreover, anonymised data is probably not as valuable
(e.g., for big-data applications) as non-anonymised data.164
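Why the law places high demands on anonymisation – and why de-anonymisation so often remains possible – can be shown with a deliberately naive sketch (the data and the method are invented for illustration): hashing a direct identifier is at best pseudonymisation, because the remaining attributes may still single the person out.

import hashlib

# Deliberately naive 'anonymisation': replacing the name with a hash. This is at
# best pseudonymisation - the remaining attributes can still single a person out.
record = {
    "name": "Jane Doe",          # invented example data
    "postcode": "10115",
    "birth_year": 1980,
    "profession": "astronaut",
}

def naive_pseudonymise(rec):
    """Replace the direct identifier with a truncated hash; keep everything else."""
    out = dict(rec)
    out["name"] = hashlib.sha256(out["name"].encode("utf-8")).hexdigest()[:12]
    return out

pseudo = naive_pseudonymise(record)
# An attacker holding an auxiliary register of astronauts born in 1980 in postcode
# 10115 can re-identify the person without ever reversing the hash, which is why
# such data would normally still count as personal data under the GDPR.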

10.4.3 Personal Data Ownership/Property in Personal Data?


Academic debate continues as to whether data protection law should be refined
towards a kind of ‘data ownership’ or ‘data property’ (Dateneigentum). If it were, the
(exclusive) allocation of the pertinent property right to the data subject seems (at
least at first glance) obvious.165 The data subject should be in a position to commer-
cialise their data and participate in the data value. Moreover, this should improve
freedom of manoeuvre and data sovereignty. It has to be stressed, however, that
existing law does not allocate the personal data exclusively to the data subject.
Rather, the provisions in parts leave room for weighing, more precisely for a balancing of interests.166 This discussion is – including for German legal scholars –
anything but new.167
164
Cf. GRUR 4.
165
Cf., however, Berger (n 9) 340, 349, who also considers the following criteria: data refinement,
data collection, data defining, access to data, the power of disposal, ownership of data carrier.
All these criteria are not convincing considering the informational self-determination implica-
tions of personal data, which remains the stronger argument regarding the original assignment.
This rather clear assignment contrasts the allocation of exclusive data rights concerning non-
personal data, see above Section 10.3.2.2.
166
Zech (n 17) 51, 67 points to Art 6(1) 1 lit. f GDPR.
167
See already Buchner, Informationelle Selbstbestimmung im Privatrecht (2006) 208 et seq.
(recently anew Buchner (n 10)); Götting, Persönlichkeitsrechte als Vermögensrechte (1995);
Kilian, ‘Strukturwandel der Privatheit‘, in Garstka and Coy (eds), Wovon – für wen – wozu.
Systemdenken wider die Diktatur der Daten, Wilhelm Steinmüller zum Gedächtnis (2014) 195.

Downloaded from https://www.cambridge.org/core. University College London (UCL), on 06 Jul 2020 at 07:36:27, subject to the Cambridge Core
terms of use, available at https://www.cambridge.org/core/terms. https://doi.org/10.1017/9781108347846.011
Legal Framework for Commercialisation of Digital Data 293

De lege lata it is true that such an evolution of data protection law conflicts with
the free revocability of consent pursuant to Art 7(3) GDPR and other principles of
data protection. Indeed, for this reason the Commission has resolutely opposed such
a development:
As the protection of personal data enjoys the status of a fundamental right in the EU
and processing of personal data is protected by the highest standards of data
protection legislation in the world, in the EU personal data cannot be subject to
any type of ‘ownership’.168

However, it does not seem a priori unreasonable to create an exclusive right regarding personal data de lege ferenda. Then, of course, the understanding of data
protection would need to change towards more sober self-determination and less
paternalistic structures. Even so, it remains questionable whether such a right would
really improve the functioning of data markets. This is due, inter alia,169 to the
accompanying (potential) monopolisation effects (where subjects could demand
high fees for the use of their data or exclude this altogether).170

10.4.4 Personal Data Contract Law


In practice, (personal) data can be factually transferred, and the transfer is regularly
accompanied by the relevant contract.171 The content and digital services directive172
(applying to B2C contracts only) shows that personal data can be the contractual subject of performance (Art 3(1)). However, data protection law takes precedence (Art 3(8)), which is a significant disadvantage for the contractual counterparty,
particularly with regard to the right to withdraw the privacy consent at any time
(Art 7(3) GDPR).173
Christian Berger174 has recently argued that data protection could be managed de
lege ferenda by a kind of contractual control system, based on personal autonomy in
order to ensure informational self-determination ‘at least equivalent and perhaps
indeed more effective [. . .] than executive bans and interventionist administration’.
However, he suggests an exception (where data protection law would remain as
regulative law) in the case of the protection of minors: under many legal systems,

For the discussion in the US, which is not quite suitable for the European debate since the
privacy approach is very different, just see Samuelson, ‘Privacy as Intellectual Property’ (1999)
52 Stanford Law Review 1125.
168
SWD(2017) 2 final 24.
169
See the many disadvantages listed under Section 10.3.2.2 regarding syntactic information.
Mutatis mutandis, the list holds true here, too.
170
Cf. Berger (n 9) 340, 345.
171
Berger (n 9) 340, 351 et seq.
172
OJL 136, p. 1.
173
Berger (n 9) 340.
174
Berger (n 9) 340, 343: ‘contract in lieu of ban’.


including in Germany, the age threshold for consenting to have one’s data processed
is in any event lower than that for entering into a contract, rendering ostensible
‘contractual’ consent ineffective from the outset (cf. section 107 of the German Civil
Code (BGB)).
It may also be questioned whether, if data protection law were developed into a
data private (contract) law regime, contract law would provide adequate means of
dealing with personal data trades and movements. One could argue that contract
law would not be able to solve the problem of the imbalance of power between the
data subject and the data industry. As it stands, however, this is not really true.
Modern contract law certainly has the tools to balance the different power levels of
the parties,175 all the more so given that existing data protection law mandatorily
takes precedence and secures the data subject beyond purely contractual mechan-
isms. In other words, the parties cannot waive this minimum protection. This
becomes evident, once again, with respect to the ‘right [of the data subject] to
withdraw his or her consent at any time’ (Art 7(3) GDPR).

10.5 conflicts
Privacy concerns and the free flow of data may obviously conflict with each other. At
first sight, this does not affect non-personal data, such as that concerning non-
human physical phenomena, which remain outside the scope of the GDPR.
However, the economic reality is that datasets and data flows often contain both
personal and non-personal data.176 This phenomenon applies also to machine-
generated data, which are created without direct human intervention but rather
by computers or sensors. Indeed, as we have seen, differentiation between non-
personal and personal data is at the least very difficult,177 and maybe increasingly
impossible.
Where such data allow the identification of natural persons, the GDPR applies,
and with it the potential for very substantial fines.178 This clearly does not allow for
unrestricted trading and processing of personal data.179 It is remarkable (and
somewhat incomprehensible) that the Commission, in its current legislative initia-
tives regarding the EU data economy, has made little attempt to harmonise these
with the GDPR.180 Rather, as with other recent initiatives,181 such as the digital
content and digital services directive (promoting the use of data [personal as well as
175
Berger (n 9) 340, 351.
176
COM(2017) 9 final 9. This reality is also recognised by the European legislator, as Art 2 para 2
Regulation (EU) 2018/1807 illustrates.
177
GRUR 4.
178
Art 83(3) (6) GDPR.
179
GRUR 4: ‘Personal data shall be adequate, relevant and limited to what is necessary in relation
to the purposes for which it is processed.’
180
Becker (n 8) 253, 258 et seq.
181
See Section 10.3.2.


other data] as a means of payment: Art 3(1)),182 it appears that the EU has prioritised
the issue of ‘data as tradable goods’.
Here, data protection law, including the principle of data minimisation, has
opposite objectives to, for example, big-data applications. Hence, some commen-
tators argue that the data minimisation principle is no longer up to date.183 More-
over, as noted, pursuant to Art 7(3) GDPR, the ‘data subject shall have the right to
withdraw his or her consent at any time’, which would lead – if rigidly interpreted –
to a stoppage of the big-data process.184

10.6 alternatives
Maybe data are not even the right starting point for regulating the data economy.
Maybe disclosure of the methods and techniques used by algorithms is.185 However,
businesses concerned will argue that algorithms are an important trade secret to
them. In addition, it seems questionable whether the end user would benefit from
the disclosure. In cases of artificial intelligence, it is often suggested that it would not
be even possible to trace back how the results have been obtained.186
Data economic law will eventually have to be reconciled with data protection
law, as the trick of ‘taking refuge in syntactic information’ is not convincing and data
protection law remains the standard measure. In all this there remains one promis-
ing starting point: the autonomous decision of the data subject, in other words,
informed consent.187 It is true that the ‘concept of informed consent [. . .] has proved
insufficient in legal reality.’188 Therefore, the aim must be to optimise its efficiency.
Setting aside data economic law considerations, one of the central legal policy
issues in recent years has been the improvement of (digital) ‘data sovereignty’,189 in
combination with certain information obligations and/or the enforcement of

182
OJL 136, p. 1.
183
GRUR 4.
184
Ibid, pointing out that in addition the several rights of the persons affected typically disturb big
data applications.
185
Ibid 5; regarding the regulation of algorithms see, e.g., Comandè, ‘Regulating Algorithms’
Regulation? First Ethico-Legal Principles, Problems, and Opportunities of Algorithms’ in
Cerquitelli, Quercia, and Pasquale (eds) Transparent Data Mining for Big and Small Data
(Springer 2017); Martini, ‘Algorithmen als Herausforderung für die Rechtsordnung’ (2017)
Juristenzeitung (JZ ) 1017.
186
It is questionable whether this is true. Computer scientists currently conduct a lot of research
on the question of interpretability of AI systems (source: personal conversation with Avishek
Anand, professor at Leibniz University Hanover).
187
Cf. Sattler, GRUR Newsletter 01/2017, 7 et seq.
188
Becker (n 5) 371.
189
Krüger, ‘Datensouveränität und Digitalisierung’ (2016) Zeitschrift für Rechtspolitik (ZRP) 190;
Rosenzweig, ‘International Governance Framework for Cybersecurity’ (2012) 37 Canada-
United States Law Journal (Can-US LJ) 405, 421 et seq.


‘privacy by design’ and ‘privacy by default’. Of course, increasing the self-determination of people in all fields, including the digital one, is a worthy objective.
Nevertheless, precisely how this can be done is the question. Taken individually,
such calls remain buzzwords.
Maximilian Becker190 has recently proposed the introduction of a genuine choice
for consumers (perhaps even for businesses as well) between data-collecting products (where the data collected serve as a form of remuneration, leaving the product free of charge) and non-data-collecting products (which are potentially chargeable). Although some exceptions would be
necessary,191 for example, for goods and services that rely on data collection such as
dating or fitness apps, this approach is an interesting one but at the same time
involves strong market intervention. The data-avoiding product would probably be
offered at an excessive price, unless further market regulation were also intro-
duced.192 In a market economy – even a social one – government price regulation of all data-collecting services and goods would, however, arguably be going too far.
Overall, it seems rather unlikely that this model could be implemented.
Another possible approach is the establishment of a kind of ‘data traffic light
system’.193 The company that collects the data could itself be permitted to decide
whether the quantity and quality of data processing should be more or less intense in
the light of privacy issues. For this self-evaluation, fixed criteria would need to be
established. Authorities could conduct reviews on a random basis to determine
whether the company is complying with the requirements, and impose fines in case
of violations. Consumers would arguably welcome the reduction in complexity.
They would then probably be sufficiently well informed (at no expense) to review their decision
in each individual case and enter into a contract (or not, as the case may be) on this
basis.194 This could restore the balance to the market, at least to some extent.
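What such a self-evaluation against fixed criteria might look like can be sketched as follows; the criteria, weights and thresholds are invented for illustration, since the proposal itself does not specify them.

# Hypothetical 'data traffic light': a company scores its own data processing
# against fixed criteria. Criteria, weights and thresholds are invented here.
CRITERIA = {
    "collects_special_categories": 3,   # e.g. health data
    "shares_with_third_parties": 2,
    "builds_individual_profiles": 2,
    "retains_longer_than_12_months": 1,
}

def traffic_light(practices: dict) -> str:
    """Return 'green', 'yellow' or 'red' for a self-declared set of practices."""
    score = sum(weight for name, weight in CRITERIA.items() if practices.get(name))
    if score == 0:
        return "green"
    return "yellow" if score <= 3 else "red"

example = {"shares_with_third_parties": True, "builds_individual_profiles": True}
print(traffic_light(example))  # -> 'red' (score 4): a label consumers can compare at a glance

A supervisory authority conducting random reviews would then only need to check whether the declared practices match reality, rather than re-assess each processing operation from scratch.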

10.7 conclusions
This contribution has sought to demonstrate how complicated the establishment of
a data economic law would be. Much still remains unclear.
We have the factual problem of differentiating non-personal data from personal
data, which is decisive in respect of the applicable legal regime – especially taking
into consideration that data protection law and data economy law have partly
conflicting objectives. It seems tempting to establish a data economic law

190
Becker, ‘Ein Recht auf datenerhebungsfreie Produkte’ (2017) JZ 170; Becker (n 5) 371: ‘a right to
data-avoiding products’.
191
Becker (n 5) 371, 384 et seq.
192
Becker (n 5) 371, 388 et seq.
193
Forgó cited in Beer, ‘Europäische Datenschutzgrundverordnung: Rechtsinformatiker plädiert
für Datenschutzampel’ <https://tinyurl.com/ycj3xla2>.
194
Note, however, the scepticism with respect to personal goods as trading objects tracing back to
Immanuel Kant from: Becker (n 5) 371, 375 et seq.; cf. also Specht (n 10) 411, 412.


(apparently) outside the strict GDPR regime. However, this is arguably not feasible.
The distinction between personal and non-personal data in practice is very complex
and, in some cases, perhaps even impossible, especially since even machine-
generated data are in many cases personal.195
In addition, ultimately the data in question are semantic in nature. Even big-data
applications aim at the micro content that the syntactic level ‘carries’; they do not
aim at the syntactic level as such. To focus on the syntactic level might work in
theory but it is not an option for future legislation. Even if it were possible to separate
the two levels, the GDPR (and other legal fields such as copyright law) would still
thwart the regulations on the syntactic level. If the binary code ‘transports’ legally
protected meaning, this protection would prevail and spill over onto the
syntactic level.
The necessity for legal certainty in the fields of data trading and usage means that
despite these concerns, a data economic law will gradually be established. While
data producers’ rights would not be a convincing component of such law, access
rights could be one way forward – along with an adjusted contract law.
Whatever the future holds, proper regulation of the data market remains a hot legal topic. Discussion of the optimal legal
framework for addressing the use of digital data will occupy legal scholarship for
some time to come.

195
GRUR 4 has already pointed out this aspect.

