
© The Editors and Contributing Authors Severally 2019

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or
transmitted in any form or by any means, electronic, mechanical or photocopying, recording, or
otherwise without the prior permission of the publisher.

Published by
Edward Elgar Publishing Limited
The Lypiatts
15 Lansdown Road
Cheltenham
Glos GL50 2JA
UK

Edward Elgar Publishing, Inc.


William Pratt House
9 Dewey Court
Northampton
Massachusetts 01060
USA

A catalogue record for this book is available from the British Library

Library of Congress Control Number: 2018958455

This book is available electronically in the Law subject collection
DOI 10.4337/9781785367724

ISBN 978 1 78536 771 7 (cased)
ISBN 978 1 78536 772 4 (eBook)

Typeset by Servis Filmsetting Ltd, Stockport, Cheshire

Ben Wagner, Matthias C. Kettemann and Kilian Vieth - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:49:16AM
via New York University

WAGNER_9781785367717_t.indd 4 13/12/2018 15:25


Contents

List of contributors viii

Introduction to the Research Handbook on Human Rights and Digital Technology 1
Ben Wagner, Matthias C. Kettemann and Kilian Vieth

PART I  CONCEPTUAL APPROACHES TO HUMAN RIGHTS AND DIGITAL TECHNOLOGY

  1. Human rights futures for the internet 5
M.I. Franklin
  2. There are no rights ‘in’ cyberspace 24
Mark Graham
  3. Beyond national security, the emergence of a digital reason of state(s) led by
transnational guilds of sensitive information: the case of the Five Eyes Plus
network 33
Didier Bigo
  4. Digital copyright and human rights: a balancing of competing obligations,
or is there no conflict? 53
Benjamin Farrand

PART II  SECURITY AND HUMAN RIGHTS: BETWEEN CYBERSECURITY AND CYBERCRIME

  5. Cybersecurity and human rights 73
Myriam Dunn Cavelty and Camino Kavanagh
  6. Cybercrime, human rights and digital politics 98
Dominik Brodowski
  7. ‘This is not a drill’: international law and protection of cybersecurity 113
Matthias C. Kettemann
  8. First do no harm: the potential of harm being caused to fundamental rights
and freedoms by state cybersecurity interventions 129
Douwe Korff




PART III  INTERNET ACCESS AND SURVEILLANCE: ASSESSING HUMAN RIGHTS IN PRACTICE

  9. Access to the Internet in the EU: a policy priority, a fundamental, a human right or a concern for eGovernment? 157
Lina Jasmontaite and Paul de Hert
10. Reflections on access to the Internet in Cuba as a human right 180
Raudiel F. Peña Barrios
11. Surveillance reform: revealing surveillance harms and engaging reform
tactics 195
Evan Light and Jonathan A. Obar
12. Germany’s recent intelligence reform revisited: a wolf in sheep’s clothing? 223
Thorsten Wetzling

PART IV  AUTOMATION, TRADE AND FREEDOM OF EXPRESSION: EMBEDDING RIGHTS IN TECHNOLOGY GOVERNANCE

13. Liability and automation in socio-technical systems 247
Giuseppe Contissa and Giovanni Sartor
14. Who pays? On artificial agents, human rights and tort law 268
Tim Engelhardt
15. Digital technologies, human rights and global trade? Expanding export
controls of surveillance technologies in Europe, China and India 299
Ben Wagner and Stéphanie Horth
16. Policing ‘online radicalization’: the framing of Europol’s Internet Referral
Unit 319
Kilian Vieth

PART V  ACTORS’ PERSPECTIVES ON HUMAN RIGHTS: HOW CAN CHANGE HAPPEN?

17. When private actors govern human rights 346
Rikke Frank Jørgensen
18. International organizations and digital human rights 364
Wolfgang Benedek
19. Recognizing children’s rights in relation to digital technologies: challenges
of voice and evidence, principle and practice 376
Amanda Third, Sonia Livingstone and Gerison Lansdown




20. Digital rights of LGBTI communities: a roadmap for a dual human rights framework 411
Monika Zalnieriute

Index 435



Contributors

Wolfgang Benedek, University of Graz, Austria
Didier Bigo, King’s College London, UK and CERI Sciences-Po, Paris, France
Dominik Brodowski, Saarland University, Saarbrücken, Germany
Giuseppe Contissa, LUISS University, Italy
Paul de Hert, Vrije Universiteit Brussel (VUB), Brussels, Belgium and Tilburg University,
Tilburg, the Netherlands
Myriam Dunn Cavelty, ETH Zurich, Switzerland
Tim Engelhardt, OHCHR, United Nations, Switzerland
Benjamin Farrand, University of Warwick, UK
M.I. Franklin, Goldsmiths University of London, UK
Mark Graham, Oxford Internet Institute, University of Oxford, UK and Alan Turing
Institute, London, UK
Stéphanie Horth, European University Viadrina, Frankfurt/Oder, Germany
Lina Jasmontaite, Vrije Universiteit Brussel (VUB), Brussels, Belgium
Rikke Frank Jørgensen, Danish Institute for Human Rights, Denmark
Camino Kavanagh, King’s College London, UK
Matthias C. Kettemann, Leibniz Institute for Media Research – Hans-Bredow-Institut,
Hamburg, Germany
Douwe Korff (em.), London Metropolitan University, UK, Oxford Martin School,
University of Oxford, UK and Member of the Expert Advisory Panel of the Global
Cyber Security Capacity Centre of the OMS
Gerison Lansdown, International Children’s Rights Advocate
Evan Light, Glendon College, York University, Canada
Sonia Livingstone, London School of Economics and Political Science, UK
Jonathan A. Obar, York University, Canada
Raudiel F. Peña Barrios, University of Havana, Cuba
Giovanni Sartor, University of Bologna and European University Institute, Florence,
Italy
Amanda Third, Western Sydney University, Australia




Kilian Vieth, Stiftung Neue Verantwortung, Berlin, Germany
Ben Wagner, Vienna University of Economics and Business, Vienna, Austria
Thorsten Wetzling, Stiftung Neue Verantwortung, Berlin, Germany
Monika Zalnieriute, Allens Hub for Technology, Law, and Innovation, University of New
South Wales, Sydney, Australia





Introduction to the Research Handbook on Human
Rights and Digital Technology
Ben Wagner, Matthias C. Kettemann and Kilian Vieth

In a digitally connected world, the question of how to respect, protect and implement
human rights has become unavoidable. As ever more human beings, organizational
systems and technical devices transition online, realizing human rights in online settings
is becoming ever more pressing. When looking at basic human rights such as freedom
of expression, privacy, free assembly or the right to a fair trial, all of these are heavily
impacted by new information and communications technologies.
While there have been many long-standing debates about the management of key
Internet resources and the legitimacy of rules applicable to the Internet – from legal
norms to soft law, from standards to code – it is only more recently that these debates
have been explicitly framed in terms of human rights. The scholarly field that has grown in
response to these debates is highly interdisciplinary and draws from law, political science,
international relations, geography and even computer science and science and technol-
ogy studies (STS). In order to do justice to the interdisciplinary nature of the field, this
Research Handbook on Human Rights and Digital Technology: Global Politics, Law and
International Relations unites carefully selected and reviewed contributions from scholars
and practitioners, representing key research and practice fields relevant for understanding
human rights challenges in times of digital technology.
The Research Handbook offers five Parts, which cover key aspects of the debate on
technology and human rights. Part I takes a more conceptual approach, looking at the
future(s) of human rights (M.I. Franklin, Chapter 1), the geography in which struggles for
rights are anchored (Mark Graham, Chapter 2), the ‘reason of state’ behind government
surveillance (Didier Bigo, Chapter 3) and how intellectual property protection (Benjamin
Farrand, Chapter 4) relates to human rights online.
Part II discusses how debates about security influence human rights on the Internet.
Here, the terminology of ‘cyber’ is strongly present with authors studying the interplay
between cybersecurity (Myriam Dunn Cavelty and Camino Kavanagh, Chapter 5), cyber-
crime (Dominik Brodowski, Chapter 6) and human rights. The authors in this section
look at both the role of states and the international community in protecting cybersecurity
as lying in the global common interest (Matthias Kettemann, Chapter 7) and the dangers
for human rights caused by (cyber)security arguments (Douwe Korff, Chapter 8).
Part III focuses on specific human rights and assesses challenges to them in practice. The
contributions look at the role of Internet access as a fundamental right in the European
Union (Lina Jasmontaite and Paul de Hert, Chapter 9) and in Cuba (Raudiel Peña Barrios,
Chapter 10) and analyse reforms to government surveillance in North America (Evan
Light and Jonathan Obar, Chapter 11) and Germany (Thorsten Wetzling, Chapter 12).
Contributors to Part IV look more closely at embedding human rights in digital
technologies, with the initial two contributions analysing the human rights implications




of liability regimes for automation (Giuseppe Contissa and Giovanni Sartor, Chapter 13)
with a specific focus on tort law (Tim Engelhardt, Chapter 14). These are then followed
by perspectives on embedding human rights in trade regulation of technology exports
(Ben Wagner and Stéphanie Horth, Chapter 15) and in realizing human rights-sensitive
counter-radicalization strategies online (Kilian Vieth, Chapter 16).
This Research Handbook’s concluding Part V takes a step back from the assessment of
individual human rights to look at the perspectives of different actors and how they can
enable change or counter progress in online human rights protection. The first chapters
in this part study the role of private actors (Rikke Frank Jørgensen, Chapter 17) and of
international organizations (Wolfgang Benedek, Chapter 18). The analyses also cover
groups with special protective needs: children’s rights (Amanda Third, Sonia Livingstone
and Gerison Lansdown, Chapter 19) and LGBTI communities (Monika Zalnieriute,
Chapter 20).
As editors, we believe that there can be no single answer to the question of how human
rights can be safeguarded in digital technologies. There are, however, important lessons
to be learned and insights to be garnered from rigorous interdisciplinary research efforts,
as presented in this Research Handbook. The perspectives provided in this Research Handbook are an important step toward making the work of the community of scholars engaged with these issues more widely accessible by presenting it in a unified form. We believe that the scholarship and the practical experiences reported here should be accessible to the widest possible audience, and that this content is equally relevant outside academia.
What insights are key? Importantly, human rights are frequently played off against each
other. As many of the chapters show, even when focusing on human rights in one specific
context, they are only meaningful in a framework that protects all human rights. All
human rights are universal, interdependent, interrelated and mutually reinforcing – online
and offline. Thus, attempts to play privacy against freedom of expression or security
against the right to a fair trial are necessarily conceptually flawed. Of course, human rights
have limits. Not every interference with freedom of expression online is illegal – it may
comply with human rights if the restriction is based on law, pursues a legitimate goal and
is the least restrictive means to reach that normative purpose. The rhetoric of ‘balancing’
rights needs to be carefully scrutinized as it is often used to the detriment of both of the
rights at issue. Human beings can and should expect to enjoy not just one right or the
other but all rights. Technologies can endanger human rights, but they are also enablers
of human rights.
Rather than trivialize, oversimplify or ignore human rights in digital technologies,
there is an urgent need to ensure that human rights are systematically integrated at the
heart of the development of digital technology, its implementation, law and governance.
This Research Handbook provides numerous perspectives on how human rights can be
considered, from the very theoretical and conceptual level all the way through to very
concrete challenges of implementation. The editors hope that this Research Handbook
can contribute to the ongoing debate on human rights in digital technologies and ensure
that they are more systematically considered on the national, regional and international
levels, by legislators and politicians, by technologists and programmers.
Finally, if there is one common thread that runs throughout this Research Handbook, it
is that ensuring human rights is an achievable goal which can be better realized by digital




technologies. This is not a utopian approach. We do not claim that the Internet will set
us free. Further, it is not that all actors or technologies are perfect, but rather that claims
about the impossibility of implementing human rights in digital technologies lack a solid
empirical basis. Human rights are a reality and need to be mainstreamed into technologi-
cal development. They need to be realized at every step of the chain of ‘production’ of
the new artefacts of digital technology, from self-improvement apps to predictive policing
algorithms to smart fridges counselling against your second helping of ice-cream. As
numerous contributors to this Research Handbook show, it is perfectly possible to engage
with digital technologies and improve their contribution to human rights. Whether this
productive engagement happens is not related to technological or institutional impossibil-
ity but simply to the willingness of those involved.
As more and more power is transferred into digital systems without adequate accountability structures (in itself a human rights violation), a human rights-based approach is
becoming particularly important. Without safeguarding human rights in the develop-
ment, management and governance of digital technologies, those rights are likely to be
systematically eroded. Human rights can and should form a key part as to how digital
technologies are developed and implemented. As editors, we hope that this Research
Handbook offers convincing arguments for why, and perspectives on how, this can and
should be done.



1.  Human rights futures for the internet
M.I. Franklin*

1. INTRODUCTION

In 2015, the UN General Assembly launched the Sustainable Development Goals, successor to the Millennium Development Goals from 2000. Another declaration from
the same meeting renewed a set of undertakings, begun in 2003 under the auspices of
the International Telecommunication Union and entitled the World Summit on the
Information Society. This declaration makes explicit the merging of future decisions on
internet design, access and use with these renewed Development Goals and the human
rights dimensions of achieving these goals in a world premised on the supraterritoriality
of internet-dependent media and communications:1

‘We reaffirm our common desire and commitment to . . . build a people-centred, inclusive and
development-oriented Information Society . . . premised on the purposes and principles of the
Charter of the United Nations, and respecting fully and upholding the Universal Declaration
of Human Rights’.2

Even at a symbolic level, high-level utterances such as these have been a source of
some encouragement for those mobilizing across the spectrum of human rights at this
particular policy-making nexus. Edward Snowden’s whistleblowing in 2013 on US-led
programmes of state-sponsored mass online surveillance, deployed in the name of
Western democratic values, played no small part in the shift from the margins to the

*  This chapter is an adaptation of a six-part essay entitled Championing Human Rights for the
Internet, OpenDemocracy Human Rights and the Internet Series (31 January 2016), available at
www.opendemocracy.net/hri.
1  See Jan Aart Scholte, Globalization: A Critical Introduction (2nd edn, Palgrave Macmillan,
2015). The term internet (uncapitalized) is used here as a broad rubric for computer-dependent
media and communications that include internet design, access, use, data and content manage-
ment. This term includes goods and services, and cultures of use that are not covered in the more
restricted engineering definition of the Internet (capitalized) as a computerized communications
architecture comprising a planetary ‘network of networks’. For more on these distinctions see
Giampiero Giacomello and Johan Eriksson (eds), ‘Who Controls the Internet? Beyond the
Obstinacy or Obsoleteness of the State’ (2009) 11(1) International Studies Review (January)
205–30.
2  UN General Assembly, Outcome Document of the High Level Meeting of the General
Assembly on the Overall Review of the Implementation of WSIS Outcomes (December 2015) para 6,
available at http://workspace.unpan.org/sites/Internet/Documents/UNPAN95707.pdf; UN News
Centre, ‘UN Member States outline information technology roadmap to achieve sustainable
development’, 17 December 2015, available at www.un.org/apps/news/story.asp?NewsID=52851#.
VnqNLFK0KO2. For more information on UN Resolutions and related reports on the Right to
Privacy in the Digital Age, see UN Office of the High Commissioner for Human Rights, www.
ohchr.org/EN/Issues/DigitalAge/Pages/DigitalAgeIndex.aspx.




centre that human rights-based agendas for the online environment have made, in the
internet heartlands at least.3
Geopolitical and techno-legal power struggles over ownership and control of largely
commercial web-based goods and services, and how these proprietary rights implicate
shared guardianship of the Internet’s planetary infrastructure with UN member states,
were being thrown into relief two years after Edward Snowden went public with evidence
of US-led programmes of mass online surveillance. Presaged by Wikileaks and worldwide
social movements for social and political change (e.g. the Arab Uprisings, Occupy and
Indignados campaigns), these revelations have contributed to the politicization of a
generation of ‘digital natives’. The rise in mobile/smart-phone usage and internet access
in the Global South and in Asia underscores a longer-term generational shift towards
an online realm of human experience and relationships. Ongoing disclosures of just
how far, and how deeply, governmental agencies and commercial service providers can
reach into the online private and working lives of billions of internet users have exposed
how passionately young people regard internet access as an entitlement, a ‘right’, and their
mobile, digital and networked communications devices (currently called smart-phones) as
indispensable to their wellbeing.4
This rise in the public profile of the human rights-internet nexus has accompanied a
comparable leap up the ladder of media and scholarly interest in how traditional human
rights issues play out on – and through – the Internet’s planetary infrastructure, as the
web becomes a global platform for bearing witness to rights abuses on the ground.5
Going online (e.g. using email for interviews, being active on social media platforms)
exposes web-dependent generations of bloggers/journalists, political dissidents and

3  Ian Thomson, ‘GCHQ mass spying will “cost lives in Britain”, warns ex-NSA tech chief’, The Register, 6 January 2016, available at www.theregister.co.uk/2016/01/06/gchq_mass_spying_will_cost_lives_in_britain/.
4  Internet World Stats, The Internet Big Picture: World Internet Users and 2018 Population Stats, available at www.internetworldstats.com/stats.htm (last accessed 8 February 2018).
5  See Nicholas Jackson, ‘United Nations declares Internet access a basic human right’, The
Atlantic, 3 June 2011, available at www.theatlantic.com/technology/archive/2011/06/united-nations-
declares-internet-access-a-basic-human-right/239911/. See Andrew Clapham, Human Rights: A
Very Short Introduction (New York: Oxford University Press, 2007), and Andrew Vincent, The
Politics of Human Rights (Oxford: Oxford University Press, 2010), for accessible overviews of the
legal and political dimensions to international human rights that do not address information and
communication technologies in general, or internet media and communications in particular. For a
critical reappraisal of international human rights institutions, see Nicolas Guilhot, The Democracy
Makers: Human Rights and the Politics of Global Order (New York: Columbia University Press,
2005). Analyses that focus on the human rights-information and communications technology nexus,
in part or as a whole, include Rikke F. Jørgensen (ed.), Human Rights in the Global Information
Society (Cambridge, MA: MIT Press, 2006); Wolfgang Benedek and Matthias C. Kettemann,
Freedom of Expression on the Internet (Strasbourg: Council of Europe Publishing, 2014); Toby
Mendel, Andrew Puddephatt, Ben Wagner, Dixie Hawtin and Natalia Torres, Global Survey
on Internet Privacy and Freedom of Expression, UNESCO Series on Internet Freedom (Paris:
UNESCO, 2012); Navi Pillay, The Right to Privacy in the Digital Age, Report of the Office of the
United Nations High Commissioner for Human Rights, Human Rights Council, Twenty-seventh
session, Agenda items 2 and 3, Annual Report of the United Nations High Commissioner for
Human Rights, A/HRC/27/37 (30 June 2014), available at www.ohchr.org/EN/HRBodies/HRC/
RegularSessions/Session27/Documents/A.HRC.27.37_en.pdf.




human rights defenders to threats of another order, and enables perpetrators with a digital,
computer-networked constitution. This is not only because our online presence (personal
information, activities and networks) can be tracked and monitored, but also because
these activities can lead to networked forms of abuse, bullying and harassment. In some
parts of the world, posting material seen as overly critical of vested interests or a chal-
lenge to social and political power incurs prison sentences, beatings, and even death when
obstruction and censorship do not suffice.6 The normalization of internet censorship
techniques (e.g. denial of access, content filtering, or website blocking) goes hand-in-hand
with the legalization of the pervasive and sophisticated forms of state-sponsored online
surveillance that Snowden brought to the public domain. On the other hand, they reveal
comparable excesses from commercial service providers whose intimate monitoring of
what people do online includes automated forms of data-tracking and data-retention
practices without clear forms of accountability. As campaigns and reports from media,
and internet-based civil liberties watchdogs show (e.g. Witness, Reporters Without
Borders, Article 19, Privacy International, Global Voices), these policies have substantial
implications for the protection of fundamental rights and freedoms not only on the
ground but also online.7 As these practices become less extraordinary, repackaged as
pre-emptive security measures if not acceptable levels of intrusion into the private online
lives of individuals and whole communities, they underscore the ways in which public and
private powers at the online-offline nexus have succeeded in normalizing practices that
render citizens as putative suspects (guilty until proven innocent) and commodities (‘you
are the product’ as the saying goes) in turn.8
Official recognition (from the UN Human Rights Council as far back as 2012) that
online human rights matter too points to the legal and ethical complexities of this
techno-political terrain, however. It begs the question of how human rights jurispru-
dence can account for the digital and the networked properties of internet-dependent
media and communications that are trans-border by design; or how emerging issues,
such as online anonymity or automated data-gathering and analysis, challenge legal
jurisdictions and jurisprudence based on customary law but also pivoting on the landed
borders of state sovereignty.9 Recognizing that human rights exist online is not the same

6  Media Freedom and Development Division of the Organization for Security and Co-operation
in Europe (OSCE), www.osce.org/media-freedom-and-development (accessed 8 February 2018).
7  Electronic Frontier Foundation (EFF), Chilling Effects of Anti-Terrorism: ‘National Security’
Toll on Freedom of Expression (2015), available at https://w2.eff.org/Privacy/Surveillance/Terrorism/
antiterrorism_chill.html; Necessary and Proportionate Campaign, International Principles on the
Application of Human Rights to Communications Surveillance (May 2014), available at https://
en.necessaryandproportionate.org/.
8  Benjamin Herold, ‘Google under fire for data-mining student email messages’, Education
Week, 13 March 2014, available at www.edweek.org/ew/articles/2014/03/13/26google.h33.html. See
also Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your
World (New York: W. W. Norton & Company, 2015).
9  US Mission Geneva, HRC Affirms that Human Rights Must Also Be Protected on the Internet
(Resolution Text) (6 July 2012), available at https://geneva.usmission.gov/2012/07/05/internet-
resolution/; United Nations Human Rights Council, Resolution A/HRC/26/L.24: Promotion
and Protection of All Human Rights, Civil, Political, Economic, Social and Cultural Rights,
including the Right to Development, Twenty-sixth Session, Agenda item 3, UN General Assembly,
20 June 2014, available at http://ap.ohchr.org/documents/dpage_e.aspx?si=A/HRC/26/L.24. For




as being able to fully exercise and enjoy those rights. In this context, the glacial tempo of
intergovernmental treaty negotiations, or legal rulings, has a hard time keeping up with
the high-speed velocity of commercial applications and market penetration of today’s
Tech Giants.

2.  ARE DIGITAL RIGHTS ALSO HUMAN RIGHTS?

That there are inherently digital and internet-worked dimensions to the legal, moral and
political complexities of international human rights brings legislators, software designers
and judiciaries face-to-face with an inconvenient truth of the age. If human rights law and
norms are indeed applicable to the online environment, then disproportionate levels of
automated personal data retention, alongside the insidiousness of pre-emptive forms of
online surveillance, imply the need for suitable and internationally acceptable law. A next generation
of legal instruments that can articulate more clearly how existing human rights, such
as freedom of expression or privacy, should be guaranteed if both state surveillance
measures, and commercial forms of monitoring, data-collection, and retention continue
along their current trajectories, are in their infancy. The tension between how judiciaries
and politicians are reconsidering their own remits in this regard and their relative ignorance
of the technicalities of internet-design, access and use is one pressure point. Conversely,
technical standard-setters, engineers, software developers, and corporate strategists have
to confront the ethical and legal demands that rights-based sensibilities bring to their de
facto authority as technical experts and proprietors in the global business of internet-
based products and services. The difference between the respective areas of expertise
and commitment that reside within these decision-making constituencies stretches out
beyond the ‘Internet Freedom’ versus ‘Internet Sovereignty’ rhetoric of lobby-groups and
opinion-makers. It affects the terms of debate about who does, or who should, control
the Internet in ways that shifts the usual positioning of states and markets as antagonists,
polar opposites in this stand-off, to where they have been along this timeline to date,
co-protagonists.10

a US-based perspective, see Mike Masnick, ‘UN says mass surveillance violates human rights’, Techdirt, 17 October 2014, available at www.techdirt.com/articles/20141015/07353028836/un-says-mass-surveillance-violates-human-rights.shtml.
10  Joe Wolverton, ‘TPP copyright provisions threaten Internet freedom, U.S. sovereignty’, The
New American, 1 September 2012, available at www.thenewamerican.com/tech/computers/item/12685-
tpp-copyright-provisions-threaten-internet-freedom-and-us-sovereignty; Nancy Scola, ‘Defining the
“We” in the Declaration of Internet Freedom’, The Atlantic, 9 July 2012, available at www.theatlantic.
com/technology/archive/2012/07/defining-the-we-in-the-declaration-of-internet-freedom/259485/. See
the late John Perry Barlow ‘A Declaration of the Independence of Cyberspace’, Electronic Frontier
Foundation, 8 February 1996, available at https://projects.eff.org/~barlow/Declaration-Final.html
(accessed 7 October 2016). A twenty-first century riposte revises the title of Barlow’s much-cited
declaration. See Daniel Castro, ‘A Declaration of the Interdependence of Cyberspace’, Computer
World, 8 February 2013, available at www.computerworld.com/article/2494710/internet/a-declaration-
of-the-interdependence-of-cyberspace.html (accessed 7 October 2016). I discuss these battles over
the narrative of the Internet’s origins in M.I. Franklin, Digital Dilemmas: Power, Resistance and the
Internet (Oxford: Oxford University Press, 2013).



Human rights futures for the internet  9

Several high-profile court cases notwithstanding,11 for most people, knowing your
rights as they may apply when you are online is only one side of the coin. Being able to
fight for your rights online is another. Having the know-how goes alongside the want-to
and the wherewithal in this regard. Addressing this particular ‘disconnect’ has been
one of the main reasons behind various campaigns to raise awareness, on the one hand,
of human rights for the online environment and, on the other, of how international
law places obligations on designers and policy-makers at the national and international
level. Yet arguments about why indeed human rights matter for our online lives, and
who is responsible for taking action – the individual, the government, or the service
provider – rage over most people’s heads. Recent public debates, in the European Union
(EU) at least, are steeped in a post-neoliberal rhetoric of whether the ‘not so bad’ of
government regulation is an antidote for the ‘not so good’ of runaway market-leaders
in internet services who have access to the private online lives of approximately two in
seven people on the planet. The disconnect between this everyday level of onlineness and
what people know about how their digital footprints are being monitored, let alone what
they believe they can do about it, is underscored by the entrenchment of commercial
service provision: in the workplace, schools and universities, hospitals and government
departments. For instance, ‘free’ Cloud services come with a price as commercial service
providers set the terms of use of essential services (from email to data-storage) in the
long term. With that they become private gatekeepers of future access to public and
personal archives of digital content (so-called Big Data) housed in corporate server
farms around the world.12

3.  SHIFTING HISTORICAL CONTEXTS AND TERMS OF REFERENCE

Some points from a wider institutional and historical perspective bear mentioning at this
point. First, any talk of human rights has to take into account the trajectory of successive
generations of international human rights law and norms. The UN system and its member
states, as the progenitors and inheritors of existing human rights norms, have an implicit stake

11
  Maximillian Schrems v. Data Protection Commissioner, ECLI:EU:C:2015:650; Electronic
Privacy Information Centre (EPIC), Max Schrems v Irish Data Protection Commissioner (Safe
Harbor) (2015), available at https://epic.org/privacy/intl/schrems/; Robert Lee Bolton, ‘The Right
to Be Forgotten: Forced Amnesia in a Technological Age’ (2015) 31 Marshall J. Info. Tech. and
Privacy L. 133–44; Jeffrey Rosen, ‘The Right to Be Forgotten’ (2012) 64 Stan. L. Rev. Online 88 (13
February 2012); Andrew Orlowski, ‘Silicon Valley now “illegal” in Europe: Why Schrems vs Facebook
is such a biggie’, The Register, 6 October 2015, available at www.theregister.co.uk/2015/10/06/
silicon_valley_after_max_schrems_safe_harbour_facebook_google_analysis; ‘The right to be for-
gotten: drawing the line’, The Economist, 4 October 2014, available at www.economist.com/news/
international/21621804-google-grapples-consequences-controversial-ruling-boundary-between.
12
  Brandon Butler, ‘Recent cloud critics, including Wozniak, intensify debate’, NetworkWorld, 9
August 2012, available at www.networkworld.com/article/2190427/cloud-computing/recent-cloud-
critics--including-wozniak--intensify-debate.html; Jonathan Nimrodi, ‘10 facts you didn’t know
about server farms’, Cloudyn Blog, 8 September 2014, available at www.cloudyn.com/blog/10-facts-didnt-know-server-farms/.


in any decisions that affect the future of internet design, access, use, data and content-
management. This means that human rights advocacy enters ongoing debates about the
legal stature and implementation of so-called first generation human rights treaties and
covenants that make up the International Bill of Rights, i.e. the Universal Declaration of
Human Rights (UDHR, 1948), the International Covenant on Civil and Political Rights
(ICCPR, 1966), and the often overlooked International Covenant on Economic, Social
and Cultural Rights (ICESCR, 1966), inter alia. In this respect, human rights treaty
negotiations and a patchy record of ratification over 70 years are branded by the ways in
which the United States continues to exercise its political, military and hi-tech hegemony
in material and discursive ways.13
Second, as scholars and judiciaries start to tackle these issues, as they play out online
but also at the online-offline nexus, they are confronted with the political and legal limits
of the Westphalian international state system and its jurisprudence. Despite notable
exceptions (e.g. agreements on the Law of the Sea, Outer Space, on custodianship of
the environmental integrity of the Antarctic and Arctic regions) and debates about the
security implications of conceiving the internet and its cyberspaces as a global commons,14
the current system is fuelled by the aforementioned institutionalized privilege of state-
centric rule of law and bounded citizenries, thus structuring the horizon of possibility
for change. The ways in which ordinary people, corporate actors, social movements and
transnational networks (from global financial markets to criminal organizations) use
internet technologies have been rattling the cage of this geopolitical status quo for some
time, however. The rest of the text of the UN Resolution cited above attempts to link
this historical world order to the emergence of multi-stakeholder decision-making as a
substitute for multilateral institution-building.15
Third, alongside the formative role that prominent civil society organizations and
emerging global networks representing the ‘technical community’ (the Global Network
Initiative, the Internet Society, or the Internet Engineering Task Force, for example) play
in promoting so-called multi-stakeholder participation as the sine qua non of internet
policy-making, corporate actors play no small part in delimiting this horizon of possibil-
ity as well.16 This is a role that grants these players policy-making power – in kind rather

13
  This position of incumbent power has had a role to play in debates about whether existing
human rights law are best implemented diachronically (one by one, step by step) or synchroni-
cally (as an interrelated whole). See Patrick Macklem, Human Rights in International Law: Three
Generations or One? (28 October 2014), available at http://ssrn.com/abstract=2573153; Vincent,
The Politics of Human Rights, n. 5 above.
14
  Mark Raymond, The Internet as Global Commons? (Centre for International Governance
Innovation (CIGI), 26 October 2012), available at www.cigionline.org/publications/2012/10/internet-global-commons.
15
  UN General Assembly, Outcome Document, n. 2 above. For more on these two seemingly
mutually exclusive terms in official statements, see M.I. Franklin, ‘(Global) Internet Governance
and its Civil Discontents’ in Joanne Kulesza and Rob Balleste (eds), Cybersecurity: Human Rights
in the Age of Cyberveillance (Lanham: Rowman and Littlefield/Scarecrow Press, 2015) 105–28.
16
  Examples of relevant meetings include the NETmundial: Global Multistakeholder Meeting
on the Future of Internet Governance, 23–24 April 2014, available at www.netmundial.br/; the
annual Internet Governance Forum meetings, available at www.intgovforum.org/cms/. For a
further discussion on the politics of terminology, see Franklin, ‘(Global) Internet Governance and
its Civil Discontents’, n. 15 above.


than by international treaty – through the proprietary rights of commercial enterprise
and copyright.17
In this respect, it is misleading to talk of the influence that internet-dependent
media and communications have on society, culture and politics in simple, techno-
determinist terms. Nor is it elucidating to continue labelling the last quarter-century’s
successive generations of internet service provisions, news and entertainment, and
user-generated content as ‘new’ media. It is tempting. But recourse to such binaries
serves to derail more nuanced and informed interventions. One reason is that the ongoing
preoccupation with value that undergirds these entrenched binaries (e.g. ‘existing’
versus ‘new’ rights, ‘old’ media versus ‘new/social’ media) obstructs consideration of
how the exercise, or deprivation, of our rights already matters in online settings. It
also presumes that pre-Internet and/or offline domains of sociocultural or political
engagement are of a higher moral order, innocent and without violence. The record
shows they are not.
Taking this insight on board can help shift entrenched value-hierarchies that position
successive generations of internet-based mobilization, forms of solidarity and dissent
(e.g. e-petitions, social media campaigns, community-building) lower on the political
pecking order of authenticity than those of twentieth-century civil rights and other
social movements. More familiar displays of solidarity, such as street marches, hardcopy
petitioning, print and televisual media presence, are also not without abuses of privilege,
empty rhetoric or opportunism. Besides, these older, once ‘new’ social movements have
gone online and digital too, adopting commercial social media tools as fast as possible
over the last five to ten years. Taking it on board also means worrying less about
younger generations who are now living and loving through their mobile and other
computer screens for that reason alone.18 What is needed instead is to explore how these
modalities for social interaction and intimacy matter to these web-embedded generations,
on their own terms and within the changing terms of proprietary or state-sanctioned access
and use. These conceptual, even philosophical, issues are as integral to the outcome of
social mobilization around human rights online as they are for decisions that affect the
hardware and software constellations that make internet-based communications function
in design and implementation terms. These are no longer simply added to our world; they
increasingly frame and co-constitute the world in which we live.
But what we have to focus on here is how the Snowden revelations underscore, as
whistle-blowing trailblazers did before him,19 that nation-states’ chequered human
rights records in the offline world are integral to international human rights advocacy

17
  See Rebecca MacKinnon, ‘Playing Favorites’, Guernica: A Magazine of Art and Politics, 3
February 2014, available at www.guernicamag.com/features/playing-favorites/.
18
  Daria Kuss, ‘Connections aren’t conversations: while technology enables, it can also inter-
fere’, The Conversation, 21 December 2015, available at https://theconversation.com/connections-
arent-conversations-while-technology-enables-it-can-also-interfere-51689. See also Sherry Turkle,
Alone Together: Why We Expect More from Technology and Less from Each Other (New York: Basic
Books, 2011); Dave Everitt and Simon Mills, ‘Cultural Anxiety 2.0’ (2009) 31(5) Media, Culture and
Society 749–68.
19
  Government Accountability Project (GAP), Bio: William Binney and J. Kirk Wiebe (2016),
available at www.whistleblower.org/bio-william-binney-and-j-kirk-wiebe.


for the online world. Incumbent and emerging powers in the UN system, from within
and outside the Internet’s historical heartlands, have different views of their ‘roles and
responsibilities’ and, with that, different degrees of tolerance of civil society demands for
equal footing in decisions about its future operations. Likewise for those global corporate
players objecting to state interference, with or without the tacit support of their allies in
government, whose business models go to the heart of how contemporary, increasingly
privatized, internet goods and services operate.20 There has also been a move towards at
least a nominal recognition that human rights and internet policy-making do and, indeed,
should mix within powerful agencies opposed to direct forms of government regulation
as a point of principle, e.g. the once US-incorporated Internet Corporation for Assigned
Names and Numbers (ICANN).21 The ante has thereby been upped for governments,
post-Snowden, claiming the higher moral ground by virtue of their legal responsibilities
under international human rights law, in the face of state-sponsored abuses of fundamen-
tal rights and freedoms in turn.
What does this mean at the techno-economic and political level of national and inter-
national negotiations between public and private players who control the national and
international policy agendas?22 First, it brings representatives of those intergovernmental
organizations and non-governmental organizations, such as standard-setting bodies of
expert networks, used to working behind the scenes, under public scrutiny. Second, this
increased scrutiny also implicates commercial actors, whose global market share
imputes to them decision-making powers normally reserved for governments, national
sovereigns of old.23 I am referring here to the geographical and proprietary advantage of
those largely but not exclusively US-owned corporations that own and control the lion’s
share of devices, applications and platforms which shape what people do, and where
they go, once online. Their emerging competitors and counterparts in China and Russia,
who also exercise power over their citizens’ access to and use of respective social media
tools, online goods and services, are not beyond reproach either.24
No power-holder, public or private, has been left untouched by Snowden’s whistle-

20
  Daniel Sepulveda, ‘Negotiating the WSIS+10 and the Future of the Internet’, DIPNOTE:
US Department of State Official Blog, 23 December 2015, available at http://blogs.state.gov/
stories/2015/12/23/negotiating-wsis10-and-future-internet.
21
  Article 19, Policy Brief: ICANN’s Corporate Responsibility to Respect Human Rights (6 February
2015), available at www.article19.org/resources.php/resource/37845/en/icann%E2%80%99s-corporate-responsibility-to-respect-human-rights; Gautham Nagesh, ‘ICANN 101: who will oversee
the Internet?’, Wall Street Journal, 17 March 2014, available at http://blogs.wsj.com/washwire/2014/03/17/icann-101-who-will-oversee-the-internet/; ICANN, NTIA IANA Functions’ Stewardship
Transition: Overview (14 March 2014), available at www.icann.org/stewardship.
22
  One infographic of the ‘internet ecosystem’ is available from the Internet Society at www.
internetsociety.org/who-makes-internet-work-internet-ecosystem.
23
  MacKinnon, ‘Playing Favorites’, n. 17 above.
24
  Statista, ‘Market capitalization of the largest Internet companies worldwide as of May 2017
(in billion U.S. dollars)’, www.statista.com/statistics/277483/market-value-of-the-largest-internet-
companies-worldwide (accessed 8 February 2018); Paul De Hert and Pedro Cristobal Bocas, ‘Case
of Roman Zakharov v. Russia: The Strasbourg follow up to the Luxembourg Court’s Schrems
judgment’, Strasbourg Observers, 23 December 2015, available at http://strasbourgobservers.com/2015/12/23/case-of-roman-zakharov-v-russia-the-strasbourg-follow-up-to-the-luxembourg-courts-schrems-judgment/.


blowing. Now in the spotlight, incumbent powerbrokers have started to concede, at least
in principle, that the ‘hard’ realities of technical standard-making and infrastructure
design are not separate from ‘soft’ human rights considerations; no longer ‘only’ a technical
problem, business matter or state affair. What has been achieved in getting human rights
squarely on technical and legislative agendas is not negligible from a wider historical
perspective. Even if this means only looking back over the last decade or so, ten years is
several lifetimes in computing terms. In this period, industry- and government-sponsored
‘high-level’ declarations of principles, alongside UN-brokered reviews of global internet
governance frameworks, and diverse intergovernmental undertakings, have taken off.25
There has also been a mushrooming of rights-based declarations for the online environ-
ment from civil society organizations and lobby groups, the business sector, and national
political parties around the world.26 Organizations and networks which were once
quite shy of the ‘human rights’ label have started to frame their work in various sorts
of (digital) rights-speak even if, for some critics, these changes in strategy presage the
excesses of regulation.27

4.  FUTURES AND PASTS: CHARTING A COURSE

Those with an historical disposition may also note that these practical and ideational
contentions retrace the history of competing social justice and media advocacy agendas
at the international level. There is some truth to this given an under-recognized genealogy
of human rights-based approaches that goes back to the earliest days of the United
Nations (e.g. Article 19 of the Universal Declaration of Human Rights), into the late
twentieth century (the New World Information and Communication Order) and this one
(the initial World Summit on the Information Society 2003–2005).28 As consciously
dissenting voices, civil society rights-based initiatives along this historical spectrum
have also had their precursors: the Communication Rights in the Information

25
 NETmundial, NETmundial Multistakeholder Statement (24 April 2014), available at http://
netmundial.br/netmundial-multistakeholder-statement/; UNESCO WSIS+10 Review Event 2013,
‘Towards Knowledge Societies, for Peace and Sustainable Development’, 25–27 February 2013,
available at www.unesco.org/new/en/communication-and-information/flagship-project-activities/
wsis-10-review-event-25-27-february-2013/homepage/; OECD, Internet Governance (2015), avail-
able at www.oecd.org/internet/internet-governance.html; Council of Europe, Declaration by the
Committee of Ministers on Internet Governance Principles, 21 September 2011, available at
https://wcd.coe.int/ViewDoc.jsp?id=1835773.
26
  See Rolf H. Weber, Principles for Governing the Internet: A Comparative Analysis, UNESCO Series
on Internet Freedom (2015), available at http://unesdoc.unesco.org/images/0023/002344/234435E.
pdf.
27
  Jim Harper, It’s ‘Declaration of Internet Freedom’ Day! (Cato Institute, 2 July 2012), available
at www.cato.org/blog/its-declaration-internet-freedom-day.
28
  UNESCO, Declaration on Fundamental Principles concerning the Contribution of the Mass
Media to Strengthening Peace and International Understanding, to the Promotion of Human
Rights and to Countering Racialism, Apartheid and Incitement to War, 28 November 1978,
available at http://portal.unesco.org/en/ev.php-URL_ID=13176&URL_DO=DO_TOPIC&URL_
SECTION=201.html (accessed 7 October 2016); International Telecommunications Union (ITU),
World Summit on the Information Society 2003–2005, available at www.itu.int/net/wsis/index.html.


Society (CRIS), and campaigns from the Association for Progressive Communications
(APC) are two cases in point.
More recent ‘digital rights’ initiatives tacitly take their cue from these earlier iterations
as they also do from at least three formative initiatives that encapsulate these efforts up
to 2014, namely the Charter of Human Rights and Principles for the Internet from the
Internet Rights and Principles Coalition (IRPC) launched in 2010–2011, the Brazilian
Marco Civil, underway at the same time and finally passed into law in 2014, and the
Council of Europe’s Guide to Human Rights for Internet Users endorsed in 2014. Taken
together, they address law-makers, judiciaries and broader publics in a modality that is
distinct from, yet resonates with, human rights advocacy.29
Even the harshest critics of institutionally-situated forms of rights activism, or of
human rights themselves on philosophical grounds, are witnessing such undertakings
that focus on internet media and communications become public record, housed online
in the UN archives, used as primary documentation and reference points in emerging
jurisprudence and research. This is, I would argue, a victory in the medium-term, given
years of concerted indifference from prominent governments, industry leaders and civil
society organizations uneasy about seeing human rights shift ‘up’ into cyberspace in the
face of unaddressed abuses on the ground. For this reason, this boom in rights-based
utterances can be seen as a good thing, at this stage on the road. More is, indeed,
more. By the same token, to be sustainable, human rights advocates and, conversely,
approaches that isolate specific rights as they pertain to particular design issues have
their work cut out to make these techno-legally complex issues meaningful in practice.
The ways in which the economic benefits of the ‘real name systems’ underpinning social
networking business models, and the projected usefulness of the same for law enforcement
agencies, trip up fundamental freedoms such as privacy, freedom of expression, and
association for vulnerable groups are one example.30 The relatively high entry-threshold
of terminology and specialized knowledge that confronts not only the average person,
but also the average manager or university, school or hospital administrator, is another
challenge in this regard. That work has barely begun and those organizations and

29
  Internet Rights and Principles Coalition, IRPC Charter of Human Rights and Principles
for the Internet (4th edn, 2014 [2011]), available at http://internetrightsandprinciples.org/site/;
Franklin, Digital Dilemmas, n. 10 above; M.I. Franklin, ‘Mobilizing for Net Rights: The Charter
of Human Rights and Principles for the Internet’ in Des Freedman, Cheryl Martens, Robert
McChesney and Jonathan Obar (eds), Strategies for Media Reform: Communication Research
in Action (New York: Fordham University Press, 2016) 72–91; Glyn Moody, ‘Brazil’s “Marco
Civil” Internet Civil Rights Law finally passes, with key protections largely intact’, Techdirt,
27 March 2014, available at www.techdirt.com/articles/20140326/09012226690/brazils-marco-
civil-internet-civil-rights-law-finally-passes-with-key-protections-largely-intact.shtml; Council of
Europe, A Guide to Human Rights for Internet Users (16 April 2014), available at https://wcd.coe.
int/ViewDoc.jsp?id=2184807.
30
  Ravin Sampat, ‘Protesters target Facebook’s “real name” policy’, BBC News, 2 June
2015, ­ available  at www.bbc.com/news/blogs-trending-32961249; Timothy B. Lee, ‘South
Korea’s “real names” debacle and the virtues of online anonymity’, Arstechnica, 15 August 2011,
available at http://arstechnica.com/tech-policy/2011/08/what-south-korea-can-teach-us-about-
online-anonymity/.


grassroots networks doing this kind of educational and support work at the online-
offline nexus of structural disadvantage get little enough credit.31
Even before news of mass online surveillance by the United States and its allies (the
United Kingdom, Canada, Australia and New Zealand) hit the headlines in 2013,
agenda-setters at the UN and regional level (e.g. the EU, Latin America) were stepping
up the pace in order to make the internet-human rights interconnection more explicit, if
not take control of setting the wider agenda. The hope is that such high-level
declarations of intent will become concrete policies, change existing business models, and
pave the way for affordable forms of legal redress.32 This change of heart is palpable at
the highest level of international political appointments. In the wake of a strongly worded
statement from the previous UN High Commissioner for Human Rights, Navi Pillay,
about the human rights implications of US surveillance programmes, the UN Human
Rights Council appointed Joe Cannataci as its first Special Rapporteur on the right to
privacy in 2015. The UN Special Rapporteur on the right to freedom of expression, David
Kaye, continues to bring a digital and online sensibility to this work, begun by his
predecessor.33
It is also evident in the eventual engagement of international human rights organiza-
tions such as Amnesty International, or Article 19, in this domain. These participants
are now looking to combine their advocacy profile with an awareness of the rights-
implications of hi-tech research and development trajectories, e.g. the implications of
Artificial Intelligence, the Internet of Things, state and commercial intrusions into private
lives as the rule rather than the exception. They are also addressing more systematically
the security needs for carrying out advocacy work on the ground, increasingly based on

31
  For example, the Tactical Technology Collective, the Take Back The Tech initiative, at
www.takebackthetech.net/, the Hivos IGMENA Program, at http://igmena.org/activities, and the
International Network of Street Papers (INSP) which was first established in 2002, at http://insp.
ngo/.
32
  Mohit Kumar, ‘Treasure map: Five Eyes Surveillance Program to map the entire Internet’,
The Hacker News, 14 September 2014, available at http://thehackernews.com/2014/09/treasure-map-
five-eyes-surveillance.html#author-info; Agence France-Presse, ‘NetMundial: Brazil’s Rousseff says
Internet should be run “by all”’, Gadgets360, 24 April 2014, available at http://gadgets.ndtv.com/
internet/news/netmundial-brazils-rousseff-says-internet-should-be-run-by-all-513120; John Ruggie,
UN Guiding Principles on Business and Human Rights, UN Human Rights Council (2011), available
at http://business-humanrights.org/en/un-guiding-principles.
33
  Bea Edwards, ‘UN Human Rights Commissioner supports Snowden and denounces
US Surveillance Programs’, Huffpost Politics, 17 July 2014, available at www.huffingtonpost.com/bea-edwards/un-human-rights-commissio_b_5596558.html; UN Office of the High
Commissioner for Human Rights, Report of the Special Rapporteur on the Right to Privacy (July
2015), available at www.ohchr.org/EN/Issues/Privacy/SR/Pages/SRPrivacyIndex.aspx; Frank La
Rue, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom
of Opinion and Expression, A/HRC/17/27 (Human Rights Council, 16 May 2011), available
at www2.ohchr.org/English/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.PDF. See also
the intervention addressed to the European Commission by David Kaye, La Rue’s successor, on
issues about censorship arising from the European Union draft Directive on copyright in the
digital single market, David Kaye, Mandate of the Special Rapporteur on the Promotion and
Protection of the Right to Freedom of Opinion and Expression, OL OTH 41/2018 (Geneva: Palais
des Nations, 13 June 2018), available at https://assets.documentcloud.org/documents/4516209/
OL-OTH-41-2018.pdf.


mobile phones, internet access and related social media outlets.34 The Ninth Internet
Governance Forum meeting in Istanbul in 2014 was a first for both Amnesty and Human
Rights Watch in this respect, even if the latter’s assessment of this UN-brokered event
was less than enthusiastic.35 These sorts of UN-brokered consultations are drenched with
diplomatic protocol, hobbled by the constrictions of Realpolitik and the limitations of the
host country’s attitudes to media and press freedoms. No surprise, then, that grassroots
activists and dedicated civil society networks with the technical know-how and want-to
would prefer to bypass these channels to concentrate on mobilizing and educating in
more immediate, media-friendly ways. Without such initiatives working both against
and alongside officialdom, the mumbo-jumbo of UN-speak coupled with commercially
invested cyber-babble that lays claim to decision-making as a private rather than public
concern would be even more impenetrable. They would be even more disconnected from
the inch-by-inch, face-to-face work that has characterized both traditional and internet-
focused human rights advocacy to date. Ten years may be several lifetimes in computing
terms but it is not very long at all for organizations like Amnesty or, indeed, compared
with the time it took for iconic documents such as the Universal Declaration of Human
Rights to be granted the status of customary international law.

5.  NO TIME FOR COMPLACENCY

The time for rejoicing has been brief. The push-back from incumbent powers has begun,
and in earnest. As Lea Kaspar and Andrew Puddephatt note, ‘cybersecurity has become
wholly conflated with “national security”, with no consideration of what a “secure”
Internet might mean for individual users’.36 What this amounts to is the squeezing of
robust rights-based standards at the online-offline nexus by national security and, now,
global cybersecurity imperatives. On the one hand, we are seeing Bills before legislatures
around the world that are legitimizing extensive policies of online surveillance that now
include hacking and other forms of telecommunications tapping at the infrastructural
level.37 Freshly minted rights-based frameworks in one part of the world, such as the
Brazilian Marco Civil, have come under pressure as judiciaries and global corporations
lock horns over their competing jurisdictional claims for users’ personal data. The
48-hour blocking of Facebook’s WhatsApp in Brazil in December 2015 in the face of
this US service provider’s purported refusal to recognize Brazilian jurisdiction under

34
  Sherif Elsayed-Ali, We Must Understand Threats in the Technology We Use Every Day,
openDemocracy Human Rights and the Internet Series (13 June 2016), available at www.opendemocracy.net/sherif-elsayed-ali/we-must-understand-threats-in-technology-we-use-every-day.
35
  Eileen Donahoe, Dispatches: An Internet Freedom Summit . . . in Turkey? (Human Rights
Watch, 10 September 2014), available at www.hrw.org/news/2014/09/10/dispatches-internet-freedom-summit-turkey.
36
  Lea Kaspar and Andrew Puddephatt, ‘Cybersecurity is the New Battleground for Human
Rights’, openDemocracy (18 November 2015), available at www.opendemocracy.net/wfd/andrew-puddephatt-lea-kaspar/cybersecurity-is-new-battleground-for-human-rights.
37
  Andrew Murray, Finding Proportionality in Surveillance Laws (11 December 2015), available
at https://paulbernal.wordpress.com/2015/12/11/finding-proportionality-in-surveillance-laws-guest-post-by-andrew-murray/.


the aforementioned Marco Civil is one example.38 The UK Investigatory Powers Act 2016 still stands despite the outcome of litigation that Liberty UK brought against the Conservative government in the European Court of Human Rights, while the Dutch government, in turn, has ignored the outcome of a national referendum on its version of the UK ‘Snooper’s Charter’. Meanwhile, a raft of practices is already in place that entails disproportionate levels of online tracking, data collection, retention and manipulation on the part of those powerful commercial service providers who currently monopolize global market share.39
This dependence on private service providers for basic access, if not internet goods
and services, is particularly acute in parts of the Global South where access is still patchy
and expensive. Yet it is also evident in parts of the Global North where health, educa-
tion and public access to government services depend on outsourced, Cloud computing
services.40 For these reasons, I would argue that the human rights-internet advocacy nexus
is at a critical stage. Becoming visible in the increasingly search-engine defined domain
of public policy-making and related scholarly debates is one thing. Staying visible, not
being drowned out by hostile agendas, or captured and then defused by lobbies of every
ilk, is another. Not only governments but also powerful vested interests in the commercial sector are using the law and electoral agendas, instrumentalizing different legal jurisdictions and public sentiments, to confound this newly gained ground.
So why indeed pursue a human rights approach, rather than one in which terms such as ‘digital rights’ seem to have more traction in public imaginaries and sound less bogged down in the complex and chequered cultural record of international human rights law?
Should advocates adjust these terms of reference if appeals to existing human rights
legal standards are still so contentious, bound by national experiences and interests? Is
the term human rights too politically loaded, past its use-by date, given the contentious
historical legacy of international human rights law and institutions? ‘Because we must’
is one short answer. Another is that campaign slogans such as ‘digital rights are human
rights’ put the digital cart before the legal horse. Whilst human rights may now be
recognized as ipso facto digital rights, the converse is not the case. Hence, evoking human
rights remains a political act, whatever the current state of international and national
jurisprudence.41
Shami Chakrabarti, former director of the civil liberties charity, Liberty, points to

38  Jonathan Watts, ‘Judge lifts WhatsApp ban in Brazil after ruling block punished users unfairly’, The Guardian, 17 December 2015, available at www.theguardian.com/world/2015/dec/17/brazil-whatsapp-ban-lifted-facebook?CMP=share_btn_tw.
39  Alex Hern, ‘Facebook accused of deliberately breaking some of its Android apps’, The Guardian, 5 January 2016, available at www.theguardian.com/technology/2016/jan/05/facebook-deliberately-breaking-android-apps?utm_source=esp&utm_medium=Email&utm_campaign=GU+Today+main+NEW+H&utm_term=147979&subid=7611285&CMP=EMCNEWEML6619I2.
40  Peter Novak, ‘Why “zero rating” is the new battleground in net neutrality debate’, CBC News, 7 April 2015, available at www.cbc.ca/news/business/why-zero-rating-is-the-new-battleground-in-net-neutrality-debate-1.3015070; Save the Internet Team, What Facebook Won’t Tell You or the Top Ten Facts about Free Basics (2016), available at https://docs.google.com/document/d/1Sj8TSC_xXUn3m5ARcVqZpXsmpCZdmw3mitmZd9h4-lQ/edit?pref=2&pli=1/.
41  Connor Forrest, ‘Why an internet “bill of rights” will never work, and what’s more important’, TechRepublic, 13 March 2014, available at www.techrepublic.com/article/why-an-internet-bill-of-rights-will-never-work-and-whats-more-important/; Franklin, ‘(Global) Internet Governance and its Civil Discontents’, n. 15 above.

another response to the ‘why bother?’ challenge, namely, that cynicism and disinterest are the privilege of those who believe they have ‘nothing to hide’, nothing to lose.42 Taking human rights protections for granted is for those who believe that their worldview and liberal democratic way of life are beyond the need to re-examine the obligations that these historical and legal norms entail for our times. As formative and necessary as they are, critical debates in academe about the philosophical and legal vagaries of human rights norms are of a different order of business to the advocacy work required to address how the full spectrum of human rights norms relates to future visions for digital, plugged-in and logged-on polities.43 The need to move along the rest of this spectrum implies a longer-term historical view of change. For instance, the reduction and parsing out of certain rights (freedom of expression or privacy) ahead of others is one obstacle on this journey, because this privileging of earlier, first-generation treaties and covenants is the default position of incumbent powers. Those legal standards that follow – for persons with disabilities, or the rights of children, for instance – and those that bespeak the whole panoply of international human rights norms, such as gender and women’s rights, and those pertaining to where the Internet and the 2015 Sustainable Development Goals intersect, are the points where scholars and activists need to keep up the pressure.44
I would argue that it is time to become more daring in staking a claim that internet futures, however defined, are beholden to all, not just some, of the international human rights law currently on the books. It is all too convenient from an advocacy, social justice point of view to note that international human rights, forged by mid-twentieth-century horrors, are regularly contravened by those actors, UN member states and related agencies, designated
42  Shami Chakrabarti, ‘The Reading Agency Fourth Annual Lecture at the British Library’, 30 November 2015, available at http://readingagency.org.uk/news/blog/shami-chakrabarti-lecture-in-full-on-liberty-reading-and-dissent.html; Jathan Sadowski, ‘Why does privacy matter? One scholar’s answer’, The Atlantic, 25 February 2013, available at www.theatlantic.com/technology/archive/2013/02/why-does-privacy-matter-one-scholars-answer/273521/.
43  Stephen Bowen, ‘Full-Spectrum’ Human Rights: Amnesty International Rethinks, openDemocracy (2 June 2005), available at www.opendemocracy.net/democracy-think_tank/amnesty_2569.jsp; Council of Europe Commissioner for Human Rights, The Rule of Law on the Internet and in the Wider Digital World, Issue Paper (Strasbourg: Council of Europe, December 2014), available at https://wcd.coe.int/ViewDoc.jsp?Ref=CommDH/IssuePaper%282014%291&Language=lanEnglish&Ver=original&Site=COE&BackColorInternet=DBDCF2&BackColorIntranet=FDC864&BackColorLogged=FDC864; Rikke F. Jørgensen, Framing the Net: The Internet and Human Rights (Cheltenham: Edward Elgar Publishing, 2013); Franklin, Digital Dilemmas, n. 10 above.
44  Geetha Hariharan, Comments on the Zero Draft of the UN General Assembly’s Overall Review of the Implementation of WSIS Outcomes (WSIS+10) (Centre for Internet and Society (CIS), 16 October 2015), available at http://cis-india.org/internet-governance/blog/comments-on-the-zero-draft-of-the-un-general-assembly2019s-overall-review-of-the-implementation-of-wsis-outcomes-wsis-10; Sonia Livingstone, ‘One in Three: Internet Governance and Children’s Rights’, IRP Coalition, Blog Post, 2015, available at http://internetrightsandprinciples.org/site/one-in-three-internet-governance-and-childrens-rights/; Liz Ford, ‘Sustainable development goals: all you need to know’, The Guardian, 19 January 2015, available at www.theguardian.com/global-development/2015/jan/19/sustainable-development-goals-united-nations; Bishakha Datta, Belling the Trolls: Free Expression, Online Abuse and Gender, openDemocracy (30 August 2016), available at www.opendemocracy.net/bishakha-datta/belling-trolls-free-expression-online-abuse-and-gender.


as custodians and enforcers of these laws and norms. Different societies, their changing political regimes and judiciaries interpret and institutionalize these legal norms in ways that are internally contradictory or that challenge unitary understandings of these norms as universal. It is also a given that judiciaries and legislatures are still catching up with the ways in which people, companies and state authorities use internet media and communications, uses that have already made a difference to the ability of existing or pending laws to respond appropriately, and in good time.45
And there is another reason why we should bother and rise above the comfort of intellectual cynicism or a sense of entitlement. Human rights frameworks, however contentious in sociocultural terms, can provide a constructive and sustainable way to re-examine existing democratic models and institutions as they reconstitute themselves at the online-offline nexus and are deployed and leveraged by digitally networked forces of control and domination. Human rights, as soft and hard law, challenge everyone, whether laypersons or experts, political representatives or business leaders, to be accountable for the outcomes of both policy and design decisions. This challenge also applies to highly skilled employees of the military-industrial establishment from which online surveillance programmes (e.g. Echelon, PRISM) and international collaborations between intelligence agencies (e.g. the aforementioned Five Eyes programme) have been developed. And it applies to educators, managers, and emerging and established scholarly and activist communities with a stake in the outcome of this historical conjuncture. This is a time in which powerful forces have at their disposal the computer-enhanced means to circumvent existing rights and freedoms, and to do so on a scale that invites discomforting comparisons with twentieth-century war machines of industrialized domination: a totalitarianism that now deploys 24/7, Big Brother-like forms of surveillance-as-entertainment. If, as Bill Binney, former technical director of the NSA turned whistle-blower of the first hour, has argued, the ‘issue is the selection of data, not the collection of data’,46 then these engineering and software-design decisions are also sociopolitical issues. Putting humans at the centre of the techno-led power matrix of thought and action that currently dominates how internet policy-making is communicated is one way to confront anti-democratic designs on the planet’s future, no less.

6.  TWO STEPS FORWARD, SIX STEPS BACK

The first iteration of a UN Resolution on the Internet and Human Rights in 2012 (A/
HRC/20/L.13) was a fillip to human rights advocacy in the years leading up to Snowden.
Its eventual endorsement in 2014 underscored results already achieved. That said, it has
possibly already outlived its use-by date given the thinness of the wording, despite the
reiteration of these sentiments in the aforementioned UN General Assembly’s adoption

45  Independent Reviewer of Terrorism Legislation, at https://terrorismlegislationreviewer.independent.gov.uk/.
46  William Binney and Anthony Barnett, ‘We Had to Wait for Snowden for Proof’, an Exchange with NSA Whistleblower William Binney, openDemocracy (5 June 2014), available at www.opendemocracy.net/william-binney-anthony-barnett/%E2%80%9Cwe-had-to-wait-for-snowden-for-proof%E2%80%9D-exchange-with-william-binney.


of the Outcome Document of the WSIS+10 meeting in 2015.47 As Parminder Jeet Singh
argued in an address to the UN General Assembly at this same meeting:

People, directly or through their representatives, alone can make public policy and law. Neither
business nor technical experts can claim special, exalted roles in public policy decisions. Such
a trend, as parts of civil society have noted with concern, is an unfortunate anti-democratic
development in Internet governance today.48

Singh’s stance is from the Global South, a view from a trenchant critic of US corporate
ownership and control of internet architecture and services. It is a position under fire, as the public-private partnerships that developed and funded the online surveillance and data-retention practices brought to light in recent years point the finger at democratically elected governments themselves. Nonetheless, for those member
states with less geographical and techno-economic clout than those ruling over the
UN Security Council and General Assembly, the aforementioned UN Human Rights
Council Resolution and those declarations that have ensued are landmarks in resisting
techno-economic hegemony at the global rather than national level.49 This is the point
that Singh is making – the ongoing fragility of ordinary people’s ability to assert their
rights under the law. The hopefulness in this pronouncement pits international rights-
based framings of internet design, access and use against the increasing tendency for
governments around the world to retreat into national security narratives, and dust off
laissez-faire approaches to the business of policy-making that, at the end of the day,
contradict these obligations.
The differences between how public and private actors work with successive gen-
erations of human rights norms within and between national jurisdictions underscore
these complexities. Take, for instance, arguments around the legal status of privacy or
freedom of expression in different jurisdictions (e.g. between the United States and
EU) and their respective political economic implications. Another case is the way in
which competing rules for data retention in the European Union, Latin America and
Caribbean, or Asia-Pacific regions come up against respective statutes of limitations,
different national experiences of dictatorship (e.g. South Korea, Latin America), and
vast differences in infrastructure (India or Sub-Saharan Africa). Looking ahead in light
of the United Nations’ focus on all-things-Internet in the Sustainable Development
Goals, the environmental and social costs of ‘connecting the next billion’ in the Global
South at any price reveal Internet heartlands’ dependence on the precious metals

47  Article 19, ‘UNHRC rejects attempts to dilute Internet freedoms’, 26 June 2014, available at www.article19.org/resources.php/resource/37602/en/unhrc-rejects-attempts-to-dilute-internet-freedoms; UN General Assembly, Outcome Document, n. 2 above.
48  Parminder Jeet Singh, Statement at the UN General Assembly High Level Meeting on WSIS+10 Review, 16 December 2015, available at www.itforchange.net/UNGA_WSIS10?ct=t%28IT_for_Change_Newsletter_Dec_2015_FINAL12_22_2015%29.
49  United Nations Human Rights Council, Resolution A/HRC/26/L.24, Promotion and Protection of All Human Rights, Civil, Political, Economic, Social and Cultural Rights, including the Right to Development, Twenty-sixth Session, Agenda item 3, UN General Assembly, 20 June 2014. See also Peter Higgins and Katitza Rodriguez, UN Human Rights Report and the Turning Tide Against Mass Spying (EFF, 16 July 2014), available at www.eff.org/deeplinks/2014/07/un-human-rights-report-and-turning-tide-against-mass-spying.


and unprotected labour of IT manufacturing and knowledge workers in these same regions.50
Thinking about contemporary and future internet media and communications within human rights frameworks has changed the terms of the debate and generated concrete action plans that engage communities unused to these considerations. This shift from the
margins to the policy centre has also provided inspiration for a range of community-based
and national campaigns from civil society organizations. But what have yet to get going
are more informed discussions in local (schools, universities, hospitals, town halls) and
national (parliaments and businesses) settings. Until then, debates about who or what
agency is responsible for tackling the complex practicalities of human rights-informed
decisions on the future of internet design, access, use and content management will stall in
the quagmire of mutual recriminations between vested interests. This is where historically
aware and thorough critical scholarship can start to unpack the sociocultural and techno-
economic nuances of everyday online-offline realities, not simply parrot the gung-ho
rhetoric of vested interests looking to ring-fence internet futures as business-as-usual,
wherever these voices may reside.
Implementing human rights demands a next step at the level of public discourse
as well, from raising public awareness to education and international coordination.
Only then can human rights commitments make a difference in those decision-making domains where ownership and control of the world’s ‘digital imaginations’ take place without due democratic process or accountability, or without affordable and culturally appropriate avenues of legal redress for ordinary ‘netizens’. This is where a lot of work remains: raising awareness and education, but also developing robust accountability mechanisms, not only for disproportionate governmental surveillance agendas but also for the excesses of commercial exploitation of our digital footprints and other misuses of these technological capabilities for ‘global surveillance’.51 Only then can
human rights frameworks in the round, and how specific rights and freedoms apply to
the fast-changing online environment at any given moment, be more than an exercise in
empty rhetoric. Chakrabarti puts her finger again on the sore spot (without mentioning
the implications of an Internet of Things) when she notes that:

[to] scoop up everyone’s data on the off chance that at some indefinite point in the future some
of us will fall under suspicion, or for the purpose of a ‘trawling expedition’ to find potential
suspects, is the twenty-first-century equivalent of planting cameras and microphones in every
family home.52

50  UN Sustainable Development Knowledge Platform, Sustainable Development Goals (2016), at https://sustainabledevelopment.un.org/topics/sustainabledevelopmentgoals.
51  This term is from the late Caspar Bowden in his concluding comments on the human rights implications of Cloud computing services: Caspar Bowden, ‘Human Rights in a Digital Age’, Public Debate, 25 February 2015, available at www.opendemocracy.net/can-europe-make-it/marianne-franklin/defending-human-rights-in-digital-age. See also Robert Booth, ‘Facebook reveals news feed experiment to control emotions’, The Guardian, 30 June 2014, available at www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds.
52  Chakrabarti, ‘The Reading Agency Fourth Annual Lecture’, n. 42 above.


These considerations are not a Western indulgence, pivoting on the history of human
rights as a response to the Holocaust and refugee crisis in the aftermath of the Second
World War. Rather, they change the political, and with that the techno-legal, conversation about the sociocultural dimensions of a generation of information and communications technologies whose uses have, by and large, been construed in narrow technical terms. This shift demystifies the way these technologies work in terms of meaning-making and community formation by social beings – and their avatars. It puts them back firmly in the remit of
political struggle, democratic praxis, and responses to the power modalities by which
both consent and dissent are being ‘manufactured’ (to borrow from Noam Chomsky),
reproduced, and recirculated on a planetary scale.

7.  IN CONCLUSION: TOO MUCH OR NOT ENOUGH?

Bringing these reflections to some sort of conclusion, consider an earlier UN Resolution, adopted as Snowden’s revelations of mass online surveillance started to make the news headlines.
This resolution, on the right to privacy with respect to the online environment, makes
clear official concerns at the:

negative impact that surveillance and/or interception of communications, including extraterritorial surveillance and/or interception of communications, as well as the collection of
personal data, in particular when carried out on a mass scale, may have on the exercise and
enjoyment of human rights, Reaffirming that States must ensure that any measures taken to
combat terrorism are in compliance with their obligations under international law, in particular
international human rights, refugee and humanitarian law . . . the same rights that people have
offline must also be protected online, including the right to privacy.53

But there is still a long way to go if these sorts of high-level statements are to meet the challenges raised by the ways in which people’s uses of internet goods and services already outstrip the legal conventions and horizons of possibility that constitute national and international institutional politics. Even if such recognition has symbolic value, and it is often easy to underestimate the power that resides in symbolic gestures, this statement
of ‘deep concern’ is but one reason to be cheerful.
Three points to sum up: first, what is needed from an advocacy and engaged intellectual
perspective is a strengthening not a weakening of resolve and analysis, respectively. Hence,
I would take issue with the claim by some commentators that ‘human rights aren’t enough
anymore’.54 Notwithstanding a significant critical literature of how human rights in prac-
tice can be more problem than cure, claiming that they do not go far enough misses the
historical conjuncture at which we find ourselves. It is, moreover, a short distance between
this notion and its counterpart, that human rights frameworks are ‘too much’, neither the

53  UN General Assembly, Resolution 68/167, The Right to Privacy in the Digital Age, A/RES/68/167 (2013).
54  Cathleen Berger, Human Rights Aren’t Enough Any More: We Need a New Strategy, openDemocracy (17 December 2015), available at https://opendemocracy.net/wfd/cathleen-berger/human-rights-aren-t-enough-any-more-we-need-new-strategy.


‘real thing’ nor up to scratch from a particular ethnocentric experience.55 In all respects,
such casual dismissals overlook, if not wilfully misread, the need for due diligence when
forging new laws that couple human rights with issues arising from how states, businesses
and individuals (mis-)use digital and networked communications. It also dismisses the
suffering of the millions whom these laws and norms still address. Second, engaged scholars/activists need to keep intervening in what are increasingly polarized debates and, in so doing, keep the accompanying terms of reference, legislative measures and jurisprudence that evoke human rights under critical scrutiny. Not all rule of law is good. Nor are all
judgments in human rights tribunals beyond reproach; these treaties and covenants are
themselves historical and sociocultural artefacts. As such, they are contested outcomes, as
are the precedents set by ensuing judicial rulings in national and international tribunals.
Third, polemics on whether future visions for sustainable and inclusive internet-
dependent societies are either too much or not enough mask another hazard. This is
the popularity of ‘Internet Freedom’ narratives that instrumentalize rights-speak for
short-term, self-serving political or commercial agendas. Along with governmental and
think-tank pronouncements that put jingoistic understandings of security ahead of civil
liberties, they obstruct the public debates needed to consider sustainable futures in the
longer term. The selective approach these discourses take by putting some rights and freedoms ahead of others also dismisses the long, hazardous routes being travelled by current generations of the suffering as they struggle to get to the safety of the would-be free world.
In this respect, Walter Benjamin’s reflections on Paul Klee’s 1920 image, Angelus Novus,
have digital and networked dimensions that we would be ill advised to ignore.56
The hard work is only just beginning: the drip, drip, drip of legal, political and intellectual labour to ensure that future generations on this planet get the media and communications they deserve, in full, not in part. For these reasons alone, both old hands and
new arrivals to human rights advocacy for internet futures cannot afford to get bogged
down in positions of power, status, entitlement or privilege.

55  Declaration of Internet Freedom campaign, at http://declarationofinternetfreedom.org/.
56  Benjamin writes, as a witness to the rise of the Nazi war machine and impending Holocaust, about how this image depicts an angel being blasted backwards by the violence of the past/present, into a future as yet unseen. Klee’s image is of ‘an angel looking as though he is about to move away from something he is fixedly contemplating. His eyes are staring, his mouth is open, his wings are spread. This is how one pictures the angel of history . . . The angel would like to stay, awaken the dead, and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress’. Walter Benjamin, ‘Theses on the Philosophy of History’ (1940), republished in Hannah Arendt (ed.), Illuminations, Harry Zohn (trans.) (New York: Schocken Books, 1969).



2.  There are no rights ‘in’ cyberspace
Mark Graham*

Rights are always bounded, granted, bestowed, enjoyed, received, performed, and vio-
lated in real, physical places. This chapter argues that the fact that actions or interactions happen ‘online’, or are mediated by digital networks, does not mean that they happen in any sort of alternate sphere or space beyond the laws, norms and principles that apply
and are practised elsewhere.
It posits that we have often taken a wrong turn in discussions about digital politics and
rights by relying on inaccurate and unhelpful spatial metaphors. In particular, the chapter
focuses on the usage of the ‘cyberspace’ metaphor and outlines why the reliance by con-
temporary policy-makers on this inherently geographic metaphor matters. The metaphor
constrains, enables and structures very distinct ways of imagining the interactions between
people, information, code and machines through digital networks. These distinct imagina-
tions, in turn, have real effects on how we enact politics and bring places into being.
The chapter traces the history of ‘cyberspace’, explores the scope of its current usage,
and highlights the discursive power of its distinct way of shaping our spatial imagination
of the Internet. It then concludes by arguing that we should take the lead in employing
alternate, nuanced and spatially grounded ways of envisioning the myriad ways in which
the Internet mediates social, economic and political experiences.

1.  CYBERSPACE IN POPULAR DISCOURSE


Cyberspace is real.
  President Barack Obama (2009)

In late 2011, the London Conference on Cyberspace was organized by William Hague
and the UK Foreign Office. The conference, held in the heart of Westminster, brought
together powerful and influential names such as UK Prime Minister David Cameron, US
Vice President Joe Biden, United Nations Development Programme Head Helen Clark,
former Swedish Prime Minister Carl Bildt, Wikipedia founder Jimmy Wales, and many
others, in order to tackle what even the organizers admitted was an ambitious goal: ‘to
develop a better collective understanding of how to protect and preserve the tremendous
opportunities that the development of cyberspace offers us all’.
A range of visions were presented for the future of the Internet, but what might

*  This chapter is an adapted version of the following article: Mark Graham, ‘Geography/Internet: Ethereal Alternate Dimensions of Cyberspace or Grounded Augmented Realities?’ (2013) 179(2) Geographical Journal 177–82. I wish to thank Martin Dodge, Bernie Hogan, Ralph Schroeder, Matthew Wilson and Matthew Zook for helpful comments and critiques, and for stimulating debates and arguments that helped to improve this chapter.

24
Mark Graham - 9781785367724
Downloaded from Elgar Online at 12/18/2020 12:49:47AM
via New York University

WAGNER_9781785367717_t.indd 24 14/12/2018 14:28



interest geographers most was the constant use of the word ‘cyberspace’. David Cameron
remarked ‘we can’t leave cyberspace wide open to criminals’. Joe Biden called it ‘a new
realm’. Russia’s Minister for Communications was worried enough that he asked that
the Internet be made to respect borders and state sovereignty. Continuing the use of the
spatial metaphor, Carl Bildt, the former Prime Minister of Sweden, speculated that light
would be brought to even the most hidden corners of the Internet by asserting that ‘there
will be no dark spaces for dark acts any more’.
Importantly, the attendees at this conference are not the only contemporary decision-
makers to employ the ‘cyberspace’ metaphor.1 Two decades ago, Stephen Graham already
argued that Internet metaphors like ‘cyberspace’ mask many patterns and practices
enacted and brought into being through the intersections between information and
communication technologies (ICTs) and society.2 But since then, ‘cyberspace’ has not
disappeared as a way of describing the Internet and the interactions that occur through it.
In other words, the term is not solely a relic from an earlier age of the Internet.
‘Cyberspace’ remains present in the ways that many powerful actors talk about, enact
and regulate the Internet. Governments around the world have policies, laws and depart-
ments dedicated to regulating ‘cyberspace’. The United States, for instance, has a Cyber
Command dedicated to ensuring ‘US/Allied freedom of action in cyberspace and deny the
same to . . . adversaries’.3 South Korea, China, the United Kingdom, and other countries
all similarly have their own ‘cyber commands’.4
The media in much of the world contains daily stories that make reference to ‘cyber-
space’, and even academics have continued to employ the metaphor as a way of referring
to the Internet.5 Work grounded in the fields6 of law,7 politics,8 sociology,9 education,10

 1  While the term is perhaps less widely used in the general media and in academia now than it was a decade ago, it does remain widely employed. Perhaps more importantly, though, it is used more than ever in the public sector and state security services (as illustrated by the very name of the London conference).
 2  Stephen Graham, ‘The End of Geography or the Explosion of Place? Conceptualizing Space, Place and Information Technology’ (1998) 22(2) Progress in Human Geography 165–85.
 3  See arcyber.army.mil (accessed 19 October 2012).
 4  News stories about the role of these defence agencies tend to be replete with quotes that build on the perceived spatiality of the Internet. The Wall Street Journal, for instance, recently quoted American General Keith Alexander as saying that ‘we do have to establish the lanes of the road for what governments can and can’t do in cyberspace’, S. Gorman, ‘US Backs Talks on Cyber Warfare’, Wall Street Journal (2010), available at http://online.wsj.com/article/SB10001424052748703340904575284964215965730.html.
 5  I admittedly have even employed the term in my own work until relatively recently.
 6  In fact, there is even an entire branch of the study of law termed ‘cyberlaw’.
 7  Richard Spinello, Cyberethics: Morality and Law in Cyberspace (Jones & Bartlett Learning, 2011).
 8  See e.g. Victoria Bernal, ‘Diaspora, Cyberspace and Political Imagination: the Eritrean Diaspora Online’ (2006) 6(2) Global Networks 161–79; Ronald J. Deibert and Rafal Rohozinski, ‘Risking Security: Policies and Paradoxes of Cyberspace Security’ (2010) 4(1) International Political Sociology 15–32.
 9  Jessie Daniels, ‘Visualizing Race and Embodiment in Cyberspace’ (2011) 33(1) Symbolic Interaction 141–44.
10  Catherine J. Irving and Leona M. English, ‘Community in Cyberspace: Gender, Social Movement Learning, and the Internet’ (2011) 61(3) Adult Education Quarterly 262–78.

Mark Graham - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:49:47AM
via New York University

WAGNER_9781785367717_t.indd 25 13/12/2018 15:25


26  Research handbook on human rights and digital technology

religion,11 psychology,12 health,13 anthropology,14 and especially geography15 continue to
use the metaphor.

2.  THE SPATIAL METAPHOR


Because metaphors can guide our imagination about a new invention, they influence what it can
be even before it exists.16

In all of the cases mentioned above (and indeed many others), the idea of ‘cyberspace’
is deployed as an inherently geographic metaphor. We know that metaphors reflect,
embody, and, most importantly, reproduce ways of thinking about and conceptualizing
our world.17 As Stefik notes:18
When we change the metaphors, therefore, we change how we think about things . . . The
metaphors we use suggest ideas and we absorb them so quickly that we seldom even notice the
metaphor, making much of our understanding completely unconscious.

It is important to point out that even before the coining of the ‘cyberspace’ metaphor,
commentators were speculating that synchronous communication technologies like the
telegraph would bring humanity together in some sort of shared space. For instance, an
1846 proposal to connect European and American cities via an Atlantic telegraph stated
that one of its benefits would be that 'all of the inhabitants of the earth would be brought
into one intellectual neighbourhood and be at the same time perfectly freed from those
contaminations which might under other circumstances be received'.19 Twelve years later,
after the completion of the Atlantic telegraph, The Times

11   Abdallah M. Badahdah and Kathleen A. Tiemann, 'Religion and Mate Selection Through Cyberspace: A Case Study of Preferences Among Muslims' (2009) 29(1) Journal of Muslim Minority Affairs 83–90.
12   John Suler, 'Computer and Cyberspace "Addiction"' (2004) 1(4) International Journal of Applied Psychoanalytic Studies 359–62.
13   Isabel Fernandez, Jacob C. Warren, Leah M. Varga, Guillermo Prado, Nilda Hernandez and Stephen Bowen, 'Cruising in Cyber Space: Comparing Internet Chat Room Versus Community Venues for Recruiting Hispanic Men Who Have Sex with Men to Participate in Prevention Studies' (2007) 6(2) Journal of Ethnicity in Substance Abuse 143–62.
14   Denise Carter, 'Living in Virtual Communities: An Ethnography of Human Relationships in Cyberspace' (2005) 8(2) Information, Communication and Society 148–67.
15   See e.g. any of the following articles: Helen Couclelis, 'Rethinking Time Geography in the Information Age' (2009) 41(7) Environment and Planning A 1556–75; Lomme Devriendt, Andrew Boulton, Stanley Brunn, Ben Derudder and Frank Witlox, 'Searching for Cyberspace: The Position of Major Cities in the Information Age' (2011) 18(1) Journal of Urban Technology 73–92; Aharon Kellerman, 'Mobile Broadband Services and the Availability of Instant Access to Cyberspace' (2010) 42(12) Environment and Planning A 2990–3005; Matthew Zook and Mark Graham, 'Mapping DigiPlace: Geocoded Internet Data and the Representation of Place' (2007) 34(3) Environment and Planning B: Planning and Design 466–82.
16   Mark Stefik, Internet Dreams: Archetypes, Myths, and Metaphors (MIT Press, 1996) xvi.
17   George Lakoff and Mark Johnson, Metaphors We Live By (University of Chicago Press, 1980).
18   Stefik, Internet Dreams, n. 16 above, xvi.
19   Carolyn Marvin, When Old Technologies Were New: Thinking About Electric Communication in the Late Nineteenth Century (Oxford University Press, 1988) 201.


proclaimed20 ‘the Atlantic is dried up, and we become in reality as well as in wish one
country’. In the 1960s, Marshall McLuhan’s philosophy of media posited a future not too
different from proclamations about the power of communication technologies a century
earlier. He noted:

electric circuitry has overthrown the regime of ‘time’ and ‘space’ and pours upon us instantly
and continuously concerns of all other men. It has reconstituted dialogue on a global scale . . .
‘Time’ has ceased, ‘space’ has vanished. We now live in a global village.21

But it was almost three decades ago that William Gibson, who coined the term
'cyberspace', defined it as:22

A consensual hallucination experienced daily by billions of legitimate operators, in every nation,
by children being taught mathematical concepts . . . A graphic representation of data abstracted
from the banks of every computer in the human system. Unthinkable complexity. Lines of light
ranged in the nonspace of the mind, clusters and constellations of data. Like city lights, receding.

John Perry Barlow built on Gibson’s concept in his Declaration of the Independence of
Cyberspace, in which he boldly asserts that ‘cyberspace does not lie within your borders’
and ‘ours is a world that is both everywhere and nowhere, but it is not where bodies live’.
William Mitchell23 similarly asserted that the Internet is ‘profoundly antispatial . . . You
cannot say where it is or describe its memorable shape and proportions’.
Many of the reasons for the power and prominence of the explicitly spatial metaphor
of ‘cyberspace’ are likely borne out of the early days of the Internet, when it was
understandably hard not to imagine the network as a portal to another dimension. It
was fully detached from mobile material realities (i.e. access had to occur through clunky
fixed infrastructure), but offered up almost infinite possibilities for information and
communication. For instance, Rey,24 describing Sterling's25 book on hacker subculture,
argues that the idea of a 'cyberspace' was needed by people to make sense of the space
in between instantaneous yet non-proximate communications (such as telephone calls):

The telephone creates a feeling of cognitive dissonance. How can the other person on the line be
so far and yet seem so near? To overcome this disconnect, we create for ourselves a little exposi-
tory travel narrative. We begin to imagine information as occupying space and then imagine this
space as something that can be traversed and experienced, an alternate geography that provides
a new path to reach the other person on the line. And though we know we are indulging in a
fantasy, we can’t help but take it seriously.

20   Tom Standage, The Victorian Internet: The Remarkable Story of the Telegraph and the Nineteenth Century's Online Pioneers (Weidenfeld and Nicolson, 1998) 80.
21   Marshall McLuhan and Quentin Fiore, The Medium is the Massage: An Inventory of Effects (Penguin, 1967) 63.
22   William Gibson, Neuromancer (Harper Collins, 1984) 51.
23   William Mitchell, City of Bits: Space, Place, and the Infobahn (MIT Press, 1996) 8.
24   P.J. Rey, 'The Myth of Cyberspace', The New Inquiry (2012), available at http://thenewinquiry.com/essays/the-myth-of-cyberspace (accessed 25 September 2012).
25   Bruce Sterling, The Hacker Crackdown: Law and Disorder on the Electronic Frontier (Bantam, 1992).


Both the metaphor of ‘cyberspace’ and the distinct social and spatial practices that
it described allowed the virtual to take on an ontic role.26 ‘Cyberspace’, in this sense, is
conceived of as both an ethereal alternate dimension which is simultaneously infinite and
everywhere (because everyone with an Internet connection can enter), and as fixed in a
distinct location, albeit a non-physical one (because despite being infinitely accessible
all willing participants are thought to arrive into the same marketspace, civic forum and
social space).27 ‘Cyberspace’ then becomes Marshal McLuhan’s28 ‘global village’.
The ontic role assigned to ‘cyberspace’29 is likely also reinforced by the grammatical rules
associated with the Internet in the English language. Common prepositions associated
with Internet use (e.g. to go to a website, or to get on the Internet) imply a certain spatiality
associated with the Internet. In other words, they imply the need to move to a cyberspace
that is not spatially proximate to the Internet user. Similarly, it is common practice to treat
the word ‘Internet’ as a proper noun (hence the trend to capitalize the word). In doing so,
the notion of a singular virtual entity or place is reinforced. A combination of a long history
of dualistic philosophy in Western thought,30 and the reinforcement of the ontic role
applied to 'cyberspace' in popular media (e.g. Neal Stephenson's Snow Crash, films such as
The Matrix, or the famous Dave Chappelle comedy sketch titled What If the Internet Was
a Place that You Could Go To?), further serves to reinforce these roles.
Such imaginations of ‘cyberspace’ all claim an aspatiality for the collective hallucina-
tion of Internet: a disembodied place, but a place nonetheless, paradoxically imbued
with another type of spatiality allowing for a global coming-together of humanity. They
give their imagined ‘cyber-’ space an ontic role. It becomes a fixed and singular, but also
an ethereal and ubiquitous alternate dimension. It is this spatial imagination that has
remained with us in the ‘cyberspace’ metaphor until the present day.

26   Paul Adams, 'Cyberspace and Virtual Places' (1997) 87(2) Geographical Review 155–83; Mark Graham, 'Time Machines and Virtual Portals' (2011) 11(3) Progress in Development Studies 211–27.
27   Mark Graham, 'Contradictory Connectivity: Spatial Imaginaries and Techno-Mediated Positionalities in Kenya's Outsourcing Sector' (2015) 47 Environment and Planning A 867–83; Mark Graham, Casper Andersen and Laura Mann, 'Geographical Imagination and Technological Connectivity in East Africa' (2015) 40(3) Transactions of the Institute of British Geographers 334–49.
28   Marshall McLuhan, The Gutenberg Galaxy: The Making of Typographic Man (University of Toronto Press, 1962).
29   This is not to deny the fact that 'space' has always been a contested and complex term. S. Kern, The Culture of Time and Space (Harvard University Press, 2003), for instance, argues that space is a historical construct and has necessarily evolved concomitantly with other cultural elements. R. Sack, Human Territoriality: Its Theory and History (Cambridge University Press, 1986) demonstrated that space can be understood in myriad ways and is an essential framework for all modes of thought (from art to magic to physics to social science). Space can also be many things and is far from always described as fixed, homogenous, universal and coherent. Current thinking in Geography, in particular, imagines space as relational. In other words, it emerges out of interactions rather than preceding them. This chapter, therefore, recognizes that people do not experience a 'cyberspace' in the same way, and that it cannot pre-determine particular ways of bringing spaces into being.
30   Margaret Wertheim, The Pearly Gates of Cyberspace: A History of Space from Dante to the Internet (Virago Press, 1999).


3.  THE INTERNET AS AN ABSTRACT SPACE OR A NETWORK?
On the Internet, nobody knows you’re a dog.
  Peter Steiner (1993)

It is important to point out that we do have many alternate ways of conceptualizing
communication and flows of information and knowledge through the Internet. A first
step has been to move away from the assumed offline/online dichotomy in research and
descriptions about the social roles of the Internet (e.g. see work by Leander and McKim,31
or Wilson32). Wakeford,33 for instance, describes ‘the overlapping set of material and
imaginary geographies which include, but are not restricted to, on-line experiences’.
Burrell’s34 nuanced way of describing her fieldwork sites (Ghanian Internet cafés) as
networks rather than bounded places, similarly functions as a way of avoiding assump-
tions of material proximity or co-presence. In other words, they challenge the now famous
adage that ‘on the Internet, nobody knows you’re a dog’.
Many geographers have also moved away from envisioning the Internet as a technol-
ogy that can bring into being any sort of detached ‘cyberspace’. Stephen Graham35
importantly warned against the dangers of employing determinist metaphors of techno-
logical change, and Martin Dodge and Rob Kitchin have written extensively about the
intersections between the virtual and the material. They distinguish between ‘code/space’
(in which ‘code dominates the production of space’) and ‘coded space’ (in which code is
influential, but incidental, to the production of space).36 This distinction is important
specifically because it highlights how technology can produce or ‘transduce’ space via
continuously ‘reiterated digital practices that create space anew’.37
Information in and communication through the Internet can be thought to take
place as part of the many palimpsests (or layers) of place.38 It is also important to note

31   Kevin Leander and Kelly McKim, 'Tracing the Everyday "Sitings" of Adolescents on the Internet: A Strategic Adaptation of Ethnography Across Online and Offline Spaces' (2003) 3(2) Education, Communication, and Information 211–40.
32   Brian Wilson, 'Ethnography, the Internet, and Youth Culture: Strategies for Examining Social Resistance and "Online–Offline" Relationships' (2006) 29(1) Canadian Journal of Education 307–46.
33   Nina Wakeford, 'Gender and the Landscapes of Computing in an Internet Café' in Mike Crang, Phil Crang and Jon May (eds), Virtual Geographies: Bodies, Spaces and Relations (Routledge, 1999) 180.
34   Jenna Burrell, 'The Field Site as a Network: A Strategy for Locating Ethnographic Research' (2009) 21 Field Methods 181–99.
35   Stephen Graham, 'The End of Geography or the Explosion of Place? Conceptualizing Space, Place and Information Technology' (1998) 22(2) Progress in Human Geography 165–85.
36   Martin Dodge and Rob Kitchin, 'Code and the Transduction of Space' (2005) 95 Annals of the Association of American Geographers 162–80, at 172.
37   Matthew Wilson, 'Data Matter(s): Legitimacy, Coding, and Qualifications-of-Life' (2011) 29(5) Environment and Planning D: Society and Space 857–72.
38   Mark Graham, 'Neogeography and the Palimpsests of Place: Web 2.0 and the Construction of a Virtual Earth' (2010) 101(4) Tijdschrift voor Economische en Sociale Geografie 422–36; Mike Crang, 'Envisioning Urban Histories: Bristol as Palimpsest, Postcards and Snapshot' (1996) 28 Environment and Planning A 429–52.


that the Internet has been shown to have distinct spatial biases that greatly influence
possibilities for voice, representation and communication that are mediated through the
network.39 These broad and individualized geographies of enablement and constraint
also then shape the ways that we bring our Internet mediated (or augmented) presences
into being.40
Ultimately, places can never have ontological security and they are always ‘of-the-
moment, brought into being through practices (embodied, social, technical), always
remade every time they are engaged with’.41 Therefore, even if we did choose to employ
a spatial metaphor for online interactions, the singular ‘global village’ entailed by most
popular imaginations of ‘cyberspace’ would remain unsuitable as a way of imagining the
relational and contingent ways that those places are enacted, practised, and brought into
being. We might, of course, simply pluralize 'cyberspace' (as we do with 'knowledges',
'materialities', 'spatialities', 'genders', 'subjectivities', 'positionalities', etc.), but that
doesn't resolve the more problematic assumptions mobilized with the word, which are
addressed in the following section.

4.  BEYOND CYBERSPACE

As noted in the section above, many people have moved beyond the idea of a singular
ontic entity of ‘cyberspace’ that we can enter into to transcend our material presences,
and recognize the hybrid and augmented ways in which the Internet is embedded into our
daily lives. That is probably why few of us imagine a movement into ‘cyberspace’ when
we access Wikipedia, log into Facebook, send an email, or watch a video on YouTube.42
So why do the global leaders present at the London Conference on Cyberspace, national
defence agencies and many academics insist on using this term?
The reason is certainly not that 'cyberspace' is being thought of as a networked
relational space; the metaphor is almost always applied to an ontic entity. Rather, I would
argue that part of the reason can be attributed to the fact that states, and their
representatives and leaders, are naturally concerned with unregulated activity that is hard
to place geographically. When thinking about warfare, hackers, pornography, fraud, and
other threats to the rule of law that pass through the Internet, it is challenging to fully
understand the complex geographies of these processes and

39   Mark Graham and Matthew Zook, 'Visualizing Global Cyberscapes: Mapping User Generated Placemarks' (2011) 18(1) Journal of Urban Technology 115–32; Michael Crutcher and Matthew Zook, 'Placemarks and Waterlines: Racialized Cyberscapes in Post-Katrina Google Earth' (2009) 40 Geoforum 523–34.
40   Mark Graham, Matthew Zook and Andrew Boulton, 'Augmented Reality in the Urban Environment' (2013) 38(3) Transactions of the Institute of British Geographers 464–79; Mark Graham and Matthew Zook, 'Augmented Realities and Uneven Geographies: Exploring the Geolinguistic Contours of the Web' (2013) 45(1) Environment and Planning A 77–99.
41   Rob Kitchin and Martin Dodge, 'Rethinking Maps' (2007) 31(3) Progress in Human Geography 331–44, at 335.
42   We probably also don't imagine a move into 'physical space' when we step out the door. But the crucial difference here is that 'cyberspace' has often been held up as an alternate dimension (whereas 'physical space' usually isn't).


practices. It is much easier to imagine that they simply happen ‘out there’ in Carl Bildt’s
dark spaces of the Internet.
Another reason is likely the extensive literature on the ‘information revolution’ and the
‘networked society’. Most national governments have departments, task forces, plans and
policies set up to address issues of digital exclusion. Because of the existence of the ‘global
village’ ontology of cyberspace, there is often a pollyannaish assumption that once the
material ‘digital divide’ is bridged, the many problems attributed to ‘digital divides’ will
also vanish.43 Or, in other words, once people are placed in front of connected terminals,
the ‘digital divide’ becomes bridged and the previously disconnected are consequently able
to enter ‘cyberspace’. As such, those without access to ‘cyberspace’ and the ‘global village’
are therefore seen to be segregated from the contemporary socio-economic revolution
taking place. This idea of exclusion is powerful,44 and some, such as former US Secretary
of State Colin Powell45 and the chief executive of 3Com,46 have on separate occasions
gone so far as to term this exclusion ‘digital apartheid’.
But the most duplicitous explanation is that a dualistic offline/online worldview can
depoliticize and mask the very real and uneven power relationships between different
groups of people. We cannot simply escape what Doreen Massey47 terms ‘power-
geometries’ by accessing an imagined ‘cyberspace’. While many people and projects
have demonstrated that the Internet can indeed be harnessed to challenge entrenched
economic, cultural and political interests, it remains that it is not a utopian space that
allows us to automatically transcend most of the real and place-based constraints that we
face. This is not to say that ‘virtuality’ cannot provide a site for the alternate performances
that have been so immensely important to queer studies, cyberpunk literature, and various
online social movements. Propositional spaces of possibility and idealism can be left open
by avoiding denials of contingency and recognizing that spaces can be augmented (rather
than transcended) with a range of technological mediations.
In contrast to imaginations of a digital global village and an ontic entity of ‘cyberspace’,
this chapter has argued that there isn’t some sort of universally accessible ‘cyberspace’ that
we are all brought into once we log onto the Internet.48 The Internet is not an abstract
space or digital global village, but rather a network that enables selective connections
between people and information. It is a network that is characterized by highly uneven
geographies and in many ways has simply reinforced global patterns of visibility, repre-
sentation and voice that we’re used to in the offline world.
My hope is that this argument does not mean that we should throw our hands in the

43   Mark Graham, 'Time Machines and Virtual Portals' (2011) 11(3) Progress in Development Studies 211–27.
44   It is also important to point out that it is almost always overstated. Many governments and development organizations with digital strategies tend to confuse necessary and sufficient conditions.
45   See www.businessweek.com/adsections/digital/powell.htm (accessed 19 October 2012).
46   See http://news.cnet.com/FCC-cuts-E-rate-funding/2100-1023_3-212239.html (accessed 19 October 2012).
47   Doreen Massey, 'Power Geometry and a Progressive Sense of Place' in Jon Bird, Barry Curtis, Tim Putnam, George Robertson and Lisa Tickner (eds), Mapping the Futures: Local Cultures, Global Change (Routledge, 1993).
48   There isn't, of course, any sort of universally accessible 'material space'.


air when it comes to thinking about the complex digitalities and materialities of rights.
What is clear is that we need more nuanced language, metaphors and heuristics to talk
about our hybrid, augmented, digitally-mediated lives. In some cases, where rights are
weak in people’s socio-spatial contexts, some talk of rights will certainly need to use the
digital to allow people to grow, and augment their material presences. The digital can be
used as a way of building strength in a context of weak existing rights. In other cases,
we may wish to ensure that current configurations of digital/material interactions are
not privileging transnationally powerful actors and organizations, who can treat human
digitally-mediated activity as if it were disembedded from the socio-spatial contexts in
which it emerged.
It is this balance that we’ll need to get right if we want to be serious about what it
means to respect, protect and implement human rights in a digital age. We will ultimately
need to take the lead on employing more suitable and appropriate ways of talking about
the Internet. But, too often, we have lazily employed old and tired metaphors. Imagining
the Internet as a distinct, immaterial, ethereal alternate dimension makes it more chal-
lenging to think through the contingent and grounded ways in which we consume, enact,
communicate and create through the Internet. The Internet is characterized by complex
spatialities which are challenging to understand and study, but that doesn’t give us an
excuse to fall back on unhelpful metaphors which ignore the Internet’s very real, very
material, and very grounded geographies.



3.  Beyond national security, the emergence of
a digital reason of state(s) led by transnational
guilds of sensitive information: the case of the
Five Eyes Plus network
Didier Bigo

1.  A CHANGE OF PARADIGM OF NATIONAL SECURITY: TOWARDS THE EMERGENCE OF A DIGITAL REASON OF STATE(S)?

I argue in this chapter that the scale and scope of surveillance and the transnationalization
of secret intelligence services we have witnessed over the last few years require a
renewed investigation of contemporary world security practices. Reflexively, this also
means we need a careful mapping of our own categories of analysis, especially that
of national security. National security had a stable meaning during the Cold War,
but has been gradually challenged by the idea of a global security agenda and by the
argument that cooperation between intelligence services to trace transnational threats
is a necessity, especially in matters of terrorism. This destabilization of meaning is not
specific to national security as such. Sovereignty, security communities, territory, border
control, surveillance, technology, intelligence and the rule of law have also come to mean
different things to different people, with different normative and political judgements.
Is security protection of the people, or mass surveillance? Is the rule of law an obstacle to
adequate prevention, which is said to demand efficiency and high-speed action? Are border
controls an effective tool for intelligence purposes if suspects are already on the territory?
The list could go on. What is in question is not the transformation of one of these categories
relative to another, but how all of them have changed simultaneously. This requires
moving away from the mainstream of intelligence and security studies and developing an
International Political Sociology (IPS) of freedom and security inspired by surveillance
studies and human rights legal knowledge, in addition to the legacy of critical security
studies.
Following this IPS approach, I will argue that national security is no longer national
as such, nor does it correspond to a traditional understanding of security as protection
from war. This change in national security practices is what I call 'the emergence of a
digital reason of state(s)', based on the possibility for intelligence services to cooperate
and compete in extending their goals of preventing crime, terrorism or espionage through
the inclusion of technologies that collect traces of human activities.
As I will claim in the first section, national security, because of structural changes
in technologies of action at a distance (including the use of the Internet and smartphones),
the huge exchange of data between different intelligence services seeking to make sense of
events via globally interconnected information, and current forms of neoliberal management, is


no longer political, national and public in its making. In the second section, I will show
that, in the current state of the game, the very idea of 'national' security is
the result of power struggles within a field of actors who want to control the management
of sensitive data. These actors, which I call guilds of sensitive information management,
are now one of the key components of this transnationalization of the Raison d'Etat
(Reason of State) in a digital age, which is still called national security but is in need of
a different name. This, I argue, boils down to an argument about the digitization
and heterogenization of the Raison d'Etat, which the term national security expresses poorly.
Thus, in a third section, I will argue that these inner struggles within a transnational guild of
professionals are creating the series of paradoxical situations that have followed the Snowden
disclosures of the practices of the US National Security Agency (NSA) and the 'Five
Eyes Plus' network. This explains, for example, the paradoxical status of some
European national laws on intelligence, which gave these secret services more personnel,
technology and powers after 2013 than before, even as the necessity of oversight was
recognized. The follow-up to the debates about surveillance and democracy has, by the
same token, brought interventions by 'intelligence non-professionals', or amateurs, into the
very heart of the control of data management, through references to data protection,
encryption, privacy and democracy. But it is not clear that they have succeeded in destabilizing
the idea that surveillance is inevitable in a technological world, an idea often used to justify
an extension of intrusive intelligence, in part because intrusive intelligence has been conflated
with pervasive forms of everyday surveillance, including within the academic literature.

1.1 Methodology

Key to my argument, therefore, is first to understand and analyse how the classic Raison
d'Etat and its contemporary iterations, such as national security during the Cold War, have
undergone a profound mutation with the process of digitization: the emergence
of the 'datafication' of our societies in everyday life, the development of transnational
exchange of data between secret services, and the extension of the personnel involved
in intrusive intelligence beyond police and intelligence services.1 This increase in the gathering
of digital communications and data has nurtured wider transnational collaboration
amongst national intelligence and security professionals, and has resulted in an extension
of the category of foreign intelligence in order to share data that may more specifically be of
national concern. The Reason of State is becoming shared between a group of
states that elaborate strategies responding unequally to their own interests. By
projecting their national security 'inside out', via a transnational alliance of the professionals
of national security and sensitive data, they produce in return an 'outside in' effect of
suspicion towards all Internet subjects; a situation which strongly destabilizes the categories
of 'foreign' and 'domestic' by dispersing them and transforming the line that separated
1  Didier Bigo, Sergio Carrera, Nicholas Hernanz, Julien Jeandesboz, Joanna Parkin, Francesco Ragazzi and Amandine Scherrer, Mass Surveillance of Personal Data by EU Member States and its Compatibility with EU Law, CEPS Liberty and Security in Europe No. 61 (6 November 2013); Zygmunt Bauman, Didier Bigo, Paulo Esteves, Elspeth Guild, Vivienne Jabri, David Lyon and R.B.J. Walker, ‘After Snowden: Rethinking the Impact of Surveillance’ (2014) 8(2) International Political Sociology 121–44.

Didier Bigo - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:49:53AM
via New York University

WAGNER_9781785367717_t.indd 34 13/12/2018 15:25



them into a Möbius strip.2 The national-foreigner divide that organizes both the cleavage
between military and police services and the difference in targeting between citizens
and foreigners is therefore blurred; it becomes highly intersubjective and discretionary,
creating new forms of discrimination different from the traditional national security
argument. As the mode of acquisition changes, along with the objectives, the nature of the groups
in charge changes also. The groups in charge of national security are now transnational
groups of experts, both public and private, both security and data ‘bureaucrats’, obeying
their national politicians but also their transnational allegiances. This is rendered
possible by the accelerated growth of the interception and intrusive collection of data that
these intelligence services, extracting information at a distance, can perform, and by the ease
with which they can perform these extractions on a large scale because of the digitization
of the formats of data and metadata. Those who manage this information hold a
socio-technical capital in information at a distance which allows them to claim a relative
autonomy, challenging the politicians’ national monopoly on assessing who the
enemy is and what the objectives of national security are.
Hence, national security now encapsulates practices which, first, are a mix between
national and transnational objectives; second, are organized more bureaucratically than
politically; and third, are assembled by a hybrid form of grouping of different public
services and private companies interested in the management of sensitive information in
police and intelligence matters.
To integrate these three key elements of the metamorphosis of national
security into the reasoning, I propose a Bourdieusian-inspired analysis of the contemporary international
sphere, insisting on the transnational fields of power, their dynamics, and the dispositions
that the actors enact when what is at stake is the management and extraction of data for
the purpose of constituting watch lists of suspects.3 In this approach, the positions in the
field inform the struggles over symbolic power between the actors, their position-takings
and the regimes of justification they use. The dynamics in the field, and the
emergence of scandals through the disclosure of secret practices creating turbulence, are therefore
determined less by one main actor than by the results of the field interactions, which play a
central role in understanding the compliance and resistance of large parts of the public.
At the core of this grouping of actors connecting many tools of surveillance with
intelligence purposes of prevention is what I call a ‘guild’ of actors having their specific
know-how, dispositions, sense of order and truth rituals at the transnational
scale.4 This notion transposes and clarifies the ones used by previous Bourdieusian

2  Didier Bigo, ‘Internal and External Security(ies): The Möbius Ribbon’ in Mathias Albert, David Jacobson and Yosef Lapid (eds), Identities, Borders, Orders (Minneapolis, MN: University of Minnesota Press, 2001) 91–116; Didier Bigo and R.B.J. Walker, ‘Political Sociology and the Problem of the International’ (2007) 35(3) Millennium: Journal of International Studies 725–39; Didier Bigo, ‘Sécurité intérieure, sécurité extérieure: séparation ou continuum?’ in Sébastien-Yves Laurent and Bertrand Warusfel (eds), Transformations et réformes de la sécurité et du renseignement en Europe (Presses Universitaires de Bordeaux, 2016) 316.
3  Didier Bigo, ‘International Political Sociology: Rethinking the International through Field(s) of Power’ in Tugba Basaran, Didier Bigo, Emmanuel-Pierre Guittet and R.B.J. Walker (eds), Transversal Lines (Routledge, 2016).
4  Didier Bigo, ‘Sociology of Transnational Guilds’ (2016) 10(4) International Political Sociology (1 December) 398–416.


approaches about the role of power elites and professional expertise. Specifying the activities
of intelligence services within the general management of unease by security professionals,
the idea of a guild of management of sensitive information is proposed to analyse the
current composition and roles of the SIGINT (signals intelligence) and Internet agencies
better known under the name of ‘Five Eyes’, which are the US NSA and its private
contractors, plus the UK GCHQ, the Australian ASD, the Canadian CSEC, and the New
Zealand GCSB. These so-called ‘Five Eyes’ have, in addition, asymmetrical ramifications
in different regions, including Europe, where other services play an important role in
producing information and intercepting data. The various segments of this guild (often
referred to as ‘Five Eyes Plus’) have different capitals, be they socio-technical or symbolic,
which give them different assets and allow them to enter, or not, into the competition for
defining and prioritizing the tools, budgets and personnel that are to be in charge of the
worldwide control of the management of sensitive information. The actors of this guild, extracting
data and building profiles of suspects, are now as much technical experts coming from the
digital industry as they are policemen or military. Often, they straddle competencies and
move back and forth between public and private positions. Combining the competencies
of security agents with a strong knowledge of informatics is one of their characteristics.
The apparent heterogeneity of the different trajectories is nevertheless mitigated by
the fact that they all share a common know-how in the management of sensitive data,
and that they consider themselves greater experts in this domain than the politicians
themselves and the high spheres composing their national security councils. The feeling of
power connected with the shared secrecy of data in a small world of experts reinforces
solidarity beyond national ties and reinforces the transnational dimension of professional
expertise. But they are not, in fact, all-powerful, even when they think so. The field of their
inner struggles creates centrifugal dynamics destabilizing the secrecy of their universe, and
whistle-blowers are not exceptional in it. They often resist the doxa of the field, namely
that it is beyond the reach of the rule of law and democratic scrutiny, and reintroduce other
actors, security amateurs, into the debates about what is at stake in this field. It is essential
to take these characteristics into account in order to understand the public controversies
around the legitimacy of large-scale surveillance by intelligence services in the name of
anti-terrorism, the counter-claims of the necessity of democratic controls, and counter-technologies
like encryption. The latter element may in some ways paralyse traditional
forms of protest and mobilization through the quick acceptance that current technologies
are inevitable and necessary. But this form of doxa regarding the social effects of digital
technologies impacts on the public at large and many academics, which reinforces a priori
compliance, but also generates alternative behaviours and reframes the field dynamics in
ways that the core actors do not control.

2.  NATIONAL SECURITY IN A DIGITAL WORLD: A SHRINKING NOTION OR A PHOENIX RESURRECTION?

2.1  When National Security Meant National First

National security is a term often taken for granted and considered almost universal.
Certainly, we do not lack for definitions, and Arnold Wolfers has given a definition


largely accepted in the United States during the Cold War, capturing the older formula
of Walter Lippmann (1943) and condensing it as: ‘a nation is secure to the extent to which
it is not in danger of having to sacrifice its core values’.5 The work of national security
is therefore to produce the means to identify who or what can endanger these values and
to define the threats to these core values in order to protect them. As long as the enemy
is seen as another state whose forces are mainly outside the territory, the horizon of
deterrence and military war can be invoked to ensure that the balance of power is sufficient to
avoid any attack; but if the intelligence services have to cope with more complex elements
arising from social change, decolonization, revolution or attempts at destabilization by
a spectacular form of political violence, the ‘national’ character is already not completely
synchronous with that of territory. Nevertheless, if one adds to military intelligence
services more police and criminal-justice oriented services, it is possible to fill the gap, or
at least to believe so. National security means national interest for the government of a
specific territory and is an instrument of sovereign power. The web of international laws,
regulations and organizations does not penetrate the national logic.
Even after the end of the Cold War, the idea of national security has not been challenged;
on the contrary, national security has been extended to new domains, via societal security or
environmental security.6 Nevertheless, the introduction of the will
to control different threats and global risks began to change the ‘national’ game, and
in-depth cooperation is considered a necessity for the different national intelligence
services to ‘connect the dots’ on a global scale. Agreements which were signed against
a precise common adversary are therefore extended towards a more common appreciation of
what constitutes a threat or a risk, albeit with contradictions and competition between different
forecasts of the risks to come. This happens even if the sharing of information limits
discrepancies, with the possibility that some crucial pieces of information, the most important ones, have
not been shared. Coopetition (cooperation and competition) is officially the rule between
countries, but theoretically not inside the national agencies. This will change.
The ‘war on terror’, launched on 14 September 2001 and presented as the only possible
answer to the 11 September attacks, will facilitate this shift by enforcing a ‘coalition of
the willing’ which considers that a global civil war is at stake and which justifies total information
awareness and the extraction of information by secret services using whatever techniques
they want. The CIA will be put in charge of ‘extracting information’. It seems,
then, that national security has never been so important and has justified exceptional
and emergency measures in the fight against terrorism. But whose national security
is reinforced? At what cost? Is cooperation for the benefit of all countries or only of
some? Is national security still defined by core values, or by the politicization of one agenda
imposed on a coalition unaware of it or complicit in it? By considering national security as
potentially unlimited, with a doctrine of security first where human rights are secondary,
national security will succeed for a period in bringing about a return to a police state on the global
stage, albeit with major criticisms.

5  Arnold Wolfers, ‘“National Security” as an Ambiguous Symbol’ (1952) 67(4) Political Science Quarterly 481–502.
6  Barry Buzan, Ole Wæver and Jaap de Wilde, Security: A New Framework for Analysis (Boulder, CO: Lynne Rienner Publishers, 1998).


The delegitimization of some modalities of the extraction of information, via extraordinary
rendition to countries torturing on behalf of the CIA or complicit in the transportation,
will in fact create a ‘denationalization’ of national security. Different modes of
acquisition will emerge, competing for greater legitimacy and setting against each other de facto
the previous transnational networks of services organized along their technical specificities:
human intelligence, security intelligence, defence intelligence and signals intelligence. But
this time the competition turns into a struggle for symbolic power over security and
for the growth of budgets and missions, with the real possibility that some services disappear as
such.
SIGINT agencies will win the fight for national security by reframing the game
around digitization, large-scale surveillance and cyber(in)security as the main threat of
the future. This is not to say that this is forever. Many current tensions in the transatlantic
domain regarding extrajudicial killing, extraordinary rendition and indefinite detention,
which were almost settled by the Obama Administration, are back again, the elite in
the US Administration considering that their national security imperatives give them an
‘imperial right’ to act as they wish (a position nevertheless not shared by the practitioners).
However, if one looks at what national security has effectively done to justify practices
of exception, it is obvious that, in a less contentious way (but one far more important for
everyday practices), the controversy has since 2013 focused on the management of
sensitive data, especially (but not only) personal data. The first target has become the
NSA and the Five Eyes network, while the CIA’s practices now appear as an error from
the ‘past’.

2.2  Turning Digital? Re-Opening the Competition Around the Control of National Security, Meaning Privileging a Different Set of Actors: SIGINT Agencies

The digitization of the means of acquisition of information for intelligence purposes,
redesigned as the tool par excellence of national security, will change the possibilities of
large-scale surveillance and the idea of national security itself, by inverting priorities and
going outside in to follow the suspects. The core of the activity is then no longer turned
only towards external attacks but also internally, towards potential infiltrations and a
generalization of suspicion. Global security replaces national security and transforms the
latter into a form of ‘egoist’ practice.
This is one of the first paradoxes inherited from the transformations of the practices
of national security. The strong boundary drawn between totalitarian regimes and
democracies rested on the moderate use of intelligence services; but after the end of the
Cold War and the argument of a global war against terrorism, this argument is no longer
a strong currency for many Global North (previously called Western) states.7 It is still
important to differentiate themselves from Russia, China or Iran, but the cybersecurity discourse
is framed more by closing the gap with them than by self-restraint in the use of
intrusive software. For many years, non-democratic regimes have dreamed of having

7  The terminology of Global North includes the United States, Canada, Australia, New Zealand, European countries, Japan, South Korea, Israel; a list which is not far from the configuration of the full Five Eyes Plus network.


instruments to undertake this general surveillance of the regime’s opponents, but the ratio
of surveillants to surveillees was four to one. Now, it is said that preliminary
investigations can be done ‘at a click of the mouse’ and that ‘data collection’ is an easy
task if collaboration between secret services exists (e.g. ICREACH between the Five Eyes
Plus), or if one has built specific spy software for browsing the Internet and social
media, often bypassing the restrictions that the Internet providers have put in place to
limit such easy collection.
China has recently been very efficient, as have Russia and even Iran, in setting up
intrusive means of collecting data, but what is at stake in democracies? Do they behave
differently or not? Are these countries still in a position to outlaw intrusive practices
against Internet users? To paraphrase Allen Dulles, are the secret services of
democracies protected from the temptation, encouraged by the ease of technology,
to collect data not because ‘they need to do it’ but because ‘it is just nice to do it’?
Clearly, since the mid-1990s, technological transformations resulting from the increased
digitization of everyday life have changed the way in which these SIGINT (and, later on,
Internet) intelligence agencies operate at a distance. This capacity already existed at the time of the
Echelon scandal, when it became known that the Five Eyes had intercepted the communications of
their allies in order to gain advantage in commercial competition, but it has changed in
scale with the surveillance of the Internet. Today, digital traces left by almost all transactions
and mundane actions are stored and collected for commercial or security purposes.
Besides, from their offices, acting at a distance, the specialists of intrusion have had the
capacity to trace almost all of the online activities that an Internet user was undertaking
during the day, at least before 2013. After this date, corresponding to the Snowden disclosure
of the practices of the NSA and its network regarding the intrusive capture of Internet
users’ data, as I will show, this has become more complicated, and a fight between encryption
and decryption will give a certain density to the losses and the gains in the digital realm;
but it will not stop the practices of secret services putting at risk the fundamental rights of
Internet users and the democratic scrutiny of their own countries.
It is these latest evolutions that most authors in security and intelligence studies have
not taken into account, mostly because of the proximity between the analyses they produce
themselves and those of the practitioners. The convergence of the two narratives
builds a very strong form of power-knowledge relation inside the field, forming a doxa almost
impossible to challenge, but also a very weak interpretation in terms of understanding
global transformations, obliging a reconsideration of the basic notions at stake.

2.3  Power-Knowledge of National Security: Assumptions at Stake

Security and intelligence studies have often refused to take into account these changes
in the paradigm of national security that we have discussed. They continue with the
idea that a cycle of intelligence exists; that the distinction between data and information
is clear; that the work of intelligence services is under the control of politicians; and that
national allegiances always take primacy over any other. More importantly, they
have even continued to defend the idea that secret services are outside the boundaries of
normal jurisdiction and democratic accountability. They have spent their time confirming
the impression of the agents of this field of secret actions that they have a special
code of conduct; that they have, by contract, immunity allowing them to carry out

forbidden activities as long as they obey a hierarchical order. Certainly, some among
them have insisted that national security is no longer the emanation of a police state, of
the old ‘Raison d’Etat’, but a way to defend democracies against their enemies, and that
consequently the secret services need a new contract making clear the boundaries between
secrecy and publicity, detailing their actions, as well as establishing rules of accountability
that suppose a more powerful and independent oversight verifying the behaviour of
the agents.8 But, for the vast majority of practitioners and academics who were former
practitioners, and for a large part of the realists in international relations, to admit that
secret services come within the realm of legality is a threat to their existence as such; they
therefore claim that agents are most of the time carrying out ‘a-legal’ actions (not directly
illegal, since they have orders, or actions located outside the current legal order,
whereby they benefit from holes in the positive legislation of a technical realm not
yet controlled).9 However, it seems that even if they have easily convinced politicians and
Members of Parliament to accept this rhetoric, they have had far more difficulties with
the judges, who consider that what is not legal is by definition illegal and liable to
be condemned.
More in line with the judicial vision, surveillance studies and critical security studies
have insisted on the imposition of democratic limits on the secret services, which shows
the transformations by which national security is now a field of forces, in the Bourdieusian
sense of a magnetic field attracting more and more actors, including technical and private
ones, around the formation of a legitimate definition of what national security means; and
simultaneously a field of struggles between the transnational groupings of a specific kind
of secret service (e.g., the Five Eyes network) opposing other transnational groupings
(e.g., the CIA and the other external services), not only in international politics but also,
and perhaps mainly, in national politics, where budgets and missions are crucial to obtain,
as well as the power to define the priorities among threats and risks.
It is this field, now populated by different actors all keen to fight for a say
over the emerging digital Reason of State, a field more transnational than
national, more hybrid than public, more bureaucratic than political, that I will describe
in more detail in the next section.

3.  FIVE EYES PLUS CONFIGURATION: EMBODIMENT OF THE DIGITAL REASON OF STATES AND ITS FIELD EFFECTS

Edward Snowden has described the Five Eyes network as a ‘supra-national intelligence
organisation that doesn’t answer to the laws of its own countries’.10 This is an important

8  Richard J. Aldrich, ‘Transatlantic Intelligence and Security Cooperation’ (2004) 80(4) International Affairs (1 July) 731–53; David Omand, Securing the State (London: C. Hurst & Co. Publishers Ltd, 2012).
9  Sébastien Laurent and Bertrand Warusfel, Transformations et réformes de la sécurité et du renseignement en Europe (Pessac, France: Presses Universitaires de Bordeaux, 2016).
10  Edward Snowden, ‘Testimony Submitted to the European Parliament’, Brussels, European Parliament, 2014.


point of departure. The question is not about technology or surveillance in general, nor
about Big Data in our lives; it is about the rule of law and secrecy, the reason
of state or, more exactly, the reason of ‘states’ that consider they may have shared interests,
constructed by the chains of interdependencies of their secret services and the associated
public and private bureaucracies acting inside and outside the territory in a transnational
context.
Too many works have confused intelligence with surveillance, and this has created some
misunderstanding of the current situation. Without an industry developing intrusive
intelligence technologies and working for the secret services, openly or not, the situation
would be different. The same is true of the structure of the relations between the
different secret services specialized in SIGINT and the Internet management of sensitive
data. They are not a form of meta-policing acting against global terrorism by sharing
confidential intelligence, and their mutual ‘trust’ is certainly not highly developed. They
are as much in competition as in collaboration, and they have created their own word,
‘coopetition’, to express this. This is why the Bourdieusian notion of a field of struggles
is so important for describing what is at stake in this domain of intrusive intelligence, and
why understanding the logic of distinction between the agents, as well as the
capitals they can mobilize, explains their current strategies internationally and locally
simultaneously.
Therefore, this approach is reluctant to take for granted the common narrative
regarding the cultural ties of democracies in the Anglo-American world, and the special
relationship between the United States and the United Kingdom forming a unique security
community. The power relations and asymmetries run deep inside the Five Eyes network,
as does the competition for specific tools enhancing the capacities of secret services.

3.1  Mutual Trust Narrative and its Culturalism

Nevertheless, the history of the ‘Five Eyes’ as an organization emerging
from the collaboration during the Second World War and the struggle against communism
is most frequently told through a cultural narrative set up by the first books on the NSA, which has been
repeated again and again without a serious second examination.11 Intelligence studies
create a continuity in the collaboration between the different Anglophone members and
do not speak of the strong conflicts of the mid-1970s, with Australia, for example. They
posit a progressive development from the origins during the Second World War to now,
supposedly based on the mutual trust between these partners, a trust more difficult
to obtain if they were to include the Swedes, the Germans or the French. This may have been
true at the beginning, but deference to NATO and the United States has been de facto
the essential political factor and has played a role regarding countries that wanted
a strong European defence pillar independent from the United States. If the United
Kingdom was recognized as a special partner, it was more because its socio-technical

11  James Bamford, The Puzzle Palace: Inside the National Security Agency, America’s Most Secret Intelligence Organization (New York: Penguin Books, 1983); James Bamford, Body of Secrets: Anatomy of the Ultra-Secret National Security Agency (New York: Doubleday, 2002); James Bamford, The Shadow Factory: The Ultra-Secret NSA from 9/11 to the Eavesdropping on America (New York: Doubleday, 2008).


capital in terms of research has been higher than that of other countries, with specific innovations
(the decryption of the Enigma machine by Alan Turing, better encryption during
the mid-1970s, and the Tempora software with the Internet). The specific location of the
United Kingdom as the prime receiver of transatlantic cables has also made collaboration
with the UK government a necessity, even during disagreements. Australia and
New Zealand have been much more subordinate and considered outposts more than
partners. So, beyond the story of the great ‘fraternity’ between the Anglo-American
peoples, it may be useful to look in more detail at the structure of the network and its
asymmetry.
The network has changed profoundly and is not based on an Anglo-American core
with some marginal partners. The more robust partners, and sometimes
simultaneously adversaries, inside the Five Eyes Plus are the ones placed in strategic
locations in relation to the network cables through which Internet connections are
possible, downgrading the previous key role of satellites. They are also the ones who
have invested in technological research and have their own software or ‘niche’ in the
markets of intrusive surveillance via their (so-called) private industry. Germany, Sweden,
France and Israel are key players. Research into the so-called SSEUR or ‘SIGINT
Seniors Europe’ (Belgium, Denmark, France, Germany, Italy, the Netherlands, Norway,
Spain and Sweden) shows the importance of this network, beyond the Five Eyes as
such.12 Their activities range from active collaboration on counter-terrorism matters and
geopolitical analysis of the Middle East, to competition and struggles around collections
of data intercepted via computer access, border mobility and/or financial transactions for
economic and business advantages for their own firms. The existence of this ‘coopetition’,
with different rules and forms of trust between the agencies, structures an asymmetrical
transnational system of exchange of ‘sensitive’ information in intelligence and police
matters that has to be checked, and implies a reflection on oversight mechanisms when
transnational activities are at stake. SSEUR analysis shows that the key positions gained by
Sweden, Germany and France are related to their structural positioning, and that the
United Kingdom, Australia and New Zealand are not privileged as a so-called ‘first
circle’. They are only privileged if they add material support to the large-scale collection
or retention of data. The internal divisions in almost every agency seem to pit the
denizens of a ‘Wild West’ Internet, where no rights exist and where the most powerful
can intercept what they want if they have an interest in doing so, against a group led by
an alliance of lawyers, inside the services and inside private companies, pleading for a
more systematic collection of information that allows a reduction in false rumours and
minimizes the volume of information.
It seems also that the specificity of the NSA network is to have, all over the world, a
series of agencies specialized in signals intelligence acting ‘regionally’, and that a series of
them, coming from the Western alliance and its clients around the world, exchange
some of the data they intercept, in competition with other networks which seem more
correlated with national structures but may also have connections with Russia, China, or
even Kazakhstan, Iraq and Pakistan.

12  See Ronja Kniep, forthcoming.


3.2  Forms of Capitals which are Mobilized to Play in the Field of Management of Sensitive Information

The hybrids constituted by the secret services and their private partners have different
forms of capitals, some of which materialize easily, while others are more symbolic but
nevertheless very effective. I will here describe the part of the field known as the ‘Five
Eyes Plus’ network, which plays the most important role now that national security is
digitized; but a full picture would need to analyse the various other intelligence services,
the CIA’s network, the FBI’s policing counterparts, and, for each of them, their links with
the private companies, the politicians, the judges and the media. The first criterion is the size
of the personnel and their socio-technical capacities, which means here that it is not the
amount of technology they have which is important, but how they use it. The second
criterion is the location of these services in relation to the Internet cable infrastructure.
The last one is the socialization of their agents and their trajectories.

3.3  Number of Personnel Enrolled into Intrusive SIGINT and Internet Intelligence and Their Socio-Technical Capacities

In the United States, the NSA employs approximately 100,000 people, of whom ‘about 30,000
are military and the rest private contractors’; it is ‘by far the biggest surveillance
agency in the world’.
The NSA has primarily used a platform named UPSTREAM, which operates where a request
through the Five Eyes alliance cannot obtain permission to access information: it bypasses
this barrier with an intrusive capacity to ‘monitor any communication it engages in’,
tapping directly into the infrastructure – undersea fibre-optic cables.13
The other programme that has made news worldwide is PRISM, also run by the NSA.
Its main idea is simple: it is essentially a ‘program where people in corporations
or non-profits of any kind, are complying in helping the government, because they are
forced under the FISA Amendments Act’.14 Basically, PRISM receives information from
Internet or social network providers such as Google, Apple, Facebook and Amazon, but
also Microsoft, Yahoo and Netflix, which act as intermediaries in the process of intrusive
interception by facilitating, or with various degrees of resistance limiting, the ease of
the intrusion, depending on the history of the company, the socialization of its personnel
and the strength of its legal teams.15
The NSA therefore has seven times more personnel than the UK GCHQ and eight

13  Known as ‘upstreaming’ (tapping directly into the communications infrastructure as a
means to intercept data). Upstream collection includes programmes known by the blanket terms
FAIRVIEW, OAKSTAR and STORMBREW, under each of which are individual SIGADs. Each
data processing tool, collection platform, mission and source for raw intelligence is given a specific
numeric signals activity/address designator, or a SIGAD. The NSA listening post at Osan in Korea
has the SIGAD USA-31. Clark Air Force Base is USA-57. PRISM is US-984XN. Source: file://localhost/wiki/SIGAD.
14  See www.revolvy.com/topic/Upstream%20collection&item_type=topic (accessed 8 September 2018).
15  Felix Tréguer, ‘Intelligence Reform and the Snowden Paradox: The Case of France’ (2017) 5(1) Media and Communication 17–28, available at https://doi.org/10.17645/mac.v5i1.821.




times more employees than the French Directorate-General for External Security (DGSE)
or the German Federal Intelligence Service (BND). In addition, the NSA employs private
contractors to do part of the job, so the number of its employees could be considered
12 to 16 times that of any other agency. The same goes for the budget.
The NSA has a budget of 7 billion euros a year. Within Europe, the GCHQ,
with a budget of 1.2 billion euros, is well below the NSA but nevertheless has over
twice the yearly budget of other agencies, such as the BND, DGSE or the Swedish National
Defence Radio Establishment (FRA).
Nevertheless, another important actor is the UK GCHQ, which has a programme of
its own design, different in execution but similar in purpose, focusing on the capacity
to retain data for analysis: the Tempora project, previously known as ‘Mastering the
Internet’. It allows the GCHQ to collect any data that passes through Great Britain and
store it for several days, the time necessary to filter the data with specific selectors, a
technology that all other intelligence services, including the NSA, want to use.
Beyond the NSA and GCHQ, the other powerful actors are services which were not
part of the initial agreements but have a key role because of their positions in the
circulation of data through the cables, and/or their own capacities, in personnel and
technologies, that give them something to ‘exchange’, to ‘sell’, to the other services. What
has been called the Five Eyes ‘Plus’ is this network, which is theoretically composed of 18
eyes, and in which some SIGINT services in Europe and elsewhere clearly exchange more
information than do Canada or New Zealand (supposedly in the core
group).16 The BND in Germany and the DGSE in France are in this first circle of strong actors
because they have these capacities in personnel and technologies, and for them Snowden’s
disclosures have played a positive role, helping them to recruit more personnel and
to obtain more investment in technologies, as we will see later. Their increased role, which
was not clear in 2013, is also connected with their strength on the second criterion,
their location and capacity to intervene in the building, management and interception
of the Internet cables.

3.4  Position in the Internet Infrastructure

The NSA has constructed a network of relations that in practice follows the routes of
the international infrastructure of submarine cables and the places of exchange with the
main terrestrial networks, without forgetting satellite communications.17 The Internet is
therefore not immaterial or in the ‘clouds’; it is a subterranean infrastructure that has
its main roads and its small pathways, as shown by the many maps of submarine and
terrestrial Internet cables.
Looking at these maps, it becomes clearer why the NSA collaborates more with
some agencies than with others. The transatlantic cables in Europe are distributed
with important nodes beginning with the United Kingdom (GCHQ), Sweden (FRA),

16  Some journalists have spoken of the Nine Eyes, with the addition of Denmark, France, the Netherlands and Norway; or the 14 Eyes, with Germany, Belgium, Italy, Spain and Sweden.
17  Ronald Deibert, ‘The Geopolitics of Internet Control: Censorship, Sovereignty, and Cyberspace’ in Andrew Chadwick and Philip N. Howard (eds), Routledge Handbook of Internet Politics (Abingdon, UK: Routledge, 2009) 323–36.




Germany (BND), France (DGSE), the Netherlands, Italy and Spain. Each of these places
is important for intercepting Internet data: Sweden especially regarding Russia, and
France for the Middle East. Germany is central for all EU networks. In North America,
the Canadian Communications Security Establishment (CSEC) has been important for
intercepting data discreetly from Latin America, and Brazil in particular. The connection
in the Southern hemisphere passes through Australia and New Zealand and maybe also Japan.
In Asia, the secret services of Singapore, South Korea and also Pakistan are involved, plus,
in the Middle East, a strong connection with Israel and Jordan, and France for North Africa.
Obviously, despite the extension of the network and the lengthening of the chains of
interdependence between each point of it, the structural power of the NSA
remains very strong because it concentrates in its hands the combination of the
different modalities of acquiring data, via hundreds of specialized programs unified
under various platforms of integration, and specific software for data mining and
profiling. As long as the NSA has the largest personnel, be it its own agents or
contractors, has constructed specific software allowing intrusion, and has obliged the
intermediaries to carry out, or to allow it to carry out, this intrusion, its pre-eminence
cannot be challenged. Nevertheless, as in any form of interdependence, brute power is
much more complex to use in practice, and some local actors may impose local agendas
on the full network.

3.5  Socialization of the Agents, Dispositions, Trajectories

Many things have been said about the culture of trust and mistrust, and the limitations
of this explanation are obvious. The relation to the geopolitics of the cables and the
analysis of capacities in manpower and socio-technical power are certainly more telling, but
may be too mechanistic. What is at stake in understanding the relations between the
individual agents, their willingness or not to collaborate, their solidarities and allegiances
in cases of contradiction between national and professional imperatives, is less well
known. Investigations into the style of intrusive intelligence techniques, the respect
of limitations, and attitudes towards the rule of law and oversight are in progress, but still
lack comparative evidence. Nevertheless, it seems important to notice that the sharing
of information is better with foreign intelligence services that have the same kind of
skills and know-how than with national intelligence services that have different kinds of
know-how, even if the latter theoretically participate in the same mission, or even sit in the same
coordination or fusion centre. Studies of the professionals involved in policing anti-terrorism
have shown that professional training, socialization via Euro-Atlantic meetings,
the digital nature of the exchange of information beyond any one dossier, the ‘geek’ attitude of
some of these agents, and the belief in predictive software and preventive solutions are all
criteria enhancing the so-called necessity of an increased amount of digital
technologies of surveillance in all intelligence operations, and the necessity to be
both a technological and a security person.
I would suggest that the different agencies are hierarchized as the guilds were in the
Middle Ages, with their rituals, their codes, their rules of strict hierarchy, obedience
and solidarity; that is why coopetition is possible at the transnational scale, because
professional solidarities sometimes trump national interests and the political games of the
moment. And just like these old guilds, these transnational professional organizations




confer symbolic power on a specific know-how, which can be strong enough to challenge
some political players and compete against them regarding the truth concerning threats
and risks.
The national security game played by the politicians in charge, and sometimes by
their national security councils, is not the same as the one the transnational guild of
sensitive information is playing, and this may create serious confrontations. For example, the
asymmetry is obviously in favour of the NSA, but this is not in direct correlation with US
policy at any given period of time. Transnational solidarities partly escape the international
political games.
Thus, the NSA is at the core of the guild’s structuration, and often imposes its own
interests onto the others, to the point that the other SIGINT services sometimes seem
to have privileged the interests or information needs of the NSA over the interests of the
alliances and organizations they belong to: for example, the UK GCHQ spying on EU institutions
for the NSA, the BND spying on its own aeronautic industry for the benefit of the NSA
and Boeing, or Australia conducting espionage on Indonesian high officials for the NSA,
directly hurting Australian foreign policy and national interests.
The description of these inner struggles and of the different capitals and strategies of the
central actors carrying out intrusive intelligence, using techniques to capture the
flows of information that are now part of our everyday life, is in my view central to
understanding how global effects are sometimes paradoxical: for example, when the disclosure of
NSA practices helped the so-called victims, i.e., the other secret services, to play
on it in order to claim more budget, personnel and legal powers. It will also allow us to
understand the emerging common doxa of the inevitability of surveillance of the public,
and the problematic confusion of surveillance with intrusive intelligence.

4.  CONTEMPORARY FIELD OF POWER IN THE DIGITAL REASON OF STATE AND ITS EFFECTS OF COMPLIANCE AND RESISTANCE

4.1  Five Eyes Contemporary Structure and the Snowden Paradox: Recent Intelligence Laws and Recent Judgments

The disclosures in 2013 by Edward Snowden of the secret US NSA programme PRISM,
and of more than 1,000 intrusive software systems with genuinely hush-hush codenames,
have raised serious concerns about the scope and scale, the qualitative and quantitative
dimensions, of the surveillance of everyday Internet users for intelligence purposes. What has
been done by the NSA and the Five Eyes network during the previous ten years, in secret?
Is it possible in democracies to act in such a way, certainly less directly violent than the
CIA’s networks, but nevertheless problematic for democratic forms of state?

Quite clearly, Snowden’s disclosures of NSA practices have sparked significant public
and political concerns. Some concerns about security to start with – security for whom? –
were identical to the critique of the war on terror, but they were followed by questions
about technological progress and a sense of the ineluctability of the deprivation of
confidentiality and privacy in our modes of communication, wrapped in an overall
argument about the inherently violent, unsecured and dangerous state of the world.




A change in the regime of justification of national security certainly lies within
this argument. First, a justification has been expressed and presented openly, because the
scandal provoked by the disclosure of large-scale surveillance was too strong to allow a return
to opacity, to the traditional ‘no comment, no denial’ policy. But the national security
argument has been connected centrally with the large-scale intrusive data interceptions
the press has called mass surveillance and bulk collection, while the services and the
Internet providers have considered that their methods were necessary, appropriate and
proportional.

The controversy has implied, on the technological side, a branching out of existing
rhizomatic commercial surveillance for profit into intrusive methods of interception and
collection of personal information by specialized intelligence services and their contractors.
It has also opened a legal conflict with the judiciary on many fronts; the national
security terminology, which was in many countries a doctrine coming from the United
States and the United Kingdom, nevertheless entered national legislation by the
mid-2000s, even if for most of these countries (France, Germany, Spain, Italy) the notion of ‘secret
defence’ is still more relevant in a legal context than that of national security. A reaction
will take time, but will nevertheless transform the US approach and its pervasiveness in
the EU, eventually turning it back in favour of human rights activists.

4.2  A Counter-Move: Rule of Law, Human Rights Countering Technological Arguments and Necessity of Intrusive Intelligence?

National and European courts have been more and more clear in their judgments
post-2010 that, even if it is for the government to decide the content of national security, this
cannot be completely discretionary. This has been, and still is, the main theoretical
challenge for the concept of ‘national security’ and the practices of human
and technological intelligence it encapsulates. National security (and the secrecy around it) cannot be
transformed into a cover for an arbitrariness of the executive that other powers and citizens
cannot check. Even when citizens believe that, in general, agents of the secret services are also
good citizens, they nevertheless want the capacity to differentiate inside the group,
to punish those who act against inviolable rights, like the prohibition of torture, and to
know who has given the orders. Since 2010, in a doctrine not yet stabilized, a framing
developed in Europe via the role of the courts (European Court of Justice
and European Court of Human Rights), but also national courts in the United Kingdom
and Germany, has contradicted the US NSA’s approach following the war on terror,
based on a more military and strategic vision justifying the President’s power and its own
practices. It seems that in Europe, legally, national security cannot trump the rule of law
and democracy for opportunistic political interests; derogations have to be necessary
and proportional to the threat. The threat itself cannot be the product of a flourishing
imagination; it needs some evidence of an actual project of realization. Prevention is not
fiction; anticipation has to have grounds.
Nevertheless, the reactions of the highest courts, acclaimed by activists and lawyers
challenging the government, have not transformed the everyday practices of Internet
users, nor pushed them to defend for themselves their rights of data protection, privacy
and the forms of freedom endangered by intrusive intelligence. The argument of the necessity




of struggle against terrorism has been quite powerful, as has the argument that the
traces left by the use of the Internet and the limitations of privacy are the normal
counterpart of more communication at a distance, as Zuckerberg once bluntly said.

4.3  Resistances and Compliance: Contemporary Situation

Since Snowden’s disclosures, one has certainly witnessed the end of the monopoly that the
different intelligence services and the circles of national security experts held over
defining the legitimacy of the practices enacted in the name of national security. Competing
discourses on the intelligence services coming from non-professional circles (the ‘amateurs’
of security), and from numerous NGOs of Internet activists discussing national security and
surveillance, have emerged and have used the Internet and social networks to voice their
disagreement with these practices. The coalitions between these Internet activists
(‘hacktivists’, as they call themselves) and human rights lawyers, as well as privacy lawyers, have
set up a counter-discourse on the legitimacy of intelligence services in democratic regimes,
which has given the judges of the highest courts the impression that they were not obliged
to be completely deferential to the executive branch, and that their defiance was welcome.
This coalition has also sometimes been supported by major Internet providers accused
of participating in the large-scale surveillance, or at least of being complicit and silent about what
had happened over the more than ten years before Snowden disclosed it. Public controversies
occurred between those who, in light of the revelations, claimed the necessity of
improving intelligence oversight, and those who simply favoured the public denunciation of
intelligence services that have covertly colluded and used the worst and most arbitrary
means to arrest and detain suspects, first with the CIA, secondly with the NSA (and in
some cases killer drones abroad).
Yet the world of intelligence remained quasi-untouched by the different scandals and
has been moving even faster towards a more globalized cooperation among Western
democracies, the implementation of alliances with non-democratic regimes, and the
automation and digitization of its tasks and tools, with the blessing and legitimizing
authority of some of the new laws on surveillance.18
But here also, it is important to understand the ‘fracturing’ of positions between
actors. This happened because some Western governments (Germany, Brazil,
France, Sweden) have played a specific game, due to their ambiguous position in claiming
initially, after Snowden, that they were themselves the first victims of the intrusive
surveillance of the Anglo-American Five Eyes. The different SIGINT services have therefore
lobbied their own governments by pointing to the discrepancy between their tools and the ones
the NSA used, as if it had come as a surprise to them. Efforts towards more means of
‘defensive’ digital technologies for cybersecurity purposes have been their first argument.
By that move, some governments have ‘modernized’ their laws on intelligence, introducing
some official possibilities of better oversight, but simultaneously reinforcing the
possibilities for the intelligence services to use more capacities of interception, not fewer.

18  Didier Bigo, ‘The Paradox at the Heart of the Snowden Revelations’, Open Democracy (10 February 2016); Félix Tréguer, ‘Intelligence Reform and the Snowden Paradox: The Case of France’ (2017) 5(1) Media and Communication (22 March) 17–28.




The paradox has consequently been that post-Snowden national legislation has not
followed the approach hoped for by the NGOs. It has been a way for some services in
need of technical capacities, first, to ask for more funding to acquire them, then to develop
their own segment of the surveillance industry, and ultimately to combat their parliaments and
especially their courts, which were motivated, on the contrary, to limit the intrusiveness of
any techniques applied by intelligence services. It has nevertheless provided a way for
the courts to identify the main violations of rights that such interceptions imply, beginning
with the length of data retention and its use for profiling, the unnecessary large-scale
collection of data, and the function creep in access to databases and data mining.

The structure of the game has therefore reinforced the initial contradictions, and all the
central players may claim that they have won, while at the same time the general debate
initiated in 2013 is, five years later, inaudible in mainstream media and traditional political
arenas.
The fact that the orientation of the debate on large-scale surveillance and personal
data has been connected with the question of the struggle against terrorism by electronic
means has certainly created a very different landscape around the claims of what national
security can perform, and of what is open in terms of operational practices when carried out
by a government and its secret services. There has been a dual effect, especially after the
Paris attacks of 2015: on one side, the revival of a discourse about the necessity of
large-scale surveillance in order to prevent bombings before they happen, justifying
some of the public emergency measures and even their routinization. But, on the other
side, it has also transformed the common understanding of national security by affirming
that its definition, organization and means are not the exclusive domains of a national
security council, and that the intellectual activity of interrogating the purposes of
national security has to be shared by the other branches of power (to use Montesquieu’s
metaphor), in particular the judiciary, in opposition to the arguments that law has
to follow and accept technology.
Indeed, the fate of different reforms of intelligence laws in the United Kingdom,
Germany and France is currently suspended awaiting courts’ decisions. This is also the
case for a series of agreements between the EU and North America (Canada and the
United States) on passenger name records (PNR) and on the transfer of personal data by
private companies. They now depend on judges’ assessments of the impact of the practices
enacted in the name of national security by intelligence and law enforcement services, as
well as private companies, upon privacy, data protection, the right of access to and correction of
personal data, and the ownership of data by the data subject. Parts of the UK Investigatory
Powers Act 2016 were ruled unlawful on 30 January 2018, the EU-Canada Agreement on PNR has
been blocked, and so have different EU directives on data retention,
to mention only a few cases, showing that while an argument on national security
grounds still provides a government the possibility to open a right to exceptional practices,
it will be under the gaze of judges.

4.4  The Inevitability of Surveillance: An Effect of the Doxa of a Transnational Field? Position Takings and Controversies

The story of the inevitability of surveillance has been constructed on the basis of the
potentiality of the system. We have to live with it and forget privacy. The game is changing,




but it always accelerates in the same direction. Privacy is an old idea; laws are always late
with regard to technologies. Nothing can be done politically and collectively. Only a clever
strategy may shift the trend at its margins, but it will come from those who know the
technology. The belief that hackers are more important than judges in saving what is left of
privacy has great currency in some circles. Efficient encryption is more useful than a law
on the right to be forgotten. As surveillance is almost the only route, only smart individuals
are equipped for the winding road of privacy. In any case, if counter-technologies
can block technologies of surveillance, it is only for a short period of time. The ones
without technology merit their fate as victims. Technological individualism is a form of
‘Darwinian’ survival.
How has this discourse of ‘lassitude’ in the face of battles known to be lost in
advance emerged? As I have said previously, it is certainly related to the fact that the
capacity to act at a distance has increased the traceability of data, the possibility
of data retention, and the capacity to build software that enables complex relations between
databases and to deduce from these data emerging trends, statistical categories of
behaviours or individuals; but it has also created the belief that these emergent and
minority trends give intelligence services a decisive advantage in conducting their various
activities, such as espionage, economic intelligence, and the struggle against terrorism
and crime.
This has been advertised as the key role of Big Data in algorithmic analytics. For
many popular newspapers, but also in different important journals, future prediction, in
the form of scientific prediction, is no longer a fantasy but an ‘advanced knowledge’.
Minority Report is for tomorrow, and begins already today. Nevertheless, I will contend
(playing the novel of Philip K. Dick against the end of the movie) that this last point is
not certain at all. The prediction of the secret services looks more like violent sacrifice to
the omens than scientific knowledge of future human actions.19
The lack of debate is astonishing. Why are so many actors saying that surveillance is
an inevitability and a necessity? When debates exist, they seem to concentrate on its
efficiency and its proportionality from a legal point of view, forgetting the very first
argument, that of a possible lack of necessity; in technological debates, the main discourses look like
a religious faith in Big Data analytics and the capacity of artificial intelligence to change
our modes of reasoning by anticipating the future of individual actions.
A paradoxical conclusion is therefore that the inevitability of surveillance is asserted by
very different actors, and constitutes in some ways the new doxa of all of them, including
the critical ones.
Of course, against the narrative of a scientific frame of prediction of the behaviour of terrorist
suspects that justified any increased measure of surveillance, some investigative journalists
and academic books have been central in addressing key issues and demonstrating the
false pretence of this predictivity (e.g. works by Bauman et al. (2014), Lyon (2014) and
Greenwald et al. (2013)).20 Coming from different disciplines, they have raised a number

19  Didier Bigo, ‘Sécurité maximale et prévention? La matrice du futur antérieur et ses grilles’ in Barbara Cassin (ed.), Derrière les grilles: sortir du tout évaluation (Paris: Fayard, Mille et une nuits, 2013).
20  Zygmunt Bauman et al., ‘After Snowden: Rethinking the Impact of Surveillance’ (2014) 8(2) International Political Sociology 121–44; David Lyon, ‘Surveillance, Snowden, and Big Data:




of other issues challenging the main storytelling. First, they have asked why the question
of the role of surveillance in a democracy has been reduced to one about the limits of the
rights to data protection and privacy. Second, they have questioned the discussion around
the balance of power, and why it has been reframed as one about the exaggerated power
of the courts, especially the regional and international courts that can more readily
challenge the legitimacy of presidential powers. And third, they have asked why the question
of secrecy in a democracy has been silenced, limited to a question of transparency, leaving
aside the rights of persons accused of crimes based only on accumulated suspicions, and
caricatured by the discourse that real innocents have nothing to fear if they have nothing
to hide from the police.
But, because this alone does not seem sufficient, and because they have not seen
mobilizations and demonstrations against the practices of the SIGINT Internet intelligence
services, their hope has faded. For example, Bernard Harcourt has beautifully
described the emergence of a society of exhibition, insisting on our own weaknesses, our
self-surveillance tendencies, our willingness to serve rather than wait when the
satisfaction of a desire seems too complicated. Many other books and articles have followed the same
line of thought.21 Zygmunt Bauman has spoken of a do-it-yourself (DIY) surveillance.22
And these arguments are certainly not without validity; but, in my view, they contribute
to avoiding the analysis of the formidable strength of these transnational guilds of
management of sensitive information. This is perhaps because too many of these same
authors have confused, in their theoretical frameworks, surveillance and intrusive
intelligence practices, and in some paradoxical ways have reinforced the positions of the
most powerful actors of the game by validating the idea that we cannot escape the new
world of surveillance and that our only choice is to adjust to it.
So, I would like to finish with a focus on some elements that seem already well-known,
but which are not taken sufficiently seriously. It is important to remember that the
modalities of these SIGINT and Internet intelligence services are not equivalent to
commercial profiling, even if the latter is also a problem. Here, the perpetrators of intrusive
forms of intelligence do not ask you on Facebook to become your electronic friend; on
the contrary, they capture what you have been concerned not to give away, even to your
best friends.
The tacit contract of adherence to the newly established order of the digital Reason
of State and its confusion with practices of surveillance and self-surveillance therefore
defines the doxa of a field of power, but to recognize its existence is not to deconstruct
this doxa. Heterodox positions, heretical subversions are not by themselves sufficient
to break up this established order, as long as it has not experienced objective crisis.

Capacities, Consequences, Critique’ (2014) 1(2) Big Data and Society, 2053951714541861; Glenn
Greenwald et al., No Place to Hide: Edward Snowden, the NSA, and the US Surveillance State (New
York: Macmillan, 2013).
21  Bernard E. Harcourt, Exposed: Desire and Disobedience in the Digital Age (Cambridge, MA: Harvard University Press, 2015).
22  Bauman et al., 'After Snowden: Rethinking the Impact of Surveillance', n. 20 above; Zygmunt Bauman et al., 'Repenser l'impact de la surveillance après l'affaire Snowden: sécurité nationale, droits de l'homme, démocratie, subjectivité et obéissance' (2015) 98 Cultures and Conflicts (15 October) 133–66.


Nevertheless, these heretical positions challenging the authorized discourses on digital national security, on the key role of secret services, may point out the dialectic between the authorized language of the inevitability of surveillance and the disposition of the groups authorizing it by the configuration of their struggles.23

23  Paraphrasing Pierre Bourdieu in Language and Symbolic Power (Cambridge, MA: Harvard University Press, 1993) 127.



4.  Digital copyright and human rights: a balancing
of competing obligations, or is there no conflict?
Benjamin Farrand

Copyright protection on the Internet has become an inherently political and contested
subject. With increased access to the Internet has come a dramatic, indeed exponential, increase in both the desire to create and the desire to access information. However, in
the sharing of information, it is also possible that Internet users may infringe copyright.
Actions seeking to restrict, prevent or seek redress for copyright infringement may in
turn have implications for other rights, such as privacy or freedom of expression. This
is a fact that has been recognized by international bodies. The UN Sub-Commission
on Human Rights, stating that ‘there are apparent conflicts between the intellectual
property rights regime . . . and international human rights law’,1 indicated that these
conflicts were likely to exacerbate informational, social and economic inequalities at
the international level. Yet whether there is a conflict between copyright and other
fundamental rights, or indeed whether copyright protection should be afforded human
right status, are subjects of debate. One key distinction is that between the European
Union (EU), and its focus on human and fundamental rights, and the United States,
in which fundamental rights are not recognized as such, but may instead be considered
as rights originating in the US Constitution. On this basis, this chapter seeks to answer
the question: what tensions exist between copyright protection on the Internet and
other categories of human right? The chapter explores this further by providing an
overview of what copyright protects, arguments concerning its human rights status
at international, regional and national levels, as well as whether copyright protection
is perceived to conflict with other rights. As this chapter will demonstrate, whereas
the EU recognizes copyright protection as a human or fundamental right, drawing
from international conventions on human rights protection, the United States does
not afford those international conventions the same legal weight; furthermore, due
to historical differences, the notion of ‘human rights’ is not as present in US-based
legal discourse. References are instead to Constitutional rights. In the EU, the jurisprudence of the Court of Justice of the European Union (CJEU) has concluded that copyright as a fundamental right can come into conflict with other competing fundamental rights obligations, such as the right to privacy or freedom of expression. For this reason, a balance must be struck. In comparison, jurisprudence in the United States considers that copyright facilitates expression, as was the Constitutional intent, meaning that limits on freedom of speech resulting from copyright protection are not subject to Constitutional analysis, as there is no perceived conflict. Regardless
of these jurisprudential differences, however, citizens in the EU and United States both

1  Sub-Commission on Human Rights, Resolution 2000/7 on Intellectual Property Rights and Human Rights, Office of the High Commissioner for Human Rights, para. 2.




see strong copyright protection on the Internet as having significant implications for
both privacy and freedom of expression; rather than copyright and human rights being
a field of ‘no conflict’, it instead represents an uneasy tension in which a balance must
continually be (re)struck.

1.  COPYRIGHT AS A HUMAN RIGHT: FROM GLOBAL PRINCIPLES TO EU JURISPRUDENCE

Copyright, briefly, concerns the protection of creative works, defined as ‘literary and
artistic works . . . [including] every production in the literary, scientific and artistic
domain’ under the Berne Convention,2 the international agreement serving as the basis for
the international protection of copyright under the Trade Related Aspects of Intellectual
Property Rights Agreement (TRIPS Agreement, or TRIPS). Concluded in 1994 and entering into force in 1995, TRIPS requires under Article 9 that all states parties to the World Trade Organization incorporate Articles 1 to 21 of the Berne Convention into their
national laws.3 Copyright protection therefore extends to all forms of creative content
made available in digital forms on the Internet, whether sound recordings in the form of
MP3s, audio-visual works encoded as AVI or Flash files, ebooks, or computer programs.
Copyright grants the right-holder exclusive rights over acts such as the reproduction of
a work, or its communication or distribution to the public,4 meaning that by performing
these acts without the authorization of the right-holder, an end-user of a computer system
may infringe copyright.
One may be forgiven, then, for thinking that copyright is solely a creature of private
law relationships and commercial activity. The TRIPS Agreement can be considered an
economic treaty, concerned with intellectual property insofar as it relates to the protection
of property in the context of international trade, and as such, no reference to intel-
lectual property and human rights is made. Indeed, studies of intellectual property law,
particularly those associated with the law and economics movement, appear to reinforce
such a perception.5 Furthermore, those authors seeking to provide greater justification
for intellectual property protection, whether using ‘rights-based’ discourse6 or appeals
to maximizing the utility of creative works,7 or instead critiquing the assumptions upon

2  Berne Convention for the Protection of Literary and Artistic Works 1886, Art. 2(1).
3  With the exception of Art. 6bis of the Berne Convention, which pertains to the protection of an author's moral rights.
4  Incorporated into EU law under arts 2, 3 and 4 of Directive 2001/29/EC on the harmonization of certain aspects of copyright and related rights in the information society.
5  See, in particular, Richard A. Posner, 'Intellectual Property: The Law and Economics Approach' (2005) 19 Journal of Economic Perspectives 57; see also Niva Elkin-Koren and Eli M. Salzberger, The Law and Economics of Intellectual Property in the Digital Age: The Limits of Analysis (Routledge, 2013); Michele Boldrin and David K. Levine, Against Intellectual Monopoly (Cambridge University Press, 2008) for alternative economic views on copyright law.
6  Robert P. Merges, Justifying Intellectual Property (Harvard University Press, 2011).
7  As discussed in Anne Barron, 'Copyright Infringement, "Free-Riding" and the Lifeworld' in Lionel Bently, Jennifer Davis and Jane C. Ginsburg (eds), Copyright and Piracy: An Interdisciplinary Critique (Cambridge University Press, 2010).




which such justifications are traditionally based,8 tend toward the grounding of argu-
ments in economic analysis.9 However, another way of reconceptualizing the protection
of copyright, and indeed the tensions that such protection creates, can be in terms of
human rights. Such an approach to intellectual property protection is not new, and has
in recent years generated considerable academic interest.10 The consideration of the right
to protection of copyright can be traced back to the genesis of human rights as universal
principles. Article 27(1) of the Universal Declaration of Human Rights of 1948 states that
everyone is free to participate in the cultural and scientific life of the community, with
Article 27(2) declaring, furthermore, that everyone ‘has the right to the protection of the
moral and material interests resulting from any scientific, literary or artistic production
of which he is the author’. In other words, copyright protection is regarded as a category
of right recognized as representing the ‘inherent dignity [and] the equal and inalienable
rights of all members of the human family’.11 While the consideration of intellectual
property rights was subject to considerable debate and contestation during the drafting of
the Universal Declaration,12 it was nevertheless incorporated into the final text ‘by those
who felt the ongoing internationalization of copyright needed a boost and that this could
be a tool in this respect’.13
As readers will no doubt be aware, however, the Universal Declaration is not directly
legally binding,14 instead serving as the basis for future international law-making.
Other covenants are legally binding, however, such as the International Covenant on
Economic, Social and Cultural Rights (ICESCR).15 The ICESCR mirrors the language
of the Universal Declaration, stating at Article 15(1)(c) that states recognize the right

8  Such as the response to the above cited work by Barron, n. 7 above, in Jonathan Aldred, 'Copyright and the Limits of Law-and-Economics Analysis' in Lionel Bently, Jennifer Davis and Jane C. Ginsburg (eds), Copyright and Piracy: An Interdisciplinary Critique (Cambridge University Press, 2010); see also William F. Patry, How to Fix Copyright (Oxford University Press, 2011).
9  Such as R.M. Hilty and others, 'Comment by the Max-Planck Institute on the Commission's Proposal for a Directive to Amend Directive 2006/116 Concerning the Term of Protection for Copyright and Related Rights' (2009) 31(2) European Intellectual Property Review 59 on whether the term of protection for sound recordings should have been extended to 70 years post-publication in the EU.
10  See, e.g., the works of Laurence R. Helfer and Graeme W. Austin, Human Rights and Intellectual Property: Mapping the Global Interface (Cambridge University Press, 2011); and Duncan Matthews, Intellectual Property, Human Rights and Development: The Role of NGOs and Social Movements (Edward Elgar Publishing Ltd, 2012); as well as the volumes edited by Paul L.C. Torremans (ed.), Intellectual Property and Human Rights: Enhanced Edition of Copyright and Human Rights (Kluwer Law International, 2008); and Christophe Geiger, Research Handbook on Human Rights and Intellectual Property (Edward Elgar Publishing, 2016).
11  Universal Declaration of Human Rights (1948), Preamble, para. 1.
12  Paul L.C. Torremans, 'Copyright as a Human Right' in Paul L.C. Torremans (ed.), Copyright and Human Rights: Freedom of Expression, Intellectual Property, Privacy (Kluwer Law International, 2004) 5–6; see also Helfer and Austin, Human Rights and Intellectual Property, n. 10 above, 176–80.
13  Torremans, 'Copyright as a Human Right', n. 12 above, 6.
14  Mihály Ficsor, 'Collective Management and Multi-Territorial Licensing: Key Issues of the Transposition of Directive 2014/26/EU' in Irini A. Stamatoudi (ed.), New Developments in EU and International Copyright Law (Kluwer Law International, 2016) 220.
15  International Covenant on Economic, Social and Cultural Rights (1966).




of everyone to ‘benefit from the protection of the moral and material interests resulting
from any scientific, literary or artistic production of which he is the author’. Most of
the world is now party to the ICESCR, with the notable exception of the United States,
which, while it signed the Covenant, did not ratify it, with the views of US Administrations
ranging from perceiving the concept of the rights reflected in the ICESCR as being
‘socialist’ during the height of the Cold War,16 to being ‘socially desirable goals’ to strive
towards rather than binding rights,17 to most recently the Obama Administration stating
the importance of human rights, but not committing to ratification of the ICESCR.18
With regard to the treatment of intellectual property generally, and indeed copyright
specifically, the US legal regime provides for no specific protection of these rights as
any kind of human right, instead affording Constitutional protection under Article 1,
Section 8, Clause 8, empowering Congress to ‘promote the Progress of Science and
useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right
to their respective Writings and Discoveries’. In this, it may be seen that the protection
afforded to copyright in the United States is on the basis of a consequentialist line
of reasoning, in which it is for the benefits that the dissemination of works provides
that it is protected, rather than on specific human rights grounds.19 According to
Blau, human rights-based discourses do not feature prominently in US politics, with
US citizens neither clear nor convinced of the relevance of such approaches to legal
protection, particularly in comparison to the EU, where such understandings have
been mainstreamed.20 It is perhaps unsurprising, then, that the majority of discussion
of copyright in human rights terms, and indeed law-making along such lines, has taken
place in Europe. For this reason, the analysis that will be conducted of the balancing of
human rights obligations impacted by copyright will focus upon actions and jurispru-
dence at the EU level.
The European Convention on Human Rights (ECHR), the international treaty to
which all Council of Europe member states are party, also appears to provide for intel-
lectual property protection as a human right. For clarity, it is worth reiterating that the
Council of Europe, the ECHR and the European Court of Human Rights (ECtHR)
are not institutions of EU law, but intergovernmental agreements outwith the EU legal
framework. Under Article 1 of Protocol No. 1 to the ECHR, ‘every natural or legal person
is entitled to the peaceful enjoyment of his possessions’. Whether this applied to ‘intellec-
tual’ property as well as physical property was uncertain, with the ECtHR and European
Commission on Human Rights avoiding consideration of intellectual property-related
issues.21 However, the ECtHR has concluded that the protection of intellectual property

16  See, e.g., Helen M. Stacy, Human Rights for the 21st Century: Sovereignty, Civil Society, Culture (Stanford University Press, 2009).
17  Patrick J. Austin, 'Expansive Rights: FDR's Proposed "Economic" Bill of Rights Memorialized in the International Covenant on Economic, Social and Cultural Rights, But with Little Impact in the United States' (2015) XV Chicago-Kent Journal of International and Comparative Law 1, at 23.
18  Judith Blau, 'Human Rights: What the United States Might Learn from the Rest of the World And, Yes, from American Sociology', doi:10.1111/socf.12299 (2016) Sociological Forum 1, at 2.
19  Helfer and Austin, Human Rights and Intellectual Property, n. 10 above, 174–75.
20  Blau, 'Human Rights', n. 18 above, 7.
21  Laurence R. Helfer, 'The New Innovation Frontier? Intellectual Property and the European Court of Human Rights' (2008) 49 Harvard International Law Journal 1, at 3.




rights, be they copyright, patent or trademark, 'incontestably enjoys the protection of Article 1 of Protocol 1'.22 This decision was confirmed in Anheuser-Busch v. Portugal in
2007, in which the Grand Chamber stated that it agreed ‘with the [trial] Chamber’s conclu-
sion that Article 1 of Protocol No. 1 is applicable to intellectual property as such’.23 For
some time, it had appeared that the EU would become a party to the ECHR. The EU was
expected to join the ECHR after the entry into force of the Treaty of Lisbon, as written
in Article 6 of the Treaty on European Union (TEU). However, in Opinion 2/1324 of the
CJEU, sitting as a full court, it was determined that the agreement formulated to achieve
this accession was not compatible with EU law, particularly regarding the autonomy of
EU law, a decision considered ‘regrettable’ by Storgaard.25
Despite this, the EU has nevertheless incorporated the ECHR into its constitutional order, using it in the construction of its own approach to fundamental rights as general principles of EU law.26 Arguments that copyright should
be included appear based on a belief that ‘a human right to one’s creative productions
arguably casts new emphasis on the role and vulnerabilities of individual creators’.27
Geiger, for example, argues that the consideration of copyright protection in human
rights (or fundamental rights) terms is to be welcomed, and can serve as the basis for a
constitutionalization of intellectual property law.28 Furthermore, the EU has provided for
its own recognition of intellectual property protection as a 'fundamental right' under the EU's Charter of Fundamental Rights.29 Article 17 of the Charter provides for a 'right to property', stating in Article 17(1) that everyone 'has the right to own, use, dispose of and bequeath his or her lawfully acquired possessions', while Article 17(2) rather perfunctorily
based upon Article 1 of Protocol No. 1 of the ECHR, with intellectual property being
mentioned specifically and separately due to its ‘growing importance and Community
secondary legislation’.30 In comparison to the United States, then, the EU appears to
be clear in its understanding that copyright, as well as being an economic right, can also
constitute a human right.

22  Anheuser-Busch Inc. v. Portugal [2005] ECHR 686 (11 October 2005).
23  Anheuser-Busch Inc. v. Portugal [2007] ECHR 40 (11 January 2007), para. 72.
24  Opinion 2/13 of the Full Court of 18 December 2014, EU:C:2014:2454.
25  L.H. Storgaard, 'EU Law Autonomy versus European Fundamental Rights Protection: On Opinion 2/13 on EU Accession to the ECHR' (2015) 15 Human Rights Law Review 485, at 499.
26  Tuomas Mylly, 'The Constitutionalization of the European Legal Order' in Christophe Geiger (ed.), Research Handbook on Human Rights and Intellectual Property (Edward Elgar Publishing, 2015) 105.
27  Helfer and Austin, Human Rights and Intellectual Property, n. 10 above, 180.
28  See generally Christophe Geiger, '"Constitutionalising" Intellectual Property Law? The Influence of Fundamental Rights on Intellectual Property in the European Union' (2006) 37 International Review of Intellectual Property and Competition Law 371; see also Megan M. Carpenter, 'Intellectual Property: A Human (Not Corporate) Right' in David Keane and Yvonne McDermott (eds), The Challenge of Human Rights: Past, Present and Future (Edward Elgar Publishing, 2012); for a critique of this reasoning, see Jonathan Griffiths and Luke McDonagh, 'Fundamental Rights and European IP Law: the Case of Art. 17(2) of the EU' in Christophe Geiger (ed.), Constructing European Intellectual Property: Achievements and New Perspectives (Edward Elgar Publishing, 2013).
29  Charter of Fundamental Rights of the European Union [2012] OJ C326/391.
30  Explanations relating to the Charter of Fundamental Rights [2007] OJ C303/17, 303/23.




2.  STRIKING A FAIR BALANCE: THE DECISIONS OF THE EUROPEAN COURT OF JUSTICE

Perhaps at the forefront of discussions on copyright and human rights on the Internet have been the issues of privacy31 and freedom of expression,32 particularly as they pertain to copyright enforcement. At the outset, it must be made
clear that the TRIPS Agreement makes no reference to the balancing of the protection
of copyright with other human rights obligations. In the EU, the Information Society
Directive,33 the main Directive concerning copyright protection on the Internet, does not
deal with privacy or expression directly as concepts in conflict with copyright protection.
Instead, it makes brief references to the notion, stating in its non-legally binding recital 3
that copyright protection will help facilitate freedom of expression, and at recital 57 that
the use of Technological Prevention Measures (TPMs) used to protect copyright, such
as rights-management information systems that gather data on users of digital works,
should incorporate privacy safeguards as provided for in the Data Protection Directive.34
Furthermore, article 9 states that the Directive is without prejudice to earlier legal provisions, including those pertaining to data protection and privacy. The Enforcement
Directive,35 passed in 2004 and intended to facilitate effective protection of intellectual
property rights through ensuring access to information regarding individuals alleged to be
engaging in infringement,36 refers in recital 2 to the fact that the measures contained ‘should
not hamper . . . the protection of personal data, including on the Internet’, but does not refer
to privacy as a fundamental right. What it does reference, interestingly, is the fundamental
rights dimension of intellectual property protection, by stating that ‘the Directive respects
the fundamental rights . . . recognised by the Charter . . . in particular, [it] seeks to ensure
full respect for intellectual property, in accordance with Article 17(2) of that Charter’.37
The requirement to achieve a balance between competing rights was first established in
the case of Promusicae.38 This case concerned the right to information under article 8 of
the Enforcement Directive, and whether an Internet Service Provider (ISP) was obliged
to provide personal information regarding an alleged copyright infringer. According to
Promusicae, a trade organization representing the recorded music industry in Spain,
ISPs had a legal requirement under the Enforcement Directive to provide information,
such as the name and address of users of the KaZaA peer-to-peer software alleged to be

31  Recognized as a fundamental right under Art. 8 of the ECHR, and under Art. 7 of the Charter of Fundamental Rights, with Art. 8 providing protection for personal data.
32  ECHR, Art. 10 and Charter of Fundamental Rights, Art. 11.
33  Directive 2001/29/EC on the harmonization of certain aspects of copyright and related rights in the information society.
34  Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data, which has been replaced by Regulation 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (GDPR), which has recently entered into force.
35  Directive 2004/48/EC on the enforcement of intellectual property rights.
36  Ibid. art. 8.
37  Ibid. recital 32.
38  Case C-275/06 Productores de Música de España (Promusicae) v. Telefónica de España SAU, EU:C:2008:54.




infringing copyright through sharing copies of copyrighted music. Telefónica, the ISP,
instead claimed that it was only authorized by Spanish law to provide this information to
facilitate investigations into criminal conduct in order to safeguard public security, not in, nor prior to, civil proceedings. In its preliminary ruling, the CJEU concluded that while
article 8 of the Enforcement Directive does require EU Member States to ensure that
national judicial authorities are able to order that information concerning infringement
be provided, it did not necessarily follow that, in order to ensure effective protection of
copyright, this imposed ‘an obligation to communicate personal data in the context of
civil proceedings’.39 Furthermore, the CJEU considered this case in light of fundamental
rights, considering that Member States must ‘take care to rely on an interpretation of the
directives which allows a fair balance to be struck between the various fundamental rights
protected by the Community legal order’.40 In this case, this required a balancing of the
fundamental right to respect of intellectual property, and the protection of personal data
‘and hence of private life’.41
Promusicae was then followed by several judgments concerned with copyright protec-
tion on the Internet, each of which reiterated this principle and the need to ensure the
striking of a ‘fair balance’ between these fundamental rights. In two closely linked cases,
Scarlet v. SABAM42 and SABAM v. Netlog,43 the CJEU was asked to consider requests
by SABAM, a collective rights management organization in Belgium, to require Scarlet,
an ISP, to end copyright infringement by users of its service. SABAM proposed a system
in which Scarlet would make it impossible for its customers to send or receive copyright
infringing files by way of peer-to-peer file-sharing software, on the basis that it considered
the ISP ‘best placed . . . to take measures to bring to an end copyright infringements
committed by its customers’.44 The CJEU concluded that in order to prevent infringing
activities from taking place in the way SABAM requested, a preventative monitoring
system would need to be created that would identify files related to peer-to-peer traffic,
and indeed whether those files contained content protected by copyright, in order to
block that infringing activity. Such a system was declared incompatible with article 15
of the E-Commerce Directive,45 which provides that Member States shall not impose an
obligation upon ISPs to actively monitor the information that they transmit or store.46
Furthermore, the CJEU considered that such a system would raise serious privacy
concerns, as it would necessitate analysing all information to be transmitted and all cus-
tomers using that network.47 While acknowledging that copyright protection constitutes a
fundamental right under Article 17(2) of the Charter of Fundamental Rights, the CJEU
argued that there was nothing in that provision to indicate that the ‘right is inviolable and

39  Ibid. para. 58.
40  Ibid. para. 68.
41  Ibid. para. 63.
42  Case C-70/10 Scarlet Extended SA v. Société belge des auteurs, compositeurs et éditeurs SCRL (SABAM), EU:C:2011:771.
43  Case C-360/10 SABAM v. Netlog NV, EU:C:2012:85.
44  Scarlet, n. 42 above, para. 18.
45  Directive 2000/31/EC on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market.
46  Scarlet, n. 42 above, para. 40.
47  Ibid. para. 39.




must for that reason be absolutely protected’.48 Therefore, following Promusicae, it was
necessary to ensure that a fair balance was struck between the protection of copyright as
a fundamental right, and the right to protection of personal data and private life under
Article 8 of the Charter, as well as the freedom to receive and impart information under
Article 11.49 In the proposed system, this balance would not be struck.50 The reasoning in
Netlog was much briefer, but followed directly from Scarlet. Netlog was not an ISP, but a
social media platform, similar in function to Facebook. In this case, SABAM requested
the same filtering system be activated, but in this instance by the owners of the Netlog
platform, rather than the ISP. The CJEU reiterated its decision from Scarlet, stating the
need for a balance to be struck, and that such a preventative monitoring system would
be incompatible with the right to privacy under Article 8 of the Charter, as well as the
freedom to receive or impart information under Article 11.51
Interpretations of the effect of these decisions have varied considerably. While
Hugenholtz states that these decisions make it clear that copyright protection must be
balanced with other fundamental rights such as privacy and freedom of expression,52
the extent to which this may be considered an explicit recognition of, and indeed desire
to protect, fundamental rights is disputed; whereas citizens’ rights organizations such as
EDRi (European Digital Rights) considered these decisions to constitute a ‘vital victory
for Internet freedoms’,53 not all academics have been so convinced of an express human
rights rationale. Griffiths, for example, argues that the human rights arguments raised in
cases such as Scarlet and Netlog did not reflect a desire by the Court to expressly balance
competing human rights obligations, but instead reinforced a decision already made on
the basis of specific Directives, albeit in a way that expanded the Court’s competence.54
Regardless of underlying motive, however, it is clear that in the jurisprudence of the
CJEU, the understanding is that copyright and other fundamental freedoms may be in
conflict, requiring a balancing of those competing obligations. This has been further
reflected in cases such as GS Media,55 where it was concluded that hyperlinking to content
online would only constitute copyright infringement where it was communicated to a
‘new’ public by means of circumventing password protection or other TPMs,56 and if the
alleged infringer was likely to have knowledge that their act constituted an infringement.57
On the issue of fundamental rights, the Court stated that:

48  Ibid. para. 43.
49  Ibid. para. 50.
50  Ibid. para. 53.
51  Netlog, n. 43 above, para. 48.
52  P. Bernt Hugenholtz, ‘Flexible Copyright: Can EU Author’s Rights Accommodate Fair Use?’ in Irini A. Stamatoudi (ed.), New Developments in EU and International Copyright Law (Kluwer Law International, 2016) 432.
53  EDRi, Scarlet v SABAM: A Win for Fundamental Rights and Internet Freedoms (EDRi, 30 November 2011), available at www.edri.org/edrigram/number9.23/scarlet-sabam-win-fundamental-rights (accessed 7 August 2013).
54  Jonathan Griffiths, ‘Constitutionalising or Harmonizing? The Court of Justice, the Right to Property and European Copyright Law’ (2013) 38 European Law Review 65.
55  Case C-160/15 GS Media BV v. Sanoma Media Netherlands and others, EU:C:2016:644.
56  Following Case C-466/12 Svensson v. Retriever Sverige AB, EU:C:2014:76.
57  GS Media, n. 55 above, paras. 48–49.

Benjamin Farrand - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:49:58AM
via New York University

WAGNER_9781785367717_t.indd 60 13/12/2018 15:25


Digital copyright and human rights  61

[It] should be noted that the internet is in fact of particular importance to freedom of expression
and of information, safeguarded by Article 11 of the Charter, and that hyperlinks contribute
to its sound operation as well as to the exchange of opinions and information in that network
characterised by the availability of immense amounts of information.58

Similarly, in UPC Telekabel,59 which considered whether IP blocking regimes that prevented users from accessing websites making available copyright-infringing materials were contrary to principles of fundamental rights, the CJEU held that the protection of copyright as a fundamental right had to be balanced against other fundamental rights, such as the freedom to conduct a business and the right to access information.60 Noting that there was ‘nothing
whatsoever in the wording of Article 17(2) of the Charter to suggest that the right to
intellectual property is inviolable and must for that reason be absolutely protected’,61 the
CJEU concluded that blocking orders would nevertheless be viewed as proportionate where they are specifically targeted and ‘do not unnecessarily deprive internet users of the possibility of lawfully accessing the information available’.62 The CJEU has
therefore developed a line of case law that is explicit on two points: (1) that copyright pro-
tection constitutes a fundamental right, deserving of protection under the EU Charter;
and (2) that its protection may potentially come into conflict with other fundamental
rights, requiring a balancing of those competing obligations. That this conflict between
copyright protection and other fundamental rights exists, and is a conflict that requires
balancing, is something that has been recognized and adopted by other EU institutions
such as the European Commission. For example, in the December 2015 Communication
on ‘Towards a Modern, More European Copyright Framework’,63 the Commission stated
that while copyright requires civil enforcement mechanisms, they should ‘take full account
of fundamental rights’.64 Whether the proposed reforms to enforcement achieve this, however, remains to be seen: at the time of writing, the Commission’s approach to this issue had not yet been published.

3.  COPYRIGHT AS THE ENGINE OF EXPRESSION: FIRST AMENDMENT CONSIDERATIONS IN THE UNITED STATES

In contrast to the thrust of the case law in the EU, disputes in the United States concerning the balancing of copyright protection with other rights are not framed in human rights terms. While the jurisprudence at times deals with similar issues, these disputes are framed by constitutional rather than fundamental rights discourses. As stated above, copyright
protection is afforded constitutional protection in the United States, on the basis of a
consequentialist reasoning that dictates protection is granted in order to facilitate

58  Ibid. para. 45.
59  Case C-314/12 UPC Telekabel v. Constantin Film Verleih GmbH, EU:C:2014:192.
60  Ibid. para. 46.
61  Ibid. para. 61.
62  Ibid. para. 63.
63  European Commission, Towards a Modern, More European Copyright Framework, COM(2015) 626.
64  Ibid. 11.


socially beneficial progress in the arts and sciences. In comparison to the EU, there is
no specifically stated Constitutional status afforded to the right of privacy in the United
States, and privacy ultimately receives little legal protection.65 The Supreme Court in Griswold v. Connecticut66 nevertheless declared that there is an implied Constitutional protection
of privacy by way of the First Amendment,67 which ‘has a penumbra where privacy is
protected from governmental intrusion’.68 This right of privacy has been applied in cases
such as Roe v. Wade,69 regarding a woman’s right to a termination of pregnancy. However,
as Strahilevitz argues, this protection is limited in its efficacy, ‘largely because judges
have chosen to interpret the First Amendment in a way that places privacy and speech
interests at loggerheads’.70 Furthermore, according to Shiffrin, protections for privacy in
the United States are rooted in protecting the liberty of individuals against government
tyranny, ‘rather than against violations of dignity by the media or the market’.71 Indeed,
as the First Amendment states, ‘Congress shall make no law respecting an establishment
of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech,
or of the press’. The target of this principle is law-makers, rather than individuals
or organizations, raising questions as to whether individuals can rely upon the First
Amendment in horizontal disputes with other individuals. In the context of copyright law,
this perception of two rights at ‘loggerheads’, namely privacy and freedom of expression,
appears credible.
In fact, given the comparatively meagre privacy protection afforded to those in the public eye in the United States, including politicians and celebrities,72 it is perhaps no surprise that litigants have attempted to use copyright as a means of protecting privacy rights otherwise given short shrift by the courts. Some of the earlier cases considering the interaction between copyright and freedom of speech under the First Amendment were in fact
cases in which the dispute was over rights to privacy in conflict with free speech, in which
copyright protection was not at the centre of the conflict, but the means by which other
rights were being protected.73 For example, in the Random House case,74 the Second Circuit Court of Appeals was asked to intervene in a dispute between Howard Hughes, an
entrepreneur known for his eccentric and reclusive lifestyle, and John Keats, writer of an
unauthorized biography. Seeking to protect his privacy and prevent certain details of his

65  Lior Jacob Strahilevitz, ‘Toward a Positive Theory of Privacy Law’ (2013) 126 Harvard Law Review 2010, at 2012.
66  Griswold v. Connecticut, 381 US 479 (1965).
67  In addition to others, such as the Fifth Amendment concerning self-incrimination; for the purposes of this chapter, however, the main focus is upon the First Amendment.
68  Ibid. 483.
69  Roe v. Wade, 410 US 113 (1973).
70  Strahilevitz, ‘Toward a Positive Theory of Privacy Law’, n. 65 above, 2013.
71  Steven H. Shiffrin, What’s Wrong with the First Amendment? (Cambridge University Press, 2016) 23.
72  See generally, Scott J. Shackelford, ‘Fragile Merchandise: A Comparative Analysis of the Privacy Rights for Public Figures’ (2012) 49 American Business Law Journal 125.
73  Benjamin Farrand, ‘Regulatory Capitalism, Decentred Enforcement and Its Legal Consequences for Digital Expression: The Use of Copyright Law to Restrict Freedom of Speech Online’ (2013) 10 Journal of Information Technology and Politics 404, at 412.
74  Rosemont Enterprises Inc. v. Random House Inc., 366 F.2d 303 (2d Cir. 1966).


life becoming public, Hughes had purchased the copyright over a series of articles called
‘The Howard Hughes Story’, upon which John Keats had drawn considerably in writing
his biography. Placing freedom of speech above privacy, the Court concluded that the use of the material from the articles was fair use,75 a ‘privilege in others than the owner
of a copyright to use copyrighted material in a reasonable manner without his consent’,76
and that on occasion the interest of a copyright holder may be subordinated to ‘the greater
public interest in the development of art, science and industry’.77 Given the comparatively little judicial weight accorded to privacy in the United States, more attention has been paid to the interaction between copyright and First Amendment rights to speech.
In comparison to the EU, where the interaction between the protection of copyright
and freedom of expression is seen as one of balancing between separate and distinct
rights, in the United States, ‘contemporary jurisprudence is not one that balances free
speech considerations against other interests . . . [and] does not weigh the free speech
rights of one party against the copyright rights of the other party’.78 Instead, if copyright
is perceived as interfering with free speech, it is to be analysed internally, determining
whether copyright itself is compliant with the First Amendment. In Harper & Row,79
the US Supreme Court considered the interaction between copyright and the First
Amendment, determining that not only was copyright First Amendment compatible, but
that it acted to serve the interests of the First Amendment. Referring to the drafting of
the Constitution, the Supreme Court argued that:

[the] framers intended copyright itself to be the engine of free expression. By establishing a
marketable right to the use of one’s expression, copyright supplies the economic incentive to
create and disseminate ideas.80

In other words, the Supreme Court has held that there is no need to scrutinize copyright’s
compatibility with the First Amendment, as both ultimately serve the same goal, namely,
that of facilitating speech.81 The Supreme Court reiterated this in Eldred v. Ashcroft,82 a
case concerned with the duration of copyright, and an appeal against the term of protec-
tion being extended to the life of the author plus 70 years. In this case, the Supreme Court
referred to Harper & Row, stating that ‘copyright law contains built-in First Amendment

75  17 US Code § 107, which allows for the use of a work protected by copyright, subject to an assessment by a court regarding whether that use is fair, taking into account aspects such as the nature of the use, the nature of the copyrighted work, the amount and substantiality of the use as a portion of the original work, and whether such use is commercial in nature.
76  Ibid. para. 13.
77  Ibid.
78  Michael D. Birnhack, ‘Copyright Speech: A Transatlantic View’ in Paul Torremans (ed.), Copyright and Human Rights: Freedom of Expression, Intellectual Property, Privacy (Kluwer Law International, 2004) 41–42.
79  Harper & Row Publishers v. Nation Enterprises, 471 US 539 (1985).
80  Ibid. 558, emphasis added.
81  Melville B. Nimmer, ‘Does Copyright Abridge the First Amendment Guarantees of Free Speech and Press’ (1969) 17 UCLA Law Review 1180; William McGinty, ‘First Amendment Rights to Protected Expression: What are the Traditional Contours of Copyright Law?’ (2008) 23 Berkeley Technology Law Journal 1099.
82  Eldred v. Ashcroft, 537 US 186 (2003).


protections’,83 referring once again to the fair use doctrine. While over-protection of copyright may potentially conflict with free speech, the internal conflict is mediated through reliance upon fair use as a defence when a party is accused of infringement.84 In Golan v. Holder,85 the Court again affirmed this position, finding no conflict.86 A number of
scholars have supported this position, such as Eisgruber, who has argued that ‘copyright
is not censorious [and it] does not pick and choose among ideas and subject-matters’,87
and Nimmer, who argued that while the First Amendment serves to protect speech,
copyright then works to disseminate it; any potential hindrance to freedom of speech is
‘far out-balanced by the public benefit that accrues through copyright encouragement
of creativity’.88 Such a perception, it is submitted, is based on the view that rather than
having the potential to censor, copyright instead serves as a tool for the exchange of
ideas.89
Yet such an approach is too focused upon the role of the state as censor, which is
understandable given the nature of the First Amendment as a check on governmental
power.90 However, on the Internet, media and market power complement and, in some respects, supplant the power of the state.91 Timothy Garton Ash refers to the Internet’s
private superpowers, companies such as Google, Facebook, Twitter, Intel, Oracle, Cisco
and Wikipedia.92 Yet it is not only these Internet intermediary service providers that wield
considerable power on the Internet, but traditional copyright industries such as the media,
record labels and movie studios have also been empowered. Through a process of online
intermediarization, the decision whether actions on the Internet are considered to be a fair use of a work no longer resides in a judicial assessment, but in an administrative exercise performed by a private undertaking such as Google or Facebook.93 Under the Digital
Millennium Copyright Act (DMCA) of 1998,94 a service provider shall not be deemed
liable for copyright infringement for material transferred through the Internet connection
or stored on a computer system, so long as it removes or restricts access to the infringing

83  Ibid. 219.
84  It must be stated that fair use is not a ‘proactive’ right that can be enforced, but is a defence to an accusation, as discussed generally in Pierre N. Leval, ‘Toward a Fair Use Standard’ (1990) 103 Harvard Law Review 1105.
85  Golan v. Holder, 132 S Ct 873 (2012).
86  Ibid. 890.
87  Christopher L. Eisgruber, ‘Censorship, Copyright and Free Speech: Some Tentative Skepticism About the Campaign to Impose First Amendment Restrictions on Copyright Law’ (2003) 2 Journal on Telecommunications and High Technology Law 17, at 18.
88  Nimmer, ‘Does Copyright Abridge the First Amendment Guarantees’, n. 81 above, 1192.
89  Thomas F. Cotter, ‘Gutenberg’s Legacy: Copyright, Censorship and Religious Pluralism’ (2003) 91(2) California Law Review 323, at 328.
90  Eisgruber, ‘Censorship, Copyright and Free Speech’, n. 87 above, 18; Neil Weinstock Netanel, Copyright’s Paradox (Oxford University Press, 2008) 35.
91  See, e.g., Niva Elkin-Koren and Eli M. Salzberger, Law, Economics and Cyberspace: The Effects of Cyberspace on the Economic Analysis of Law (Edward Elgar Publishing, 2004); Angela Daly, Private Power, Online Information Flows and EU Law: Mind the Gap (Hart Publishing, 2016).
92  Timothy Garton Ash, Free Speech: Ten Principles for a Connected World (Main edition, Atlantic Books, 2016) 21.
93  See generally Farrand, ‘Regulatory Capitalism, Decentred Enforcement’, n. 73 above.
94  17 US Code § 512.


content upon receiving a request from the copyright holder. Through a system known as ‘notice and takedown’, should an allegation be made to a service provider such as YouTube that a video contains infringing content, the DMCA effectively obliges that service provider to remove it in order to retain its immunity. Where that removed content
is a direct copy of a work, such as a song from an album uploaded to YouTube without
the permission of the copyright holder, in principle there would be no First Amendment
concern; the right to free speech does not include the right to pirate someone else’s work,95
or as it was put in Eldred v. Ashcroft, to ‘make other people’s speeches’.96
However, in more complex cases, there is the risk that notice and takedown in the
absence of judicial assessment may lead to conflicts with free speech, such as where a
transformative work such as a mash-up, in which two songs are remixed together in a way
to create a new, original work, is removed from YouTube,97 or where a video of a dancing
baby is removed because of the inclusion of the music to which the baby was dancing.98
In the second case, the uploader of the video sued Universal Music Group (UMG) for
misrepresentation of a DMCA claim, arguing that UMG acted in bad faith, as the entity
would have been aware that such a use would be fair.99 While the District Court found in the uploader’s favour, it was not until the appeal was heard by the Ninth Circuit Court of Appeals in 2015,100 eight years later, that the court affirmed that the video was fair use. While reiterating that fair use was a defence rather than a cause of action, the Court of Appeals nevertheless concluded that copyright holders have a ‘duty to consider, in good faith and prior to sending a takedown notification, whether allegedly infringing material constitutes fair use’.101 It must be stated that the First Amendment was not mentioned at any point in
the decision of the District Court, nor in the Court of Appeal. It would appear that, at
least in the eyes of the Court, there is still no (officially recognized) conflict. Nevertheless,
scholars have argued that the DMCA and the system of notice and takedown do have a
noticeable ‘chilling effect’,102 as noted by the Hon. M. Margaret McKeown, a judge of the Ninth Circuit Court of Appeals. In an address to the Chicago-Kent Supreme Court IPR, republished as an article, McKeown stated that ‘copyright has become a go-to tool to prevent the

95  Mark A. Lemley and Eugene Volokh, ‘Freedom of Speech and Injunctions in Intellectual Property Cases’ (1998) 48 Duke Law Journal 147, at 211–12; Rebecca Tushnet, ‘Copy This Essay: How Fair Use Doctrine Harms Free Speech and How Copying Serves It’ (2004) 114 Yale Law Journal 535, at 567.
96  Eldred v. Ashcroft, n. 82 above, 191.
97  See generally Andrew S. Long, ‘Mashed Up Videos and Broken Down Copyright: Changing Copyright to Promote the First Amendment Values of Transformative Video’ (2007) 60 Oklahoma Law Review 317.
98  Lawrence Lessig, Remix: Making Art and Commerce Thrive in the Hybrid Economy (Avery Publishing, 2008) 2–3; Samantha Von Hoene, ‘Fair Use in the Classroom; A Conundrum for Digital User-Generated Content in the Remix Culture’ (2015) 7 Hastings Science and Technology Law Journal 97.
99  Lenz v. Universal Music Group, 572 F.Supp.2d 1150 (2007).
100  Lenz v. Universal Music Group, 801 F.3d 1126 (2015).
101  Ibid. 1138.
102  Margot Kaminski, ‘Copyright Crime and Punishment: The First Amendment’s Proportionality Problem’ (2013) 73 Maryland Law Review 587; see also Wendy Seltzer, ‘Free Speech Unmoored in Copyright’s Safe Harbor: Chilling Effects of the DMCA on the First Amendment’ (2010) 24 Harvard Journal of Law and Technology 171.


spread of damaging or offensive information on the Internet’.103 Examples include the use of claims of copyright infringement to remove the WikiLeaks disclosure of US diplomatic
cables, embarrassing internal memos by Citigroup praising austerity as good for business
after receiving considerable bailouts, and the use of takedown requests to suppress elec-
tion campaign videos on YouTube by rival camps.104 In the absence of effective privacy
protection, McKeown argues, copyright law becomes a surrogate.105 While copyright
may be seen as the engine of expression, and fair use its safety valve, in the absence of
strong human rights norms, there remains an uneasy tension in the apparent ‘no conflict’
approach of US jurisprudence.

4.  SEEING THINGS DIFFERENTLY: CITIZEN PARTICIPATION IN LAW-MAKING WHERE BALANCE IS NOT FOUND, OR RIGHTS ARE IN CONFLICT

Whether framed as a fine-tuned balance, as perceived by the judges of the CJEU, or as an internal conflict (or indeed a denial of conflict) resolved by recourse to the principle of fair use, as in the United States, the compatibility of copyright protection with human rights online is an increasingly contested political debate.106 With the increase in concern over the existence
of ‘digital rights’,107 Internet users become activists, forming ‘issue networks’, in which
they collaborate to raise awareness and challenge what they perceive as threats to these
online freedoms, in particular where they concern privacy and freedom of speech.108 We
have seen such protests throughout the EU and the United States, albeit taking somewhat
different forms and focused upon different fundamental rights issues. Taking these developments chronologically (as the two phenomena are linked), it is best to consider the United States first. Due to the perceived ineffectiveness of the established ‘notice and
takedown’ system, two Bills with similar subject matter were announced in 2011, one
in the House of Representatives, the other in the Senate. In the House, a Bill called the
Stop Online Piracy Act (SOPA)109 was introduced by Representative Lamar Smith, and
in the Senate, the PROTECT IP Act (PIPA)110 by Senator Patrick Leahy. Both proposals

103  M. Margaret McKeown, ‘Censorship in the Guise of Authorship: Harmonizing Copyright and the First Amendment’ (2016) 15 Chicago-Kent Journal of Intellectual Property 1, at 11.
104  See Farrand, ‘Regulatory Capitalism, Decentred Enforcement’, n. 73 above, 415–19 for more on these examples.
105  McKeown, ‘Censorship in the Guise of Authorship’, n. 103 above, 14–15.
106  See, e.g., Bill D. Herman, The Fight Over Digital Rights: The Politics of Copyright and Technology (Cambridge University Press, 2013); Benjamin Farrand, Networks of Power in Digital Copyright Law and Policy: Political Salience, Expertise and the Legislative Process (Routledge, 2014).
107  See, e.g., Lawrence Lessig, Code 2.0 (Basic Books, 2006); Rebecca MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom (Basic Books, 2012).
108  On Internet users as activists in issue networks, see Milton Mueller, Networks and States: The Global Politics of Internet Governance (MIT Press, 2010); on protest as a form of civic engagement, see Ken Kollman, Outside Lobbying: Public Opinion and Interest Group Strategies (Princeton University Press, 1998).
109  Stop Online Piracy Act, HR.3261, 112th Congress (2011–2012).
110  PROTECT IP Act, S.968, 112th Congress (2011–2012).


were considered to have been heavily lobbied for by copyright industries, including the
Recording Industry Association of America, and the Motion Picture Association of
America.111 SOPA112 contained provisions that would allow for immunity from liability
of an Internet intermediary service provider for voluntarily blocking foreign websites (i.e.
websites with a non-US based IP address)113 believed to be infringing IP rights, allowing
for the Attorney General to order the blocking of websites believed to be infringing IP
rights,114 as well as making it illegal to stream infringing copyrighted works.115 According
to Minnock, these provisions raised considerable First Amendment concerns, includ-
ing the right of US citizens to ‘read and listen to foreign speech’.116 According to the
Electronic Frontier Foundation (EFF), while ostensibly intended to target sites allowing
for indiscriminate piracy of copyrighted works, SOPA’s vaguely defined notion of ‘foreign
website’ could include services such as Rapidshare and Dropbox, as well as sites discussing
piracy such as TorrentFreak, and sites featuring user-generated content such as DeviantArt or SoundCloud, which may incorporate elements of a copyrighted work in a way
constituting fair use. EFF concluded by stating that ‘had these bills been passed five or
ten years ago, even YouTube might not exist today’.117
Despite judicial understanding that copyright poses no First Amendment conflict, and
that fair use serves as a suitable safety valve for free speech, such an understanding was not
shared by US citizens concerned by the SOPA and PIPA legislation. Activists pointed to
the Department of Homeland Security’s Immigration and Customs Enforcement (ICE)
unit erroneously shutting down websites for alleged copyright infringements that were
not sharing infringing materials, such as a hip-hop blog, without allowing the owner the
opportunity to appeal that decision prior to the site being taken offline.118 With increased
public attention being brought to the SOPA/PIPA proposals, online activists began to
disseminate information concerning the potential for the legislation to impact upon First
Amendment rights, framing their arguments as an explicit conflict between copyright
protection and freedom of speech.119 The event that significantly raised the profile of

111  Christian Yoder, ‘A Post-SOPA (Stop Online Piracy Act) Shift in International Intellectual Property Norm Creation’ (2012) 15 Journal of World Intellectual Property 379, at 379; see also Susan K. Sell, ‘Revenge of the “Nerds”: Collective Action Against Intellectual Property Maximalism in the Global Information Age’ (2013) 15 International Studies Review 67, at 79.
112  In the interests of brevity, and given the similarity of the two documents, this chapter will focus on the substantive provisions of SOPA alone.
113  SOPA, s.104.
114  SOPA, s.102.
115  SOPA, s.201.
116  Stephanie Minnock, ‘Should Copyright Laws be Able to Keep Up with Online Piracy?’ (2014) 12 Colorado Technology Law Journal 523, at 533.
117  EFF Issues, SOPA/PIPA: Internet Blacklist Legislation (Electronic Frontier Foundation, 2012), available at www.eff.org/issues/coica-internet-censorship-and-copyright-bill (accessed 19 November 2016).
118  MacKinnon, Consent of the Networked, n. 107 above, 102–3.
119  See Annemarie Bridy, ‘Copyright Policymaking as Procedural Democratic Process: A Discourse-Theoretic Perspective on ACTA, SOPA, and PIPA’ (2012) 30 Cardozo Arts and Entertainment Law Journal 153; Sandra Schmitz, ‘The US SOPA and PIPA: A European Perspective’ (2013) 27 International Review of Law, Computers and Technology 213; and Yoder, ‘A Post-SOPA (Stop Online Piracy Act) Shift’, n. 111 above.


anti-SOPA/PIPA activists, however, was the action taken by Wikipedia in January 2012.
After discussions between Jimmy Wales, founder of Wikipedia, and Wikipedia collabora-
tors, Wales made the decision to ‘black out’ Wikipedia on 18 January as a form of protest
against the IP Bills. Making the website inaccessible, except for a message stating that the
proposed legislation could ‘fatally damage the free and open Internet’, Wales provided
information regarding SOPA, and how concerned US citizens could contact their political
representatives regarding SOPA/PIPA, on a page accessed more than 162 million times.120
On this basis, more than 8 million people looked up their representatives’ contact infor-
mation using a Wikipedia provided tool.121 This action was mirrored by other leading
websites and online communities in a form of ‘day of action’, with other sites such as
Reddit also becoming inaccessible,122 and sites like Google and Flickr featuring protests
against SOPA/PIPA.123 As a result of this coordinated online and offline action, in which
the Internet was used to spread information regarding the potential First Amendment
concerns of the SOPA/PIPA legislation, and offline contacting of representatives in order
to protest against the adoption of the legislation, plans for the legislation were ultimately
shelved. The media attention, with newspapers such as the New York Times focusing
upon the protests and the concerns of activists,124 and the specific framing of SOPA/PIPA
as presenting serious First Amendment threats, facilitated collaborative and engaged
political action that made the position of Congress untenable.125 While not necessarily
resolving the apparent conflict between copyright and the First Amendment, what the
debate over SOPA/PIPA demonstrates is that the judicial perception of there being ‘no
conflict’ is not one shared by citizens.
In the EU, citizen mobilization centred on the Anti-Counterfeiting Trade Agreement
(ACTA). ACTA was a plurilateral trade agreement negotiated outside of the traditional
framework of the World Trade Organization and World Intellectual Property Organization,
which, amongst other provisions, such as allowing for the seizure and destruction of phar-
maceuticals in transit that were believed to infringe upon patents,126 also included provisions
on the protection of copyright on the Internet. Negotiated between Canada, Australia, the
EU, Japan, Mexico, Morocco, New Zealand, the Republic of Korea, Singapore, Switzerland

120  Piotr Konieczny, ‘The Day Wikipedia Stood Still: Wikipedia’s Editors’ Participation in the 2012 Anti-SOPA Protests as a Case Study of Online Organization Empowering International and National Political Opportunity Structures’ (2014) 62 Current Sociology 994, at 996.
121  Ibid. 996–97.
122  Richard Mills and Adam Fish, ‘A Computational Study of How and Why Reddit.com Was an Effective Platform in the Campaign Against SOPA’ in Gabriele Meiselwitz (ed.), Social Computing and Social Media (Springer International Publishing, 2015), available at http://link.springer.com/chapter/10.1007/978-3-319-20367-6_23 (accessed 19 November 2016).
123  Bridy, ‘Copyright Policymaking as Procedural Democratic Process’, n. 119 above.
124  Jonathan Weisman, ‘In Piracy Bill fight, new economy rises against old’, New York Times, 18 January 2012, available at www.nytimes.com/2012/01/19/technology/web-protests-piracy-bill-and-2-key-senators-change-course.html (accessed 19 November 2016).
125  Peter Jay Smith, ‘Speaking for Freedom, Normalizing the Net?’ (2013) 10 Journal of Information Technology and Politics 423.
126  Of particular concern to countries such as India and Brazil, as discussed in Benjamin Farrand and Helena Carrapico, ‘Copyright Law as a Matter of (Inter)national Security? The Attempt to Securitise Commercial Infringement and Its Spillover onto Individual Liability’ (2012) 57 Crime, Law and Social Change 373.


and the United States,127 leaks of drafts of the Agreement in 2008, then confirmed by the
release of deliberative drafts by the European Commission in 2010, indicated that Article
2.14 provided for the imposition of criminal sanctions ‘at least in cases of willful trademark
counterfeiting or copyright or related rights piracy on a commercial scale’, which was
intended to include wilful copyright and related rights infringements ‘that have no direct or
indirect motivation of financial gain’. Article 2.18(1) specifically stated that these sanctions
should be applied to cases of wilful infringement that take place by means of the Internet/
in the digital environment.128 In the final text, ACTA, Article 27(1), stated that each party
to the Agreement would ‘promote cooperative efforts with the business community to effec-
tively address . . . copyright or related rights infringement . . . while preserving fundamental
principles such as freedom of expression, fair process and privacy’. Unlike the legislation
proposed in the United States, this international agreement did make specific reference to
fundamental rights due to the involvement of the EU in negotiations. Indeed, academic
opinion was that while there were legitimate concerns regarding the lack of transparency in
the ACTA negotiating process, the Agreement itself did not propose any significant changes
to the existing EU regime, nor present threats to fundamental rights beyond those that have
already been discussed in the context of the Promusicae and Scarlet cases.129
Nevertheless, as with concerns over SOPA/PIPA, although legal opinion may have been
that there was not necessarily anything in ACTA to upset the existing balance between
the protection of copyright and other fundamental rights, this understanding was not
shared by online activists. Drawing considerably upon the resistance to SOPA/PIPA in the
United States, Polish citizens began mobilizing activists through Facebook, culminating
in a number of anti-ACTA protests taking place in early 2012. Facebook pages such as
‘Nie dla ACTA’ were set up, which according to a post made on the page on 21 January
2012, had over 10,000 views within the first 24 hours of being active. By 22 January 2012,
this number had reached 100,000 views.130 On 26 January, thousands of Polish protestors
marched through the streets of cities such as Warsaw, Krakow and Wroclaw, drawing
international press attention.131 These protestors framed ACTA not as being an instru-
ment of copyright protection, but as a significant threat to the privacy rights and freedom
of expression of EU citizens. As with the attention-raising activities of Wikipedia
against SOPA/PIPA, online organizations such as LQDN and Digitale Linke began
work facilitating information dissemination regarding ACTA and the perceived threat
to ‘Internet freedoms’. As a result, a coordinated protest took place in February 2012,
including tens of thousands of protestors in Germany, as well as thousands in Bulgaria,
France and other EU Member States.132 LQDN in particular provided information on

127  Ibid. 392.
128  Farrand, Networks of Power in Digital Copyright Law and Policy, n. 106 above, 181.
129  See, e.g., ibid.; C. Geiger, ‘Weakening Multilateralism in Intellectual Property Lawmaking: A European Perspective on ACTA’ (2012) 3(2) WIPO Journal 166; cf. Emma Leith, ‘ACTA: The Anti-Counterfeiting Crack-Down’ (2011) 22(3) Entertainment Law Review 81.
130  Farrand, Networks of Power in Digital Copyright Law and Policy, n. 106 above, 184.
131  Ibid.
132  Charles Arthur, ‘ACTA criticised after thousands protest in Europe’, The Guardian, 13 February 2012, available at www.theguardian.com/technology/2012/feb/13/acta-protests-europe (accessed 19 August 2013).

how activists could contact Members of the European Parliament (MEPs) to voice their
concern, including contact details and scripts that could be used by those calling their
MEP.133 Through this combination of online activism, increased media attention, and
subsequent offline engagement with the political process, the position of the European
Parliament on ACTA changed significantly; from being fully in support of it, and urging
the Commission to take the action necessary to speedily conclude the Agreement,134 the
European Parliament instead became concerned about the implications of ACTA for the Internet and for fundamental freedoms. The European Parliament, once again affirming
the balancing approach of the EU, recommended rejection of ACTA, stating that ‘it is
crucial to strike the appropriate balance between enforcement of IPRs and fundamental
rights such as freedom of expression, the right to privacy and protection of personal
data’.135 ACTA, it concluded, did not strike such a balance, and as such, had negative
implications for rights to freedom of expression and privacy.136 In June 2012, following
unprecedented citizen engagement with the European Parliament, the ratification of
ACTA by the EU was rejected by 478 votes to 39.137
However, it must be stated that citizen participation in challenging intellectual property agreements perceived by members of the public as conflicting with other human rights, such as access to information and privacy, depends largely upon their awareness of the existence of such agreements, and upon those agreements being framed in language that allows for mobilization against these legal reforms.138 Both the ill-fated Trans-Pacific Partnership
and Transatlantic Trade and Investment Partnership agreements appear to be dead with
little chance of resuscitation. Both agreements had provisions appearing to mandate
copyright enforcement provisions that would impact fundamental rights in similar ways
to SOPA/PIPA and ACTA, yet citizen mobilization against these agreements was not
nearly as visible as that against SOPA/PIPA in the United States, and ACTA in the EU.
Indeed, it was the election of US President Trump, rather than citizen activism, that led to the suspension of negotiations on these agreements, on the basis of an ostensibly anti-free trade agenda rather than a concern over fundamental rights. Where changes to laws are more complex, and less visible, public engagement with law reform is significantly reduced, meaning that the balance may be tipped further in favour of copyright protection, as opposed to other competing human rights interests. What follows
these failed initiatives, whether in the EU, United States, or internationally, remains to
be seen.

133  La Quadrature du Net, How to Act Against ACTA (2012), available at www.laquadrature.net/wiki/How_to_act_against_ACTA#Contact_your_Elected_Representatives (accessed 20 August 2013); La Quadrature du Net, ‘Tools’ (2012), available at www.laquadrature.net/en/tools (accessed 20 August 2013).
134  European Parliament, ‘Motion for a Resolution to Wind up the Debate on the Statement by the Commission Pursuant to Rule 110(2) of the Rules of Procedure on ACTA’ (2010).
135  Committee on Civil Liberties, Justice and Home Affairs, Draft Opinion of the Committee on Civil Liberties, Justice and Home Affairs for the Committee on International Trade on the Compatibility of the Anti-Counterfeiting Trade Agreement with the Rights Enshrined in the Charter of Fundamental Rights of the European Union, 2011/0167(NLE) (European Parliament, 2012) para. 3.
136  Ibid. 14.
137  Farrand, Networks of Power in Digital Copyright Law and Policy, n. 106 above, 188.
138  Ibid. 192–94.


5.  CONCLUDING THOUGHTS

Whereas at the global level, copyright protection is afforded universal human rights
protection, the level of acceptance of this proposition in different legal regimes varies.
The EU has been unequivocal in its affirmation that it affords copyright protection
fundamental rights status. It has reiterated this both in binding legislation, as well as in
the jurisprudence of its highest court. Yet, it recognizes that even as a fundamental right,
copyright protection requires balancing against interests represented in other fundamen-
tal rights, such as privacy or freedom of expression – copyright is not supreme as a form
of right, and the extent of its protections is limited in order to ensure effective protection
of other competing interests. In the United States, however, the courts have not reached
the same conclusions. The legal position is that copyright facilitates rather than hinders
speech, and therefore is not subject to First Amendment analysis. Any potential restric-
tions of speech can be managed internally, through the fair use doctrine. However, as has
been demonstrated, the automation of notice and takedown procedures, as well as the
ability to use copyright to suppress information in such a way that fair use cannot be relied
upon, brings the US courts’ jurisprudence into question. It is a position that Internet
activists may not necessarily agree with; it is interesting to note that the unity between
the US and EU systems came in the form of citizen protests decrying the expansion of
IP rights on the Internet, with both groups mobilizing through coordinated online and
offline action to combat these perceived threats. For Internet users, expansive copyright
protections pose significant threats to privacy, as well as freedom of expression, and they
are willing to engage in lobbying in order to prevent that threat. It is difficult, then, to say
that between copyright and other human rights, there is no conflict.



5.  Cybersecurity and human rights
Myriam Dunn Cavelty and Camino Kavanagh

1. INTRODUCTION

The insecurity of global cyberspace is brought to our attention often: data breaches and other cyber-related incidents feature regularly in the news. It is not only their frequency that is increasing; the incidents also seem to be getting more sophisticated and therefore more consequential, both in terms of financial damage and, as a result of political action, in terms of the erosion of human rights protections. This development is fuelled by the
widespread and continuing securitization and militarization of the issue: cybersecurity is
treated as a serious matter of national and international security by all states. Cyberspace
is considered both a vulnerability-multiplier and a force-multiplier – a target (risk to
cyberspace) and a weapon (risk through cyberspace).1 As a result, cybersecurity is handled
as an issue that cannot be sufficiently managed through technical, legal or regulatory
means alone; states also approach the insecurity of cyberspace through diplomatic tools
as well as military and intelligence practices.
This chapter is concerned with the complex relationship between national security
and human rights. On the one hand, a certain level of security – generally understood as
freedom from risks and dangers – is needed so that human rights can be guaranteed. Even
more, security is a human right: Article 3 of the Universal Declaration of Human Rights
states that ‘Everyone has the right to life, liberty and security of person’. On the other
hand, national security needs are often invoked as a justification for policies that directly
violate human rights. In particular, when security is tied to the survival of the state or
the survival of the party in power, it is often put forward as more important than other
societal values such as liberty, privacy, or freedom of opinion and expression, especially
in times of heightened ‘threat’.
When cybersecurity is framed as a national security concern, its relationship with
human rights tends to be problematic as well. However, a major difficulty for researchers
interested in the relationship between cybersecurity and human rights is the lack of an
established academic research agenda. A literature search at the beginning of 2017 in two of the most prominent research databases (Web of Science and Scopus), using the combined keywords ‘cybersecurity’ and ‘human rights’, reveals only a dozen entries in each.
However, a more policy-oriented body of literature on cybersecurity and human rights
has emerged in the last few years. This literature focuses predominantly on those state
practices with most implications for human rights,2 but also sheds light on the activity

1  Ronald J. Deibert and Rafal Rohozinski, ‘Risking Security: Policies and Paradoxes of Cyberspace Security’ (2010) 4(1) International Political Sociology 15.
2  See, e.g., AccessNow, A Human Rights Response to Government Hacking (2016), available at www.accessnow.org/governmenthackingdoc; Natalie Green and Caroline Rossini, Cybersecurity and Human Rights (Public Knowledge, 2015), available at www.publicknowledge.org/cybersecurity-


of non-state actors – hackers, criminals, terrorist or violent extremist groups – and the responses by states to such activity in the name of public safety and national and
international security. Cybersecurity by itself is a fast-growing field of study, but it is
dominated by computer sciences and engineering scholars and by the quest for better
technical security solutions. In contrast, literature on the political aspects of the topic is
not as well-developed.3
The difficulty any scholar who wants to study cybersecurity and human rights faces is
thus twofold. First of all, it is hard to build on an established body of literature from which
to further develop research questions. Second, and related, making a clear-cut contribu-
tion to a scholarly debate is equally difficult, as is finding scholarly journals where such
research would fit. Navigating around this difficulty, we focus this chapter less on a discus-
sion of literature and more on policy developments. By pointing to the various aspects of
the tension-filled relationship between cybersecurity and human rights, the chapter aims
to be a solid starting point for identifying the most interesting research topics.
The chapter has three sections. In the first, we look at two different understandings of
cybersecurity that reveal the underlying issues between cybersecurity and human rights. In
particular, we show that a more technical understanding based on the notion of technical
‘information security’ and a more political understanding linked to ‘national security’
exists. Most ‘information security’ practices in the technical domain are not problematic
for human rights – in fact, are even aligned with them in some instances. However, the
use of cyberspace as a strategic domain of warfare, as a tool to attain foreign policy
objectives, or as an auxiliary for counter-terrorism efforts through mass surveillance or
online content restrictions, is often problematic for human rights. We end the first part
with a discussion of what it means when cybersecurity is ‘securitized’ and/or ‘militarized’
in terms of security practices by states.
In the second section of the chapter, we show in more detail how specific security
practices clash with human rights, in particular with the right to privacy, freedom of
expression and the free flow of information. Importantly, it is not only state actors that
create human rights issues through their actions; increasingly, data collection efforts by
non-state actors, mainly companies, as well as the self-regulatory policies of Internet ser-
vice providers are cause for concern. We also describe current initiatives by supranational
organizations and advocacy groups geared towards countering the negative impact of
cybersecurity on human rights.
Finally, and based on the work of some of these advocacy groups, we attempt to
define a middle ground between cybersecurity and human rights. Our argument is that
cybersecurity and human rights need not be opposed, as is often assumed. We argue
that drawing from the technical understanding of security, a good argument can be
made that if states do not honour human rights in their uses of cyberspace, security in
cyberspace cannot exist.

and-human-rights; Anja Kovacs and Dixie Hawtin, Cyber Security, Cyber Surveillance, and Online Human Rights (Global Partners Digital and Internet Democracy Project, 2013), available at www.gp-digital.org/wp-content/uploads/pubs/Cyber-Security-Cyber-Surveillance-and-Online-Human-Rights-Kovacs-Hawtin.pdf.
3  Myriam Dunn Cavelty, ‘Cybersecurity Research Meets Science and Technology Studies’ (2018) 6(2) Politics and Governance 22.


2.  CYBERSECURITY, WHAT DO YOU MEAN?

Definitions are necessary to attach precise meanings to words, thereby reducing the
potential for misunderstandings. Beyond that, they also are interesting study objects,
particularly when concepts are new and stable meanings are only just forming. On the one
hand, struggles over definitions can help reveal underlying tensions in an issue area. On
the other, definitions are inherently political, in that they always serve particular interests
and convey particular values.4 Conversely, this means that the larger the divergence between values and interests in different camps, the harder it usually is to come to an agreement about definitions.
Definitions of cybersecurity are a case in point. This chapter does not aim at an exhaustive discussion of the definitional struggles that shape much of the current politics. Rather, two fundamentally different understandings of cybersecurity serve to illustrate the root causes of the clash of cybersecurity practices with human rights and, as a result, with human security, understood as a form of security that ‘distances itself from the exclusive grip of a state-determined concept and becomes security relevant to people’.5 In a first section, we look at technology-focused definitions. In a second, we introduce the framing of cybersecurity as national security.
Clearly, the two types of security – national security and human security – differ decisively in scope, in the actors involved, and in their referent object (that which is thought to be in need of protection). They differ in scope, because national security entails quite a different
quality of engagement in terms of policy development (who shapes and implements the
policy) and resources (monetary, personnel, etc.) and very importantly, they differ hugely
in the mobilization of emotions. They also differ in terms of the actors involved in the two
types of security: computer or IT security experts, on the one hand, and ‘professionals
of (in)security’ on the other. Furthermore, while the security of information systems is,
in its pure form, concerned with technical measures to ensure that information flows
uninterrupted and uncorrupted, national security measures include much more, such
as the maintenance of armed forces, the maintenance of intelligence services to detect
threats, and civil defence measures. In a third section, we look at what happens when the
second framing gains traction.

2.1  Information Security

In the technical sphere, cybersecurity is linked to the so-called CIA Triad, the protection
of confidentiality, integrity and availability (CIA) of information.6 A comprehensive
definition along these lines is given by the ITU, the United Nations specialized agency for
information and communication technologies:

4  Edward Schiappa, Defining Reality: Definitions and the Politics of Meaning (Southern Illinois University Press, 2003).
5  Gunhild Hoogensen and Kirsti Stuvøy, ‘Gender, Resistance and Human Security’ (2006) 37(2) Security Dialogue 207.
6  See ISO Standard ISO/IEC 27032:2012.


Cybersecurity is the collection of tools, policies, security concepts, security safeguards, guidelines, risk management approaches, actions, training, best practices, assurance and technologies that can be used to protect the cyber environment and organization and user’s assets.
Organization and user’s assets include connected computing devices, personnel, infrastructure,
applications, services, telecommunications systems, and the totality of transmitted and/or
stored information in the cyber environment. Cybersecurity strives to ensure the attainment
and maintenance of the security properties of the organization and user’s assets against
relevant security risks in the cyber environment. The general security objectives comprise
the following: Availability; Integrity, which may include authenticity and non-repudiation;
Confidentiality.7

Quoting from just one of many possible glossaries (National Institute of Standards and
Technology, NIST), the CIA concepts are defined as follows:8

● Confidentiality means preserving authorized restrictions on access and disclosure, including means for protecting personal privacy and proprietary information. Data encryption is a common method of ensuring confidentiality, as are user IDs and passwords as standard procedure.
● Integrity means guarding against improper information modification or destruction, and includes ensuring information nonrepudiation and authenticity. Data integrity covers data in storage, during processing, and while in transit. Typical measures include file permissions and user access controls.
● Availability means ensuring timely and reliable access to and use of information. It is ensured by hardware maintenance and regular and timely system upgrades, but also by disaster recovery plans.
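The integrity principle in particular lends itself to a brief technical illustration. The following Python sketch – illustrative only, with hypothetical record contents – shows how a cryptographic hash can serve as a tamper-evident fingerprint for stored data, which is one common building block of the integrity measures described above:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest acting as a tamper-evident fingerprint."""
    return hashlib.sha256(data).hexdigest()

# A record is fingerprinted when it is stored ...
record = b"Account balance: 100"
stored_digest = fingerprint(record)

# ... and verified when it is read back. Any modification, however
# small, produces a completely different digest.
tampered = b"Account balance: 900"

assert fingerprint(record) == stored_digest      # integrity intact
assert fingerprint(tampered) != stored_digest    # tampering detected
```

Comparable verification steps underpin, for example, file integrity monitoring and software update checks; the point for present purposes is simply that such technical measures protect data without, in themselves, touching on the rights of the data subject.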

As is immediately obvious from these basic definitions, there is a close connection between
these principles and human rights such as privacy or the free flow of information.
Information security (or information assurance, as it is sometimes called) is the overall
goal and signifies the condition when the risks to confidentiality, integrity and availability
of information are adequately managed. Since there are certifiable ISO standards as well
as a variety of laws and regulations with an effect on data processing and information
security,9 most companies base their cybersecurity practices on similar definitions. In the
world of policy, the struggle over definitions is a pertinent and itself deeply political issue, revealing ongoing bureaucratic turf ‘wars’ over influence in this field. However, with the

7  ITU, ‘Definition of Cybersecurity’, www.itu.int/en/ITU-T/studygroups/com17/Pages/cybersecurity.aspx.
8  Richard Kissel (ed.), Glossary of Key Information Security Terms, NISTIR 7298, Revision 2 (2013), available at http://dx.doi.org/10.6028/NIST.IR.7298r2.
9  For example, the European Union General Data Protection Regulation 2016/679 (GDPR), which replaces Data Protection Directive 95/46/EC, was designed to harmonize data privacy laws across Europe, to protect and empower all EU citizens’ data privacy and reshape the way organizations across the region approach data privacy. National efforts to implement the EU Data Protection Directive such as the UK Data Protection Act of 1998 provide the national frameworks for regulating the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information. These will need to be updated in line with GDPR requirements.


recent attempts of many states to centralize approaches to cybersecurity through top-level national strategies or at least coordination of activities across agencies, we see more
official definitions emerging. A comparison of these reveals that most of them are based
on variations of the CIA Triad as well.10
In sum, the technology and data-oriented view of cybersecurity is widespread and well-
established, with clear consequences for how cybersecurity issues are handled in practice
in the server rooms of this world. Importantly, there is no disagreement about the referent
object (a term which signifies the entity to be protected) in the case of information security
and no disagreement about what kind of values or interests are behind it. It is data and
its properties that are at the centre of attention and its value is related to the function that
data performs.

2.2  National Security

While most countries have relatively technical official definitions of cybersecurity, others give their definition a distinctly strategic touch. Those include, among others, France, Germany, Saudi Arabia, the United States and the United Kingdom. The latter, for example, establishes the link between cyberspace and national interests as follows:

Cyber security embraces both the protection of UK interests in cyber space and also the pursuit
of wider UK security policy through exploitation of the many opportunities that cyber space
offers.11

In sync with this, several strategic studies scholars have moved beyond the relatively con-
stricted confines of technical networks and define cybersecurity as the security one enjoys
in and from cyberspace,12 thus establishing a national security context. Even though
there has been a national security connotation to cybersecurity from the beginning of the
cyber-threats narrative,13 that particular focus has intensified over the years, in parallel
to society’s increasing ‘cyberification’ and the overall impression that cyber incidents are
becoming more frequent, more organized, costlier, more connected to geopolitics, and
altogether more dangerous.
Clearly, the concept of ‘information security’ as discussed above has seemingly little
in common with this type of security, which is about concepts such as survival and
exceptional measures such as military power.14 What this means is that the creation of
a specific ‘high politics’ connotation of cybersecurity had to be established in politics

10  See, for a list of countries and their definitions, NATO Cooperative Cyber Defence Centre of Excellence, ‘Cyber Definitions’, https://ccdcoe.org/cyber-definitions.html.
11  Cabinet Office, Cyber Security Strategy of the United Kingdom: Safety, Security and Resilience in Cyber Space (2009).
12  Paul Cornish, Rex Hughes and David Livingstone, Cyberspace and the National Security of the United Kingdom: Threats and Responses (Chatham House, 2009).
13  Myriam Dunn Cavelty, Cyber-Security and Threat Politics: US Efforts to Secure the Information Age (Routledge, 2008).
14  Lene Hansen and Helen Nissenbaum, ‘Digital Disaster, Cyber Security, and the Copenhagen School’ (2009) 53(4) International Studies Quarterly 1155, at 1160; Barry Buzan and Lene Hansen, The Evolution of International Security Studies (Cambridge University Press, 2009) 15.


first, so as to legitimize the national security framing. This happened through the
connection of the cyber-prefix to other security-relevant issues and concepts like war
and conflict, terror, or weapons. This way, the technical aspect of cyberspace is brought
into direct, though often loose, contact with more traditional forms of violence. The
prefix signals the relationship to cyberspace, which policy-makers believe to be a new
domain with rule-altering properties. Therefore, the new concepts create a specific
type of urgency and threat. An overall feeling of being vulnerable is enhanced by the
demonstrated willingness of dangerous actors, both state and non-state, to ruthlessly
exploit these vulnerabilities. Every major cyber incident serves as proof of this. It is
in this space that the second meaning of cybersecurity unfolds considerable political
power.

2.3  Securitization and Militarization

As a result, many nations are increasingly zooming in on the strategic-military aspects of cybersecurity, often as a reaction to perceived build-up of offensive cyber-capabilities
by others15 and in response to the growing attention placed on the issues in multilateral
fora. In the process, cybersecurity is both securitized (meaning the establishment of an issue in the security domain, often under the purview of the intelligence community) and militarized, best defined as ‘the growing pressures on governments
and their armed forces to develop the capacity to fight and win wars in this domain’.16
Certainly, the technical understanding of cybersecurity is the base from where any action
in cyberspace, both defensive and offensive, starts. However, the framing of cybersecurity
as national security frames these practices in power political contexts, which introduce
largely divergent values and interests into the debate.
In traditional security studies, the majority of books, articles and reports on cybersecurity (and closely related issues) remains policy-oriented. The two main questions that
are being tackled are ‘who (or what) is the biggest danger for an increasingly networked
nation/society/military/business environment’ and ‘how to best counter the new and
evolving threat’.17 The threat-form that has triggered the most attention is cyberwar.18
This is not surprising, given the potentially devastating impact of a full-fledged cyber-
aggression and the long tradition in international law and military ethics to address new
forms of warfare and weapons systems from legal and ethical viewpoints.19 Recently, a few
cybersecurity related articles have also been published in high-ranking political science

15  Vincent Boulanin, ‘Cybersecurity and the Arms Industry’ in SIPRI (ed.), SIPRI Yearbook 2013: Armaments, Disarmament and International Security (Oxford University Press, 2013).
16  Ronald J. Deibert, ‘Tracking the Emerging Arms Race in Cyberspace’ (2011) 67(1) Bulletin of the Atomic Scientists 1.
17  David C. Gompert and Martin Libicki, ‘Cyber Warfare and Sino-American Crisis Instability’ (2014) 56(4) Survival: Global Politics and Strategy 7; J.P. Farwell and Rafal Rohozinski, ‘Stuxnet and the Future of Cyber War’ (2011) 53(1) Survival: Global Politics and Strategy 23.
18  Exemplary, for a vast literature: John Arquilla and David Ronfeldt, ‘Cyberwar is Coming!’ (1993) 12(2) Comparative Strategy 141; Thomas Rid, Cyber War Will Not Take Place (Hurst & Company, 2013).
19  Edward T. Barrett, ‘Warfare in a New Domain: The Ethics of Military Cyber-Operations’ (2013) 12(1) Journal of Military Ethics 4.


journals such as International Security20 or Journal of Peace Research,21 indicating the beginning of a more sustained focus on cyber-conflict issues in the traditional security
domain.22 Traditional security studies see cyber-threats as objectively given: matters of
cyberwar, for example, can be identified as such, given specific ways of defining it.
In what is generally called ‘critical security studies’, there is a body of literature that
leans on frameworks inspired by securitization theory23 and is mainly concerned with
how different actors in politics have tried to argue for a link between the cyber-dimension
and national security.24 In a similar vein, recent articles have focused on metaphors in the
cybersecurity discourse to explain political responses.25 This literature and its underlying political philosophy are generally sceptical towards ‘securitization’, because issues that are treated as security issues come with a specific anti-democratic logic. When we look at
cybersecurity’s second meaning and its implications for human rights and human security,
this literature is certainly a useful backdrop for further investigation.
Mainly, the assertion of state power as we see it is linked to the possibility (and desirability) of extending territorial borders into cyberspace, which has the potential to change the topology of cyberspace as we know it.26 There is a certain appeal to a vision in which
the unruly, anarchical and dangerous side of cyberspace is kept ‘outside’, and relative
security can be established among and within states. Prominent concepts such as ‘Cyber-
Westphalia’ tap into the founding myths of a stable political world order based on state
power and invoke images of a delimited and thus defendable and securable place, newly
reordered by the state as the real guarantor of security.27 According to this view held by
a growing number of government actors, including those who have been represented on
the relevant UN Groups of Governmental Experts (UNGGE), the process of asserting
state sovereignty and re-establishing state control in cyberspace is inevitable, because

20  Erik Gartzke, ‘The Myth of Cyberwar: Bringing War in Cyberspace Back Down to Earth’ (2013) 38(2) International Security 41; Lucas Kello, ‘The Meaning of the Cyber Revolution’ (2013) 38(2) International Security 7.
21  Brandon Valeriano and R.C. Maness, ‘The Dynamics of Cyber Conflict Between Rival Antagonists, 2001–11’ (2014) 51(3) Journal of Peace Research 347.
22  See also Robert Axelrod and Rumen Iliev, ‘Timing of Cyber-Conflict’ (2014) 111(4) Proceedings of the National Academy of Sciences 1298; Yong-Soo Eun and Judith Sita Aßmann, ‘Cyberwar: Taking Stock of Security and Warfare in the Digital Age’ (2014) 17(3) International Studies Perspectives 343.
23  Barry Buzan, Ole Wæver and Jaap de Wilde, Security: A New Framework for Analysis (Lynne Rienner, 1998).
24  Johan Eriksson, ‘Cyberplagues, IT, and Security: Threat Politics in the Information Age’ (2001) 9(4) Journal of Contingencies and Crisis Management 200; Dunn Cavelty, Cyber-Security and Threat Politics, n. 13 above; Hansen and Nissenbaum, ‘Digital Disaster, Cyber Security, and the Copenhagen School’, n. 14 above.
25  David Barnard-Wills and Debi Ashenden, ‘Securing Virtual Space: Cyber War, Cyber Terror, and Risk’ (2012) 15(2) Space and Culture 110; Tim Stevens and David J. Betz, ‘Analogical Reasoning and Cyber Security’ (2013) 44(2) Security Dialogue 147; Myriam Dunn Cavelty, ‘From Cyber-Bombs to Political Fallout: Threat Representations with an Impact in the Cyber-Security Discourse’ (2013) 15(1) International Studies Review 105.
26  M. Mueller, A. Schmidt and B. Kuerbis, ‘Internet Security and Networked Governance in International Relations’ (2013) 15(1) International Studies Review 86.
27  Chris Demchak and Peter Dombrowski, ‘Rise of a Cybered Westphalian Age’ (2011) Strategic Studies Quarterly (Spring) 32.



security is the most fundamental need of human beings, which means that security, and the sovereign obligation to provide it, triumphs over other, supposedly lesser needs (such as privacy, which is framed as a ‘nice to have’). Furthermore, the more cybersecurity is presented as a traditional national security or public safety issue, the more natural it seems that the keepers of the peace in cyberspace should be governments and their militaries, aided by the intelligence community.
Totalitarian governments are embracing a growing ‘cyber-sovereignty’ movement to
further consolidate their power. Democratic states are also willing to control parts of
cyberspace, often with the help of private actors.28 Add to these developments a fantasy
about a version of cyberspace in which crime or even attacks by state actors become
impossible or at least very hard to conduct. Given that the prime problem for traditional law enforcement methods based on punishment, or for well-proven military tools like deterrence, is the ‘attribution problem’ (the difficulty of clearly identifying those responsible for a cyber-attack), and given that the attribution problem arises from technological protocols that guarantee a great deal of anonymity for their users, taking away said anonymity, in part or fully, is sometimes seen as one of the best solutions for a secure Internet of the future.29 Here, the clash between different types of security becomes directly visible. From a human and political rights perspective, anonymity is not a threat to security; it is a crucial part of it. An Internet without the attribution problem would introduce a new issue: citizens could be readily identified and punished for their political activities.30

3.  CYBERSECURITY AND HUMAN RIGHTS

Importantly, protecting basic human rights and fundamental freedoms from misuses
facilitated by information and communication technologies (ICT) has been an issue since
long before cyberspace and the Internet became so prominent in our lives. For instance,
certain state uses of information technologies during World War II helped bolster the
case for including basic principles such as freedom of expression and opinion in the core
human rights instruments: Article 19 of the Universal Declaration of Human Rights
(UDHR) and of the International Covenant on Civil and Political Rights (ICCPR)
are examples. During the 1970s, the emergence of the first networked computers and
unprecedented transnational data transfers, coupled with the commercialization of new
capabilities enabling remote processing and storage of citizens’ private information, led
to fierce debates within the UN General Assembly on questions relating to freedom of
opinion and expression (and the right to seek, receive and impart information), individual
privacy and data protection, and some of the first discussions on data nationalization.

28  Ronald J. Deibert, Black Code: Inside the Battle for Cyberspace (McClelland & Stewart, 2013); Ben Wagner, ‘The Politics of Internet Filtering: The United Kingdom and Germany in a Comparative Perspective’ (2014) 34(1) Politics 58.
29  For example, Center for Strategic and International Studies, Securing Cyberspace for the 44th Presidency, A Report of the CSIS Commission on Cybersecurity for the 44th Presidency (Washington, DC, 2008), available at http://csis.org/files/media/csis/pubs/081208_securingcyberspace_44.pdf.
30  Jonathan Zittrain, Freedom and Anonymity: Keeping the Internet Open (Scientific American, 2011), available at www.scientificamerican.com/article/freedom-and-anonymity/.



Political, economic and social changes have always entailed the recognition of new rights, notably since they tend to drive all kinds of insecurities. Information and communications technologies have played an important role in that iterative process, hence it is no surprise that normative debates have emerged around the uses of the technologies that today are contributing to such changes and driving the development of cybersecurity law and policy. In continuation of older debates, today’s concerns relate to abuses of freedom of expression and opinion, access to information, privacy and data protection, as well as core principles such as transparency, accountability, predictability and remedy.
At the same time, significant normative work remains to be done, especially amongst governments, on the scope and application of cybersecurity – particularly its relationship to human rights. In this chapter, we delve deeper into the threats to human rights in the following section, before turning, in a second section, to a number of efforts aimed at countering the erosion of human rights caused by security-related practices.

3.1  Threat to Human Rights

Initially, most attention in the literature regarding actions which impacted upon human
rights was focused on those states which, in general, have a poor human rights record.
However, over the past decade it has become increasingly evident that many Western
states are equally ready to either directly curtail human rights or at least tolerate practices
that undermine them, in the name of public safety and national security. For instance,
in their series on Access Denied,31 Access Controlled32 and Access Contested,33 Deibert et
al. document and analyse several actions by states with implications for human rights.
These include Internet filtering practices in more than three dozen countries; Internet
controls in both Western and Eastern Europe; and the interplay of national security,
social and ethnic identity, and resistance in Asian cyberspace, including in-depth accounts
of national struggles against such controls. Overall, government cybersecurity practices,
such as online censorship, curtailing of encryption, surveillance, deep packet inspection
(DPI), and government hacking, have become prevalent across regions.34 For Deibert,
these developments are of mounting concern since ‘in liberal democratic countries we
are lowering the standards around basic rights to privacy just as the centre of cyberspace
gravity is shifting to less democratic parts of the world’.35
These practices principally affect basic human rights and fundamental freedoms set out
in the UDHR and the ICCPR, including freedom of expression, freedom of speech, the
right to privacy, freedom of opinion, and freedom of association. They also raise the levels
of broader insecurity, nationally and internationally. Indeed, the use of ICT capabilities

31  Ronald J. Deibert, John G. Palfrey, Rafal Rohozinski and Jonathan Zittrain, Access Denied: The Practice and Policy of Global Internet Filtering (MIT Press, 2008).
32  Ronald J. Deibert, John G. Palfrey, Rafal Rohozinski and Jonathan Zittrain, Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace (MIT Press, 2010).
33  Ronald J. Deibert, John G. Palfrey, Rafal Rohozinski and Jonathan Zittrain, Access Contested: Security, Identity, and Resistance in Asian Cyberspace (MIT Press, 2011).
34  See also AccessNow, A Human Rights Response to Government Hacking (2016), available at www.accessnow.org/governmenthackingdoc/.
35  Deibert, Black Code: Inside the Battle for Cyberspace, n. 28 above, 131.



by state actors for political and military purposes carries important implications for
human rights, in addition to undermining trust in cyberspace and posing risks to critical
infrastructures. Moreover, recent state and non-state activity combining manipulation
of social media and traditional propaganda tools has demonstrated a capacity to sow
mistrust in core democratic institutions and processes, placing significant pressure on an
increasingly fragile international system.36 In the event that tensions between states esca-
late, it is unclear what human rights and humanitarian protections would be guaranteed,
although as discussed below, a number of scholars and policy-makers are considering
how existing international law, including humanitarian and human rights law, applies in
such cases.
Private companies, too, have become key actors in cybersecurity-related matters, and their actions, too, can pose a threat to human rights.37 On the one hand, a small number of
companies own the majority of the infrastructure, technologies and software that are
being used and abused by different actors. On the other hand, private companies are also
playing an increasingly important role in shaping norms of social behaviour. As noted
by the UN Special Rapporteur on Freedom of Expression and Opinion, while this role
may be beneficial, in the sense that companies can also have a positive impact as norm
entrepreneurs, it is equally important to understand the sources of their legitimacy and
who participates in shaping their policies, not least because of the significant impact they
have on public life.38
The following two sections delve into these and other issues in greater detail, highlighting the impact of broad cybersecurity policy and law on privacy, on the one hand, and freedom of expression and opinion, on the other.

3.1.1 Privacy
In their ground-breaking law review article ‘The Right to Privacy’, Warren and Brandeis presciently noted ‘that the individual shall have full protection in person and in property is a principle as old as the common law; but it has been found necessary from time to time to define anew the exact nature and extent of such protection’.39 In the 120-plus years that
have since passed, the nature and extent of such protection have developed significantly,
influenced in no small part by abuses of the right to privacy by states, often by technologi-
cal means, and by efforts to ensure remedy for them.
Privacy is generally conceptualized in a multi-dimensional manner, building on Westin’s
distinction, which differentiated between solitude, intimacy, anonymity and reserve as

36  ‘EU Strategic Communication to Counteract Anti-EU Propaganda by Third Parties’, European Parliament Resolution on EU Strategic Communication to Counteract Propaganda Against it by Third Parties, 2016/2030(INI) (November 2016).
37  See Sophie Stalla-Bourdillon, Evangelia Papadaki and Tim Chown, ‘From Porn to Cybersecurity Passing by Copyright: How Mass Surveillance Technologies are Gaining Legitimacy . . . The Case of Deep Packet Inspection Technologies’ (2014) 30 Computer Law and Security Review 670.
38  UN Special Rapporteur, Report of the Special Rapporteur on Freedom of Expression and the Role of the Private Sector in the Digital Age, A/HRC/32/38 (Human Rights Council, May 2016).
39  Samuel D. Warren and Louis D. Brandeis, ‘The Right to Privacy’ (1890) 4(5) Harvard Law Review 193.



the key principles underpinning privacy for individuals.40 Interpretations of privacy also
differ. For instance, according to the European Court of Human Rights:

it is neither possible nor necessary to determine the content of privacy in an exhaustive way
. . . and it can thus cover a wide [non-exhaustive] range of issues such as integrity, access to
­information and public documents, secrecy of correspondence and communication, protec-
tion of the domicile, protection of personal data, wiretapping, gender, health, identity [and so
forth].41

This means that in the European Union, in contrast to the United States, citizens enjoy
both a right to privacy, proscribing interference with one’s autonomy, as well as a right to
the protection of personal data, ‘which regulates the processing of data that can be used
to identify an individual person regardless of whether such data are inherently personal
or private’.42
Protecting privacy in this digital age is therefore complex, confounded, as discussed, not only by different interpretations of the concept, but also by the distancing of a growing number of states, including liberal democracies, from basic standards, despite their obligations under, and commitments to, existing international law. Today, two major cybersecurity-related developments have prompted significant debate over privacy matters. The first involves government surveillance and intelligence gathering practices; the second, law enforcement access to data across borders.

Government surveillance and intelligence gathering practices  Following the terrorist attacks in the United States in September 2001, a number of governments pushed through various kinds of knee-jerk ‘anti-terror’ legislation, legalizing broad surveillance and intelligence gathering with limited, if any, public consultation or oversight. In the US context,
for example, the Patriot Act gave government agencies free rein to conduct secret searches,
detain without charges and due process, and monitor phone, Internet and other forms of
communication and everyday activities without a warrant.43 Across the border in Canada,
where privacy protections were well cemented in law prior to 9/11 and considered something to be emulated, the new Anti-Terrorism Act ‘established dangerous new levels
of acceptable privacy invasion’.44
Fast forward to 2013, and the revelations by Edward Snowden regarding the practices of the US National Security Agency (NSA), as well as those of some of its allies, laid bare the extent to which state intelligence gathering and surveillance practices had spread in some countries. The NSA, for one, had an entire portfolio of intelligence gathering and surveillance tools and techniques, of which there was little if any oversight.

40  Alan Westin, ‘Privacy and Freedom’ (1968) 25 Washington and Lee Law Review 166.
41  Marc van Lieshout, Michael Friedewald, David Wright and Serge Gutwirth, ‘Reconciling Privacy and Security’ (2013) 26(1–2) Innovation: The European Journal of Social Science Research 119.
42  Ibid.; Christopher Kuner, ‘The European Union and the Search for an International Data Protection Framework’ (2014) 2 Groningen Journal of International Law; see EU General Data Protection Regulation 2016/679 (GDPR).
43  Vanmala Hiranandani, ‘Privacy and Security in the Digital Age: Contemporary Challenges and Future Directions’ (2011) 15(7) International Journal of Human Rights 1091.
44  Ibid.



These included secret court orders allowing the NSA to sweep up Americans’ phone
records; a program called PRISM compelling companies to comply with government
requests for access to user data; techniques that could infiltrate links connecting Yahoo
and Google data centres unbeknownst to the companies; a program called Dishfire which enabled the NSA to intercept some 200 million text messages every day worldwide; and
regular political espionage activity involving numerous world leaders and foreign govern-
ments, including heads of state of Germany, Brazil and Mexico.
Across ‘the pond’ in the United Kingdom, the NSA equivalent, GCHQ, had also been
fine-tuning its interception skills, tapping fibre optic cables to gain access to data flowing
through the Internet and sharing intelligence with the NSA via a program codenamed
Tempora, which enjoyed the participation of telecommunications companies such as
Verizon Business, British Telecommunications, Vodafone Cable, Global Crossing, Level
3, Viatel and Interoute.45
While significant effort went into rolling back some of these practices, not least because of their privacy implications, new legislation across countries as disparate as China, France, Russia, the United Kingdom and the United States (ironically, also the Permanent Five of the UN Security Council), often drafted as a knee-jerk reaction to terrorism-related threats, is creating a new basis for massive repositories of personal information for surveillance, compelling companies, particularly intermediaries such as Internet Service Providers (ISPs), to assist. Rather than prompting efforts to address the manifold problems of information security, such as the vulnerabilities of the software and systems involved, recent massive data breaches affecting companies and government agencies have instead hastened such legislative drafting.46 These practices are generally couched in the language of national security necessity, yet their effects on privacy are hardly proportionate, and remedial action is hardly considered.
Other emerging practices involve mandatory anti-encryption regimes designed to facilitate live interception of data as it transits a requesting government’s jurisdiction or to ensure access to user information, as well as the outright banning of encrypted messaging services. For instance, the Snowden revelations shed light on NSA efforts to circumvent widely used web encryption technologies, often by compelling companies to install backdoors in the technologies, by hacking into servers or computers, or by promoting the use of weaker algorithms, much to the alarm of technologists and rights activists.47 Following the Charlie Hebdo attacks in Paris, France, in 2015, UK Prime Minister David Cameron announced his support for banning encrypted messaging services such as WhatsApp if British intelligence agencies were not given increased access to messages and user data.48 Taking

45  Mashable, The 10 Biggest Revelations From Edward Snowden’s Leaks (2014), available at http://mashable.com/2014/06/05/edward-snowden-revelations/#JvabLH3FUPqx; Ewen MacAskill, Julian Borger, Nick Hopkins, Nick Davies and James Ball, ‘GCHQ taps fibre-optic cables for secret access to world’s communications’, The Guardian, 21 June 2013, available at www.theguardian.com/uk/2013/jun/21/gchq-cables-secret-world-communications-nsa.
46  AccessNow, General Data Protection Regulation: What Tidings Do Ye Bring? (2015), available at www.accessnow.org/general-data-protection-regulation-what-tidings-do-ye-bring/.
47  Mashable, The 10 Biggest Revelations From Edward Snowden’s Leaks, n. 45 above.
48  Trevor Timm, ‘Four ways Edward Snowden changed the world – and why the fight’s not over’, The Guardian, 5 June 2015, available at www.theguardian.com/commentisfree/2014/jun/05/what-snowden-revealed-changed-nsa-reform.



the issue one step further, in 2016, the Hungarian Parliament voted on a legislative amendment aimed at prohibiting end-to-end encryption outright. The proposed changes would require any provider offering end-to-end encryption to hand over data to the Hungarian government on request,49 prompting companies using encryption technology to relocate to other jurisdictions.
In light of the bad publicity engendered by the Snowden revelations and driven by
technical considerations, a number of major technology companies such as Apple have
pushed back against efforts compelling them to give access to encrypted devices, argu-
ing that government efforts to bypass or disallow encryption through legislative means
would render communications more vulnerable and place citizens and ICT systems at
risk.50 Other major companies have backed Apple’s position.51 Reports such as the one
published by leading encryption experts in 2015 criticizing the viability of government
efforts to legitimize backdoors to encrypted communication have helped bolster the case
of privacy advocates, notably their assertion that ‘such access will open doors through
which criminals and malicious nation-states can attack the very individuals law enforce-
ment seeks to defend’.52 Nonetheless, as noted above, draft legislation on these and related
issues is currently under consideration in a range of jurisdictions, strongly influenced by
the surge in terrorist-related incidents across the globe.

Law enforcement access to data across borders  As noted by Daskal (2016),53 until
recently, law enforcement access to data across borders did not create as much of a stir as
government intelligence gathering and surveillance practices. Yet, coupled with concerns
stemming from reactions to the Snowden revelations, frustrations caused by delays in
accessing data located across territorial borders for criminal investigations are leading
states to take several actions that carry implications for human rights. These actions can
include regulation on data management requirements and practices (e.g. Hungary) or
mandatory data localization requirements (e.g. Russia). In these cases, the regulation
requires that the content of communications (or a copy of such content) involving a coun-
try’s residents and/or citizens is held in-country. The assumption is that the time, resources
and political capital required to access data in other jurisdictions will be reduced, since

49  The original draft would have jailed users of end-to-end encryption applications, and developers would have been required to offer some type of backdoor for government monitoring. F. Hidvégi and R. Zágoni, ‘How Technology Enhances the Right to Privacy: A Case Study on the Right to Hide Project of the Hungarian Civil Liberties Union’ (2016) Journal of National Security Law and Policy 55, available at http://jnslp.com/wp-content/uploads/2017/10/How-Technology-Enhances-the-Right-to-Privacy_2.pdf. See also Szabo and Vissy v. Hungary, App. No. 37138/14, HUDOC (ECtHR, 12 January 2016), available at www.i-m.mx/szabomat/SzaboAndVissyVHungary.
50  Camino Kavanagh, The UN, Cyberspace and International Peace and Security: Managing Complexity in the 21st Century (UNIDIR, 2017).
51  Brian Barrett, ‘The Year Encryption Won’, Wired, 2016, available at www.wired.com/2016/12/year-encryption-won/.
52  Nicole Perlroth, ‘Security experts oppose government access to encrypted communication’, New York Times, 2015, available at www.nytimes.com/2015/07/08/technology/code-specialists-oppose-us-and-british-government-access-to-encrypted-communication.html.
53  Jennifer Daskal, ‘Law Enforcement Access to Data Across Borders: The Evolving Security and Rights Issues’ (2016) Journal of National Security Law and Policy 473, available at http://jnslp.com/2016/09/06/law-enforcement-access-data-across-borders-evolving-security-rights-issues/.



domestic law enforcement would only have to follow domestic legal process to access data
in criminal cases. This approach increases the potential for domestic surveillance while introducing a series of inefficiencies that would have to be borne by the private sector, potentially stymieing investment and innovation.
Other approaches include unilateral assertions of extra-territorial jurisdiction (e.g.
UK DRIPA, Brazil’s Marco Civil, US approach in its ongoing litigation with Microsoft),
which ‘includes the authority to compel the production of stored content from any
company that does business in its jurisdiction . . . without limit based on the location of
the data, the location of the provider’s place of business, the target’s nationality, or the
target’s place of residence’.54 They also include threats against employees or officers of
local subsidiaries for failing to turn over user data (Microsoft and WhatsApp cases in
Brazil) or the use of malware and ‘other opaque and less accountable means’.55

3.1.2  Freedom of opinion and expression and the free flow of information
Freedom of opinion and expression and the free flow of information are rights enshrined
in both the UDHR and the ICCPR. In an important contribution, Penney (2011)
places some of the recent debates on the Internet and rights in context, tracing the history and intellectual origins of Internet rights to a broader international and political context of ‘evolving ideas about expression, information, and communication’, which continue to evolve and inform political, economic and social change. Penney traces early
efforts to guarantee freedom from state interference and regulation on the Internet to the
cyber-libertarians of the 1990s.56 Strongly anchored in US culture and values, so-called
‘cyber-libertarians’ and ‘information-age luminaries’ such as John Perry Barlow, Alvin
Toffler, George Gilder and Esther Dyson promoted freedom, liberty and the uniqueness of cyberspace. Their writings, which coincided with thinking on interdependence and globalization and, in the post-Cold War years, with libertarian euphoria, ‘helped forge the early intellectual foundations for theorizing the Internet experience’ and the ‘founding values of the Internet’.57
It was not just Barlow et al. who imbued the Internet with values in their writings. In
another important contribution to the field, Milan (2013) discusses the origins and nature
of social movements and their technologies and their role in ‘wiring social change’ in the
age of the Internet. She outlines the manner in which activists have sought to ‘contribute
to the efforts of contemporary progressive social movements to shape the world according
to principles of justice, equality, and participation’.58 In doing so, she discusses the impli-
cations of changes in political, economic and social power structures and the importance
of social movements in ensuring respect for key principles such as participation, which
are imperative to ensuring respect for rights and fundamental freedoms.

54  Jennifer Daskal, ‘Law Enforcement Access to Data Across Borders: The Evolving Security and Rights Issues’ (2016) Journal of National Security Law and Policy 477.
55  Ibid.
56  Jonathon W. Penney, ‘Internet Access Rights: A Brief History and Intellectual Origins’ (2011) 38(1) William Mitchell Law Review 9, available at http://open.mitchellhamline.edu/wmlr/vol38/iss1/11.
57  Ibid. 16.
58  Stefania Milan, Social Movements and Their Technologies: Wiring Social Change (MacMillan, 2013) 2.



For instance, the ‘hacktivists’ that formed part of the underground group the Cult of the Dead Cow (cDc) used direct action in the digital environment as a means to effect political and/or social change. cDc promoted the development and use of technology to foster human rights and the open exchange of information, later creating ‘Hacktivismo’, an operation ‘at the forefront of the struggle for human rights in and out of cyberspace’, as well as tools enabling access to information otherwise restricted by governments.59 Over the following period, cDc and other, more intellectually rigorous groups such as the Critical Art Ensemble (CAE) developed their base, and hacktivism gradually grew to encapsulate ‘the development and use of technology by grass-roots movements to foster human rights and the open exchange of information’, or ‘the politically-motivated use of technical expertise’ (Delio, 2004), with Article 19 of the UDHR and the other core human rights instruments often serving as the normative crutch for the legitimization of their activities. Wired magazine captured the values driving this mix of technologists, intellectuals and political activists in a 1999 article entitled ‘Freedom to Connect’, which noted that ‘[a]n essential component of the emerging global culture is the ability and freedom to connect – to anyone, anytime, anywhere, for anything’.60
Significant pressures have been placed on these rights in recent years, notably as states
have elevated cybersecurity from a tactical to a strategic concern, developing capabilities
to both protect systems back home and disrupt systems abroad.61 For instance, the growing
use of offensive capabilities by states or via proxies to intentionally interfere with
ICT-reliant systems can pose significant risks to the accessibility of vital information or services,
an issue raised by Dutch scholar Dennis Broeders in a piece focused on identifying a new
norm that would protect the ‘core of the Internet’ from state interference.62 Protecting
against the growing tendency by states to ‘switch off’ the Internet in the face of tense
political contestation has also become a key concern.63
The private sector, too, has been criticized for acquiescing to state controls that restrict
the freedom of opinion and expression, as well as the free flow of information. While
some of the major technology and social media companies have become more committed
to respecting key norms and principles such as freedom of expression, privacy
and transparency, the laws they are obliged to respect in the countries within which they
operate are becoming more restrictive.64 Governments increasingly impose intermediary
liability and compel ISPs to serve as blocking, filtering or content removal agents. These
practices have spread as governments have scaled up their responses to
threats posed by terrorist or violent extremist groups. Certainly, growth in the use of the

59
  Camino Kavanagh, ‘The Limits of Dissent in Cyberspace’, Policy Brief for Annual Cyber
Dialogue hosted by the Center for Global Security Studies at University of Toronto’s Munk School,
2012.
60
  Penney, ‘Internet Access Rights’, n. 56 above.
61
  Tom Sorell, ‘Human Rights and Hacktivism: The Cases of Wikileaks and Anonymous’
(2015) 7(3) Journal of Human Rights Practice 391.
62
  Denis Broeders, The Public Core of the Internet. An International Agenda for Internet
Governance (Amsterdam University Press, 2015).
63
 AccessNow, Keep It On Campaign (2017), available at www.accessnow.org/keepiton/.
64
  Camino Kavanagh, Private Sector Engagement in Responding to the Use of the Internet and
ICT for Terrorist Purposes: Strengthening Dialogue and Building Trust (UNCTED and ICT4Peace,
2016).


Internet for terrorist purposes over the past five years has led to a number of actions by
states centred on restricting content online through blocking, filtering or requesting the
removal of content or profiles, compelling companies to support these and other online
content-related actions. Even if these actions are driven by legitimate concerns relating
to terrorist and violent content, the grey area surrounding content that may or may not
be terrorist-related, together with the lack of a common definition of terrorism, leaves the
uncomfortable feeling that such actions can be used for other purposes, including
the targeting of opposition or advocacy groups.
Private security companies, too, engage in gate-keeping functions such as blocking
access to sites, some of which are critical to dissidents, while some security institutions are
taking on questionable ex ante roles such as filtering web content and monitoring social
media as a means to pre-empt and disrupt incidents. Given the growing interest in how big
data can support predictive policing, the latter will likely increase in the coming years.

3.2  Efforts to Strengthen Human Rights Protections

Over the past decade, the number of groups and movements advocating cybersecurity
policies and practices consistent with basic human rights and key principles has
mushroomed. It is thanks to these groups and movements that a normative framework
is emerging to guide state – and increasingly private sector – behaviour in cybersecurity
policy and practice. In this section, we look first at the development of new norms, at the
level of the United Nations and beyond, and then at other norm entrepreneurs.

3.2.1  Normative developments at the international level


In 2011, the UN Special Rapporteur on the Freedom of Opinion and Expression stated
in a report on Key Trends and Challenges to the Right of All Individuals to Seek, Receive
and Impart Information and Ideas of All Kinds Through the Internet that the Internet is:

a vital communications medium which individuals can use to exercise their right to freedom of
expression, or the right to seek, receive and impart information and ideas of all kinds, regardless
of frontiers, as guaranteed under articles 19 of both the Universal Declaration of Human Rights
and the International Covenant on Civil and Political Rights.65

In particular, he stated that Article 19 of the latter guarantees everyone the right to hold
opinions without interference; and the right to freedom of expression, which includes
freedom to seek, receive and impart information and ideas of all kinds, regardless of
frontiers, either orally, in writing or in print, in the form of art, or through any other
media of his choice.
The exercise of the rights provided for in this latter paragraph carries with it special
duties and responsibilities and may therefore be subject to certain restrictions, but only
such as are provided by law and are necessary for respect of the rights or reputations

65
  UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of
Opinion and Expression, Report of the Special Rapporteur on Key Trends and Challenges to the
Right of All Individuals to Seek, Receive and Impart Information and Ideas of All Kinds Through the
Internet, A/HRC/17/27 (Human Rights Council, May 2011).


of others and for the protection of national security or of public order (ordre public),
or of public health or morals; and any restriction must be proven as necessary and
proportionate, or the least restrictive means to achieve one of the specified goals listed
above. Importantly, the UN Special Rapporteur stressed that Article 19 of the ICCPR
was drafted ‘with foresight to include and to accommodate future technological develop-
ments through which individuals can exercise their right to freedom of expression’, and
concluded that the framework of international human rights law ‘remains relevant today
and equally applicable to new communication technologies such as the Internet’.66
In a follow-up report, the Special Rapporteur also reminded member states of general
comment No. 34 of the Human Rights Committee on Article 19 of the International
Covenant, which underscored that:

when a State invokes a legitimate ground for restriction of the right to freedom of expression,
it must demonstrate in specific and individualized fashion the precise nature of the threat, the
necessity and the proportionality of the specific action taken, in particular by establishing a
direct and immediate connection between the expression and the threat.67

Moreover, in the report, he examined Internet access rights from the perspective of the
negative and positive obligations of a state, noting, on the one hand, that citizens have a
right to access online content and to be protected against government intrusion and, on
the other, that a state or government has an obligation to provide access, or a form of
access, ‘to the physical infrastructure necessary to connect to the Internet’.68
Other important reports by the Special Rapporteur include The Use of Encryption
and Anonymity in Digital Communications (2015),69 which attempts to reconcile tensions
between states’ obligations to ensure privacy and freedom of expression and opinion, on
the one hand, and national security and public safety prerogatives, on the other; and Role
of the Private Sector in the Digital Age,70 which raises important questions, including
about the legitimacy of industry actors as norm entrepreneurs.
Two other important resolutions to emerge from the UN are the Human Rights
Council Resolution on the Promotion, Protection and Enjoyment of Human Rights on
the Internet, which affirmed ‘that the same rights that people have offline must also be
protected online, in particular freedom of expression’,71 and the UN General Assembly
Resolution on the Right to Privacy in the Digital Age,72 the latter propelled by the

66
  Ibid. 7.
67
  UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of
Opinion and Expression, Report of the Special Rapporteur on the Promotion and Protection of the
Right to Freedom of Opinion and Expression, A/66/290 (August 2011) 7.
68
  Ibid.; Penney, ‘Internet Access Rights’, n. 56 above.
69
  UN Special Rapporteur on the Promotion and Protection of the Right to Freedom of
Opinion and Expression, Report on the Use of Encryption and Anonymity in Digital Communications,
A/HRC/29/32* (May 2015).
70
  UN Special Rapporteur, Report of the Special Rapporteur on Freedom of Expression and the
Role of the Private Sector in the Digital Age, A/HRC/32/38 (Human Rights Council, May 2016).
71
  UN Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights on
the Internet’, Resolution adopted by the UN Human Rights Council, A/HRC/RES/20/8 (July 2012).
72
  UN General Assembly, ‘The Right to Privacy in the Digital Age’, Resolution A/RES/68/167
(December 2013).


reactions of the governments of Brazil and Germany to the Snowden revelations of US
mass data collection and surveillance practices.73 This latter resolution pitted traditional
allies against each other while also reawakening Cold War-era concerns regarding state
surveillance practices and over-dependency on US IT products and services, and high-
lighting important differences in how states approach and interpret questions relating to
data protection and privacy.
The Human Rights Council (HRC), importantly, has also outlined how governments
should respond to cybersecurity threats, calling on states to:

address security concerns on the Internet in accordance with their international human rights
obligations to ensure protection of freedom of expression, freedom of association, privacy and
other human rights online, including through national democratic, transparent institutions,
based on the rule of law, in a way that ensures freedom and security on the Internet.74

The UN Security Council, too, has consistently stressed the need to balance rights with
security in its recent terrorism-related resolutions.
Also of importance is the work of the UNGGE, which acknowledged that human
rights law applies to state use of ICTs. Specifically, the 2013 report noted that ‘efforts to
address the security of ICTs must go hand-in-hand with respect for human rights and
fundamental freedoms set forth in the Universal Declaration of Human Rights and other
international instruments’,75 while the 2015 report reconfirmed this statement and
reaffirmed the thrust of the aforementioned HRC resolution, namely that human rights
apply online as they do offline.76
Beyond the UN, regional organizations have also turned their attention to some of the
side effects of cybersecurity policy and law. The European Union, for its part, has developed
guidance for dealing with specific operational challenges, a recent case in point being its
Operational Human Rights Guidance for EU External Cooperation Actions Addressing
Terrorism, Organised Crime and Cybersecurity: Integrating the Rights-Based Approach.77
The Council of Europe, too, has become an important norm entrepreneur in this
regard. For example, in 2014, the Council of Europe’s Commissioner for Human Rights,
Nils Muižnieks, released an Issue Paper on The Rule of Law on the Internet and in the
Wider Digital World, urging member states to:

73
  The resolution built on the report of the UN High Commissioner for Human Rights, The
Protection and Promotion of the Right to Privacy in the Context of Domestic and Extraterritorial
Surveillance and/or the Interception of Digital Communications and the Collection of Personal Data,
Including on a Mass Scale.
74
  UN Human Rights Council, ‘The Promotion, Protection and Enjoyment of Human Rights
on the Internet’, Resolution A/HRC/32/L.20 (June 2016).
75
  UN General Assembly, Report of the Group of Governmental Experts on Developments in
the Field of Information and Telecommunications in the Context of International Security, A/68/98*
(June 2013).

76
  UN General Assembly, Report of the Group of Governmental Experts on Developments in
the Field of Information and Telecommunications in the Context of International Security, A/70/174
(July 2015).

77
  European Commission, Operational Human Rights Guidance for EU External Cooperation
Actions Addressing Terrorism, Organised Crime and Cybersecurity: Integrating the Rights-Based
Approach (2015).


[e]nsure that any restrictions on access to Internet content affecting users under their jurisdic-
tion are based on a strict and predictable legal framework regulating the scope of any such
restrictions and affording the guarantee of judicial oversight to prevent possible abuses. In
addition, domestic courts must examine whether any blocking measure is necessary, effective
and proportionate, and in particular whether it is targeted enough so as to impact only on the
specific content that requires blocking. Member states should not rely on or encourage private
actors who control the Internet and the wider digital environment to carry out blocking outside
a framework meeting the criteria described above.78

The Issue Paper was followed by an in-depth report on the State of Democracy, Human Rights
and the Rule of Law,79 and a comparative study on Filtering, Blocking and Take-Down of
Illegal Content on the Internet.80 This latter study compares policy and practice across
the organization’s 47 member states, describing and assessing the legal framework,
the relevant case law and practice in the field, and the impact of those measures on
freedom of expression. More recently, the Council of Europe established a Committee of
Experts on Internet Intermediaries (MSI-NET) to prepare standard-setting proposals on
the roles and responsibilities of Internet intermediaries. The aim of the group is to prepare
a draft recommendation by the Committee of Ministers on Internet intermediaries, as
well as a study on the human rights dimensions of automated data processing
techniques (in particular algorithms) and their possible regulatory implications.81
Rights have also been defended in other multi-stakeholder processes. For instance, the
Seoul Framework document that resulted from the so-called ‘London Process’, a series of
global conferences on issues pertaining to cyberspace held thus far in London, Budapest,
Seoul and The Hague, noted that:

[it] is important to maintain an open environment that supports the free flow of information,
research, innovation, entrepreneurship and business transformation, to ensure the protection of
personal information in the online environment and to empower consumers and users in online
transactions and exchanges.82

Following from this, the Chair’s statement from The Hague conference urged stakehold-
ers ‘to ensure that cybersecurity policies are, from their inception, rights-respecting and
consistent with international law and international human rights instruments’.83

78
  Council of Europe, The Rule of Law on the Internet and the Digital World, Issue Paper
(Council of Europe Commissioner for Human Rights, 2014).
79
  Council of Europe, State of Democracy, Human Rights and the Rule of Law in Europe, Report
by the Secretary-General of the Council of Europe (2015).
80
  Swiss Institute of Comparative Law, Comparative Study on Filtering, Blocking and Take-
Down of Illegal Content on the Internet (Council of Europe, 2015).
81
  MSI-NET Working Group, see www.coe.int/en/web/freedom-expression/committee-of-
experts-on-internet-intermediaries-msi-net-, and their recommendations on intermediary liability.
82
  Seoul Framework For and Commitment to Open and Secure Cyberspace (GCCS, 2013),
available at www.dsci.in/sites/default/files/Seoul%20Framework.pdf (accessed 12 November
2017).
83
  Chair’s Statement, Global Conference on Cyberspace GCCS (2015), available at www.
gccs2015.com/sites/default/files/documents/Chairs%20Statement%20GCCS2015%20-%2017%20
April.pdf (accessed 12 November 2017).


3.2.2  Other norm entrepreneurs


Beyond the work that has influenced these normative developments at the international
and regional levels, a number of civil society organizations or multi-stakeholder group-
ings have also been very proactive in their efforts to shape cybersecurity policy. Global
Partners, for example, has developed a guide entitled Cyber Security Policy for Human
Rights Defenders. Beyond an important discussion on the challenges posed by the different
definitions of cybersecurity, the document covers a lot of ground, including the relation-
ship between cybersecurity and human rights; the human rights implications of a number
of information security issues, ‘including the need for international technical standards
on cybersecurity and information-sharing on cyber threats’; cybercrime legislation; and
the application of international norms to cyberspace. Moreover, it provides guidance to
human rights defenders on how they can promote human rights issues in cybersecurity
policy debates, and proposes a number of principles for human rights engagement with
cybersecurity policy-making.84
Other groups such as Citizen Lab, AccessNow, the Observer Research Foundation and
many more have played an important role in bringing human rights issues into
cybersecurity policy-making. Some, like the Global Network Initiative, are working with
industry actors to ensure respect for freedom of opinion and expression, developing
Principles on Freedom of Expression and Privacy for industry, which in turn helped
shape the UN Guiding Principles on Business and Human Rights endorsed by the Human
Rights Council in 2011 to guide industry action, including in the field of cybersecurity.
Another, more targeted, civil society initiative, the 13 Principles on the Application of
Human Rights to Communications Surveillance, can also be considered a guide for state action.85
Already under fire from users in light of the privacy-related issues noted above, companies
themselves are beginning to self-regulate, with some taking a stronger normative stance
on government requests for access to user data or to restrict or remove online content.86

4.  FINDING MIDDLE GROUND

Despite these important normative developments, the cybersecurity laws and measures
adopted by individual countries and discussed above continue to negatively impact human
rights ‘by directly infringing upon such rights or creating a chilling effect on the desire of
people to express [them]’.87 Across jurisdictions, national security prerogatives continue
to trump human rights, with limited transparency around the criteria of necessity and
proportionality applied to such actions, and with little if any access to remedy.

84
  Carly Nyst and Sheetal Kumar, Travel Guide to the Digital World: Cyber Security Policy for
Digital Defenders (Global Partners, 2016).
85
  The 13 Principles comprise: legality; legitimate aim; necessity; adequacy; proportionality;
competent judicial authority; due process; user notification; transparency; public oversight; integrity
of communications and systems; safeguards for international cooperation; and safeguards against
illegitimate access. See www.eff.org/document/13-international-principles-application-human-rights-
communication-surveillance.
86
  Nyst and Kumar, Travel Guide to the Digital World, n. 84 above.
87
  Ibid.


The general tendency in policy circles to discuss ‘tensions and trade-offs’ with regard to
security and rights has served to cement this viewpoint among policy-makers and across
jurisdictions, with limited discussion beyond the CJEU on how one can be achieved
without undermining the other. For instance, the relationship between privacy and
security has traditionally been viewed as a trade-off, whereby the need to increase security
would inevitably curb the privacy enjoyed by the citizenry. Such provisions making rights
contingent on public safety or public morals were included in the core human rights
instruments. Interestingly, they were also included in the first international regimes sur-
rounding telecommunications established in the mid-nineteenth century.88
Over time, criteria have been developed for the application of such provisions, notably
with regard to necessity, proportionality, predictability and access to legal remedy. Yet,
the arbitrary use of such provisions by states when it comes to cybersecurity continues to
pose challenges. On a more positive note, however, a number of human rights experts and
technologists are focusing their efforts on determining how to shift from this zero-sum
mind-set to a concept of national security that ‘becomes security relevant to people’.89
In the field of cybersecurity, two approaches can contribute to the latter: cybersecurity
law and policy that is rights-respecting by design; and technology that is security- and
rights-respecting by design.

4.1  Cybersecurity Law and Policy that is Rights-Respecting by Design

On the privacy and data protection front, adopting approaches that place rights
at the centre of both policy and technology design may require a legal and
normative shift in our conception of data ownership, placing ownership and control of
personal information in the hands of the user rather than the service provider. It could
mean guaranteed end-to-end encryption and public education programmes focused on
personal privacy and data protection. It could mean instilling stronger accountability and
oversight structures where data collection is deemed necessary, by ensuring that the scope
of such powers is narrowly defined, that oversight mechanisms include staff with high-level
technical skills, and that any interference in people’s privacy requires judicial authorization.90
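The ‘guaranteed end-to-end encryption’ mentioned above can be made concrete with a deliberately simplified sketch: key material exists only on the users’ devices, so the intermediary relaying a message sees ciphertext it cannot read. The Python snippet below is a toy illustration only (the function names and the one-time-pad XOR cipher are our own illustrative choices, not a production scheme; real systems use vetted key agreement and authenticated encryption, such as X25519 combined with an AEAD cipher):

```python
# Toy illustration of the end-to-end principle: only the two endpoints hold
# the key, so the service provider relaying the message sees ciphertext it
# cannot decrypt. The cipher is a one-time-pad XOR with a pre-shared key,
# chosen purely for brevity; it is NOT a real-world encryption scheme.
import secrets


def e2e_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt on the sender's device; the key never leaves the endpoints."""
    assert len(key) == len(plaintext), "one-time pad: key must match message length"
    return bytes(p ^ k for p, k in zip(plaintext, key))


def e2e_decrypt(ciphertext: bytes, key: bytes) -> bytes:
    """Decrypt on the recipient's device with the same pre-shared key."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))


message = b"meet at noon"
shared_key = secrets.token_bytes(len(message))  # agreed between endpoints, never sent to the provider

ciphertext = e2e_encrypt(message, shared_key)   # this is all the service provider ever relays
recovered = e2e_decrypt(ciphertext, shared_key)
assert recovered == message
```

The design point is not the cipher but the trust boundary: if the service provider never holds the key, both lawful-access demands and data breaches expose only unreadable ciphertext.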
One development that may help shift the balance in cybersecurity strategies so
that human rights protections are integrated into cybersecurity law and policy is the EU
Network and Information Systems (NIS) Security Directive 2016/1148 ([2016] OJ L194/1,
July 2016). The Directive provides a range of legal measures to boost the overall level of
cybersecurity in the EU by ensuring: Member States’ preparedness, requiring them to be
appropriately equipped, e.g. via a Computer Security Incident Response Team (CSIRT)
and a competent national NIS authority; cooperation and the exchange of information
among Member States, who will also need to set up a CSIRT Network in order to promote
swift and effective operational cooperation on specific cybersecurity incidents and the sharing of

88
  Camino Kavanagh, Information Technology and the State: The Long View (Ph.D Dissertation,
Department of War Studies, King’s College, London, 2016).
89
  Hoogensen and Stuvøy, ‘Gender, Resistance and Human Security’, n. 5 above.
90
  Andrew Puddephatt and Lea Kaspar, Cybersecurity is the New Battleground for Human Rights
(openDemocracy, 2015), available at www.opendemocracy.net/wfd/andrew-puddephatt-lea-kaspar/
cybersecurity-is-new-battleground-for-human-rights.


information about risks; and a culture of security across sectors which are vital for our economy
and society and, moreover, rely heavily on ICTs (energy, transport, water, banking, financial
market infrastructures, healthcare and digital infrastructure). Businesses in these sectors
that are identified by the Member States as operators of essential services will have to take
appropriate security measures and notify serious incidents to the relevant national authority.
Key digital service providers (search engines, cloud computing services and online
marketplaces) will also have to comply with the security and notification requirements under
the new Directive. Importantly, the Directive also includes a strong paragraph (para. 75)
reminding EU Member States of other international law obligations when implementing it:

This Directive respects the fundamental rights, and observes the principles, recognised by the
Charter of Fundamental Rights of the European Union, in particular the right to respect for
private life and communications, the protection of personal data, the freedom to conduct a
business, the right to property, the right to an effective remedy before a court and the right to be
heard. This Directive should be implemented in accordance with those rights and principles.91

To minimize risks, the NIS Directive does not set any new privacy rules, instead requiring
that providers of essential services comply with EU data protection rules when processing
personal data.
In this regard, when companies notify authorities about a security incident, they must
do so in a way that abides by the data security rules set out in the EU General Data
Protection Regulation 2016/679 (GDPR), which ‘ensures individuals are in control of
their own data, providing for a long list of users’ rights and a clear set of obligations for
companies’.92 Hence, if a security incident results in a breach of personal data placing
user privacy at risk, companies are obliged to notify the data protection authorities so that
users have the opportunity to seek remedy. As discussed by AccessNow, the Directive does
not, however, address governments’ capabilities as they handle the data being reported,
‘either in terms of how they will protect users’ privacy or how they will use the data for
analysis’,93 meaning that oversight by national data protection authorities will be crucial
for ensuring compliance with privacy standards.
In the United States, foreign policy-makers have largely viewed the promotion of
human rights and the protection of national security as being in inherent tension with each
other. Almost without exception, each administration has treated the two goals as
mutually exclusive: promote human rights at the expense of national security or protect
national security while overlooking international human rights.94 The protection of core
rights and principles has nonetheless been introduced into cybersecurity policy. For
instance, the US International Strategy for Cyberspace (2011)95 recognized the importance

91
  EU Network and Information Systems (NIS) Security Directive 2016/1148 [2016] OJ
L194/1(July 2016).
92
 AccessNow, General Data Protection Regulation, n. 46 above.
93
 AccessNow, EU Cybersecurity Directive Finalised: The Knights Who Say NI(S) (2016), avail-
able at www.accessnow.org/eu-cybersecurity-directive-finalised-the-knights-who-say-nis/.
94
  William W. Burke-White, Human Rights and National Security: The Strategic Correlation, Faculty
Scholarship Paper 960 (2014), available at http://scholarship.law.upenn.edu/faculty_scholarship/960.
95
  US International Strategy for Cyberspace: Prosperity, Security, and Openness in a Networked
World (The White House, 2011).


of respecting free speech and association, privacy, and the free flow of information within
the broader framework of cybersecurity policy, while Executive Order 13636 of 2013 on
‘Improving Critical Infrastructure Cybersecurity’ established that:

[i]t is the Policy of the United States to enhance the security and resilience of the Nation’s critical
infrastructure and to maintain a cyber environment that encourages efficiency, innovation, and
economic prosperity while promoting safety, security, business confidentiality, privacy, and civil
liberties.96

Yet, significant human rights challenges have emerged, and continue to emerge, around
the ‘narrow interests and undue fears’ that were limiting progress in the sphere of cyber-
security in the first place.

4.2  Technology that is Security- and Rights-Respecting by Design

Technology itself may prove the most effective guarantor of human rights. This, at
least, is the position of a growing number of experts. Shocked by the ease with which
human rights and civil liberties were being traded for greater security in the wake of the
September 2001 attacks in the United States, Bruce Schneier has since stressed that
‘security measures that reduce liberty are most often found when system designers fail to
take security into account from the beginning’.97 When security is designed into a system,
he suggests, ‘it can work without forcing people to give up their freedom’.
This argument has gained significant currency. In a paper entitled Security and Privacy
by Design: A Convergence of Paradigms, Marc Chanliau of Oracle Corporation and Ann
Cavoukian, former Information and Privacy Commissioner of Ontario, Canada, noted
that:

privacy must be proactively incorporated into networked data systems and technologies, by
default. The same is true of security. Both concepts must become integral to organisational
priorities, project objectives, design processes and planning operations.98

Cavoukian had earlier proposed a number of principles for Privacy by Design (2006). A
decade later she launched the International Council on Global Privacy and Security by
Design with the aim of dispelling ‘the commonly held view that organizations must choose
between privacy and public safety or business interests’, and ‘educating stakeholders that
public- and private-sector organizations can develop policies and technologies where
privacy and public safety, and privacy and big data, can work together’99 for a better
outcome. This approach does not, however, deal with some challenges, including the fact

96  US President Executive Order 13636 of 2013 on 'Improving Critical Infrastructure Cybersecurity'.
97  Bruce Schneier, Protecting Privacy and Liberty: The Events of 11 September Offer a Rare Chance to Rethink Public Security (Schneier on Security, 2001), available at www.schneier.com/essays/archives/2001/10/protecting_privacy_a.html.
98  A. Cavoukian and M. Chanliau, Security and Privacy by Design: A Convergence of Paradigms (Ontario, 2015), available at www.ipc.on.ca/?id=1266.
99  International Council on Global Privacy and Security by Design (ICGPSD), Mission Statement (2016), available at https://gpsbydesign.org/who-we-are/.


that governments can simply outlaw the use of certain services and equipment if they are
deemed to be threatening, or if the data flowing through them cannot be used for law
enforcement and surveillance purposes.
Getting technology to respect other rights such as freedom of expression by design is
a different and likely more complex issue. There is a growing reliance on algorithms, for instance, to manage content relating to child pornography, violent extremism and terrorism, which has produced interesting results. Image hashing, which automates the renewed removal of content, such as extremist videos, that has already been taken down once from social media platforms, thereby countering the so-called 'whack-a-mole' effect, is on the uptick. What is less clear is how extremist videos are identified in the first place, including how much human effort is expended in trawling through video to find potentially
extremist content; whether there are automated processes in place to mine videos and
spot unwelcome footage; and what remedial action would be taken if content is removed
erroneously. The use of algorithms to remove written content is evidently much more
complex, requiring decisions not just with regard to detecting the content to be removed
(which is still a significant grey area), but also placing any decision to remove the content
within its proper international, regional or national normative context. Moreover, as
discussed by Kroll et al. in their article entitled ‘Accountable Algorithms’, ‘because
automated decision systems can return potentially incorrect, unjustified or unfair results,
additional approaches are needed to make such systems accountable and governable’100
so as to assure the interests of citizens and society as a whole.
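The hash-matching step behind such re-removal can be sketched as follows. This is an illustrative sketch only: production systems use perceptual hashes (such as PhotoDNA or pHash) that survive re-encoding and cropping, whereas the cryptographic hash used here for simplicity matches only byte-identical re-uploads; all class and function names are invented for this example.

```python
import hashlib


def content_hash(data: bytes) -> str:
    """Fingerprint a media file by its SHA-256 digest (exact-match only)."""
    return hashlib.sha256(data).hexdigest()


class RemovalHashList:
    """Registry of fingerprints of content that has already been taken down."""

    def __init__(self) -> None:
        self._known: set[str] = set()

    def register_removed(self, data: bytes) -> None:
        """Record a removed file so future re-uploads can be blocked."""
        self._known.add(content_hash(data))

    def is_reupload(self, data: bytes) -> bool:
        """Check an incoming upload against the registry of removed content."""
        return content_hash(data) in self._known


# Example: a video removed once is caught automatically on re-upload.
registry = RemovalHashList()
video = b"...extremist video bytes..."
registry.register_removed(video)
print(registry.is_reupload(video))         # True: exact re-upload blocked
print(registry.is_reupload(video + b"x"))  # False: any byte change evades an exact hash
```

The last line illustrates why platforms rely on perceptual rather than cryptographic hashes: with a perceptual hash, `is_reupload` would compare hash distances against a threshold instead of testing exact set membership, so slightly re-encoded copies would still match.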

5. CONCLUSION

In this chapter, we showed the various issues arising from the framing of cybersecurity
as national security. Even though there is no well-established or large body of literature
dealing with the subject, many recent developments provide ample material for further
academic research. By walking through both the threats to human rights posed by state and non-state actors and the various efforts to strengthen human rights protection in the digital realm, we have provided an overview of those developments that can
serve as a good starting point for researchers. Importantly, we think it is necessary to take
a stance against the common conception that national security and human rights as they
relate to cyberspace are incompatible. Through rights-respecting design of cybersecurity
law and policy and, very importantly, technology that is security- and rights-respecting
by design, both ‘camps’ can be reconciled, with benefits for all.
Fundamentally, without technical security that respects the confidentiality, integrity and availability of information, there cannot be any security in and through cyberspace. The one
is a prerequisite for the other. Security practices such as intelligence agencies’ exploitation
of vulnerabilities in computer systems and their weakening of encryption standards have
the potential not only to create new vulnerabilities but also to further undermine trust
and confidence in cyberspace and in the very institutions involved. There is no guarantee

100  Joshua A. Kroll et al., 'Accountable Algorithms' (2017) 165 University of Pennsylvania Law Review 633.


that these entities have full control over the technologies and vulnerabilities involved.
Nor is there any guarantee that these vulnerabilities can be kept secret – in other words, they could be identified and exploited by criminal hackers or even 'terrorists'. Here, state practices
not only become a threat for human security and rights: paradoxically, they also become
a threat to states themselves. Therefore, there is only one way forward: to strengthen the
overall security of the information technological infrastructures through rights-respecting
policy and technology by design.



6.  Cybercrime, human rights and digital politics
Dominik Brodowski*

1.  CYBERCRIME AS A SHIELD AND AS A SWORD IN RELATION TO HUMAN RIGHTS

1.1  Cybercrime and the Tool of Criminalization

Cybercrime1 law may be defined as the legal framework stipulating which conduct relat-
ing to cyberspace in particular, or information and communication technology (ICT) in
general, is criminalized, and how such crimes may be investigated and prosecuted. The
most obvious consequence of criminalization is that the individual commission of such offenses
will lead to the prescribed criminal sanction (such as imprisonment); some jurisdictions
extend the imposition of criminal punishment to the wrongdoing of corporations.2 Beyond
the sphere of criminal justice, violations of criminal offenses may lead to civil and/or
tort liability,3 and their violation or (imminent) risk of violation may allow the police to
intervene based on police powers.4 Beyond these specific effects, some, such as the German
Federal Constitutional Court, consider criminal laws to express the fundamental consensus
of a society on how to behave.5 Therefore, the creation and modification of criminal
offenses can carry a high symbolic value,6 and thereby offer a pronounced political tool.7

1.2  On the Political Leverage of the ‘Fight Against Cybercrime’

Because of the severe stigma associated with certain kinds of cybercrime (terrorism, child pornography and organized crime come to mind), the 'fight', 'combat'
or even ‘war’ on such crimes carries a high political leverage to introduce far-reaching

*  The research in furtherance of this chapter was partly supported by the German Federal
Ministry of Education and Research in the project ‘Open Competence Center for Cyber Security
(OpenC3S)’. The author is solely responsible for the content of this chapter.
1  For the genesis of the term 'cybercrime', and other approaches to defining it, see Chawki et al. (2015) 3; Kshetri (2010) 3; Sieber (2008) 127–31; Schjolberg (2014) 12; United Nations Office on Drugs and Crime (UNODC) (2013) 11; Vogel (2008) 1–2; Wall (2007) 10.
2  Regarding corporate criminal liability in general, see, inter alia, Laufer (2006); Pieth and Ivory (2011); Brodowski et al. (2014).
3  For example, German Civil Code, s. 823 II, specifies tort liability for anyone committing a breach of a (criminal) statute protecting another person.
4  On the relation between criminal law, criminal investigations and police law in Germany, see Brodowski (2016a).
5  See German Federal Constitutional Court, Judgment of the Second Senate of 30 June 2009 – 2 BvE 2/08 = ECLI:DE:BVerfG:2009:es20090630.2bve000208, at 249 and 253.
6  See generally Hassemer (1989).
7  In spite of this, only a small fraction of criminal statutes and their modification are actually discussed widely in public.


changes to the regulation of ICT in general, and the Internet in particular.8 To name one
famous example: telecommunications data retention9 was and is regularly linked to the
prevention, investigation and prosecution of terrorism, organized crime, and (to a lesser
extent) child pornography.10 Once implemented, however, one of the quantitatively most
prominent usage of the retained data is to determine who committed copyright infringe-
ments and to hold them liable for damages under civil and/or tort law.11

1.3  Cybercrime as a Shield and as a Sword

As with criminal law in general, cybercrime law serves both as a shield and as a sword in relation to human rights.12
Commonly, human rights are seen as a limitation and thus as a ‘shield’13 protecting indi-
viduals against overly broad substantive criminal laws (i.e., the laws describing which spe-
cific14 conduct bears the threat of punishment): a state must not criminalize conduct insofar
as such conduct is protected by human rights.15 For example, statements fully protected by
freedom of speech or by freedom of expression may not be limited by criminal laws. On the
other hand, in recent years a ‘sword’ function of human rights and criminal law has gained
in relevance. Criminal laws are seen to protect those who are a victim of a crime or who
would, in the absence of a specific criminalization and its preventive effects, become victims
of such a crime.16 To use an example relating to the sphere of freedom of speech again: the
criminalization of incitement to genocide or terrorism serves to protect the human rights of
the (actual or potential) victims of those crimes. Whether and to what extent human rights
actually require states to criminalize conduct, and thus contain a ‘duty to protect’ against

8  In a similar vein, specific investigation and prosecution powers, such as the surveillance of telecommunications, are regularly extended to more and more crimes, even if they are at first only applicable to a few, very specific, most severe crimes.
9  On telecommunication data retention, see 3.2 below.
10  See, inter alia, Burkhart (2014) 94, and the reasoning in the Extended Impact Assessment annexed to the Proposal for a Directive of the European Parliament and of the Council on the retention of data processed in connection with the provision of public electronic communication services and amending Directive 2002/58/EC, SEC/2005/1131.
11  On the interest by copyright holders to access such data, see Czychowski and Bernd Nordemann (2008).
12  On this metaphor first coined by Christine Van den Wyngaert, see Tulkens (2011) 578.
13  See, inter alia, ibid. 579.
14  The requirement of nullum crimen sine lege certa is strongly rooted in international human rights instruments; see, e.g., International Covenant on Civil and Political Rights (ICCPR), Art. 15; European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), Art. 7; Charter of Fundamental Rights of the European Union (CFR-EU), Art. 49(1).
15  See, inter alia, Tulkens, 'The Paradoxical Relationship Between Criminal Law and Human Rights', n. 12 above, 579–82; UNODC, Comprehensive Study on Cybercrime, n. 1 above, 107–10, and, for the German context, Gärditz (2016) 642.
16  See, inter alia, Tulkens, 'The Paradoxical Relationship Between Criminal Law and Human Rights', n. 12 above, 582–92, and the jurisprudence of the European Court of Human Rights (ECtHR) referenced therein, namely, the leading case X and Y v. The Netherlands, Appl. No. 8978/80, ECtHR, Judgment of 26 March 1985, para. 24. On victims of cybercrime, see generally Martellozzo and Jane (2017).


infringements of human rights by third (private) parties,17 is still an unsolved question in the interplay between human rights, international, constitutional and criminal law. In any
case, both the proscription and the prescription of criminalization resulting from human rights only cover the extremes. In between, there is a notable margin where it is primarily a matter for
the legislative branch to decide which conduct is to be criminalized, within the limitations
specified by constitutional law.18 As will be discussed in more detail below,19 these ‘shield and
sword’ dimensions of human rights and the margin in between is present not only in terms of
criminalizing speech, but also, for instance, relating to the protection of personal data and of
private life, of property, and even of ICT security20 by means of criminal law.
The aforementioned ‘shield and sword’ dimensions of human rights are also present in
the field of criminal procedure, criminal investigations, and – at least in some jurisdictions
– in terms of police powers. On the one hand, these investigatory powers are limited by
the shielding (that is, the protections) offered by human rights. For example, criminal investigations relating to a minor misdemeanor cannot justify the secret surveillance of the ICT
used by the accused, as this would constitute an infringement of his or her human rights
protection of private life.21 On the other hand, criminal laws can only fulfill their purpose
if there is a clear and present risk for any perpetrator to actually be caught and convicted
of the crimes he or she has committed.22 Therefore, the ‘sword’ function of human rights
requires an (at least somewhat) effective configuration of the investigation and prosecu-
tion of crimes.23 In particular, it requires that sufficient resources are allocated to the
criminal justice system to provide a fair,24 just, equal and effective prosecution of crimes.

1.4  Core Challenges of Cybercrime

The same ‘shield and sword’ dimension is evident in the core challenges25 of cybercrime
and cybercrime-related investigations and prosecutions.
ICT is programmable. It is therefore possible to automate activities in such a manner that even minor causes can produce an enormous effect. This may mean that a short tweet with an
argument for a political debate can easily reach hundreds of thousands of people in the

17  See UNODC, Comprehensive Study on Cybercrime, n. 1 above, 110–16.
18  On the disputed German perspective on this topic, see recently Burchard (2016); Gärditz, 'Demokratizität des Strafrechts und Ultima Ratio-Grundsatz', n. 15 above; Jahn and Brodowski (2016); Jahn (2016).
19  See 3 below.
20  On the protection of ICT security by means of criminal regulations, see Brodowski (2015).
21  For more details, see 4 below.
22  On the link between the perceived sanction risk and the preventive effects of criminal law, see, inter alia, Nagin (1998); Pauwels et al. (2011). On the German debate on a constitutional requirement of an effective criminal justice system ('Funktionstüchtigkeit der Strafrechtspflege'), see Hassemer (1982); Jahn (2014); Landau (2007).
23  See, with a specific focus on cybercrime, Brenner (2012) 144–70.
24  On the enhanced preventive effects of proceedings perceived as fair, see Tyler (1988); Tyler (2003); Paternoster et al. (1997).
25  On these core challenges, partly described in a different configuration and/or from a different perspective, see Brodowski and Freiling (2011) 53–58; Brodowski (2016) 335–37; Gercke (2008); Sieber, 'Mastering Complexity in the Global Cyberspace', n. 1 above, 132–37; Wall, Cybercrime, n. 1 above, 34–48.


blink of an eye. But it also means that a misappropriation of merely a tiny fraction of 1
Euro per transaction can result in millions of Euros in damages overall, with possibly only
minor damage to each victim, but serious damage to society as a whole.
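The aggregation effect described above can be made concrete with a back-of-the-envelope calculation; all figures below are hypothetical, chosen only to illustrate how imperceptible per-transaction skims add up, and are not drawn from any real case:

```python
# Hypothetical figures, chosen only to illustrate aggregation:
# a skim of half a euro-cent per transaction is imperceptible
# to each individual victim.
skim_cents_per_transaction = 0.5      # 0.005 EUR
transactions_per_day = 2_000_000
days = 365

total_damage_eur = (skim_cents_per_transaction
                    * transactions_per_day * days) / 100
print(f"{total_damage_eur:,.0f} EUR")  # 3,650,000 EUR in aggregate
```

No single victim loses more than fractions of a cent, yet the automated, programmable nature of ICT turns the scheme into millions of euros of damage to society as a whole.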
The data stored in and transferred over ICT is (at least typically) volatile and can be
copied without loss of information. This means that digital traces may disappear quickly,
and data may be transferred at short notice. In combination with encryption technology
and the ability to transfer data across borders, this allows, for example, for low-risk whistle-blowing. Encryption and backups at various locations furthermore protect against, or at least mitigate, attacks by cybercriminals. But these tools, in the hands of criminals,
enhance their chances to hide their identity and to cover their traces.
ICT is omnipresent but vulnerable. ICT is neither safe nor secure, but open to attacks.
From large-scale, automated attacks to so-called advanced persistent threats, ICT can be
misused by individual cybercriminals, by organized cybercrime, and by state-sponsored
agencies. On the other hand, one should never forget all the advantages ICT brings to
society in general, but also to the justice system in particular.

2.  CYBERCRIMES: AN OVERVIEW FROM A HUMAN-RIGHTS-BASED PERSPECTIVE

With the competence to enact and modify criminal laws generally26 vested in the
sovereign states, but the Internet and cybercrime having a strong international dimen-
sion, there is a pronounced need for an international approach to address cybercrime.
This relates both to international cooperation in criminal matters27 and to the closely
intertwined question of an international consensus on and harmonization of substan-
tive cybercrime laws.28 Therefore, I will primarily focus on international harmonization
agreements on cybercrime29 and related model laws,30 and on further international31

26  With notable exceptions being core crimes under international criminal law and (although disputed) some EU competences to enact Regulations containing substantive criminal law, especially pertaining to the protection of the financial interests of the EU (Treaty on the Functioning of the European Union (TFEU), Art. 325).
27  See 4 below.
28  See generally Sieber, 'Mastering Complexity in the Global Cyberspace', n. 1 above, 132–57.
29  Most prominently, the Council of Europe Convention on Cybercrime of 2001 (Budapest Convention, often abbreviated CCC), ETS No. 185, and its Additional Protocol of 2003, ETS No. 189. See additionally the Agreement on Cooperation among the Member States of the Commonwealth of Independent States (CIS) in Combating Offences relating to Computer Information (Minsk, 1 July 2001) (CIS-CI); Arab Convention on Combating Information Technology Offences (Cairo, 21 December 2010) (Arab ITO Convention); and African Union (AU) Convention on Cyber Security and Personal Data Protection (Malabo, 27 June 2014) (AU-CS).
30  See, in particular, Commonwealth Model Laws on Computer and Computer-related Crime (2002); East African Community Draft Legal Framework for Cyberlaws (2008); ITU/Secretariat of the Pacific Community Model Law on Cybercrime (2011); Southern African Development Community (SADC) Model Law on Computer Crime and Cybercrime (2012); and the review by Jamil (2014).
31  On UN, ITU and academic approaches, see the overview in Brodowski, 'Transnational Organised Crime and Cybercrime', n. 25 above, 340–41. Of notable interest are the academic


and supranational32 approaches, even if the actual crimes are defined in the national
implementation of such instruments.

2.1  General Part, Including Jurisdiction to Prescribe

Criminal statutes are enacted almost exclusively at the national level, but are often not limited to situations arising wholly within national borders. States routinely extend their jurisdiction
to situations where either only the conduct or only the effect occurs within their territory.
In accordance with international law, they may moreover extend their jurisdiction to
prescribe to extra-territorial situations, as long as there is some genuine link.33 This means
that the same person’s conduct may be governed by a multitude of jurisdictions. As the
criminalization of cybercrimes may differ, either explicitly or implicitly,34 across borders,
but conduct online routinely has a cross-border dimension, the question of conflict of
jurisdiction gains a special, as yet unresolved, relevance in the regulation of the Internet.35 In
particular, conduct may be protected under constitutional rights in one state (such as free-
dom of speech, privacy, and freedom of occupation), but still face criminal punishment in
another state. In the leading case on this topic, the German Federal Court of Justice ruled
in 2000 that Germany could extend jurisdiction to an Australian national who uploaded
infringing material (here: denying the Shoah (Holocaust)) onto an Australian Internet
server while sitting in Australia, on the sole ground that the content could be accessed
from Germany and thereby could incite hatred in violation of a German criminal statute,
even though this conduct was not prohibited under Australian laws at that time.36 Besides
the underlying question of freedom of speech and its limitations, similar conflicts of
regulation by criminal laws are likely in the domains of data protection versus disclosure
requirements, and closely regulated business activities such as online gambling.37
In relation to the general part, another major peculiarity of cybercrime is its attribution
and any limits thereto. If A fails to keep his or her ICT systems up to date, and then B
misuses A’s ICT systems to defraud, or even physically hurt or kill, as in the case of an
autonomous vehicle, a victim using software developed by C, relying on access codes
phished by D, while being connected to the open Wifi network of E, then each person

findings by Sofaer and Goodman (2001), containing a ‘Stanford Draft International Convention
to Enhance Protection from Cyber Crime and Terrorism’, and the resolutions of the XIXth
International Congress of the AIDP, see International Association of Penal Law (AIDP) (2014).
32  See, in particular, EU Directive 2013/40/EU of the European Parliament and of the Council of 12 August 2013 on attacks against information systems and replacing Council Framework Decision 2005/222/JHA; and the Economic Community of West African States (ECOWAS) Directive C/DIR. 1/08/11 on Fighting Cyber Crime Within ECOWAS of 19 August 2011.
33  See already Permanent Court of International Justice, Case of the S.S. 'Lotus' (France v. Turkey).
34  Due, inter alia, to the influences of the general part, of the constitution, and procedural law.
35  See generally, Böse et al. (2013–2014).
36  German Federal Court of Justice, Judgment of 12 December 2000 – 1 StR 184/00 = BGHSt 46, 212. But see also, more recently, German Federal Court of Justice, Judgment of 3 May 2016 – 3 StR 449/15, at 15.
37  See Brodowski and Freiling, Cyberkriminalität, Computerstrafrecht und die digitale Schattenwirtschaft, n. 25 above, 167–68.


sets a sine qua non for the commission of the specific crime. Whether and to what extent
the crime can be attributed not only to the main perpetrator B, but also to C, D, E, and
possibly even A, requires a profound analysis of objective and subjective attribution based
on the specific rules, preparatory offenses, and (in relation to A and E) negligence offenses
of each jurisdiction. The situation is further complicated by the fact that organized
cybercrime38 still tends to operate based on what Wall labeled a ‘flat e-commerce business
model’,39 which is usually described as follows: perpetrators tend to focus on their specific
technical skill-set, collaborate loosely and on a case-by-case basis with each other, and
conduct a criminal activity for individual economic benefits.40
Modern and future trends in the automation of decision-making raise further questions
relating to the general part of criminal law, but also possibly to the regulation of ICT in
general. What decisions should ICT take, how should ICT be ‘trained’ and programmed
to decide? Commonly, this question is discussed relating to autonomous driving, the
‘Weichensteller’41 or Trolley dilemma42 and its variations. Let us assume that a self-driving
car determines that if it stays on course, it will crash and cause the death of three people crossing the street. If the car steers to the left, two children will die. And if it steers to the right,
one woman will be killed. What should it do?43 Besides the criminal law-specific question
of whether the deadly outcome can and should indeed be attributed to a specific program-
mer (or, in case of corporate criminal liability, to his or her company and/or the company
producing/selling the car), such questions highlight the ethical challenges of the evolving
automation of ever more areas of our lives, and the need to thoroughly reflect on human
rights in the regulation of ICT. The starting point for such discussions should be, in my
opinion, that a person using ICT for a specific purpose, having specific knowledge of its
operation, can and is to be held responsible for the foreseeable results of its operation. If
a doctor (mis-)designs and uses an online triage tool to schedule patient visits, and this
tool causes him or her to violate medical standards of how to handle emergency cases, he
or she is to be held (criminally) liable for this negligence. Problems only arise insofar as
the modus operandi is unknown and/or may legitimately be unknown to the user (such as
is the societal aim for self-driving cars).

2.2  Offenses Against ICT

Crimes targeting information and communication technology (ICT)44 can affect property
rights, but – possibly even more importantly – the private life and personal data of its

38  On organized cybercrime, see Brenner (2002); Broadhurst et al. (2014); Brodowski, 'Transnational Organised Crime and Cybercrime', n. 25 above; Kshetri, The Global Cybercrime Industry, n. 1 above; Lusthaus (2013); Wall (2010).
39  Ibid. 54.
40  See Kshetri, The Global Cybercrime Industry, n. 1 above; Wall, 'The Organization of Cybercrime and Organized Cybercrime', n. 38 above; summarized in Brodowski, 'Transnational Organised Crime and Cybercrime', n. 25 above, 338.
41  Welzel (1954) 51.
42  Foot (1967).
43  On this ongoing debate, see, inter alia, Hilgendorf (2017).
44  See generally Sieber, 'Mastering Complexity in the Global Cyberspace', n. 1 above, 139–43; UNODC, Comprehensive Study on Cybercrime, n. 1 above, 82–96; Kochheim (2015).


users. In terms of fundamental rights protections against encroachment by the state, the German Federal Constitutional Court even coined a new 'fundamental right to the
guarantee of the confidentiality and integrity of information technology systems’.45
For the aforementioned reasons, protecting the confidentiality, integrity and avail-
ability (so-called ‘CIA offenses’) of ICT is a common theme of criminal law and its
international frameworks. In particular, most common international frameworks call for
the criminalization of:

● violating the confidentiality of ICT by intentionally accessing an ICT system without right, in particular if there is 'an [additional] intent to (unlawfully) obtain data, or to circumvent security measures'46 (e.g. Council of Europe Convention on Cybercrime 2001 (Budapest Convention, CCC) Article 2);
● violating the confidentiality of ICT by intentionally intercepting information
transferred between ICT systems (e.g. CCC, Article 3);
● violating the integrity of ICT by intentionally damaging, deleting, deteriorating,
altering or suppressing computer data without right (e.g. CCC, Article 4(1)); and
● violating the availability of ICT by serious and intentional interference with its
operation (e.g. CCC, Article 5).

From the standpoint of human rights, the separate criminalization requirement of inter-
national instruments which relates to the ‘handling of devices and especially computer
programs “designed or adapted primarily for the purpose of committing” the CIA
offences, . . . as well as [the handling of] access codes such as passwords’47 (e.g. CCC,
Article 6) raises specific concerns, which I have referred to already elsewhere:

Besides dogmatic controversies regarding the early stage a perpetrator becomes criminally liable,
the main dispute relates to the aspect that IT security specialists require to use the same or similar
(‘dual-use’) tools as cybercriminals in order to analyse computer systems for weaknesses. In this
context, therefore, particular emphasis has to be put on the mens rea requirements, which relate
both to the object of the crime and to the future commission of a specific crime against the
confidentiality, integrity, and availability of a computer system or computer data.48

2.3  Computer-Related and Content-Related Offenses

From the wide range of offenses which can be committed using ICT (up to murder), two
areas warrant a closer look.
Under the umbrella term of ‘computer-related offenses’,49 international frameworks
call for the criminalization of computer-related forgery (e.g. CCC, Article 7) and
computer-related fraud (e.g. CCC, Article 8), thereby protecting directly or indirectly the

45  German Federal Constitutional Court, Judgment of the First Senate of 27 February 2008 – 1 BvR 370/07 = ECLI:DE:BVerfG:2008:rs20080227.1bvr037007.
46  Brodowski, 'Transnational Organised Crime and Cybercrime', n. 25 above, 343.
47  Ibid. 344.
48  Ibid. 345.
49  See generally Sieber, 'Mastering Complexity in the Global Cyberspace', n. 1 above, 144–46; UNODC, Comprehensive Study on Cybercrime, n. 1 above, 96–100.


property of victims. The criminalization of computer-related fraud gains special relevance to address the automation of contractual or economic decision-making (such as sending
out goods or an ATM issuing money).50
The range of content-related offenses51 is broad. While the comprehensive criminalization of depictions of child and youth exploitation (child pornography) (e.g. CCC, Article 9,
with some aspects addressed in more detail by EU instruments52) reflects an extensive
consensus shared even by most cybercriminals, other areas of content-related offenses
raise possibly the most intricate human rights implications, in terms of both the ‘shield’
and ‘sword’ dimensions. The international consensus is much less clear on what consti-
tutes free speech and what constitutes hate speech, as can be seen, e.g., in the ongoing
dispute over the deletion or blocking of Facebook messages which are in line with its
self-given ‘community rules’ but are in apparent violation of (e.g. German) criminal law.
In a similar vein, different jurisdictions may draw the line between copyright protection
(also by means of criminal law) and freedom of speech and expression (and thus e.g. fair
use of copyright-protected material), and between the protection of personal data and the
lawful usage of such data, differently.

3.  CRIMINAL INVESTIGATIONS IN CYBERSPACE: CHALLENGES FROM THE PERSPECTIVE OF HUMAN RIGHTS

In light of the core challenges and the ‘shield and sword’ dimension outlined above, the peculiarities of the normative53 framework for investigating and prosecuting54 the aforementioned ‘core’ cybercrimes and further crimes committed online warrant a closer look. Again, the national level, which still conducts nearly all criminal investigations and prosecutions,55 will be analyzed here, namely from the perspective of the respective frameworks of international agreements on cybercrime, model laws, and further international and supranational

50  Brodowski, ‘Transnational Organised Crime and Cybercrime’, n. 25 above, 346.
51  See generally Sieber, ‘Mastering Complexity in the Global Cyberspace’, n. 1 above, 146–53; UNODC, Comprehensive Study on Cybercrime, n. 1 above, 100–16.
52  Directive 2011/92/EU of the European Parliament and of the Council of 13 December 2011 on combating the sexual abuse and sexual exploitation of children and child pornography, and replacing Council Framework Decision 2004/68/JHA.
53  For the technical side of obtaining, analyzing and interpreting digital traces (digital forensics), see generally Casey (2004); Dewald and Freiling (2015); Holt et al. (2015); Walden (2016); for legal implications, see Heinson (2015).
54  Relating to German court proceedings and the introduction of digital evidence, see Momsen (2015); for a common law perspective, see Brenner, Cybercrime and the Law, n. 23 above; Walden, Computer Crimes and Digital Investigations, n. 53 above.
55  With the notable exception of the prosecution and adjudication of core crimes of international criminal law and, in future, the prosecution of crimes to the detriment of the financial interests of the European Union by means of the European Public Prosecutor’s Office. Concerning an extension of its mandate to cybercrime, see Brodowski, ‘Cyber-Sicherheit durch Strafrecht?’, n. 20 above, 268–71. A proposal by Schjolberg, The History of Cybercrime, n. 1 above, to introduce an international tribunal for the investigation and prosecution of the most severe cybercrimes has not yet gained traction.



106  Research handbook on human rights and digital technology

approaches.56 In a similar vein, the question of transnational investigations, be they investigations conducted by one state in the territory of another or classical forms of mutual legal assistance and letters rogatory, will be discussed here from a bird’s eye view.

3.1  Coercive Measures

As ICT hardware is tangible, it can be searched for and seized by ‘classic means’.57 Thereby, authorities can gain access to the data stored therein. As the information stored in ICT is often at the core of the authorities’ interest in cybercrime investigations, international instruments call for national criminal justice systems to provide for the search and seizure of data as well.58 This shift allows for the use of ‘live forensics’ and enables access to remotely attached storage devices.59 All these measures are, however, limited to open access (i.e. search and seizure operations of which the suspect is to be made aware), and may be of less use if the data is encrypted. In particular for persons not implicated in a crime, a less invasive form of accessing data is to require them to hand it over to the authorities (production order),60 and possibly even to decrypt it (decryption order).61 To address the volatility of data, an expedited but preliminary preservation order regularly complements the aforementioned access modes foreseen by international instruments.62
Covert telecommunication surveillance has been part of most jurisdictions’ toolbox for criminal investigations for many decades. It naturally extends to ongoing63 person-to-person communication over the Internet, but usually also to other forms of Internet usage, even where the communication between persons is mediated by computers, websites, forums, social media sites, etc.64 Covertly accessing telecommunication is, however, a severe encroachment on private life. Therefore, sufficient human rights protections are necessary in national laws on telecommunication surveillance. The European Court of Human Rights (ECtHR), for example, specifically requires the scope of such measures to be foreseeable and limited, and to depend on judicial or otherwise independent authorization and oversight.65 An ongoing debate relates to the question of which service providers

56  See n. 28 above and the further instruments referenced below.
57  Just see Brodowski, ‘Transnational Organised Crime and Cybercrime’, n. 25 above, 351.
58  E.g. CCC, Art. 19; Arab ITO Convention, Art. 26(f); ECOWAS-Cybercrime, Art. 30; AU-CS, Art. 31(3)(a), (b).
59  See 3.3 below on the legality of accessing data stored abroad.
60  E.g. CCC, Art. 18; Arab ITO Convention, Art. 25. On the underlying concept, see Sieber, ‘Mastering Complexity in the Global Cyberspace’, n. 1 above, 160.
61  Not in relation to suspects, however, due to nemo tenetur implications. See Brodowski, ‘Transnational Organised Crime and Cybercrime’, n. 25 above, 352.
62  E.g. CCC, Art. 16; Arab ITO Convention, Art. 23; ECOWAS Directive, Art. 31; AU-CS, Art. 31(3)(d).
63  Data temporarily in storage at any location, e.g. in a cache or in a mailbox, may be accessible also through (open) search and seizure operations.
64  At least in Germany, see German Federal Constitutional Court, Decision of the Third Chamber of the Second Senate of 6 June 2016 – 2 BvR 1454/13 = ECLI:DE:BVerfG:2016:rk20160706.2bvr145413; Singelnstein (2012) 594.
65  Malone v. United Kingdom, Appl. No. 8691/79, ECtHR, Judgment of 2 August 1984; Amann v. Switzerland, Appl. No. 27798/95, ECtHR, Judgment of 16 February 2000; Bykov v. Russia, Appl. No. 4378/02, ECtHR, Judgment of 10 March 2009.




(only telecommunication providers stricto sensu, or also over-the-top (OTT) providers such as WhatsApp or open protocols like XMPP) may be required to provide access to the unencrypted data transmitted over their networks, or whether they may offer true end-to-end encryption with no direct access for themselves or for the authorities.66
New forms of covert investigation techniques have, so far, been discussed less at the international than at the national level.67 For example, the use of remote forensic software and of key loggers creates even more severe encroachments on private life and calls for similar (national) human rights protections. These tools allow authorities ‘to intercept encrypted telecommunication at a stage prior to encryption [or past decryption], to covertly search the data of the suspect, or to survey their actions for a longer period’ of time.68 It is clear that the use of such tools requires stricter safeguards compared to the interception of telecommunication – if one considers the usage of these tools to be in compliance with human rights at all.
Last but not least, other means of investigation, such as using covert agents, questioning witnesses, or interrogating suspects, have not lost any of their importance in the digital context. More thought should be given, however, to the question of whether the widespread use of ICT by the authorities, the linking of data (‘data mining’), and the broad accessibility of data to the authorities mean that they could find evidence of (at least minor) crimes committed by nearly everyone, if they only looked closely enough – and what that means for society.

3.2  Identification of Subscribers and Telecommunication Data Retention

The real-time identification of a subscriber based on an IP address,69 as well as the real-time surveillance of telecommunications metadata (traffic data),70 pose no additional human rights implications compared to the surveillance of telecommunication content already discussed above.71 While some practical limitations are obvious (it need not be the subscriber him- or herself who communicated), metadata has proven to be a most valuable investigatory tool on the one hand, while on the other hand accessing such metadata has severe human rights implications. Therefore, at least metadata information, but also the de-anonymization of Internet traffic by identifying the subscriber behind an IP address, should require similar, though possibly slightly less strict, safeguards compared to the surveillance of telecommunication content.
Due to the volatility of data and the inherent slowness of a criminal justice system which often learns of crimes only long after they were committed, there is a legitimate interest in establishing which subscriber used a specific IP address at a specific time in the past. Authorities are often also interested in learning with whom a suspect communicated in the past, e.g. in order to determine whether there were any co-perpetrators or accomplices.

66  See Tropina (2016) 29–35, 53–58; Grünwald and Nüßing (2016).
67  See generally Sieber, ‘Mastering Complexity in the Global Cyberspace’, n. 1 above, 161; UNODC, Comprehensive Study on Cybercrime, n. 1 above, 131–33.
68  Brodowski, ‘Transnational Organised Crime and Cybercrime’, n. 25 above, 352, citing Sieber, ‘Mastering Complexity in the Global Cyberspace’, n. 1 above, 161.
69  See e.g., CCC, Art. 16(f); Arab ITO Convention, Art. 24.
70  See e.g., CCC, Arts 20, 28.
71  See supra 3.1.




As telecommunication service providers tend to store the telecommunication metadata required to answer these questions only for short periods of time (if at all), data retention regulations come into play. Such a normative framework requires service providers to store such traffic data for an extended duration for all users, regardless of suspicion. As the storing of data, irrespective of its later use, already constitutes an encroachment on the right to the protection of personal data, at least according to a European understanding of this fundamental right, such measures have repeatedly been challenged from a human rights perspective. The most prominent such framework, EU Directive 2006/24/EC,72 was struck down by the European Court of Justice in 2014, on the grounds that it provided overly unspecific access rules and insufficient data protection safeguards.73 And while, for example, the German Federal Constitutional Court considers data retention not to be unconstitutional per se,74 a recent judgment by the European Court of Justice on national data retention laws in Sweden and the United Kingdom raises severe doubts as to whether a ‘catch-all’ data retention regime, covering all users at any time, can ever be in compliance with EU fundamental rights.75

3.3  International Cooperation and the Jurisdiction to Enforce

Owing to the transnational nature of many cybercrimes, there is an obvious need for close international cooperation in criminal matters to investigate and prosecute cybercrimes, and to enforce any judgments relating to them. Therefore, international frameworks relating to cybercrime regularly call for such cooperation, in the form of mutual legal assistance (MLA) and extradition, ‘to the widest extent possible’,76 provide for communication channels (such as the G8 24/7 network on high-tech crime),77 and clarify the scope of MLA requests.78 Of particular relevance are rules allowing for the spontaneous exchange of information relating to cybercrime.79 In combination, these tools, as well as the trend of initiating shadow investigations and proceedings in multiple affected jurisdictions,80 somewhat mitigate the problems posed by the volatility and transnationality of the data required for the investigation of cybercrimes.
Supranational instruments such as the European Arrest Warrant (EAW)81 and the

72  Directive 2006/24/EC of the European Parliament and of the Council of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC.
73  Joined Cases C-293/12 and C-394/12 Digital Rights Ireland, ECJ, Judgment of the Court (Grand Chamber) of 8 April 2014, ECLI:EU:C:2014:238.
74  German Federal Constitutional Court, Judgment of the First Senate of 2 March 2010 – 1 BvR 256/08 = ECLI:DE:BVerfG:2010:rs20100302.1bvr025608.
75  Joined Cases C-203/15 and C-698/15 Tele2/Watson, ECJ, Judgment of the Court (Grand Chamber) of 21 December 2016, ECLI:EU:C:2016:970.
76  CCC, Art. 23.
77  Cf. Schjolberg (2014) 43–45.
78  See e.g., CCC, Art. 23 ff.
79  See e.g., CCC, Art. 26; CIS-CI, Art. 5(a); Arab ITO Convention, Art. 33.
80  Cf. Brodowski, ‘Transnational Organised Crime and Cybercrime’, n. 25 above, 353–56.
81  Council Framework Decision of 13 June 2002 on the European Arrest Warrant and the surrender procedures between Member States, as amended.




European Investigation Order (EIO)82 further enhance judicial cooperation in cybercrime matters. For cybercrimes within the meaning of the aforementioned EU Directive 2013/40/EU, there is no longer a double criminality requirement for extradition and MLA requests, and the EIO provides for specific and generally simpler procedures for the transnational interception of telecommunication content data (EIO, Article 30(f)) and for the transnational accessing of subscriber information based on an IP address (EIO, Article 10(2)(e)).
Nonetheless, the aforementioned inter- and supranational instruments still require member states and their authorities to cooperate – often leading to the need to adhere to formal procedures, to translate documents, and to wait for a response from the other actors involved. The matter is further complicated by data sometimes being moved across borders, data being duplicated, and/or data being shielded by proxy or cache servers (catch-phrase: ‘Cloud’), with authorities not knowing exactly where the data is actually stored (‘loss of location’ or, more precisely, ‘loss of knowledge of location’83). Due to the volatility of data, authorities have therefore gained a strong interest in accessing data stored abroad or at an unknown location directly, without requesting prior permission or assistance from the other jurisdiction(s) affected. As I have analyzed elsewhere,84 the following questions need to be distinguished in this context:

● Information publicly available on the Internet is, by means of an international custom, generally accessible to authorities in other jurisdictions.85 A distinct question, however, concerns the human rights implications of transferring personal data abroad in the course of accessing publicly available information, e.g. when searching for a suspect’s name on Google.
● While the jurisdiction of each state extends to all of its territory, and therefore also to data stored on its territory and to accessing such data,86 states can consent to actions by other states in their own territory. Therefore, they can consent to (preliminary) cross-border telecommunication surveillance (EIO, Article 31), and they can consent to another state accessing data within their borders under the condition that the other state ‘obtains the lawful and voluntary consent of the person who has the lawful authority to disclose the data’ (CCC, Article 32(b)87). While, for example, Russia strongly objects to such a rule, not least due to its ambiguity,88 the underlying concept is gaining traction in other areas of the world. Some states (e.g. the United States) allow other states (e.g. Germany) to request some data directly from Internet Service Providers (ISPs) located in their territory (e.g. Google), with the

82  Directive 2014/41/EU of the European Parliament and of the Council of 3 April 2014 regarding the European Investigation Order in criminal matters.
83  Koops and Goodwin (2014) 8–9.
84  Brodowski, ‘Transnational Organised Crime and Cybercrime’, n. 25 above, 355–56.
85  Based on such an understanding shared, e.g. by Sieber (2012) C-144–C-145; the provisions in CCC, Art. 32(a); Arab ITO Convention, Art. 40(1), merely codify customary international law.
86  This view is not shared by the Swiss Federal Court, however, in a recent judgment: Swiss Federal Court, Judgment of 24 May 2017, 1B_29/2017, para. 7.10.
87  See also Arab ITO Convention, Art. 40(2).
88  Sieber (2012) C-78–C-79.




service provider then deciding on its own whether to voluntarily comply with such requests – truly a privatization of international cooperation in criminal matters.89
● Another recent trend is to avoid the tedious process of international cooperation by compelling ISPs or other natural and legal persons to transfer data inland,90 and/or to domesticate situations (i.e. to presume situations are solely internal in nature91). Once the data is inland, it may then be accessed by normal means of criminal procedure.92 Such requests regularly conflict with the law of the other affected state, which may validly refer to its jurisdiction, its data protection laws and the primacy of the aforementioned methods of international cooperation in criminal matters. It is as yet a largely unresolved question how to untangle this mixture of production orders, data protection laws, and possibly even criminal liabilities stemming from the violation of one or the other.93
● A distinct question arising from the aforementioned matters of international law and international cooperation in criminal matters is whether evidence obtained in violation of these standards is admissible in criminal proceedings. In any case, it should be taken into account that a person may have stored his or her data in a specific jurisdiction on purpose, leading to a reasonable expectation that the data remains there, with access to it being governed solely by the rules of that jurisdiction.

4. OUTLOOK

Stemming from the ‘shield and sword’ dimensions of human rights inherent to criminal law and criminal procedure, the interplay of these spheres of law in relation to the Internet continues to constitute an area of promising research. As outlined above, the human rights implications are most prominent in terms of human rights limitations on the criminalization of speech and other forms of expression, as well as in the protections offered by the right to privacy and by the right to the protection of personal data in the field of criminal procedure. They are, however, similarly relevant to the overall design of a criminal justice system: What consequences may result from the fact that so much data on past and present behavior might become accessible to authorities for data mining, combined with criminological evidence indicating that nearly everyone has committed at least one minor crime in their lives? What data – telecommunication metadata and beyond – may validly be covered by data retention requirements? How far does the mere threat of surveillance, even if no crime was committed, change one’s behavior? Much research remains to be done. Sapere aude!

89  See Brodowski (2017) 11, 20; Burchard (2017).
90  Cf. In the Matter of a Warrant to Search a Certain E-Mail Account Controlled and Maintained by Microsoft Corporation, US District Court (SDNY) Memorandum and Order of 25 April 2014, 13-Mag-2814, upheld by Order of the US District Court (SDNY) of 11 August 2014, Appeal (pending), Microsoft Corp. v. United States, US Court of Appeals (2d Cir.) 14-2985-CV; Swiss Federal Court, Judgment of 14 January 2015, 1B_344/2014, para. 5.12. See additionally Velasco (2015).
91  See e.g., Swiss Federal Court, Judgment of 24 May 2017, 1B_29/2017, para. 7.10.
92  See 3.1 above.
93  But see Schaub (2011).




REFERENCES

Böse, Martin, Frank Meyer and Anne Schneider (2013–2014), Conflicts of Jurisdiction in Criminal Matters in
the European Union (2 vols, Baden-Baden: Nomos)
Brenner, Susan W. (2002), ‘Organized Cybercrime? How Cyberspace May Affect the Structure of Criminal
Relationships’, 4 North Carolina Journal of Law and Technology 1–50
Brenner, Susan W. (2012), Cybercrime and the Law (Lebanon: Northeastern University Press)
Broadhurst, Roderic, Peter Grabosky, Mamoun Alazab and Steve Chon (2014), ‘Organizations and Cyber
Crime: An Analysis of the Nature of Groups Engaged in Cyber Crime’, 8(1) International Journal of Cyber
Criminology 1–20
Brodowski, Dominik (2015), ‘Cyber-Sicherheit durch Strafrecht? Über die strafrechtliche Regulierung des
Internets?’ in Hans-Jürgen Lange and Astrid Bötticher (eds), Cyber-Sicherheit (Wiesbaden: Springer VS)
Brodowski, Dominik (2016a), Verdeckte technische Überwachungsmaßnahmen im Polizei- und Strafverfahrensrecht
(Tübingen: Mohr Siebeck)
Brodowski, Dominik (2016b), ‘Transnational Organised Crime and Cybercrime’ in Pierre Hauck and Sven
Peterke (eds), International Law and Transnational Organised Crime (Oxford: Oxford University Press)
Brodowski, Dominik (2017), ‘Strafrechtsrelevante Entwicklungen in der Europäischen Union – ein Überblick’,
11(1) Zeitschrift für Internationale Strafrechtsdogmatik 11–27
Brodowski, Dominik, Manuel Espinoza de los Monteros de la Parra, Klaus Tiedemann and Joachim Vogel (eds)
(2014), Regulating Corporate Criminal Liability (Cham: Springer)
Brodowski, Dominik and Felix Freiling (2011), Cyberkriminalität, Computerstrafrecht und die digitale
Schattenwirtschaft (Berlin: Forschungsforum Öffentliche Sicherheit)
Burchard, Christoph (2016), ‘Strafverfassungsrecht – Vorüberlegungen zu einem Schlüsselbegriff’ in Klaus
Tiedemann, Ulrich Sieber, Helmut Satzger, Christoph Burchard and Dominik Brodowski (eds), Die
Verfassung moderner Strafrechtspflege (Baden-Baden: Nomos)
Burchard, Christoph (2017), ‘Vorbemerkungen zu § 1 IRG’, in Paul-Günter Pötz, Heinrich Grützner and Claus
Kreß (eds), Internationaler Rechtshilfeverkehr in Strafsachen (Heidelberg: C.F. Müller)
Burkhart, Patrick (2014), Pirate Politics: The New Information Policy Contests (Cambridge, MA: MIT Press)
Casey, Eoghan (2004), Digital Evidence and Computer Crime (London: Academic Press)
Chawki, Mohamed, Ashraf Darwish, Mohammad Ayoub Khan and Sapna Tyagi (2015), Cybercrime, Digital
Forensics and Jurisdiction (Cham: Springer)
Czychowski, Christian and Jan Bernd Nordemann (2008), ‘Vorratsdaten und Urheberrecht – Zulässige Nutzung
gespeicherter Daten’, 43 Neue Juristische Wochenschrift 3095–99
Dewald, Andreas and Felix C. Freiling (2015), Forensische Informatik (2nd edn, Norderstedt: Books on
Demand)
Foot, Philippa (1967), ‘The Problem of Abortion and the Doctrine of the Double Effect’, 5(1) Oxford Review
5–15
Gärditz, Klaus Ferdinand (2016), ‘Demokratizität des Strafrechts und Ultima Ratio-Grundsatz’, 71(13)
JuristenZeitung 641–50
Gercke, Marco (2008), ‘Die Bekämpfung der Internetkriminalität als Herausforderung für die Strafverfolgungsbehörden’, 11(5) Multimedia und Recht 291–98
Grünwald, Andreas and Christoph Nüßing (2016), ‘Kommunikation over the Top’, Multimedia und Recht
91–97
Hassemer, Winfried (1982), ‘Die “Funktionstüchtigkeit der Strafrechtspflege” – ein neuer Rechtsbegriff ?’,
Strafverteidiger 275–80
Hassemer, Winfried (1989), ‘Symbolisches Strafrecht und Rechtsgüterschutz’, 12 Neue Zeitschrift für Strafrecht
553–59
Heinson, Dennis (2015), IT-Forensik (Tübingen: Mohr)
Hilgendorf, Eric (ed.) (2017), Autonome Systeme und neue Mobilität (Baden-Baden: Nomos)
Holt, Thomas, Adam Bossler and Kathryn Seigfried-Spellar (2015), Cybercrime and Digital Forensics (Oxford:
Routledge)
International Association of Penal Law (AIDP) (2014), ‘Resolutions of the XIXth International Congress of
Penal Law: Information Society and Penal Law’, 85(3–4) Revue International de Droit Pénal 607–68
Jahn, Matthias (2014), ‘Das verfassungsrechtliche Gebot bestmöglicher Sachaufklärung im Strafverfahren’,
Goltdammer’s Archiv für Strafrecht 588–601
Jahn, Matthias (2016), ‘Strafverfassungsrecht: Das Grundgesetz als Herausforderung für die Dogmatik des
Straf- und Strafverfahrensrechts’ in Klaus Tiedemann, Ulrich Sieber, Helmut Satzger, Christoph Burchard
and Dominik Brodowski (eds), Die Verfassung moderner Strafrechtspflege (Baden-Baden: Nomos)
Jahn, Matthias and Dominik Brodowski (2016), ‘Krise und Neuaufbau eines strafverfassungsrechtlichen
Ultima Ratio-Prinzips’, 71(20) JuristenZeitung 969–80




Jamil, Zahid (2014), Cybercrime Model Laws: Discussion Paper Prepared for the Cybercrime Convention
Committee (T-CY), previously available at www.coe.int/t/dghl/cooperation/economiccrime/Source/Cybercrime/
TCY/2014/3021_model_law_study_v15.pdf
Kochheim, Dieter (2015), Cybercrime und Strafrecht in der Informations- und Kommunikationstechnik (Munich:
Beck)
Koops, Bert-Jaap and Morag Goodwin (2014), Cyberspace, the Cloud, and Cross-border Criminal Investigation,
available at www.gccs2015.com/sites/default/files/documents/Bijlage%201%20-%20Cloud%20Onderzoek.pdf
Kshetri, Nir (2010), The Global Cybercrime Industry (Berlin: Springer)
Landau, Herbert (2007), ‘Die Pflicht des Staates zum Erhalt einer funktionstüchtigen Strafrechtspflege’, 3 Neue
Zeitschrift für Strafrecht 121–29
Laufer, William S. (2006), Corporate Bodies and Guilty Minds: The Failure of Corporate Criminal Liability
(Chicago, IL: University of Chicago Press)
Lusthaus, Jonathan (2013), ‘How Organised is Organised Cybercrime?’, 14(1) Global Crime 52–60
Martellozzo, Elena and Emma A. Jane (eds) (2017), Cybercrime and its Victims (Oxford: Routledge)
Momsen, Carsten (2015), ‘Zum Umgang mit digitalen Beweismitteln im Strafprozess’ in Christian Fahl,
Eckhart Müller, Helmut Satzger and Sabine Swoboda (eds), Festschrift für Werner Beulke zum 70. Geburtstag
(Heidelberg: C.F. Müller)
Nagin, Daniel S. (1998), ‘Criminal Deterrence Research at the Outset of the Twenty-First Century’, 23 Crime
and Justice 1–42
Paternoster, Raymond, Robert Brame, Ronet Bachman and Lawrence W. Sherman (1997), ‘Do Fair Procedures
Matter? The Effect of Procedural Justice on Spouse Assault’, 31(1) Law and Society Review 163–204
Pauwels, Lieven, Frank Weerman, Gerben Bruinsma and Wim Bernasco (2011), ‘Perceived Sanction Risk,
Individual Propensity and Adolescent Offending: Assessing Key Findings from the Deterrence Literature in
a Dutch Sample’, 8(5) European Journal of Criminology 386–400
Pieth, Mark and Radha Ivory (eds) (2011), Corporate Criminal Liability (Dordrecht: Springer)
Schaub, Martin (2011), ‘Zur völkerrechtlichen Zulässigkeit des amerikanischen Editionsbefehls an die UBS im
Streit um die Kundendaten’, 71 Zeitschrift für ausländisches öffentliches Recht und Völkerrecht 807–24
Schjolberg, Stein (2014), The History of Cybercrime, 1976–2014 (Norderstedt: Books on Demand)
Sieber, Ulrich (2008), ‘Mastering Complexity in the Global Cyberspace’ in Mireille Delmas-Marty, Mark Pieth and Ulrich Sieber (eds), Les chemins de l’harmonisation pénale (Paris: Société de Législation Comparée)
Sieber, Ulrich (2012), Straftaten und Strafverfolgung im Internet, Gutachten C zum 69. Deutschen Juristentag
(Munich: Beck)
Singelnstein, Tobias (2012), ‘Möglichkeiten und Grenzen neuerer strafprozessualer Ermittlungsmaßnahmen
– Telekommunikation, Web 2.0, Datenbeschlagnahme, polizeiliche Datenverarbeitung & Co’, 11 Neue
Zeitschrift für Strafrecht 593–606
Sofaer, Abraham D., and Seymour E. Goodman (eds) (2001), The Transnational Dimension of Cyber Crime and
Terrorism (Stanford, CA: Hoover Institution Press)
Tropina, Tatjana (2016), ‘Comparative Analysis’ in Ulrich Sieber and Nicolas von zur Mühlen (eds), Access to
Telecommunication Data in Criminal Justice: A Comparative Analysis of European Legal Orders (Freiburg:
Max-Planck-Institut)
Tulkens, Françoise (2011), ‘The Paradoxical Relationship Between Criminal Law and Human Rights’, 9 Journal
of International Criminal Justice 577–95
Tyler, Tom R. (1988), ‘What is Procedural Justice? Criteria Used by Citizens to Assess the Fairness of Legal
Procedures’, 22(1) Law and Society Review 103–35
Tyler, Tom R. (2003), ‘Procedural Justice, Legitimacy, and the Effective Rule of Law’, 30 Crime and Justice
283–358
United Nations Office on Drugs and Crime (UNODC) (2013), Comprehensive Study on Cybercrime (New York:
United Nations)
Velasco, Cristos (2015), ‘Cybercrime Jurisdiction: Past, Present and Future’, ERA Forum (24 June 2015), avail-
able at https://rm.coe.int/16806b8a7a
Vogel, Joachim (2008), ‘Towards a Global Convention Against Cybercrime’, Revue électronique de l’AIDP
C-07:1-10, available at www.penal.org/sites/default/files/files/Guadalajara-Vogel.pdf
Walden, Ian (2016), Computer Crimes and Digital Investigations (2nd edn, Oxford: Oxford University Press)
Wall, David S. (2007), Cybercrime: The Transformation of Crime in the Information Age (Cambridge, MA: Polity
Press)
Wall, David S. (2010), ‘The Organization of Cybercrime and Organized Cybercrime’ in Marcello Bellini, Phillip
Brunst and Jochen Jähnke (eds), Current Issues in IT Security (Berlin: Duncker & Humblot)
Welzel, Hans (1954), ‘Zum Notstandsproblem’, 63(1) Zeitschrift für die gesamte Strafrechtswissenschaft 47–56



7.  ‘This is not a drill’: international law and
protection of cybersecurity
Matthias C. Kettemann*

1. INTRODUCTION

‘Ballistic missile threat inbound to Hawaii. Seek immediate shelter. This is not a drill.’ This emergency alert was sent out to thousands of Hawaiians on 13 January 2018, at 8:08 a.m. local time.1 Luckily, it turned out to be not an actual attack, but rather the result of human error: an employee, it appeared, had pushed the wrong button (twice). The reactions on social media and in real life – the media was full of stories of scared Hawaiians hiding in garages – tell us a lot about our times, both in terms of international security (ballistic missile threats were a Cold War staple) and in terms of cybersecurity: human errors can cause real damage – and humans are an integral part of cybersecurity.
Be it online or offline,2 be it in the kinetic world or in cyberspace, we need norms – and
have always needed them. ‘In the long march of mankind from the cave to the computer’,
as Malcolm N. Shaw puts it in his introduction to international law, ‘a central role has
always been played by the idea of law – that order is necessary and chaos inimical to a
just and stable existence’.3
This chapter is premised upon the assumption that ensuring cybersecurity (or ‘protect-
ing the Internet’) is essential for ensuring international security and the effective fight
against cybercrime. Political and legal approaches to preventing cyberwar4 and cyber-

*  Dr. Matthias C. Kettemann, LL.M. (Harvard), is head of the research programme ‘Regulatory Structures and the Emergence of Rules in Online Spaces’ at the Leibniz Institute for Media Research – Hans-Bredow-Institut, Hamburg, Germany. This chapter draws from a study by the author for Deutsche Telekom and two published contributions: M. Kettemann, ‘Ensuring Cybersecurity Through International Law’ (2017) Revista Española de Derecho Internacional 281–90 and M. Kettemann, ‘The Common Interest in the Protection of the Internet: An International Legal Perspective’ in W. Benedek, K. de Feyter, M. Kettemann and C. Voigt (eds), The Common Interest in International Law (Antwerp: Intersentia, 2014) 167–84.
1  Alana Abramson, ‘“This is not a drill”: Hawaii just sent out an incoming missile alert. It was a mistake’, Time.com, 13 January 2018, available at http://time.com/5102389/this-is-not-a-drill-hawaii-just-sent-out-an-incoming-missile-alert-it-was-a-mistake.
2  Wolff Heintschel von Heinegg, ‘Legal Implications of Territorial Sovereignty in Cyberspace’ in Christian Czosseck, Rain Ottis and Katharina Ziolkowski (eds), Proceedings of the 4th International Conference on Cyber Conflict (NATO CCD COE Publication, 2012) 7, 14.
3  Malcolm N. Shaw, International Law (6th edn, Oxford: Oxford University Press, 2008) 1.
4  See Katharina Ziolkowski, Peacetime Regime for State Activities in Cyberspace: International Law, International Relations and Diplomacy (Tallinn: NATO CCD COE Publications, 2013), available at http://ccdcoe.org/427.html; and Michael N. Schmitt (ed.), The Tallinn Manual on the International Law Applicable to Cyber Warfare (Cambridge: Cambridge University Press, 2013), available at http://issuu.com/nato_ccd_coe/docs/tallinnmanual?e=5903855/1802381.


crime5 have attracted considerable attention internationally. This study shows that the
Internet is critical infrastructure in itself and also functionally essential for other critical
infrastructure. The UN General Assembly, in Resolution 64/211 of 21 December 2009,
promoted the creation of a global culture of cybersecurity6 and expressed concerns in
light of growing threats to the reliable functioning of the Internet (technically: ‘of critical
information infrastructures and to the integrity of the information carried over those’).
These affect ‘domestic, national and international welfare’. The Resolution affirms that
states are obliged to deal systematically with these threats and coordinate both with
national stakeholders and internationally with the goal of facilitating the achievement of
cybersecurity.
The 2013 Group of Governmental Experts (GGE) report underlined that applying
norms derived from ‘existing international law relevant to the use of ICTs by States is
an essential measure to reduce risks to international peace, security and stability’. The
Charter of the United Nations is applicable to the whole gamut of socio-economic activity on the Internet and thus ‘essential to maintaining peace and stability and promoting
an open, secure, peaceful and accessible ICT environment’.7 Applying international law
to the Internet lies in the common interest and safeguarding the Internet as such also lies
in the common interest because a stable, secure and functional Internet is essential for
international security. Building on this consensus, the 2015 report of the GGE8 again
confirmed that international law, the UN Charter and international legal principles apply
to the Internet,9 stating, inter alia, that the international community aspired to regulate
the Internet in a peaceful manner ‘for the common good of mankind’:10

[t]he adherence by States to international law, in particular their Charter obligations, is an essential framework for their actions in their use of ICTs and to promote an open, secure, stable,
accessible and peaceful ICT environment.11

Ensuring cybersecurity and protecting the Internet’s stability, security and functionality
in the form of reducing the potential for criminal misuses of computers and networks is
important in the international fight against cybercrime. Common criminals continue to
misuse the Internet both for online variants of common scams and for Internet-specific

 5  See, on the taxonomy of cybercrime, David S. Wall, Cybercrime (Cambridge: Polity Press, 2007); and Jonathan Clough, Principles of Cybercrime (Cambridge: Cambridge University Press, 2010) (arguing that broad consensus on fighting cybercrime has led to the successful Council of Europe Cybercrime Convention, at 22).
 6  UN General Assembly, Resolution 64/211 of 21 December 2009, Creation of a Global Culture of Cybersecurity and Taking Stock of National Efforts to Protect Critical Information Infrastructures, A/RES/64/211 (17 March 2010).
 7  Report of the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, A/68/98 (24 June 2013) para. 19 (‘GGE Report (2013)’).
 8  Developments in the Field of Information and Telecommunications in the Context of International Security, Report of the Secretary General, A/70/174 (22 July 2015), available at www.un.org/ga/search/view_doc.asp?symbol=A/70/174 (‘GGE Report (2015)’).
 9  GGE Report (2015), n. 8 above, para. 26.
10  Ibid. para. 28(c).
11  Ibid. para. 25.


(technology-based) attacks. The size of monetary loss is small compared to the scale of
the Internet economy,12 but the problem is nevertheless serious.13 Black markets for both
buying cybercrime tools (such as complete botnets for as little as US$50) and for selling
on the proceeds of cybercrime activities (such as credit card information) are increasingly
sophisticated, resilient and international.14 This threat requires international cooperation.
At the same time, some criminal misuses of the Internet are not direct threats to the
Internet’s integrity, since criminal networks rely on a functioning Internet to conduct their
activities. But other attacks, including DDoS attacks and attacks not linked to monetary
gain (such as those intended to cause social unrest), can amount to substantial threats to
cybersecurity and the Internet’s functionality.
Different levels of Internet security awareness and capabilities globally matter to
all states because of the interconnectedness of ICTs and the networked nature of the
Internet. ‘These vulnerabilities are amplified’, the UN’s Group of Governmental Experts
on IT and Security complained in 2013, ‘by disparities in national law, regulations and
practices related to the use of ICTs’.15 Only international cooperation in the protection
of the Internet can help meet these challenges. The role of the Internet’s integrity in
contributing to international security and the threats posed by international cybercrime
are further reasons why ensuring cybersecurity and protecting the integrity of the Internet
lies in the common interest.
This chapter examines the protection of cybersecurity under international law and
develops a future-oriented approach to increasing cybersecurity through international
law. After outlining my concept of cybersecurity (2) and explaining why protecting cybersecurity lies in the common interest of all states (3), I will outline which international legal
norms protect cybersecurity (4). After that, I will identify approaches to better protect
cybersecurity in international law (5). The chapter ends with conclusions (6).

2. TOWARDS A COMPREHENSIVE CONCEPT OF CYBERSECURITY

Cybersecurity is defined very broadly by some states and covers risks and threats such
as cyberwarfare, cyberterrorism, cybercrime and cyber-espionage.16 There is no doubt

12  According to its latest available report, the US-based Internet Crime Complaint Center received, in 2012, fewer than 300,000 consumer complaints with an adjusted dollar loss of US$530 million, an increase of 8.3% since 2011 (see Internet Crime Complaint Center (IC3), Internet Crime Report 2012 (2012), available at www.ic3.gov/media/annualreport/2012_IC3Report.pdf).
13  See Robert Moore, Cybercrime: Investigating High-Technology Computer Crime (Oxford: Elsevier, 2011) (arguing that cybercrimes have been steadily increasing in recent years and now amount to ‘a very serious problem’, at 5).
14  Lillian Ablon, Martin C. Libicki and Andrea A. Golay, Markets for Cybercrime Tools and Stolen Data: Hackers’ Bazaar (RAND National Security Research Division, March 2014), available at www.rand.org/pubs/research_reports/RR610.html.
15  GGE Report (2013), n. 7 above, para. 10.
16  See Myriam Dunn Cavelty and Camino Kavanagh, Chapter 5 in this volume. They differentiate between two notions of cybersecurity: a technical information security approach and a national security-focused approach. My use of the concept encompasses elements from both conceptions.


that cybersecurity is a crucial part of national security, domestic, foreign and defence
policy.17 As a central theme of Internet policy,18 cybersecurity is closely linked with the
stability, robustness, resilience and functionality of the Internet.19 Cybersecurity can be
threatened by cybercrime and cyberterrorism, but also by a lack of legal and technical
cooperation between states and a lack of preventive measures, such as developing crisis
intervention centres and teams, as well as transnational crisis communication structures
for cyber incidents.
The May 2017 WannaCry Ransomware attack, for example, shows how vulnerabilities
are caused by a number of factors, including software companies who fail to provide
updates or no longer service vulnerabilities, affected companies that have slow patch cycles,
secret services that stockpile vulnerabilities, and states that do not force essential service
providers (like healthcare companies) to ensure that their systems are stable and secure.20
Fostering and ensuring cybersecurity is a prerequisite for the stability of national
economic processes and the international business and financial system, for transnational
communication flows and for the functioning of energy grids, the enforcement of human
rights, and the performance of national, regional and international defence infrastruc-
tures. Ultimately, cybersecurity is fundamental to the full realization of all human rights.
It is too often the case that (cyber)security is contrasted with (Internet) freedom and the
two concepts are played against each other to the detriment of both. This view misses the
point. As emphasized in the Cybersecurity Strategy for Germany from 2016, what matters
is that ensuring both freedom and security are among the core duties of the state – offline
and online:

It is therefore the duty of the state vis-à-vis the citizens and enterprises in Germany to protect
them against threats from cyberspace and to prevent and pursue crime in cyberspace.21

Similarly, the Spanish National Cybersecurity Strategy of 2013 is aimed at ensuring that
Spain:

makes practical use of ICT systems, reinforcing the capacities of prevention, defence, detection
and response to cyber-attacks, and building confidence in the use of ICTs.22

17  Most states have cybersecurity strategies. For an overview, see NATO Cooperative Cyber Defence Centre of Excellence, ‘Cyber Security Publication Library’, https://ccdcoe.org/publication-library.
18  Adam Segal, The Hacked World Order: How Nations Fight, Trade, Maneuver, and Manipulate in the Digital Age (New York: Public Affairs, 2016).
19  Eneken Tikk-Ringas (ed.), Evolution of the Cyber Domain: The Implications for National and Global Security (London: Routledge, 2015).
20  CCDCOE, WannaCry Campaign: Potential State Involvement Could Have Serious Consequences (16 May 2017), available at https://ccdcoe.org/wannacry-campaign-potential-state-involvement-could-have-serious-consequences.html.
21  German Federal Ministry of the Interior, Cyber-Sicherheitsstrategie für Deutschland 2016 [Cybersecurity Strategy for Germany 2016], at 8, available at www.bmi.bund.de/SharedDocs/Downloads/DE/Themen/OED_Verwaltung/Informationsgesellschaft/cybersicherheitsstrategie-2016.pdf.
22  Prime Minister’s Office, Spain, Estrategia de Ciberseguridad Nacional 2013 [National Cybersecurity Strategy 2013], available at www.lamoncloa.gob.es/documents/20131332estrategiadeciberseguridadx.pdf. Cf. Alexander


National documents define cybersecurity differently. The Cybersecurity Strategy for Germany defines cybersecurity as ‘IT security of the networked or network-capable
information technology systems in cyberspace’. IT security is understood to be ensuring
the ‘intactness of the authenticity, confidentiality, integrity and availability of informa-
tion technology systems and the data processed and stored within these systems.’23 This
definition is very technology-oriented and too narrow in light of how cybersecurity
is perceived in practice by business and society. Security on the Internet and of the
Internet cannot simply be equated with the security of systems and data. To understand
cybersecurity correctly, one must regard it holistically. Any meaningful cybersecurity
concept should be comprehensive and value-based: a commitment to a stable, secure,
resilient, fully functional, open and free Internet and to its protection against state and
private attacks of any kind.24

3.  PROTECTING CYBERSECURITY LIES IN THE COMMON INTEREST OF ALL STATES

3.1  Cybersecurity and the Common Interest

In 1998, experts working on behalf of UNESCO posed the question whether the
‘United Nations General Assembly [could] affirm the principle of regarding cyberspace
as “the common heritage of humanity”’25 and thus safeguard it. They failed to answer
the question, which was just as well, because it was the wrong question. It is not ‘cyberspace’
we should concern ourselves with, but rather the Internet itself; and rather than using the
concept of the common heritage of mankind, which presupposes a certain historical stability
that the Internet does not have, we need to establish why protecting the Internet lies in
the global common interest.
During the World Summit on the Information Society (WSIS), which the United
Nations convened in 2003 and 2005 to establish a normative trajectory for the Internet,
states affirmed their common desire and commitment to build an information society
that is ‘people-centred, inclusive and development-oriented’ and in which individuals,
communities and peoples can:

achieve their full potential in promoting their sustainable development and improving their
quality of life premised on the purposes and principles of the Charter of the United Nations and
respecting fully and upholding the Universal Declaration of Human Rights.26

Cendoya, National Cyber Security Organisation: Spain (Tallinn, 2016) 9, available at https://ccdcoe.org/sites/default/files/multimedia/pdf/CS_organisation_SPAIN_092016.pdf.
23  Cybersecurity Strategy for Germany 2016, n. 21 above, 24.
24  See Marietje Schaake and Mathias Vermeulen, ‘Towards a Values-Based European Foreign Policy to Cybersecurity’ (2016) 1 Journal of Cyber Policy 1, at 75–84.
25  UNESCO, Report of the Experts’ Meeting on Cyberspace Law, Monte Carlo, 29–30 September 1998, para. 9, available at http://unesdoc.unesco.org/images/0011/001163/116300e.pdf.
26  WSIS, Geneva Declaration of Principles, WSIS-03/GENEVA/DOC/4-E (12 December 2003) para. 1.


We have here a first clear answer to the question of what ‘vision’ the international community pursues regarding the impact of the Internet.
Clearly, achieving world peace and international security (as enshrined in the UN
Charter), ensuring human development (as codified in the Millennium Development
Goals and the Sustainable Development Goals) and respecting, protecting and imple-
menting human rights (as normatively ordered in the Universal Declaration of Human
Rights (UDHR)) are in the global common interest. Different normative orders within
international law contribute towards achieving these common interests. By establishing
finalities for a normative order, as WSIS has done for Internet governance, the normative
development of that order is anchored in the goal of ensuring the common interest in
peace, security and development. Creating a people-centred,
development-oriented information society is thus part of the broader normative commitment in international law to protecting the common interest of ensuring world peace,
international security and development.
We have thus established that there is a global common interest in ensuring a
people-centred and development-oriented information society. But is protecting cybersecurity and thus the Internet’s integrity essential for reaching these goals? UN Special
Rapporteur for Freedom of Expression, Frank La Rue, in a groundbreaking report on
the impact of the Internet on freedom of expression, described the Internet as a ‘catalyst
for individuals to exercise their right to freedom of opinion and expression’. Freedom
of expression on the Internet itself is a ‘facilitator of the realisation of a range of other
human rights’.27 The multiple challenges that the Internet brings for human rights
notwithstanding,28 the absence of the Internet (or of an Internet without cybersecurity)
would seriously challenge the realization of human rights. There are important corollary
rights that are premised upon exercising free speech on the Internet. These include the
freedom of assembly and association online, the right to (digital) education and the
right of access to digital knowledge.29 From all these rights we can also derive a right to
access to online information and communication, which is crucial for human develop-
ment. Similar arguments have been voiced by courts and international organizations.30
The Internet introduces new threat vectors to human rights, but greatly enhances the
potential of people to realize their human rights. It is similarly a facilitator for human
security.
In the same vein, the 2013 report of the UN GGE determined that the application of
norms derived from existing international law is ‘essential’ to minimize risks to world

27  F. La Rue, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, A/HRC/17/27 (16 May 2011) paras 22 and 23.
28  See already R.F. Jørgensen (ed.), Human Rights in the Global Information Society (Cambridge, MA: MIT Press, 2006); but also R. Deibert, J. Palfrey, R. Rohozinski and J. Zittrain (eds), Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace (Cambridge, MA: MIT Press, 2010); R. MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom (New York: Basic Books, 2012).
29  See W. Benedek and M.C. Kettemann, Freedom of Expression and the Internet (Strasbourg: Council of Europe, 2014).
30  See Yildirim v. Turkey, Application No. 3111/10, ECtHR (2012); Council of Europe, Parliamentary Assembly Resolution 1877 (2012) on the Protection of Freedom of Expression and Information on the Internet and Online Media.


peace and international security and stability.31 Viewed in the context of information
technology challenges, cybersecurity is now one aspect of ‘world peace’.32 When analysing
the protection of cybersecurity under international law, the GGE report from 2015,33
which was adopted by consensus by a representative group of governmental experts, is
very helpful,34 stating, inter alia, that the international community aspires to regulate the
Internet in a peaceful manner ‘for the common good of mankind’.

3.2  Ensuing International Legal Duties

Protecting cybersecurity is a common interest of all states. This community interest is
not an aggregate of the individual sets of interests; rather, it lies at their intersection. If
a protected interest is part of the community interest, this entails consequences relevant
to international law. States are therefore responsible to the international community with
regard to cybersecurity in accordance with their jurisdiction over the critical infrastructures
pertinent to it.35
To the extent that a state controls Critical Internet Resources (CIRs), it has to exercise
this jurisdiction mindful of threats to cybersecurity and especially the security of
these CIRs in a manner that ensures the common interest. To the extent that national
politics (and policies) can impact the Internet negatively, a state has to refrain from their
formulation and implementation. A state’s sovereignty is reduced because of the limits
that the protection of the Internet as a common interest places upon it. The United States,
as former delegated manager of the CIRs, therefore only had ‘custodial sovereignty’.36
This implied that the United States had to enter into consultations with other states with
regard to the management of CIRs and make sure that the management was transparent
and accountable, ideally to the international community and now, after the transition,37 to
the global multi-stakeholder community.38 All other states, insofar as they can endanger

31  Report of the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, A/68/98 (24 June 2013) para. 16.
32  Ibid.
33  GGE Report (2015), n. 8 above.
34  Members: Belarus, Brazil, China, Colombia, Egypt, Estonia, France, Germany, Ghana, Israel, Japan, Kenya, Malaysia, Mexico, Pakistan, Russian Federation, Spain, United Kingdom and United States.
35  For documentation of the reports of individual member states, see the Group of Governmental Experts on Developments in the Field of Information and Telecommunications in the Context of International Security, www.un.org/disarmament/topics/informationsecurity.
36  See W. Scholtz, ‘Custodial Sovereignty: Reconciling Sovereignty and Global Environmental Challenges Amongst the Vestiges of Colonialism’ (2008) 3 Netherlands International Law Review 323–41.
37  NTIA, NTIA Finds IANA Stewardship Transition Proposal Meets Criteria to Complete Privatization (9 June 2016), available at www.ntia.doc.gov/press-release/2016/iana-stewardship-transition-proposal-meets-criteria-complete-privatization.
38  ICANN IANA Stewardship Transition Coordination Group (ICG), Proposal to Transition the Stewardship of the Internet Assigned Numbers Authority (IANA) Functions from the US Commerce Department’s National Telecommunications and Information Administration (NTIA) to the Global Multistakeholder Community (March 2016), available at www.icann.org/en/system/files/files/iana-stewardship-transition-proposal-10mar16-en.pdf; and ICANN CCWG


the Internet’s stability, security and functionality, are pre-empted from doing so. In
assessing whether politics may impact the Internet negatively, it makes sense to suggest
the precautionary principle as a guide to decision-making.39
A second consequence is that humankind – that is, all actors making up the international
community, and not only states – has a legitimate interest in making sure that all other
actors protect the Internet (or at least do not actively harm it). Though sovereignty-
oriented states40 have suggested exclusive sovereignty over national Internet segments,
the character of the Internet, as protected by international law as a common interest,
limits their sovereignty. The 2005 Tunis Agenda, though valuable in its description of
the normative goals of the information society, still lays down that ‘policy authority for
internet-related public policy issues’ is a ‘sovereign right of States’.41 This needs to be
qualified in light of a common interest approach to protecting the Internet in that the
states’ policy authority is no longer an exclusively sovereign right, but part and parcel
of their sovereignty, to be exercised in the common interest. What sovereignty they have
over Internet-related resources has to be understood in light of changing circumstances
and the importance of the pursuance of common interests. As far as international law
extends to non-state actors, these are also under obligation to support the pursuance, by
state actors, of the global interest in safeguarding the Internet’s integrity. This means,
for example, that there is an international law-based duty, via the Ruggie Principles,42 for
companies to respect human rights and the rights of victims of business-related abuses,
including their right of access to remedies.
International law has arrived at a point in its development where pursuing the common
interest, as Bardo Fassbender put it, is the only reasonable answer to its Sinnfrage.43
Ensuring a people-centric and development-oriented Internet is one of the key challenges
for international law today. To that end, the protection of the integrity of the Internet and
the guarantee of cybersecurity as a common interest are essential.

Accountability, Supplemental Final Proposal on Work Stream 1 Recommendations (February 2016), available at www.icann.org/en/system/files/files/ccwg-accountability-supp-proposal-work-stream-1-recs-23feb16-en.pdf.
39  See the work of the Council of Europe’s Ad Hoc Advisory Group on Cross-border Internet (MC-S-CI), www.coe.int/t/dghl/standardsetting/media/MC-S-CI/default_en.asp, especially its report International and Multi-stakeholder Co-operation on Cross-border Internet (2010).
40  See International Code of Conduct for Information Security, Annex to the letter dated 12 September 2011 from the Permanent Representatives of China, the Russian Federation, Tajikistan and Uzbekistan to the United Nations addressed to the Secretary-General, A/66/359 (14 September 2011).
41  WSIS, Tunis Agenda (2005) para. 35.
42  See Report of the Special Representative of the Secretary-General on the issue of human rights and transnational corporations and other business enterprises, J. Ruggie, Guiding Principles on Business and Human Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework, A/HRC/17/31 (21 March 2011).
43  The question regarding its role: see B. Fassbender, ‘Zwischen Staatsräson und Gemeinschaftsbindung. Zur Gemeinwohlorientierung des Völkerrechts der Gegenwart’ [‘Between Reason of State and Community Commitment: On the Common-Good Orientation of Contemporary International Law’] in H. Münkler and K. Fischer (eds), Gemeinwohl und Gemeinsinn im Recht: Konkretisierung und Realisierung öffentlicher Interessen [Common Good and Public Spirit in Law: Concretizing and Realizing Public Interests] (Berlin: Akademie Verlag, 2002) 231–74, at 231.


4.  CYBERSECURITY AND INTERNATIONAL LAW

4.1  International Law and the Internet

International law is the only area of law through which global (public) goods can be
managed and the global public interest protected.44 The ubiquity of the technology
underlying the Internet, which is not restricted by national borders, renders strictly
single-state regulation largely ineffective. International law is needed to legitimately and effectively ensure
cybersecurity in the common interest of all states. This is not a new insight.45 Without
legitimate and effective protection of cybersecurity under international law, individuals
and societies cannot develop to their full potential. A similar self-commitment to an
international law-based Internet policy was presented by the European Commission in
2014.46

4.2  Cybersecurity in International Treaty Law

International law applies to the Internet.47 Individual norms from the UN Charter
(e.g. the ban on aggression and intervention) are relevant to the international law of
cybersecurity.48 However, there is no single treaty that is primarily concerned with
the regulation of the Internet and the key topic of cybersecurity. Although treaties
provide (legal) certainty (especially in the eyes of powerful states or states relying on
traditional sovereignty concepts),49 bilateral cybersecurity treaties usually do not live up
to the complexity of the issue due to the universality of the Internet, while multilateral
treaties can only be attained through lengthy negotiation processes with an uncertain

44  Detailed comparisons: Matthias C. Kettemann, Völkerrecht in Zeiten des Netzes: Perspektiven auf den effektiven Schutz von Grund- und Menschenrechten in der Informationsgesellschaft zwischen Völkerrecht, Europarecht und Staatsrecht [International Law in the Age of the Network: Perspectives on the Effective Protection of Fundamental and Human Rights in the Information Society between International Law, European Law and Constitutional Law] (Bonn: Friedrich-Ebert-Stiftung, 2015), available at http://library.fes.de/pdf-files/akademie/12068.pdf.
45  Developments in the Field of Information and Telecommunications in the Context of International Security, A/RES/53/70 (4 January 1999) para. 2 lit. c, available at www.un.org/ga/search/view_doc.asp?symbol=A/RES/53/70.
46  Communication by the European Commission to the European Parliament, the European Council, the European Economic and Social Committee and the Committee of the Regions, Internet Policy and Internet Governance: Europe’s Role in Shaping the Future of Internet Governance, COM/2014/072 final.
47  The scope of this study does not allow for going into detail on all applicable principles of international law. See instead Michael N. Schmitt and Liis Vihul, The Nature of International Law Cyber Norms, Tallinn Paper No. 5 (NATO CCD COE, 2014) 16, available at https://ccdcoe.org/sites/default/files/multimedia/pdf/Tallinn%20Paper%20No%20%205%20Schmitt%20and%20Vihul.pdf; Katharina Ziolkowski, ‘General Principles of International Law as Applicable in Cyberspace’ in Katharina Ziolkowski (ed.), Peacetime Regime for State Activities in Cyberspace: International Law, International Relations and Diplomacy (Tallinn: NATO CCD COE Publications, 2013) 135–84, at 151–52.
48  See 4.3 below. The ban on aggression and intervention is based on international treaty law, but also on customary international law.
49  Heise.de, USA und China wollen Vertrag zur Begrenzung von Cyberangriffen [USA and China Want a Treaty to Limit Cyberattacks] (20 September 2015), available at www.heise.de/newsticker/meldung/USA-und-China-wollen-Vertrag-zur-Begrenzung-von-Cyberangriffen-2822083.html#mobile_detect_force_desktop.


outcome.50 There is still no treaty on cybersecurity, though. Binding international law on
cybersecurity can therefore only be derived to date from customary international law and
the principles of international law.

4.3  Cybersecurity and Customary International Law

In the Tunis Agenda, adopted at the UN World Summit on the Information Society
(WSIS), and in the Tunis Commitment (2005) (building on the Geneva documents of
2003), the signatory states of the world committed themselves to a:

people-centered, inclusive and development-oriented Information Society, premised on the purposes and principles of the Charter of the United Nations, international law and multilateralism, and respecting fully and upholding the Universal Declaration of Human Rights51

to ‘the universality, indivisibility, interdependence and interrelation of all human rights and fundamental freedoms, including the right to development, as enshrined in the Vienna Declaration’,52 to a stable and secure Internet as a worldwide institution, and to the multi-stakeholder approach.53 Of particular importance for protecting cybersecurity by means of international law is the commitment to international law and the significance of the Internet as a – stable and secure – worldwide institution.
For one thing, the international body of law pertaining to cybersecurity provides
protection based on human rights. Communicators, recipients and the contents of
communications are protected by Article 19 of the International Covenant on Civil and
Political Rights (ICCPR), parts of which are based on customary law. Along with these
state obligations based on individual human rights (prohibiting states from interfering
with certain communicative rights), positive obligations related to Internet infrastructure
can also be derived from the state obligations to provide protection and assurance
pertaining to information- and communication-related rights. States are therefore obliged
to ensure the sustainability and security of the networks in order to allow individuals to
realize their human rights.
Particularly relevant to ensuring and promoting cybersecurity are the following principles of international law,54 some of which have been translated into treaty law in the
UN Charter, are protected under customary international law or are recognized as part
of the general principles of international law: sovereign equality, the ban on aggression

50  See Jack Goldsmith, Cybersecurity Treaties. A Skeptical View, Hoover Institution Future Challenges Essays (2011), available at http://media.hoover.org/sites/default/files/documents/FutureChallenges_Goldsmith.pdf; and Robert S. Litwak and Meg King, Arms Control in Cyberspace, Wilson Center Digital Futures Project (2015), available at www.wilsoncenter.org/publication/arms-control-cyberspace.
51  World Summit on the Information Society (WSIS), Tunis Commitment, WSIS-05/TUNIS/DOC/7-E (18 November 2005) no. 2.
52  Ibid. no. 3.
53  WSIS, Tunis Agenda for the Information Society, WSIS-05/TUNIS/DOC/6 (Rev. 1)-G (18 November 2005) no. 31.
54  GGE Report (2015), n. 8 above, para. 26.



International law and protection of cybersecurity  123

and intervention, peaceful settlement of disputes, the protection of human rights, the
cooperation principle (which draws on the principle of good neighbourliness (‘no harm’)
and the precautionary principle (‘due diligence’)).
The principle of sovereign equality (UN Charter, Article 2(1)) is a key principle of international law. As a ‘pivotal principle’55 it is also of special importance for cybersecurity.
Each state has jurisdiction and power over its territory and over the ICT infrastructure
located there; this also means, however, that it bears a responsibility to ensure that no
attacks against other states or institutions, which would infringe on international law, are
organized or carried out from its territory.
In addition, the non-intervention principle (UN Charter, Article 2(7)) can be brought to bear: severe damage to Internet functionality in another state (e.g. by cyber-attacks) could constitute an intervention, although attribution problems will regularly
arise.56 Only some of the attacks originating from the territory of a state represent an ‘intervention’ in terms of international law, because most attacks will be attributable to non-governmental actors or to actors whose association with governmental agencies cannot be proven.57
The ban on aggression (UN Charter, Article 2(4)) prohibits states from using measures of power beyond simple ‘intervention’ (the latter being covered by the non-intervention principle). In the context of the Internet, this Article could only be applied to especially
serious cases of cyber-attacks with substantial kinetic effects.
The principle of peaceful settlement of disputes is relevant to cybersecurity insofar as any
state has the duty, in the event of an incident, to first determine the facts and to collect
evidence for attribution of a breach of international law to a particular state. Even if this
succeeds, peaceful means of dispute settlement should first be sought.
The principle of the protection of human rights is a fundamental principle of interna-
tional law that is also relevant to cybersecurity. What is problematic under international
law are attempts by a state to enforce cybersecurity through an excessive control of the
Internet (such as setting up and using surveillance capabilities or government screening
of all Internet communication).
The principle of good neighbourliness (UN Charter, Article 74), or ‘no harm’ principle,
can be considered as a global principle in the Internet era. Originally only relevant in
terms of the relationship with adjacent states, the principle has been gradually extended.58
In the Corfu Channel case, the International Court of Justice described the principle as
‘every state’s obligation not to knowingly allow its territory to be used for acts contrary

55  Samantha Besson, ‘Sovereignty’ in Rüdiger Wolfrum (ed.), MPEPIL (2011) para. 1.
56  Sven-Hendrik Schulze, Cyber-‘War’, Testfall der Staatenverantwortlichkeit (Tübingen: Mohr, 2015).
57  See GGE Report (2015), n. 8 above: ‘the indication that an ICT activity was launched or otherwise originates from the territory or the ICT infrastructure of a State may be insufficient in itself to attribute the activity to that State. The Group noted that the accusations of organizing and implementing wrongful acts brought against States should be substantiated’ (ibid. para. 28(f)).
58  See UN GA Resolution 46/62, ‘Developing and Strengthening of Good-Neighbourliness Between States’, A/RES/46/62 (9 December 1991) para. 2 (good-neighbourliness is an obligation, ‘whether or not they [the states] are contiguous’).




to the rights of other states’.59 The ‘no harm’ principle has its roots in the Trail Smelter60
and Lac Lanoux61 cases. It was formulated in Principle 21 of the Stockholm Declaration
(1972)62 and Principle 2 of the Rio Declaration (1992):63 signatory states are committed
‘to ensure that activities within their jurisdiction or control do not cause damage to the
environment of other States or of areas beyond the limits of national jurisdiction’. This
can easily be applied to other areas by analogy to the environment.
The obligation to prevent cross-border damage has crystallized into customary law.
Confirmation in treaty law can be found in Article 194(2) of the UN Convention on the Law
of the Sea64 and Article 20(1) of the ASEAN Agreement on the Conservation of Nature and
Natural Resources (1985). Most recently, the ICJ confirmed in the Nuclear Weapons case65
that the threat to the environment is ‘no abstraction but represents the living space, the quality of life and the very health of human beings, including generations unborn’. In accordance with the ‘no harm’ principle, no state may utilize its dominion in such a manner that
would cause damage to other states. In the preventive dimension of the ‘no harm’ principle,
a state must take measures to prevent such hazards. Among other things, a commitment to
an appropriate infrastructure, the development of emergency plans and the establishment
of an international crisis cooperation structure (and culture) can be construed from this.
The precautionary principle (‘due diligence’) is of special importance for cybersecurity.
First, the due diligence principle entails information and consultation obligations.66 In legal scholarship, it is controversial to what extent the precautionary principle has a ‘due diligence’ dimension, or whether the precautionary obligations of states are covered in practice by the ‘no harm’ principle. The principle of due diligence has also been applied in
the field of combating terrorism and the financing of terrorism.67

4.4  Shift Towards Due Diligence

With some justification, normative principles for the regulation of cybersecurity can therefore be derived from the principle of due diligence. As a result, it is the responsibility of states,

59  ICJ, Corfu Channel Case (United Kingdom v. Albania) [1949] ICJ Reports 4, 22.
60  Trail Smelter Case (United States v. Canada), First Decision, (1949) III RIAA 1905, (1941) 35 AJIL 684, 16 April 1938, Arbitration.
61  Lake Lanoux Arbitration (France v. Spain) (1963) XII RIAA 281, (1961) 24 ILR 101, 16 November 1957, Arbitration.
62  Stockholm Declaration of the United Nations Conference on the Human Environment, A/CONF.48/14/Rev.1, 3; A/CONF.48/PC/6, Principle 21.
63  United Nations Environment Programme (UNEP), Rio Declaration on Environment and Development, A/CONF.151/5/Rev.1; A/CONF.151/26/Rev.1 Vol.1, Annex 1, Principle 2.
64  United Nations Convention on the Law of the Sea (UNCLOS), 10 December 1982, Art. 194(2): ‘States shall take all measures necessary to ensure that activities under their jurisdiction or control are so conducted as not to cause damage by pollution to other States and their environment, and that pollution arising from incidents or activities under their jurisdiction or control does not spread beyond the areas where they exercise sovereign rights in accordance with this Convention’.
65  Legality of the Threat or Use of Nuclear Weapons, Advisory Opinion [1996] ICJ Reports 226, para. 29.
66  Timo Koivurova, ‘Due Diligence’ in Wolfrum (ed.), MPEPIL (2010) (online) para. 3.
67  Cf. Vincent-Joel Proulx, ‘Babysitting Terrorists: Should States be Strictly Liable for Failing to Prevent Transborder Attacks?’ (2005) 23 Berkeley Journal of International Law 615, at 629.




inter alia, ensuing from this principle, to prevent cyber-attacks originating from their own
territory and to (proactively) establish a legal system that ensures and fosters cybersecurity.68
This can be fulfilled, for instance, by ‘passing stringent criminal laws, conducting vigorous investigations, prosecuting attackers, and, during the investigation and prosecution, cooperating with the victim-states of cyberattacks that originated from within their borders’.69
In its preventive dimension, the due diligence principle helps to identify the obligations
of states with regard to cybersecurity, particularly with regard to cybercrime, global
cooperation and the establishment of capacities. Cybersecurity due diligence has been described as part of customary international law, with the following preventive duties in particular having emerged as recognized obligations under international law:

● that governments and other stakeholders bolster cybersecurity and develop cybersecurity strategies to protect crucial infrastructures;70
● that states (and other relevant stakeholders) work together more closely in the fight against cybercrime and cyberterrorism,71 and that they ratify conventions such as the Convention on Cybercrime of the Council of Europe;72
● that states conclude treaties promoting cooperation between their police authorities;73 and
● that states establish confidence-building measures and increase the level of information sharing, both generally as well as (and especially) in the event of cybersecurity-related incidents.74

Confidence-building measures are more extensive than obligations under international law ensuing from the duty of cooperation.75 They are relevant to the evolution of legal

68  Michael N. Schmitt, ‘In Defense of Due Diligence in Cyberspace’ (2015) 125 Yale Law Journal Forum 68–81.
69  Matthew J. Sklerov, ‘Solving the Dilemma of State Responses to Cyberattacks: A Justification for the Use of Active Defenses Against States Who Neglect Their Duty to Prevent’ (2009) 201 Military Law Review 1–85, at 62.
70  See UN GA Resolution 64/211, ‘Creation of a Global Culture of Cybersecurity and Taking Stock of National Efforts to Protect Critical Information Infrastructures’, A/RES/64/211 (17 March 2010) (with reference to other such resolutions, including Resolutions 55/63 of 4 December 2000 and 56/121 of 19 December 2001 on ‘Combating the Criminal Misuse of Information Technologies’; 57/239 of 20 December 2002 on the ‘Creation of a Global Culture of Cybersecurity’ and 58/199 of 23 December 2003 on ‘The Creation of a Global Culture of Cybersecurity and the Protection of Critical Information Infrastructures’; 53/70 of 4 December 1998, 54/49 of 1 December 1999, 55/28 of 20 November 2000, 56/19 of 29 November 2001, 57/53 of 22 November 2002, 58/32 of 8 December 2003, 59/61 of 3 December 2004, 60/45 of 8 December 2005, 61/54 of 6 December 2006, 62/17 of 5 December 2007 and 63/37 of 2 December 2008 on ‘Developments with respect to Information Technologies in the Context of International Security’).
71  See UNODC, Resolution 22/8 on ‘Promoting Technical Assistance and Capacity Building to Strengthen National Measures and International Cooperation Against Cybercrime’, UNODC/CCPCJ/2013/RES/22/8, para. 4.
72  See GGE Report (2013), n. 7 above, para. 22.
73  Ibid. para. 22.
74  Ibid. para. 26 et seq.
75  Organization for Security and Co-operation in Europe, Decision No. 1202, ‘OSCE




norms, however, insofar as they indicate the direction of how obligations under international law are developing.76

5. EMBRACING CYBERSECURITY NORMS IN INTERNATIONAL LAW

Any attempt to embrace cybersecurity in international law must be preceded by the recognition of the significance of the multi-stakeholder approach in the normative development of international Internet law.77 Since the process of the World Summit on the Information Society (WSIS) began, with just a few exceptions, states have been committing to the integration of all stakeholders in the normative processes relevant to Internet
regulation. The multi-stakeholder approach is put into practice in the development and
application of instruments and processes for Internet regulation by governments (states),
the private sector (business enterprises) and civil society (individuals and NGOs) ‘in their
respective roles’.78
Given the intricacies of treaty-making processes, with diplomatic nuances and sensibilities, multi-stakeholder forums are not well suited to drafting international law. They can
only offer the opportunity for states (or other stakeholders) to accomplish preparatory
work relevant to the normative process. Jointly discussing ‘norms’ can be useful, because
common values will be expressed already in the process of discussion79 and the positions
taken by participating states serve as indications of their attitudes toward an international
juridification of cybersecurity.
A promising approach for embracing binding cybersecurity norms in international
law in the long term is the negotiation and adoption of an international treaty. It
remains the ‘gold standard’ of international law and is the most important authoritative
source of law. The Paris Agreement under the United Nations Framework Convention on Climate Change,80 which entered into force on 4 November 2016, has shown that even today international treaties covering complex topics such as regulations relating to global public goods (for example, cybersecurity as a prerequisite for a well-functioning Internet) can be successfully concluded.

Confidence-Building Measures to Reduce the Risks of Conflict Stemming from the Use of
Information and Communication Technologies’, PC.DEC/1202 (10 March 2016).
76  See Katharina Ziolkowski, ‘Confidence Building Measures for Cyberspace’ in Katharina Ziolkowski (ed.), Peacetime Regime for State Activities in Cyberspace: International Law, International Relations and Diplomacy (Tallinn: NATO CCD COE Publications, 2013) 533–64.
77  See Matthias C. Kettemann, ‘Grotius goes Google: Der Einfluss der Internet Governance auf das Völkergewohnheitsrecht’ in Christoph Vedder (ed.), Tagungsband 37. Österreichischer Völkerrechtstag 2012 (Vienna: Peter Lang, 2013) 89–104.
78  WSIS, Tunis Agenda for the Information Society, WSIS-05/TUNIS/DOC/6 (Rev. 1)-G (18 November 2005) no. 31.
79  See Eneken Tikk-Ringas, ‘International Cyber Norms Dialogue as an Exercise of Normative Power’ (2016) 17 Georgetown Journal of International Affairs 3, at 47–59.
80  UN Framework Convention on Climate Change, available at http://unfccc.int/2860.php.




6. SUMMARY

In the light of the importance of the Internet for states, business and society, cybersecurity
– as a prerequisite for a reliably functioning and secure Internet – has become a global community interest which needs protection. The technologicalization and informationalization
of many key infrastructure provision functions and of many industry control systems open
up new vulnerabilities within states that can threaten, if an attack is substantial, international peace and security. Similarly, the increased use of mobile devices, cloud computing and social networks increases the vulnerability of citizens to acts of cybercrime.
Cybersecurity is a key functional condition for an era defined by information and
communications technology. A secure Internet lies in the interest of each individual
state and also collectively in the interest of all states of the world as a global community.
Cybersecurity thus lies in the global common interest. It is a key security interest for which all the states of the world bear separate and common responsibility. This responsibility must be met through an application of the principles and processes of international law.
International law is to be fully applied to the Internet, including with regard to regulating cybersecurity. Customary international law and the general principles of international law particularly restrict (and define) national Internet policy. Consequently, each state has protection obligations vis-à-vis the international community – to avert threats to the stability, integrity and functionality of the Internet – which can be derived from customary international law. At the same time, states may not restrict freedom on the Internet without limitation under the guise of imposing ‘security’. Global, cross-border Internet traffic may not be adversely affected or in any way destabilized by states through national legislation and policy.
In addition to post-incident information and communication requirements, preventive obligations also arise from the due diligence principle and the tenets of good neighbourliness, and can in part only be met in cooperation with non-governmental stakeholders.
This binding cooperation principle of customary international law provides mandatory
guidance to states in their development of strategies for promoting cybersecurity. But
what complicates the situation is that in responding to cybersecurity threats, states are
often fully dependent on private entities and their security experts.
The May 2017 WannaCry Ransomware attack, for example, shows how vulnerabilities are caused by a number of factors, including vendors that fail to provide updates or no longer service vulnerabilities, affected companies that have slow patch cycles, secret services that stockpile vulnerabilities, and states that do not force essential service providers (like healthcare companies) to ensure that their systems are stable and secure.
As the NATO Cooperative Cyber Defence Centre of Excellence concluded with regard
to the recent WannaCry Ransomware attacks:

Campaigns like WannaCry should remind decision-makers of the importance of baseline cyber
security, since in this case the victims could have prevented the spread of ransomware by fairly
simple security measures.81

81  CCDCOE, WannaCry Campaign: Potential State Involvement Could Have Serious Consequences (16 May 2017), available at https://ccdcoe.org/wannacry-campaign-potential-state-involvement-could-have-serious-consequences.html.




Be it baseline cybersecurity or more advanced forms that are necessary for states to implement in order to meet their obligations under international law: the debate on how best
to ensure a stable and resilient Internet for all is far from over – and it is international law
that provides the impetus, frame and objective of the debate.
The normative frame of cybersecurity and the real-world application of it are not yet
stable enough to ensure convincingly that honest mistakes, serious crimes and international
cyber-attacks are effectively stopped. This should worry us. As Dutch Ruppersberger, a
long-time Democratic US Congressman from Maryland, was quoted in 2013 by Rolling
Stone magazine as saying: ‘People ask me all the time, “What keeps you up at night?”
And I say, “Spicy Mexican food, weapons of mass destruction, and cyber attacks”’.82
Changing one’s diet might help with the first issue, arms reduction and arms control
regimes with the second, and an international law-based global cybersecurity regime with
the third: less chili then, and a more nuanced normative order of cybersecurity with clear
commitments for each stakeholder.

82  Dutch Ruppersberger (D-Maryland), US Congressman, as quoted in Steven Hsieh, ‘Congress is trying to kill Internet privacy again’, Rolling Stone Magazine, 13 February 2013, available at www.rollingstone.com/politics/news/congress-is-trying-to-kill-internet-privacy-again-20130213.



8.  First do no harm: the potential of harm being
caused to fundamental rights and freedoms by
state cybersecurity interventions
Douwe Korff

1. INTRODUCTION
1.1 General

We live our lives increasingly not only in the physical, but also in the digital world, in an
augmented reality in which our actions in the physical world have become inseparably
intertwined with our actions on the Internet, linked to the Internet, and in the wider ‘cyber
space’. The Internet of Things (IoT) (or even ‘of Everything’) means we are surrounded
by, monitored by, and to an increasing extent controlled by devices and sensors that
interact with us, and with each other – and with third parties including both commercial
and state agencies. Those devices, and those third parties moreover increasingly use self-
learning algorithms (or in the latest faddish term ‘artificial intelligence’) in the taking of
decisions about us.1
But this new digital environment is vulnerable, in many ways. The creators of the
Internet did not and could not fully foresee the ubiquity and centrality to the modern age
of their creation. The digital environment suffers from technical weaknesses, weaknesses
in governance, and lack of appropriate legal frameworks and laws. Storms and floods
can destroy the infrastructures on which the digital environment relies. ‘Bugs’ can make
them fail. Criminals and terrorists seek to exploit the weaknesses – as do states and state
agencies. Current laws are not ‘fit for purpose’ in the digital context.
This has led in recent years to major efforts to try and strengthen the digital environ-
ment, and to create tools to counter, prevent or at least minimize the harm caused by
technical deficiencies, natural disasters and deliberate attacks. Oxford’s Global Cyber

1  See Ian Brown and Douwe Korff, The Internet of Things: Security-, Privacy- & Data Protection Risks (Organisation for Economic Cooperation and Development (OECD), 2015) (not yet online). As to the term ‘artificial intelligence’, Peter Sommer rightly issues this warning: ‘[M]any claims are made for “artificial intelligence” techniques. But most computer scientists as opposed to marketing droids no longer use the phrase “artificial intelligence” or its contraction “AI” because concepts of what it is keep on changing in the light of developments in computer science and investigations by biological scientists in how the human brain actually works. Moreover, AI consists of a number of separate techniques all with their own value but also limitations. It can include pattern recognition in images, the identification of rules in what initially appears to be random data, data mining, neural networks, and machine learning in which a program follows the behaviour of an individual or event and identifies patterns and linkages. And there are more and there are also many overlaps in definitions and concepts.’ Peter Sommer, blogspot, 3 February 2018, available at https://pmsommer.blogspot.co.uk/.

Douwe Korff - 9781785367724
Downloaded from Elgar Online at 12/18/2020 12:50:25AM
via New York University

WAGNER_9781785367717_t.indd 129 13/12/2018 15:25



Security Capacity Centre (GCSCC) was established in 2013 with just that aim: to help
states and state agencies increase their defensive capacities and capabilities in this regard.
However, it recognized from the start that such capabilities could also undermine human rights and values, and that its efforts should also seek to ensure that respect for those is not undermined:2

[GCSCC’s] work is focused on developing a framework for understanding what works, what
doesn’t work and why – across all areas of cybersecurity capacity. This is important so that
governments and enterprises can adopt policies and make investments that have the potential to
significantly enhance safety and security in cyberspace, while also respecting core human rights’
values and interests, such as privacy and freedom of expression.

In 2016, a number of scholars from the Centre released a paper that sought to
define ‘cyber harm’ and associated concepts and provide a taxonomy and means to
measure such harm.3 This chapter is based on, updates and revises a blog entry by
the author, written in response to that paper (hereafter referred to as ‘the Oxford
paper’).
It starts with a discussion of that term and those concepts, and that taxonomy.
However, it then focuses on the potential harm that can be caused – and the actual harm
that has already been caused – by the actions in cyberspace of state actors aimed (or
purporting to be aimed) at countering ‘cyber harm’ by others; on what the Oxford paper
calls ‘cybersecurity interventions’ by states and state agencies, or demanded by such agen-
cies of private actors. It looks at those in five overlapping contexts: ‘cyberwarfare’; mass
surveillance by intelligence agencies; blocking of Internet content; policing by algorithm;
and the gathering by law enforcement agencies of information and evidence relating to
criminal investigations in cyberspace.
Some brief summaries and conclusions are provided at the end.
The aim of this chapter is to warn: We should not, in trying to protect the world and our
countries and citizens from ‘rogue states’, terrorists and cybercriminals, harm the values
underpinning societies based on the rule of law. The conclusions are intended simply to
stimulate debate on that dilemma.

1.2  ‘Cyber Harm’, ‘Cybersecurity’ and ‘Cybersecurity Interventions’

The Oxford paper defines ‘cyber harm’ as follows:4

Cyber harm is generally understood as the damaging consequences resulting from cyber-events,
which can originate from malicious, accidental or natural phenomena, manifesting itself within
or outside of the Internet.

2  See www.oxfordmartin.ox.ac.uk/cybersecurity.
3  Ioannis Agrafiotis, Maria Bada, Paul Cornish, Sadie Creese, Michael Goldsmith, Eva Ignatuschtschenko, Taylor Roberts and David M. Upton, Cyber Harm: Concepts, Taxonomy and Measurement, Saïd Business School WP 2016-23 (August 2016), available at https://ssrn.com/abstract=2828646 (‘the Oxford paper’).
4  Ibid. 2.



State cybersecurity interventions  131

The concept is deliberately distinguished from, but still closely related to, concepts such as
‘cyber risk’, ‘cyber threat’ and ‘cyber-attack’, but broadly encompasses ‘the harm caused
by inadequate cybersecurity’ (ibid.).
‘Cybersecurity’ is generally described in very broad terms such as ‘protecting the cyber
environment and organization and users’ assets’ (International Telecommunication Union). Within that broad sweep, it is held by different entities to cover many specific matters including: protecting both critical national infrastructure and non-critical private ICT infrastructure from state- or criminal actors’ attacks (i.e., ranging from ‘cyberwarfare’ through DDoS attacks on companies’ websites to minor but still harmful ‘hacking’, etc.); protecting such infrastructure from natural phenomena such as flooding; and protecting state bodies, companies and individuals from cybercrime (itself very
broadly defined in the Cybercrime Convention,5 further discussed later in this chapter),
which need not as such affect any cyber infrastructure (e.g. posting ‘hate speech’ on the
Internet).
Actions aimed at enhancing cybersecurity and countering cyber threats consequently
also cover a broad range, from national defence strategies involving ‘cyberwarfare’ and
states’ national security agencies’ mass surveillance to detect ‘terrorists’ (however defined)
or other (even less defined) real or perceived opponents of the state or society, through
provision of software to detect and stop infections of PCs by malware or the taking over of IoT devices for use in ‘botnets’, to actions by law enforcement agencies in cyberspace to
detect possible ‘cybercrimes’ – and other crimes – and collect evidence about them and
their suspected perpetrators.
The ‘cyber events’ mentioned in the definition of ‘cyber harm’, above, therefore include
the actions of malicious non-state actors (black-hat hackers and other cybercriminals, terrorists, etc.), as well as those of state actors such as military signals intelligence (SIGINT)
agencies engaged in (preparation for) cyberwarfare; (other) national security agencies
engaged in counter-terrorism; and law enforcement agencies trying to counter cybercrime
(or looking for evidence in cyberspace, e.g. in ‘cloud’ facilities used by suspects, in other-
wise not cyber-related crimes). And they also include actions required of private actors
(in particular, Internet companies) by law, or which they are more insidiously prompted
to take by states (and international and regional organizations including the European
Union (EU)) without being formally required to take them.6

2.  POTENTIAL FOR HARM POSED BY STATE CYBERSECURITY INTERVENTIONS IN FIVE CONTEXTS

No-one doubts the serious, indeed potentially global harm that can be caused by the
actions in cyberspace of rogue states, terrorists and serious criminals, or the pressing need
to try and minimize that harm. All this is well addressed in the Oxford paper. However, in

5  Council of Europe Convention on Cybercrime, Budapest, 23 November 2001, CETS No. 185, also known as the ‘Budapest Convention’, available at www.coe.int/en/web/conventions/full-list/-/conventions/treaty/185.
6  See in particular the section on general monitoring of communications in order to block ‘undesirable’ content (2.3 below).

Douwe Korff - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:25AM
via New York University

WAGNER_9781785367717_t.indd 131 13/12/2018 15:25


132  Research handbook on human rights and digital technology

trying to prevent or mitigate such harm, potential negative effects of the counter-measures
should not be ignored. State cybersecurity interventions, aimed at avoiding harm, can
themselves harm, if they impinge on fundamental rights and freedoms – specifically:
the rights and freedoms guaranteed in international human rights instruments such as
the International Covenant on Civil and Political Rights (ICCPR) or (in Europe) the
European Convention on Human Rights (ECHR) and the EU Charter of Fundamental
Rights – in ways that are unnecessary or disproportionate.
State cybersecurity interventions can cause harm to fundamental rights and interests
in five (overlapping) contexts:

● state-on-state cyber-attacks (see 2.1 below);
● mass surveillance and the undermining of encryption by intelligence agencies to counter terrorism and protect national security (2.2);
● general monitoring of communications in order to block ‘undesirable’ content (2.3);
● policing (including predictive policing) by algorithm (2.4); and
● the gathering by law enforcement agencies of information and evidence relating to criminal investigations in cyberspace (2.5).

The first and last of these are covered by established (although traditionally clearly dis-
tinct) legal regimes, respectively: by international law on the use of force (ius ad bellum)
and international humanitarian law (IHL, ius in bello); and by criminal and police
law and procedure, and in relation to cross-border investigations, by the Cybercrime
Convention and so-called Mutual Legal Assistance Treaties (MLATs). However, the
application of the traditional rules to actions in the new digital context still poses serious
challenges.
The other areas are much less clearly defined and much less legally regulated under
current laws, and national and international legislators and rule-makers are still struggling
to draw up rules that are suited to address them.

2.1  State-on-State Cyber-Attacks

The threat of ‘cyberwars’ (both ‘hot’ and ‘cold’) has made newspaper headlines for some years, and continues to do so.7 Of course, the very aim of warfare is to cause harm to the enemy, and cyberwarfare is no different in that respect. To the extent that
actions would deliberately target civilian infrastructure, such attacks would already con-
stitute war crimes under international humanitarian law. But the international legal rules

7  See e.g., the following reports of 2016: ‘Vladimir Putin fires the first shot in “cyber war with US”’, Daily Star, 17 August 2016; ‘Barack Obama warns of Cold War-style “cyber arms race” with Russia’, Daily Telegraph, 5 September 2016; ‘Obama tells CIA to prepare for cyber war with Russia’, ZeroHedge, 14 October 2016; ‘Russia slams “unprecedented, insolent” US cyber threats, vows retaliation’, ZeroHedge, 15 October 2016. Those were not isolated reports. At the time of the revising and updating of this chapter, the UK Defence Secretary warned that Russia could cause ‘thousands and thousands and thousands of deaths’ by crippling UK infrastructure: see BBC News, www.bbc.co.uk/news/uk-42828218 (28 January 2018); Daily Mirror, www.mirror.co.uk/news/politics/russia-wants-cause-thousands-thousands-11916740 (27 January 2018).



State cybersecurity interventions  133

in these regards are also being further developed and refined in relation to cyberspace, in
particular in NATO’s so-called Tallinn Manual (now in its second version), which at least
acknowledges that the rules of war also apply in cyberspace.8
Three points may be made. First, there is the problem of attacks being perpetrated
by non-state actors reportedly backed by states, in non-declared wars and ‘asymmetric
conflicts’. That problem already existed in the offline world, and there is some law on it,9
but it is particularly pressing in the digital environment, in which attribution of actions –
and attacks – to specific, identified actors is often difficult if not impossible. This makes
it difficult to know whether, in a particular situation, the rules of war apply to a cyber
incident, or not.
Secondly, it is often difficult to differentiate between offensive and defensive measures,
actions and products, and again, this problem is particularly acute in relation to cyberspace.
The very tools that are being developed to prevent or counter a cyber-attack by a foreign
state can also be used to attack that state itself. Enhancing one’s cyberwarfare defensive
capabilities can thus easily develop into a cyber arms race. As the UK’s Chancellor of the
Exchequer (Finance Minister), Philip Hammond, said when announcing major ‘cyber­
security’ funding in response to cyber-attacks on targets in the United Kingdom attributed
by the UK’s security agencies to Russia (although the latter denied responsibility):10

The ability to detect [and] trace [cyber attacks] and retaliate in kind is likely to be the best deterrent . . . We will not only defend ourselves in cyberspace, we will strike back in kind when we are attacked. (Emphasis added)

This fails to note the strong international legal rules (now considered customary law)
against reprisals, in particular against protected persons or infrastructure.11 If this
approach becomes common, state cyberwarfare interventions, even if supposedly only
defensive, will cause more harm than good: they will lead to more, not fewer, and more harmful, not less harmful, actions by the protagonists, including against civilian targets.
And third, there is the phenomenon of states (or, again, non-state actors allegedly
acting on behalf of or under the direction of states) interfering in the political processes

8  As it is put on the website of NATO’s Cooperative Cyber Defence Centre of Excellence (CCDCOE): ‘The Tallinn Manual on the International Law Applicable to Cyber Warfare, written at the invitation of the Centre by an independent “International Group of Experts”, is the result of a three-year effort to examine how extant international legal norms apply to this “new” form of warfare’. The full text of the first version is available at https://ccdcoe.org/tallinn-manual.html. A second, updated version, Tallinn Manual 2.0, was released in February 2017, see https://ccdcoe.org/tallinn-manual.html; www.cambridge.org/gb/academic/subjects/law/humanitarian-law/tallinn-manual-20-international-law-applicable-cyber-operations-2nd-edition?format=PB#CL1D2IHOLTeORjZQ.97 (not available for free).
9  See, in particular, the International Court of Justice’s judgment in Case Concerning Military and Paramilitary Activities in and against Nicaragua (Nicaragua v. USA), Merits, 27 June 1986.
10  ‘UK must retaliate against cyber-attacks says chancellor’, BBC News, 1 November 2016, available at www.bbc.co.uk/news/technology-37821867.
11  See in particular Rule 145 of the International Committee of the Red Cross (ICRC) database of rules of customary international humanitarian law, summarized as ‘Where not prohibited by international law, belligerent reprisals are subject to stringent conditions’. For details, see https://ihl-databases.icrc.org/customary-ihl/eng/print/v1_rul_rule145.


in other states, through overt propaganda or the broad or targeted dissemination of ‘fake
news’ in ‘cyberspace’. If in earlier wars, or the run-up to earlier wars, state propaganda
was an acknowledged tool, then this new(ish) phenomenon can also be seen as part of
belligerent actions; and counter-measures as ‘cyber interventions’ in the context of state-
on-state conflict, i.e. as part of ‘cyberwarfare’. However, for now, these counter-measures
are still mainly taken outside of the parameters of the laws of war and presented as
being adopted within the framework of the ‘normal’, peacetime legal framework (albeit
influenced by the ‘semi-permanent quasi-emergency’ mindset noted in the next section).
Those counter-measures will therefore be discussed next in this chapter. Here, it may
suffice to note that they could easily escalate in ‘hotter’ conflict contexts, to become part
of a state’s defence operations. On the Internet, too, truth – already undoubtedly under
attack – is likely to become the first victim of war, indeed before any shots are fired.
If one wants to measure the ‘maturity’ of the cybersecurity capacities of states in relation to cyber warfare, the potential harms of any policies of retaliation and escalation adopted by them, and the consequent risk of serious harm to civilian (enemy) populations, should be part of the calculation.

2.2  Mass Surveillance and the Undermining of Encryption by Intelligence Agencies to Counter Terrorism and Protect National Security

2.2.1  Spying in the olden days and in the new digital environment
Spying on a declared enemy at times of war is generally regarded as lawful under interna-
tional law. However, outside of wars, and indeed even to some extent during them, spies
have tended to work essentially ‘outside the law’ – as their traditional name, ‘clandestine
services’ already indicates. As Field Marshal Montgomery put it in his seminal work on
the history of warfare:12

In all secret service activities, which are handled by the central government, the operations of
spies, saboteurs and secret agents generally are regarded as outside the scope of national and
international law. They are therefore anathema to all accepted standards of conduct.

Even the services themselves and their leading officials often remained in the dark:
the United Kingdom did not formally acknowledge the existence of its secret services
(effectively established in the run-up to World War I) until 1992; the head of the foreign
intelligence service, MI6, was only known as ‘C’.13
The obscurity extended beyond ‘secret agents’ spying on real or potential targets abroad
to domestic intelligence agencies, often known as the ‘secret police’ (or more euphemisti-
cally, in the United Kingdom, as ‘Special Branch’ (within the police) and ‘security service’
(MI5, technically within the army: the abbreviation stands for Military Intelligence,

12  Field Marshal Montgomery of Alamein, A History of Warfare (new edn, Jane’s, 1982) 17.
13  See Keith Jeffery, MI6: The History of the Secret Intelligence Service 1909–1949 (2011). The use of ‘C’ to denote the head of the service stems from Captain Sir Mansfield George Smith Cumming, KCMG, CB (1 April 1859–14 June 1923), who was the first director of what would become the Secret Intelligence Service (SIS), also known as MI6. See also Simon Usborne, ‘Top secret: a century of British espionage’, Independent, 5 October 2009, available at www.independent.co.uk/news/uk/politics/top-secret-a-century-of-british-espionage-1798168.html.


Section 5)).14 While the secret police forces have undoubtedly, at times, indeed protected
their nations’ security, their histories are also riddled with transgressions and abuses – not
just in acknowledged authoritarian systems such as the Soviet countries, but also in the leading Western democracies and their colonies and, since the independence of the latter and of the former Soviet satellites, in the newly independent countries; and not just during the Cold War.
These problems with the operations of ‘clandestine’ agencies are exacerbated in the
modern age in several ways.
First, as noted in a recent report, there is a trend toward countries undertaking cyber
interventions in relation to terrorism under what could be called ‘semi-permanent states
of quasi-emergency’:15
[I]n almost all [of the 14 countries reviewed], the authorities are given extremely wide-ranging
powers at times of war and national emergencies ‘threatening the life of the nation’ (to use the
words of the international human rights treaties). However . . . most of the special laws [allowing
surveillance and other cyber interventions] we have examined do not purport to be limited to
times of war or such national emergencies.
  Rather, the mass surveillance powers are granted in laws that are supposed to apply within
the normal constitutional frameworks – yet at the same time, they challenge these frameworks.
Thus, laws that would not normally be deemed acceptable are becoming an ingrained part of
the permanent legal fabric of the countries surveyed. They are creating a ‘semi-permanent
quasi-emergency’ legal framework, not fully in accordance with the normal rules but also not
formally seen as emergency law.

The trend is not new: the author coined the phrase in a presentation of an Amnesty
International report on emergency law to the Council of Europe as far back as 1980.16
But the trend has clearly accelerated since ‘9/11’ and the recent extremist jihadist attacks in Europe, Asia, Africa and the Middle East.
Second, in this trend, the lines between traditionally, or supposedly, distinct areas –
external defence, ‘signals intelligence’ and ‘external intelligence’; internal counter-terrorism
(action against what used to be called domestic subversion) and external counter-terrorism;
and law enforcement – are increasingly blurred, also in terms of institutions: in the United
States, the FBI is now formally presented as both a law enforcement and a national secu-

14  More chillingly, in Nazi Germany and the countries it occupied and repressed, the agency was known as the secret state police or Geheime Staatspolizei, GESTAPO; in the German Democratic Republic as the state security service, Staatssicherheitsdienst or StaSi. For the long series of acronyms used in Soviet Russia (Cheka, NKVD, GPU, GUGB, KGB, etc., in which ‘GB’ stands for gosudarstvennoy bezopasnosti, i.e. state security), see https://en.wikipedia.org/wiki/Chronology_of_Soviet_secret_police_agencies. In the United States, the FBI was and is the prime institution dealing with internal security threats. See n. 17 below.
15  See the global comparative report covering Colombia, DR Congo, Egypt, France, Germany, India, Kenya, Myanmar, Pakistan, Russia, South Africa, Turkey, United Kingdom, United States, prepared for the World Wide Web Foundation by Douwe Korff, Ben Wagner, Julia Powles, Renata Avila and Ulf Buermeyer, Boundaries of Law: Exploring Transparency, Accountability, and Oversight of Government Surveillance Regimes (World Wide Web Foundation, January 2017), available at https://ssrn.com/abstract=2894490.
16  Submission by Amnesty International to a Conference of the Parliamentary Assembly of the Council of Europe on ‘Defence of Democracy against Terrorism in Europe: Tasks and Problems’, presented by Douwe Korff as Head of Europe Region of AI, 12–14 November 1980, Council of Europe Document AS/Pol/Coll/Terr(32)26.


rity/intelligence agency;17 in the United Kingdom, the intelligence communications agency GCHQ is now by statute authorized to be directly involved with law enforcement agencies in the fight against terrorism and other crimes, such as distribution of child pornography;18 and the same is the case in many other countries.
Third – and this brings us to the central topic of this chapter – because of these trends,
the potential harms that can be caused by cyber interventions by the secret agencies in
secret operations are spreading from relatively narrow ranges of potential victims in special
times and places (such as active war zones) to entire populations in countries essentially
at peace (even if under threat from terrorist activity).

2.2.2  Risks of harm


Edward Snowden exposed the global mass surveillance systems operated by the United
States, the United Kingdom and their ‘Five Eyes’ partners, Australia, Canada and New
Zealand; and in the wake of these revelations similar mass surveillance activities have
been exposed, perpetrated by the national security agencies of, inter alia, Germany,
France and South Africa (at times without the approval or even knowledge of their own
governments or parliaments or oversight committees). In fact, many countries supposedly
under the rule of law appear to be engaged in such activities (acknowledged or otherwise),
and authoritarian states including Russia and China doubtless are no less active in this
respect, subject to, if anything, even less legal constraint.19
The measures in question, including the interception of (very broadly-defined)
communications metadata and contents in bulk (at least on foreign targets), and the
deliberate undermining of encryption in order to be able to read encrypted communica-
tions, are generally presented as aimed at protecting national security and countering
global, regional or domestic terrorism – and strongly defended as absolutely necessary to
those ends. Thus, in 2016, the United Kingdom adopted what has been called ‘the most
draconian surveillance law in the history of democracy’, creating new obligations on
ISPs to retain full Internet browsing histories of all users and make them available to a
range of authorities on demand, and new powers to ‘hack’ computers.20 These extremely

17  The FBI website used to state explicitly that ‘The FBI has dual responsibilities as a law enforcement and intelligence agency’, but is now less explicit, although its mission still clearly covers both. See www.fbi.gov/about/mission. The press noted some years ago that the FBI had changed an FBI Fact Sheet to describe its ‘primary function’ as no longer ‘law enforcement’, but now ‘national security’: The Cable, 5 January 2014, available at http://thecable.foreignpolicy.com/posts/2014/01/05/fbi_drops_law_enforcement_as_primary_mission#sthash.4DrWhlRV.dpbs. For the dangers inherent in such blurring of the lines, see www.foreignpolicy.com/articles/2013/11/21/the_obscure_fbi_team_that_does_the_nsa_dirty_work. See also n. 49 below.
18  See the announcement of a new joint police (National Crime Agency, NCA) and GCHQ unit, available at www.nationalcrimeagency.gov.uk/news/736-gchq-and-nca-join-forces-to-ensure-no-hiding-place-online-for-criminals. Also, more critically (‘[The new unit] won’t just be targeting child pornography though, but also “serious criminals”’): see https://motherboard.vice.com/en_us/article/wnxeyn/the-uk-will-police-the-dark-web-with-a-new-task-force.
19  For a comparative survey of law and practice in Colombia, DR Congo, Egypt, France, Germany, India, Kenya, Myanmar, Pakistan, Russia, South Africa, Turkey, the United Kingdom and the United States, see Korff et al., Boundaries of Law, n. 15 above.
20  ‘United Kingdom passes the most draconian surveillance law in the history of democracy’, The Register, 21 November 2016, available at www.philosophers-stone.co.uk/?p=15861; ‘UK passes


intrusive powers were claimed to be necessary, and indeed ‘essential’, to fight terrorism
and serious crime.21
Yet those broad surveillance and intrusion measures also undoubtedly constitute seri-
ous interferences with fundamental rights. In its judgments in the Digital Rights Ireland
and Safe Harbor cases,22 the Court of Justice of the EU (CJEU) made clear that:23
Legislation is not limited to what is strictly necessary [and thus incompatible with the EU
Charter of Fundamental Rights] where it authorises, on a generalised basis, storage of all the
personal data of [large numbers of people] without any differentiation, limitation or exception
being made in the light of the objective pursued and without an objective criterion being laid
down by which to determine the limits of the access of the public authorities to the data, and of
its subsequent use, for purposes which are specific, strictly restricted and capable of justifying
the interference which both access to that data and its use entail.
  In particular, legislation permitting the public authorities to have access on a generalised basis
to the content of electronic communications must be regarded as compromising the essence of
the fundamental right to respect for private life, as guaranteed by Article 7 of the Charter.

In the light of these judgments, most of the laws covering surveillance by states’ secret
intelligence services must be regarded as blatantly in violation of international human
rights law. (In some states, those activities are not regulated by law at all, or take place in
violation of domestic law; that of course is even more fundamentally contrary to human
rights law, being in breach of the basic principle of legality.)
Such unlawful laws (or practices outside of the law) unlawfully undermine privacy, in
that state agencies can see the most intimate minutiae of people’s lives without justifica-
tion. They equally unlawfully undermine data protection by greatly adding to the power
of the ‘data-haves’ (in this case, the states involved), at the expense of the power of the

new “draconian” mass surveillance law’, SOFREP News, 20 November 2016, available at https://
sofrep.com/68015/uk-passes-new-draconian-mass-surveillance-law/.
21  See e.g., the comment by the head of the UK’s domestic intelligence service, MI5, Andrew Parker, in ‘the first newspaper interview given by a serving spy chief’ that the new statutory bulk surveillance powers being introduced in the United Kingdom, the use of which without clear statutory authority had been held to be unlawful, are necessary and struck ‘the right balance’: ‘MI5 head: “increasingly aggressive” Russia a growing threat to UK’, Guardian, 1 November 2016, available at www.theguardian.com/uk-news/2016/oct/31/andrew-parker-increasingly-aggressive-russia-a-growing-threat-to-uk-says-mi5-head. As the then Home Secretary, Theresa May MP (now the Prime Minister), put it: ‘The Bill ensures that the security and intelligence agencies and law enforcement continue to have the powers they need to keep us safe – and no more’. Those powers are, in her view, ‘essential to counter the threat from criminals and terrorists’. ‘Foreword’ to the Government Response to Pre-Legislative Scrutiny [of the Investigatory Powers Bill] (1 March 2016) 2, available at www.gov.uk/government/uploads/system/uploads/attachment_data/file/504174/54575_Cm_9219_WEB.PDF.
22  Respectively: C-293/12 Digital Rights Ireland, CJEU, Judgment of 8 April 2014; C-362/14 Schrems, CJEU, Judgment of 6 October 2015. The main considerations of the Court in these cases are summarized in Korff et al., Boundaries of Law, n. 15 above, 23–25, under the heading ‘Case Law of the CJEU’.
23  The quotation is from para. 93 of the Schrems/Safe Harbor judgment, echoing para. 54 ff. of the Digital Rights Ireland judgment. In the Digital Rights Ireland case, the population in question were all citizens of the EU; in the Schrems/Safe Harbor case it was all the persons whose data had been transferred from the European Union to the United States under the Safe Harbor agreement between the EU and the United States.


‘data-have-nots’ (those placed under such surveillance). Beyond that, mass surveillance
threatens freedom of expression and freedom of (online) association. Moreover, the
means used for such ubiquitous surveillance, in particular, effectively uncontrolled and
uncontrollable ‘backdoors’ into electronic communication systems, can be and have been
abused by those with access to them.24
The undermining of IT security by states, through ‘hacking’, ‘backdoors’ and the
­breaking of encryption, also fundamentally weakens the already fragile security of the
global digital infrastructure: once the intelligence agencies can break open supposedly
secure servers and cables, others will be able to do that too. And, of course, not all states
are benign.
For all these reasons, much stricter regulation of the secret services has become pressing – but has come up against stiff resistance from the main international actors, as noted next.

2.2.3  Attempts to regulate the spies


At the international level, only very recently, following the Snowden revelations, are
some tentative steps being taken to adopt an international-legal framework for secret
intelligence services. In 2015, a former head of the German external intelligence service,
the Bundesnachrichtendienst (BND), Mr Hansjörg Geiger, proposed that an ‘intelligence
codex’ be adopted at the international level to cover the activities of national security
agencies.25 In 2016–2017, a draft legal instrument on surveillance was prepared as part of the EU-supported MAPPING project; it is still under discussion.26
The proposal was welcomed by the UN Special Rapporteur on Privacy, Joe Cannataci,
in his 2017 report to the UN General Assembly, in which he called for the adoption of ‘a
legal instrument regulating surveillance in cyberspace . . . complementary to other pieces
of existing cyberlaw such as the Cybercrime Convention’.27
But these are all still tiny, tentative steps. As the Special Rapporteur on Privacy (SRP) noted:28

the evidence available to the SRP would suggest that a number of states, even some leading
democracies, regrettably treat the Internet in an opportunistic manner, as somewhere where their
LEAs [law enforcement agencies] and especially their SIS [secret intelligence services] can operate
relatively unfettered, intercepting data and hacking millions of devices, (smart-phones, tablets
and laptops as much as servers) world-wide. In doing so, approximately 15–25 states treat the

24  For one example of such abuse, in Georgia, see the documents prepared for the Transparency International Georgia Conference, ‘Secret Surveillance and Personal Data Protection – Moving Forward’, held in Tbilisi on 24 May 2013, available at http://transparency.ge/en/node/3068.
25  See Pieter Omtzigt, Rapporteur of the Parliamentary Assembly of the Council of Europe (PACE) on Mass Surveillance, Explanatory Memorandum, s. 5.2 (paras 115–18), attached to PACE Resolution 2045(2015) on Mass Surveillance, adopted on 21 April 2015, available at http://assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-en.asp?fileid=21583&lang=en. The text of the recommendation itself is available at http://assembly.coe.int/nw/xml/XRef/Xref-XML2HTML-EN.asp?fileid=21692&lang=en.
26  See https://mappingtheinternet.eu/node/146.
27  Joseph A. Cannataci, Report of the Special Rapporteur on the Right to Privacy, A/HRC/34/60 (24 February 2017) 20, para. j.
28  Ibid. 19, para. e.


Internet as their own playground over which they can squabble for spoils, ever seeking to gain
the upper hand whether in terms of cyber-war, or espionage or counter-espionage, or industrial
espionage. The list of motivations goes on while the other 175-odd states look on powerless,
unable to do much about it except hope that somehow cyber-peace will prevail.

As he felt compelled to ‘state . . . frankly’: a ‘tiny minority of states’ actively tried to
undermine his efforts at international regulation.29 He rightly stressed that:30

What the world needs is not more state-sponsored shenanigans on the Internet but rational, civilised
agreement about appropriate state behaviour in cyberspace. (Original emphasis)

But given the clout of that ‘tiny minority of states’, unnamed by the Special Rapporteur
but undoubtedly including the United States and the United Kingdom, which have major
control of the Internet infrastructure and cables,31 and presumably also France, Russia
and China (i.e. all the permanent members of the UN Security Council), that may still be
somewhat of a pious wish.
It is precisely because those ‘shenanigans’ are unlikely to be formally curtailed in an
international treaty to which those states would sign up, that this chapter warns against
the continuing potential risks.
The use of mass ‘cyber’ surveillance and Internet-security-undermining tools as part of
‘cybersecurity interventions’ by state intelligence agencies aimed at protecting national
security and countering terrorism inherently poses serious risks of harm to the interests and
fundamental rights and freedoms of the populations that are being watched and whose ICT
infrastructure and devices are being weakened.

2.3  General Monitoring of Communications in Order to Block ‘Undesirable’ Content

Increasingly, demands are made that ‘something be done’ about ‘undesirable’ and
‘harmful’ material on the Internet: online child abuse images and other criminal
pornography; ‘extremism’; ‘incitement to violence’; ‘hate speech’; and, more recently,
‘fake news’. Organizations representing holders of intellectual property rights similarly
demand that measures be taken to prevent the sharing of IP-protected materials online.
There is a widespread assumption that the ‘Internet Giants’ (Google, Apple, Facebook,
Twitter) have the means and resources to meet these demands, and should be forced to
do so.32
Yet it is problematic to leave such action to the discretion of the companies that provide

29  Ibid. para. f.
30  Ibid. para. g.
31  See Council of Europe Commissioner for Human Rights, Issue Paper on The Rule of Law on the Internet, CommDH/IssuePaper(2014)1 (December 2014) (prepared by Douwe Korff) ss. 2.2.2 (‘The Cloud that is not in the sky’) and 2.2.3 (‘The real backbones [of the Internet]’), available at https://wcd.coe.int/ViewDoc.jsp?Ref=CommDH/IssuePaper(2014)1&Language=lanAll.
32  Jeremy Darroch, ‘It’s time the internet giants were made accountable’, The Times, 28 November 2017, available at www.thetimes.co.uk/article/it-s-time-the-internet-giants-were-made-accountable-2w9r3brvd.

Douwe Korff - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:25AM
via New York University

WAGNER_9781785367717_t.indd 139 13/12/2018 15:25


140  Research handbook on human rights and digital technology

the platforms for the sharing of such content, both in terms of standards and in terms
of processes.33
Special problems, relevant to this chapter, arise in relation to what the Special
Rapporteur calls ‘automation or algorithmic filtering’. The typical example given is
the use of Microsoft’s ‘PhotoDNA’ to detect online child abuse pictures.34 However,
there are two main problems with such tools. First of all, as Peter Sommer explains in
a recent blog, while artificial intelligence (AI) tools may be able to identify materials
that have already been judged to constitute online child abuse pictures, by matching
hashes (though not even then if the material is modified), it is much more difficult to
use them to try and identify text or images that may constitute objectionable or even
illegal content, i.e. where some judgment is required. In such cases, the context will
often be critical:35

The same photo may appear on a site promoted by a terrorist group and by a news organisa-
tion. Some sexually explicit photos may be justified in the context of medical and educational
research – or law enforcement. . . .
  How do you reliably distinguish a 16-year-old from an 18-year-old, and for all ethnicities?
How does an AI system distinguish the artistic from the exploitative or when in a sexual situation
there is an absence of consent?
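The fragility of hash matching that Sommer describes can be sketched in a few lines. This is an illustrative sketch only: PhotoDNA itself uses a proprietary perceptual hash designed to survive some modifications, whereas the ordinary cryptographic hash used here fails on any change at all.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Exact (cryptographic) hash: any change to the input yields a completely different digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-in for the bytes of a known, already-judged image
original = b"\x89PNG...bytes of a known, previously judged image..."
modified = bytearray(original)
modified[0] ^= 0x01  # a single altered bit, e.g. from re-encoding or cropping

blocklist = {sha256_hex(original)}  # database of hashes of known material

print(sha256_hex(original) in blocklist)         # True: exact copies are caught
print(sha256_hex(bytes(modified)) in blocklist)  # False: a trivially modified copy slips through
```

Perceptual hashes narrow this gap for known images, but, as the chapter notes, no hash of any kind can identify new material whose illegality depends on judgment and context.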

This applies also, and especially, to automated systems that use so-called natural language
processing (NLP) tools for analysing the text of social media posts. As the Center for
Democracy and Technology puts it:36

Today’s tools for automating social media content analysis have limited ability to parse the
nuanced meaning of human communication, or to detect the intent or motivation of the speaker.
Policymakers must understand these limitations before endorsing or adopting automated content
analysis tools. Without proper safeguards, these tools can facilitate overbroad censorship and
biased enforcement of laws and of platforms’ terms of service.
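A toy example makes this point concrete. The keyword approach below is a deliberately naive sketch (real systems use statistical NLP models, but they exhibit the same failure mode in subtler forms): it cannot distinguish incitement from reporting about the same event.

```python
# Deliberately naive content filter: flags any post containing a blocklisted token.
BLOCKLIST = {"attack", "bomb"}

def naive_filter(post: str) -> bool:
    """True if the post contains a blocklisted word, with no regard for context or intent."""
    tokens = {word.strip(".,!?;:").lower() for word in post.split()}
    return bool(tokens & BLOCKLIST)

incitement = "Everyone should attack the convoy tomorrow"
reporting = "Journalists condemned the attack and mourned the victims"

print(naive_filter(incitement))  # True: flagged, as intended
print(naive_filter(reporting))   # True: a news report is flagged too, a false positive
```

Both posts are flagged identically: the filter has no access to the speaker’s intent, which is precisely the limitation the CDT report describes.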

33  See the UN Special Rapporteur on Freedom of Expression, David Kaye’s, Concept Note on Content Regulation in the Digital Age, prepared for his June 2018 Human Rights Council Report, available at https://freedex.org/wp-content/blogs.dir/2015/files/2017/09/Concept-Note-Social-Media-Search-and-FOE.pdf. European Digital Rights has noted problems relating to the deletion of perfectly legal content by providers relying on their Terms of Use; to overreliance on ‘trusted flaggers’ of contentious content; to ‘review processes, record-keeping, assessing counterproductive effects, anti-competitive effects, over-deletion of content, complaints mechanisms for over-deletion’; and to the lack of ‘investigation or prosecutions of the serious crimes behind, for example, child abuse’. See https://edri.org/leaked-document-does-the-eu-commission-actually-aim-to-tackle-illegal-content-online/. See also a subsequent article, available at https://edri.org/commissions-position-tackling-illegal-content-online-contradictory-dangerous-free-speech/. On the problems in relation to ‘fake news’, see Tarlach McGonagle, ‘“Fake News”: False Fears or Real Concerns?’ (2017) 35(4) Netherlands Quarterly of Human Rights 203–9, available at http://journals.sagepub.com/doi/pdf/10.1177/0924051917738685.
34  See www.microsoft.com/en-us/photodna.
35  Sommer, blogspot, n. 1 above.
36  Mixed Messages? The Limits of Automated Social Media Content Analysis (Center for Democracy and Technology, November 2017), available at https://cdt.org/files/2017/11/Mixed-Messages-Paper.pdf.



State cybersecurity interventions  141

Even when the standard is clear, the judgment may be difficult. Depictions of people
having sex with animals are clearly defined as illegal in UK law. But does this mean
all historical or even recent depictions of Leda and the Swan (and other depictions of
mythological god/human-beast encounters) must be blocked? (Leda and her swan are
currently not blocked from Google searches.)
In relation to IP-protected material, there is the special, major problem of limits to and
exceptions from such protection, e.g. in relation to fair comment/reporting, criticism,
parody, caricature and pastiche, or to facilitate access to people with disabilities. The
scope and application of those exceptions is difficult to determine in individual cases by
lawyers and courts – and well beyond the capabilities of so-called ‘artificial intelligence’
and NLP.
Put simply: because of their unavoidable limitations and shortcomings, ‘algorithmic
filter’ tools are inherently inappropriate for the purpose of determining whether speech or
text (or pictures or videos) amounts to ‘hate speech’, ‘incitement to violence’, ‘support for
extremism’, etc. – or whether, if it seems to comprise copyright-protected materials, their
use is covered by one of the exceptions in the applicable national law (or indeed to determine
which law that is).
Yet astonishingly, the European Commission is proposing that precisely those tools
are to be used by information society service providers to detect copyright-protected
materials and ‘prevent’ them from being made available on their sites.
Specifically, article 13 of the proposed Copyright Directive,37 if adopted as proposed by
the Commission, would impose on all such providers a legal duty to conclude ‘agreements’
with right-holders (in practice, royalty collecting agencies) under which those service
providers must ‘prevent the availability on their services of works or other subject-matter
identified by right-holders’ (read: as protected by copyright). In other words, they will
be required by law to implement a system to block copyright-protected materials (as
identified by right-holders).
The proposal is seemingly less prescriptive when it comes to the means to be used to
achieve such blocking: it appears to say, in quite general terms, that ‘[t]hose measures’,
i.e. the measures used to block the relevant materials, ‘shall be appropriate and propor-
tionate’. But this is quite simply disingenuous, as is clear from the only example of such
measures mentioned in the text of the draft Directive (article 13(2)): ‘effective content
recognition technologies’.38
In fact, the only way to ‘prevent the availability’ of (i.e. to pre-emptively block) copyright-protected content on the relevant platforms is to use such algorithmic filters.

37  Proposal for a Directive of the European Parliament and of the Council on copyright in the Digital Single Market, COM/2016/0593 final, 2016/0280 (COD), Brussels, 14 September 2016, available at http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52016PC0593.
38  Interestingly, a new ‘possible’ text of art. 13, contained in an EU Presidency Discussion Paper on Article 11 and Article 13 [of the proposed Copyright Directive], distributed to Member State delegations on 6 February 2018, omits the ‘example’ of ‘effective content recognition technologies’: see www.parlament.gv.at/PAKT/EU/XXVI/EU/01/03/EU_10322/imfname_10784225.pdf (see the alternative text at 13). This is likely a further disingenuous attempt to effectively hide the issue: the alternative ‘possible’ text still retains the requirement that ‘online content sharing services’ must ‘take effective measures to prevent the availability on its services of these unauthorised works or other subject-matter identified by the rightholders’ (emphasis added), even though, as noted below, in practice the only way of achieving this is to use algorithmic filters.


This would affect a vast multitude of services, as described by EDIMA.39 Moreover, such
tools can, of course, be used to preventively detect, and then block, any predetermined
content. They are a gift to any government wanting to suppress the free flow and sharing
of information on the Internet.
Not surprisingly, European Digital Rights calls such tools ‘Censorship Machines’.40
The Global Network Initiative (whose members include Facebook, Google, LinkedIn and
Microsoft as well as human rights organizations)41 has adopted the view that:42

Governments should not mandate the use of filters or other automated content evaluation tools
in laws regulating speech.

In sum: The use of ‘algorithmic filters’ (or ‘content recognition’ and/or ‘content
evaluation’ technologies) in order to detect and block objectionable or copyright-protected
content in private sharing platforms must, because of the very function
they are to fulfil, involve the very ‘generalized’ monitoring of the content of
communications of whole populations that the CJEU denounced as incompatible with the
EU Charter of Fundamental Rights in relation to the mass surveillance by states we
discussed earlier.
Such tools are therefore both inappropriate for their stated aim and constitute major
and disproportionate – and thus unlawful – interferences with the fundamental rights of
the people in the populations against which they are used.
For the purpose of this chapter, this means that such tools should not be regarded as
‘cyber capabilities’, let alone promoted as strengthening such capabilities in the fight against
cybercrime. On the contrary, their indiscriminate use for ‘generalized’ monitoring of entire
populations and vast swathes of Internet users should be seen as undermining the rule of law
and fundamental rights in cyberspace.

39  See http://edima-eu.org/wp-content/uploads/2018/01/Services-affected-by-Article-13-Infographic.jpg.
40  See https://edri.org/civil-society-calls-for-the-deletion-of-the-censorshipmachine/. The European Commission belatedly responded to the open letter sent by EDRi and 56 other civil society organizations on 1 February 2018: see https://edri.org/files/copyright/20180201-EC_ReplyOpenLetter-Art13.pdf. For criticism of this response and of the Commission’s continuing attempt to effectively require content-screening filters, see EDRi, 9 February 2018, available at https://edri.org/smashing-the-law-without-breaking-it-a-commission-guide/.
41  ‘GNI is a unique multi-stakeholder forum bringing together Information and Communications Technology (ICT) companies, civil society organizations, investors, and academics to forge a common approach to protecting and advancing free expression and privacy online.’ See www.globalnetworkinitiative.org/; https://globalnetworkinitiative.org/participants/index.php.
42  GNI, Submission to the UN Special Rapporteur on the Right to Freedom of Opinion and Expression, David Kaye, on Content Regulation in the Digital Age (20 December 2017) 8, available at https://globalnetworkinitiative.org/sites/default/files/GNI-Submission-SR-Report-Content-Regulation.pdf. See also https://globalnetworkinitiative.org/news/gni-provides-input-un-report-content-regulation-digital-age.


2.4  Preventive, Predictive Policing

2.4.1  ‘Identifying’ bad people and predicting bad behaviour by sensors


Many police forces are using facial recognition technologies, with stationary CCTV
systems increasingly enhanced by such technologies. The faces of half of the entire adult
US population are already held in police face recognition databases.43 In the United
Kingdom, the government-appointed Biometrics Commissioner found that the police still
held, and used, more than 20 million facial images on their searchable databases, more
than five years after the courts ruled that the inclusion of images of innocent people was
unlawful.44 Some, including the Dubai police, are linking drones equipped with video
cameras to such systems, to identify ‘wanted’ individuals. But the Dubai police have gone
a step further, by developing drones with much more elaborate sensors which, they claim,
can remotely capture the vital signs of targeted individuals, such as their blood pressure
and temperature – which (presumably after being combined with other data and analysed)
will allow the Dubai police to ‘predict if a person is willing to commit a crime or a terrorist
attack or something’.45
The United States is reported to have used similar technologies to operate ‘autonomous
weapon systems’ (or ‘killer drones’) in Pakistan and other conflict areas (although
Pakistan is not technically a war zone, which makes the use of such systems even more
dubious in international law).46
To the extent that such technologies can be said to deploy ‘cyber capacities’ to ‘identify’
potential criminals, or attack enemy combatants or (suspected) terrorists (or even less-
defined actors), they come within the scope of the ‘cyber interventions’ discussed in the
Oxford paper.

43  Georgetown Law Center on Privacy and Technology, Press Release, Half of All American Adults are in a Police Face Recognition Database, New Report Finds (18 October 2016), with link to full report available at www.law.georgetown.edu/news/press-releases/half-of-all-american-adults-are-in-a-police-face-recognition-database-new-report-finds.cfm.
44  ‘Watchdog warns over police database of millions of facial images’, Guardian, 13 September 2017, available at www.theguardian.com/world/2017/sep/13/watchdog-warns-over-police-database-of-millions-of-facial-images. The full report is available at www.gov.uk/government/uploads/system/uploads/attachment_data/file/644426/CCS207_Biometrics_Commissioner_ARA-print.pdf (see paras 300–6).
45  BBC Click, 3 February 2018, downloadable from the BBC iPlayer website until 3 January 2019, at www.bbc.co.uk/iplayer/episode/b09qvd1w/click-law-and-order-dubai#. The claims are made on the programme by Colonel Khalid Alrazooqi, Director-General of Artificial Intelligence, and Lieutenant Mohammed al-Muhairi, Head of Unmanned Aircraft, Dubai Police.
46  See http://arstechnica.co.uk/security/2016/02/the-nsas-skynet-program-may-be-killing-thousands-of-innocent-people/. On the question of the legality of autonomous and automated weapons in terms of international humanitarian law, see Peter Gallagher, When Algorithms Kill (Society for Computers and Law, 24 August 2016), available at www.scl.org/articles/3721-when-algorithms-kill. As Gallagher points out, most of the systems in question could be better described as ‘automated weapons systems’ rather than ‘autonomous weapons systems’, but the latter term is ubiquitously used. He also refers to the Group of Governmental Experts (GGE) on Lethal Autonomous Weapon Systems established by the UN Conference of the Convention on Certain Conventional Weapons (CCW). But this GGE has not yet produced any tangible results. See https://futureoflife.org/autonomous-weapons-open-letter-2017/; www.hrw.org/news/2017/11/28/un-killer-robots-talks-fall-short.


Clearly, such invasive, predictive remote-sensing and even killing technologies pose a serious
risk to individual rights and freedoms. Apart from their chilling effect – many will be deterred
from attending public functions or demonstrations if this means they will be monitored
and evaluated in this way, or even shot at – such systems will also unavoidably suffer from built-in
deficiencies and error rates (as discussed further below) and lead to wrongful arrests and
detentions, and even deaths. If such systems were to be classified as ‘cyber capabilities’ for
the purposes of the Oxford paper, it should not be concluded that their further development
is to be welcomed. On the contrary, given the serious doubts about the legality and morality
of such systems, their development should not be promoted until those issues have been fully
addressed.

2.4.2  ‘Identifying’ bad people and predicting bad behaviour by algorithms


Increasingly, data mining technologies, developed for commercial uses of ‘Big Data’,
are used by secret intelligence services and law enforcement agencies to look for signs
of threats to national security or public security, or of criminal activity, and to predict
such threats or activity and ‘identify’ (i.e. single out) individuals who (the algorithms
say) are likely to pose such threats or engage in such activity in future. The massive data
generated by the state surveillance activities discussed earlier are also used in this way,
often ‘enhanced’ or ‘enriched’ by merging those data with publicly or commercially
available data (e.g. population records, ‘loyalty’ scheme data, credit referencing data) or
data already held by government (e.g. driving licence records or immigration records).
Increasingly, states and regional organizations are requiring private companies to make
available, or even create, records on their customers, for use by the secret or police services.
This is done especially in relation to the large datasets created by airlines in relation to
travel by their customers, so-called passenger name records (PNR).
Even if the information obtained through mass on- or offline surveillance is not inten-
tionally abused, the increased reliance on algorithms to analyse the data and ‘identify’
people suspected of nefarious activities, or of being ‘likely’ to engage in such activities
in future, poses serious risks. In particular, as more extensively discussed elsewhere with
specific reference to the use of PNRs, ‘Big Data analyses’ aimed at detection of rare
phenomena (such as the few terrorists in a large general population) inherently and
unavoidably result in either excessive numbers of ‘false negatives’ (too many of the few
actual terrorists not being identified as such) or excessive numbers of ‘false positives’
(many thousands of innocent people wrongly marked as ‘possibly’ or even ‘probably’
being terrorists), or both.47
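The arithmetic behind this trade-off (the ‘base-rate fallacy’ discussed in note 47) can be sketched with illustrative numbers; the figures below are assumptions chosen for the example, not data from the chapter.

```python
# Base-rate fallacy sketch: screening a large population for a very rare trait.
population = 60_000_000        # assumed population under surveillance
actual_targets = 60            # assumed base rate: roughly 1 in a million
sensitivity = 0.99             # fraction of real targets correctly flagged
specificity = 0.99             # fraction of innocent people correctly cleared

true_positives = actual_targets * sensitivity                        # ~59 targets caught
false_positives = (population - actual_targets) * (1 - specificity)  # ~600,000 innocents flagged

# Probability that a flagged person is actually a target ("precision"):
precision = true_positives / (true_positives + false_positives)

print(f"Innocent people wrongly flagged: {false_positives:,.0f}")
print(f"Chance that a flagged person is a real target: {precision:.4%}")  # well under 0.1%
```

Even with this implausibly accurate classifier, virtually everyone flagged is innocent; tuning it to miss fewer of the few real targets only inflates the number of false positives further. This is the mathematically unavoidable dilemma the text describes.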

47  The problem of unacceptably high false negatives and positives in searches for rare phenomena in large datasets is known as the ‘base-rate fallacy’. This problem is mathematically unavoidable, making such searches inherently unsuited to their purported aim – but this is rarely accepted by policy-makers. See Douwe Korff and Marie Georges, Passenger Name Records, Data Mining & Data Protection: The Need for Strong Safeguards (Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (T-PD) of the Council of Europe, June 2015) s. I.iii (‘The dangers inherent in data mining and profiling’), available at www.coe.int/t/dghl/standardsetting/dataprotection/TPD_documents/TPD(2015)11_PNR%20draft%20report%20Douwe%20Korff%20&%20Marie%20Georges_15%2006%202015.pdf. The arstechnica article referenced in n. 46 above provides some interesting calculations of


Moreover, it has been shown that the algorithmic decision-making underpinning the
‘identification’ (read: singling out) of (possible or probable) terrorists, or other rare
targets, inherently suffers from built-in biases; that it – again (almost?) inevitably – leads
to ‘discrimination by computer’.48
On top of that, decisions based on algorithmic decision-making are becoming effec-
tively unchallengeable, because often not even the nominal decision-makers are aware of
the logic underpinning their decisions, and to the extent that they do have insight, they
refuse to share it with the affected individuals or groups of individuals for reasons of
‘commercial’ or ‘national security’ secrecy.49
All of these deficiencies are greatly exacerbated if the techniques are used not only to
identify already-known persons or phenomena, but to predict ‘likely’ or ‘probable’, or
even ‘possible’ actions. To put it simply: computers, including even the ‘smartest’ (arti-
ficially) ‘intelligent’ ones, cannot be relied upon for such purposes. They fundamentally
undermine the rule of law.
Here, it will suffice to note that the use of ‘Big Data’ data mining technologies and algorithms
to predict bad behaviour and identify bad people (or even to predict which people will
become bad) in the online environment, to counter ‘cyber-threats’ and cybercrime, should
again not be presented as a positive ‘cyber capacity’, but rather as technologies that pose
serious risks to the rule of law in the digital environment. As long as they are unreliable and have
built-in biases, yet are effectively unchallengeable, they cannot be seen as appropriate tools.
And all the evidence – indeed, the mathematics – suggests that this will not, and cannot, be remedied.

2.4.3  Evidence-gathering by law enforcement agencies in cyberspace


As already noted at 2.2.1 above, the lines between national security and law enforcement are
increasingly blurred, especially in relation to terrorism. First, intelligence agencies fed law
enforcement with information relevant to criminal investigations, obtained through (at that
time, pre-Snowden, still largely secret) special means of information gathering, including
undercover agents and informers (‘human intelligence’ or HUMINT), ‘signals intelligence’
(SIGINT) and bulk data interception, that only they had access to, and that reached across
borders into the global digital environment. Next, law enforcement agencies wanted, and
were granted, similar powerful means of data gathering themselves, especially in relation
to that environment. Moreover, the law enforcement agencies, like their national security

the false negatives and positives in relation to the use of algorithms in killing-drones programmes of the United States.
48  See the section on ‘Discrimination by computer’ (with further references) in Korff and Georges, Passenger Name Records, Data Mining & Data Protection, n. 47 above, 26–28. See also Andreas Baur-Ahrens, Marco Krüger et al., How Smart Is ‘Smart Security’? Exploring Data Subjectivity and Resistance (2015), which concludes that: ‘Smart security does not necessarily contribute to a higher security level at airports. More fundamentally, the need for more security cannot just be stated but has to be discussed within the civil society. Risk-based passenger screenings are working through differentiation which inherently contains either positive or negative discrimination. Data-driven predictions must not be seen as objective knowledge but as results of a probabilistic process’, available at https://publikationen.uni-tuebingen.de/xmlui/bitstream/handle/10900/66898/How%20Smart%20Is%20Smart%20Security_IZEW_201511.pdf?sequence=1&isAllowed=y.
49  See the section on ‘The increasing unchallengeability of profiles’ (with further references) in Korff and Georges, Passenger Name Records, Data Mining & Data Protection, n. 47 above, 28–32.


partners, began to take a much more preventive/proactive approach to their tasks – again
initially especially in relation to terrorism, but this soon started to spread to other areas.
Ultimately, as noted in a 2015 Issue Paper of the Council of Europe Commissioner for
Human Rights, in many countries, the work of the two types of agencies became inseparably
intertwined, with law enforcement agencies increasingly involved in national security mat-
ters, and national security agencies being tasked (also) with supporting law enforcement.50
These developments, while perhaps understandable, carry risks to fundamental
rights. As it was already put in another, earlier Issue Paper of the Council of Europe
Commissioner for Human Rights:51

A further major change in the policing environment concerns the relationship between the police
and the secret services in a number of countries. Apart from obtaining information through the
advanced surveillance and ‘dataveillance’ technologies mentioned earlier, or receiving it from
the secret services, information is also obtained through undercover agents and informants. As
a result, the basis for police (and others’) ‘interest’ in a person, and the nature of the evidence
against that person, are increasingly hidden. This has a direct impact on the treatment of such
a person, who might be spied upon, harassed, arrested, denied a job or a research post52 – all
without knowing why, or able to challenge the reasons for such actions (or without even being
aware of it). The increasingly close relationship between the police and the secret services also
has the potential to undermine the fairness of trials against persons accused of being involved
in organised crime or terrorism, in that courts increasingly allow effectively secret evidence and
evidence from anonymous witnesses to form the basis for a conviction.53

As concerns the specific issue addressed in this chapter – the potential harm that can be
caused by ‘cyber interventions’ – law enforcement agencies are increasingly given legal
authorization to do things in cyberspace which, if they did not have such legal underpinning,
would constitute serious (cyber)crimes. Typically, law enforcement agencies in many
countries are now given the right (at least in certain circumstances and in relation to the
investigation of, or in order to prevent, certain (serious) crimes or threats to public order
or security) to perform acts such as:

● ‘hacking’ into computer systems;
● ‘interfering’ with computer data and/or systems, e.g. to extract or change data, or to
switch on the microphone or (video-)camera built into or attached to them (without
this becoming known to the owner or users of the device); and

50  Council of Europe Commissioner for Human Rights, Issue Paper on The Rule of Law on the Internet (prepared by Douwe Korff), n. 31 above, 29 (with reference to the US FBI and the UK GCHQ in particular). See also n. 17 above.
51  Council of Europe Commissioner for Human Rights, Issue Paper on Protecting the Right to Privacy in the Fight Against Terrorism, CommDH/IssuePaper(2008)3 (December 2008) (also prepared by Douwe Korff) 6, available at https://wcd.coe.int/ViewDoc.jsp?p=&id=1469161&direct=true.
52  See e.g., UK Institute for Race Relations (a Government body), Press Release, ‘New Study Highlights Discrimination in Use of Anti-terror Laws’, regarding a study published on 2 September 2004. The Press Release is available at www.irr.org.uk/2004/september/ak000004.html; the full study is available at www.irr.org.uk/pdf/terror_arrests_study.pdf.
53  See John Vervaele, ‘Terrorism and Information Sharing Between the Intelligence and Law Enforcement Communities in the US and the Netherlands: Emergency Criminal Law?’ (2005) 1(1) Utrecht Law Review (September), available at www.utrechtlawreview.org/.


● using special devices or software in the above, e.g. keystroke loggers to obtain
passwords.

In terms of the Cybercrime Convention (further discussed below), such acts constitute
the crimes of ‘illegal access [to computer systems]’ (Article 2); ‘illegal interception
[of computer data that are being transmitted]’ (Article 3); ‘data interference’ (Article
4), ‘system interference’ (Article 5); and ‘misuse of devices’ (Article 6), if committed
‘without right’. Actions of the authorities that use those methods thus only escape being
caught by those provisions if they act ‘with right’, i.e. if they are authorized to perform
those acts by law. However, as we shall see, many of the laws in question are seriously
defective even in their domestic terms and application; and as also further discussed
below, there are major questions about the international-legal acceptability of their
extra-territorial application, i.e. when they authorize (or are read as authorizing) the
relevant state’s authorities to ‘hack’ computers and interfere with data that are outside
the state’s own territory.
In addition, in ever-more countries the law now expressly allows – or is so wide and
vague as being able to be interpreted as allowing – the state to require providers of
electronic communication services and other information society services (including
providers of social media) to have ‘backdoors’ installed into their systems, through which
the state authorities can gain direct – and unmonitored – access to the data stored in the
providers’ systems. The providers are, at the same time, typically placed under a legal duty
to maintain secrecy about the penetration of their systems in this way and/or threatened
that they will lose their licences if they disclose the existence of the ‘backdoors’, even to
their clients and the data subjects (or even to state authorities such as data protection
authorities in other states).54
In other countries, such providers’ systems appear to have been ‘hacked’ by the authori-
ties to gain similar uncontrolled access behind the backs of the providers (sometimes with,
sometimes without legal authority).55 Thanks to Edward Snowden, we now also know that
the United States and the United Kingdom (and others) have installed ‘cable splitters’
and other devices into the very ‘plumbing’ of the global Internet, to extract data passing
through the cables and routers to and from many countries.
This leads to the somewhat ironic situation that some of the very (state) actors who are
charged with protecting society, inter alia, against the harm caused by the perpetration
via computer systems of the above-mentioned kinds of cybercrimes, and against attempts
by other states’ agencies to penetrate computer systems in their country, are themselves
carrying out the very acts they try to stop others from performing.

54  For some examples, see Korff et al., Boundaries of Law, n. 15 above, 39–43. The European Parliament had wanted to include in the recently adopted EU General Data Protection Regulation 2016/679 a provision that would have required companies (including non-EU companies) served with such ‘gagging orders’, and who processed data on EU data subjects, to disclose those orders to the data protection authorities of the EU Member States where those data subjects lived, but this provision (which came to be known as ‘the anti-NSA clause’) was seriously watered down and in the final text no longer contains such a requirement (see art. 48 of the adopted text).
55  Korff et al., Boundaries of Law, n. 15 above, 43.


Once again, in part, this is not so different from the offline world: police entering a
house with a search warrant, or depriving a person of their liberty on suspicion of their
having committed a criminal offence, are doing things which, if they had not been
authorized by law to do them, would amount to criminal offences in themselves (i.e.
breaking and entering; false imprisonment and probably assault). The differences lie in
the absence or, where they exist, the glaring inadequacies of the laws regulating the new
powers; the absence of effective oversight; the lack of safeguards against the potential
harm that can be caused by the officials’ actions; and, especially, in the complications
caused by the inherently borderless nature of the digital environment, including the
Internet.
Thus, traditionally, in a state under the rule of law, police officers entering a house or
arresting a person operate under very tightly-written laws (police laws and/or criminal
procedure codes, etc.) with strong formal elements, such as individual (case-specific)
judicially-issued warrants that are reviewable (at least post facto). If the agents fail to
adhere to the law, they may be liable to disciplinary or even criminal sanctions; and any
evidence obtained by such unlawful means may be invalid in courts of law.
By contrast, in many countries the legal rules under which the state’s own agencies
use the new ‘special means of investigation’ (as they are often called, if they are
regulated at all) are vague, broad and unclear – and unforeseeable in their application
(and sometimes even secret).56 This means that, by international human rights
standards, those rules do not constitute ‘law’ at all – in breach of the very first, most
basic requirement of the rule of law. Moreover, as to actual practice, in many states
either no information is even kept on how, how often, and against how many targets
those powers are used; or the data are censored or of very limited use – or entirely
suppressed.57 The exercise of these new, special, highly intrusive powers is all too often
subject to few legal constraints and unacceptable levels of secrecy, with little if any
accountability.
Moreover, there is a dangerous lack of clarity in national and international law about
the legality of the extra-territorial use of such powers in cross-border surveillance, ‘hack-
ing’ or data extraction, or about the installation of ‘backdoors’ and ‘cable-splitters’ in the
basic Internet infrastructure, affecting data traffic to and from many countries.
The traditional, basic rule of international law is that states cannot lawfully exercise so-
called ‘enforcement jurisdiction’, which includes the use by their agencies of investigative
powers such as those mentioned above, on the territory of any other state, without the
latter (target) state’s consent: to exercise such power there without such consent involves
a violation of the sovereignty of the target state.58 As already noted, the secret services

56  Ibid. 45–47 (‘Transparency about the law’).
57  Ibid. 47–51 (‘Transparency about practice’).
58  See Douwe Korff, Expert Opinion, prepared for the Committee of Inquiry of the German
Bundestag into the ‘Five Eyes’ global surveillance systems revealed by Edward Snowden,
presented at the Committee Hearing, Berlin, 5 June 2014, available at
www.bundestag.de/blob/282874/8f5bae2c8f01cdabd37c746f98509253/mat_a_sv-4-3_korff-pdf-data.pdf
(full text in English, in spite of what it says on the cover page);
www.bundestag.de/blob/282876/b90dd97242f605aa69a39d563f9532e7/mat_a_sv-4-3_korff_zusammenfassung-pdf-data.pdf
(summary in English).



State cybersecurity interventions  149

(foreign intelligence agencies) of states have for many years simply ignored the law in this
regard, acting effectively as ‘clandestine agencies’.59
The situation is (or at least was) different in relation to law enforcement agencies. In
relation to them, numerous special bi- and multilateral international legal instruments
have been adopted covering the way in which such agencies should act in relation to the
collecting of evidence from abroad. The main such instruments are known as Mutual
Legal Assistance Treaties (MLATs). There are numerous MLATs, between individual
states or covering groups of states, often in the latter case under the umbrella of a
regional organization, such as (in Europe) the Council of Europe and the EU. Some
relevant matters are also covered by other instruments, such as data protection and data
retention legislation, the Cybercrime Convention and EU law enforcement data collection
instruments.60
MLATs offer precisely the kind of detailed legal framework that is needed in relation
to the often sensitive matters raised in cross-border law enforcement investigations.
They usually contain crucial human rights clauses and safeguards, e.g. on when a
requested state may refuse to act on a request for action (such as the securing of
evidence) or information, etc.; and they operate within relatively strong national
and international legal frameworks, in particular in relation to countries with strong
domestic constitutional safeguards and which are parties to relevant international
human rights treaties such as the European Convention on Human Rights and the
International Covenant on Civil and Political Rights, the Council of Europe Data
Protection Convention, etc.
However, law enforcement agencies complain that MLATs do not work as well as they
should: it often takes too long before requested states respond to requests for actions or
information; the procedures are said to be too cumbersome; there are language problems;
and not enough staff or resources are made available to manage the processes. These
problems are claimed to be especially serious in relation to the digital environment, in
which information is said to be ‘location-less’: data may be held in ‘cloud’ servers in dif-
ferent countries from the ones where the targets of inquiries may be, and may be instantly
moved between jurisdictions. This, it is claimed, seriously hampers the efforts of law
enforcement agencies.61
Various important recommendations have been made, in particular by the Council of

59  Note that, although this may have been long-standing state practice, this did not lead to a
new rule of customary (international) law authorizing the behaviour, since there clearly was, and is,
no opinio iuris to the effect that such actions are lawful. On the contrary, whenever foreign agents
are discovered by target states to breach their laws, the latter forcefully protest against this (even if
they are hypocritical in such protests, since they themselves as often as not carry out the same acts
in relation to the other state).
60  The EU has established an independent group of experts to carry out a fundamental rights
review of EU data collection instruments and programmes. See
www.fondazionebrodolini.it/en/projects/fundamental-rights-review-eu-data-collection-instruments-and-programmes.
61  See Criminal Justice Access to Electronic Evidence in the Cloud: Recommendations for
Consideration by the Cybercrime Committee (T-CY) of the Council of Europe, Final Report of the
T-CY Cloud Evidence Group, Council of Europe Doc. T-CY (2016)5 (16 September 2016), available
at https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806a495e.


Europe’s Cybercrime Convention Committee (T-CY), to improve the MLA system, inter
alia, through:62

● greater use of the evidence preservation powers provided for in the Cybercrime
Convention;
● strengthening the role and capacities of 24/7 MLA points of contact;
● establishing special procedures for emergency situations; and
● the development of online tools and standardized multi-language templates for at
least some standard requests.

However, those recommendations, though made two years ago, have apparently not
been implemented.63 In fact, it appears that some would prefer to largely bypass MLAT
systems altogether in the interests of expediency.
Thus, the experts on the Cloud Evidence Group (CEG) want to expand the possibilities
under Article 18 of the Convention for law enforcement agencies to ask domestic
and foreign e-communication service providers (such as ISPs, but also social media) for
‘subscriber information’ on people directly, without going through the ‘trouble’ of using
MLATs, although the concept of ‘subscriber information’ is sufficiently ambiguous to
be read as including potentially sensitive information on subscribers. Moreover, the
CEG experts would encourage greater ‘voluntary disclosure’ of information, including
personal data, by private sector entities to criminal justice authorities in foreign
jurisdictions, as is apparently quite often done by the US ‘Internet Giants’ Apple,
Facebook, Google, Microsoft, Twitter and Yahoo (subject to the Giants’ discretion), even
though they realize that this would most likely be in breach of European data protection
law. In both cases, the disclosures would be made without the involvement or authority
of the target state, indeed without the authorities of the target state even being made
aware of the disclosures.64
Moreover, it would appear that Article 32 of the Cybercrime Convention is in any case
already widely used in practice to circumvent MLATs, although it was never intended to
be used in this way.65 This is worrying, given that cross-border criminal investigations can,
of course, relate to highly human rights-sensitive matters such as freedom of expression,
religious or political offences such as defamation of the head of state, blasphemy, denying
the Holocaust, or conversely calling certain mass killings genocide, promoting secession,
insurrection, etc.
The proposals of the CEG were strongly criticized in a letter and accompanying

62  See the recommendations made in the T-CY Assessment Report: The Mutual Legal
Assistance Provisions of the Budapest Convention on Cybercrime, adopted by the T-CY at its 12th
Plenary, 2–3 December 2014, s. 5.2 (‘Recommendations’), in particular 5.2.1 (‘Recommendations
falling primarily under the responsibility of domestic authorities’), available at
https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016802e726c.
63  See Criminal Justice Access to Electronic Evidence in the Cloud, n. 61 above, 9, para. 18.
64  Ibid. ss. 3.2 and 3.5. The latter section contains interesting statistics on the Internet Giants’
disclosures to a range of countries.
65  See the detailed discussion of this article and its background in Council of Europe
Commissioner for Human Rights, Issue Paper on The Rule of Law on the Internet, n. 31 above, s.
4.5.5 (‘Article 32 of the Cybercrime Convention’) 102–6.


analysis, sent by civil society group European Digital Rights (EDRi) to the Council of
Europe in November 2016.66 EDRi and a global group of civil society organizations
followed this up by a subsequent submission in September 2017.67
The tensions caused by states trying to enforce data demands extra-territorially are
well illustrated by the Microsoft case, in which the firm is trying to resist demands
from US authorities that it hand over personal data stored on its servers in the
Republic of Ireland.68 The case, which at the time of writing (February 2018) is still
pending before the US Supreme Court, has attracted considerable attention in Europe,
because the US authorities’ demand is prima facie in clear breach of EU data protec-
tion law.
The European Commission intervened in the case by submitting an amicus curiae brief
to the court,69 and further amici briefs were submitted by Privacy International and other
human- and digital rights organizations, and by a number of EU data protection and data
privacy scholars (including the author).70
Undeterred by such criticism from Europe, on 6 February 2018, a new piece of
legislation was introduced in the US Congress on cross-border data requests, called the

66  Letter from EDRi to the Council of Europe, 10 November 2016, with a note by Douwe Korff,
Key Points re the Cybercrime Convention Committee (T-CY) Report: Criminal Justice Access to
Electronic Evidence in the Cloud: Recommendations for Consideration by the T-CY, Final Report of
the T-CY Cloud Evidence Group (T-CY (2016)5 (16 September 2016), available at
https://edri.org/files/surveillance/letter_coe_t-cy_accesstoe-evidence_cloud_20161110.pdf (letter);
https://edri.org/files/surveillance/korff_note_coereport_leaaccesstocloud%20data_final.pdf (note).
For earlier criticism, see Paul De Hert and Gertjan Boulet, ‘Cloud Computing and Trans-border Law
Enforcement Access to Private Sector Data: Challenges to Sovereignty, Privacy and Data Protection’
(2013) Future of Privacy Forum 23–26, available at
https://cris.cumulus.vub.ac.be/portal/files/17906078/pdh14_gbCloud_Computing_and_Trans_Border_Law_Enforcement_Access_to_Private_Sector_Data._ChallengesSSRN_id2530465.pdf.
67  Global Civil Society, Submission to the Council of Europe with Comments and Suggestions
on the Draft Terms of Reference for Drafting a Second Optional Protocol to the Cybercrime
Convention, submitted on 18 September 2017, available at
https://edri.org/files/surveillance/cybercrime_2ndprotocol_globalsubmission_e-evidence_20170908.pdf;
EDRi Press Release, see https://edri.org/cross-border-access-data-edri-delivers-international-ngo-position-council-europe/.
The Council of Europe welcomed the submission: see
www.coe.int/en/web/portal/-/new-legal-tool-on-electronic-evidence-council-of-europe-welcomes-civil-society-opinion.
68  In the Matter of a Warrant to Search a Certain Email Account Controlled and Maintained by
Microsoft Corporation, United States of America, Petitioner v. Microsoft Corporation, Respondent,
US Supreme Court. For an analysis and overview of the issues, see
www.lawfareblog.com/primer-microsoft-ireland-supreme-courts-extraterritorial-warrant-case.
69  Amicus brief submitted by the European Commission, 12 December 2017, available at
www.supremecourt.gov/DocketPDF/17/17-2/23655/20171213123137791_17-2%20ac%20European%20Commission%20for%20filing.pdf.
70  Amicus brief submitted by Privacy International and a range of human and digital
rights organizations (18 January 2018), available at
www.supremecourt.gov/DocketPDF/17/17-2/28354/20180118170547648_172%20USA%20v%20Microsoft%20Brief%20of%20Privacy%20International%20Human%20and%20Digital%20Rights%20Organizations%20and%20International%20Legal%20Scholars%20as%20Amici%20Curiae%20in%20Support%20of%20Respondent.pdf;
Amicus brief submitted on behalf of EU Data Protection and Data Privacy Scholars, 18 January
2018, available at www.supremecourt.gov/DocketPDF/17/17-2/28272/20180118141249281_17-2%20BSAC%20Brief.pdf.


CLOUD (Clarifying Lawful Overseas Use of Data) Act.71 It combines elements of two
previous US proposals for cross-border law enforcement access. First, it would allow US
law enforcement to issue requests to companies for data stored overseas. Second, it would
enable the conclusion of agreements under which law enforcement in countries that make
a deal with the United States would be able to go directly to US companies for data. The
Bill was immediately criticized by AccessNow as ‘a threat to global privacy’.72
Suffice it to note here that, to the extent that cross-border law enforcement actions can be
said to constitute ‘cyber interventions’ – as would certainly appear to be the case if they relate
to the investigation of alleged cybercrimes – the question of whether the states concerned use
formal, rule-of-law-based MLAT procedures, or bypass them, is a matter to be taken into
account in assessing whether those interventions cause ‘cyber harm’. States that have strong
MLAT-supporting systems in place, and use those effectively, should be assessed as more
‘mature’ in cybersecurity terms in this area than those that do not have them, or deliberately
bypass them. Attempts to undermine (rather than improve) MLATs, such as the proposals
by the Council of Europe’s Cloud Evidence Group and the US CLOUD Bill, should be seen
as weakening, not as increasing, ‘cyber capacities’.
But the situation is particularly worrying as concerns the extra-territorial use by law
enforcement agencies of the ‘special means of investigation’ noted earlier: mass surveil-
lance and bulk data extraction, ‘hacking’, manipulation of computer data and the secret
switching on of microphones and cameras, ‘backdoors’ and ‘cable-splitters’. These special
technologies by their very nature bypass MLAT systems. It is here that the dangers of
law enforcement agencies beginning to act like their ‘clandestine’ intelligence service
colleagues become particularly apparent.
To the extent that, in a democratic society, such means should be granted to law
enforcement agencies at all, even purely domestically, they should be extremely tightly
regulated in clear and precise published laws that are foreseeable in their application.
There should be strong safeguards to protect the individuals targeted. There should be
strong, independent oversight. And there should be the fullest possible transparency about
the use of such technologies. These principles should apply a fortiori to any use of such
means beyond a state’s borders, where they will almost inevitably infringe the sovereignty
of other states – and seriously interfere with, and quite possibly violate, the fundamental
rights of the citizens and residents of those other states.
In fact, the opposite tends to be the case: in these respects, the situation with regard to
law enforcement agencies is in many countries the same as that of the security agencies;
often, they are in this respect covered by the very same laws and/or operate so closely
together that their powers and operations are effectively inseparable, with few restraints
on the sharing of information and data obtained by these means.73
Thus, as already noted, in this respect (as concerns the use of ‘special means of [online]
investigation’), the laws, and especially the subsidiary (often purely internal) guidelines
are vague, broad, unclear, and sometimes even secret: there is no transparency about the

71  Text of the Bill available at
www.hatch.senate.gov/public/_cache/files/6ba62ebd-52ca-4cf8-9bd0-818a953448f7/ALB18102%20(1).pdf.
72  ‘New U.S. CLOUD Act is a Threat to Global Privacy’, AccessNow, 7 February 2018,
available at www.accessnow.org/new-u-s-cloud-act-threat-global-privacy/.
73  See Korff et al., Boundaries of Law, n. 15 above.


rules, and even less about their application in practice, most especially about their extra-
territorial application.74
In our view, such extra-territorial use of powers by law enforcement agencies under
unclear and in their application unforeseeable domestic laws, and outside of clear, agreed
international legal frameworks (which the Cybercrime Convention in this regard does not
provide) is inherently incompatible with the rule of law. The granting of such powers in such
circumstances should never be regarded as enhancing the ‘maturity’ of the cybersecurity
capacities of the state in question.

3. CONCLUSIONS
The above brief analysis of the human rights risks posed by state ‘cyber interventions’ in
the five areas examined shows the following.

● In the developing area of ‘cyberwarfare’, proposed state actions pose serious risks
of unlawful reprisals against civilian infrastructure and a ‘cyberwar arms race’.
● The use of mass ‘cyber’ surveillance and Internet-security-undermining tools
as part of ‘cybersecurity interventions’ by state intelligence agencies aimed at
protecting national security and countering terrorism inherently poses serious risks
of harm to the interests and fundamental rights and freedoms of the populations
that are being watched and whose ICT infrastructure and devices are being
weakened.
● Because of their unavoidable limitations and shortcomings, ‘algorithmic filter’ tools
are inherently inappropriate for the purpose of determining whether speech or text
(or pictures or videos) amounts to ‘hate speech’, ‘incitement to violence’, ‘support
for extremism’, etc., or whether, if it seems to comprise copyright-protected materi-
als, their use is covered by one of the exceptions in the applicable national law (or
indeed to determine which law that is).
● The use of ‘algorithmic filters’ (or ‘content recognition’ and/or ‘content evaluation
technologies’) in order to detect and block objectionable or copyright-protected
content in private sharing platforms must, moreover, because of the very function
they are to fulfil, involve the very ‘generalized’ monitoring of the content of com-
munications of whole populations that the CJEU denounced as incompatible with
the EU Charter of Fundamental Rights in relation to the mass surveillance by states
we discussed earlier.
● Such tools are therefore inappropriate for their stated aim, and they constitute
major and disproportionate – and thus unlawful – interferences with the fundamen-
tal rights of the people in the populations against which they are used.
● Invasive, predictive remote-sensing and even killing technologies pose a clear risk
to individual rights and freedoms; and ‘Big Data’ data mining technologies and
algorithms, used to predict bad behaviour and identify bad people (or even to pre-
dict which people will become bad) in the online environment, are unreliable, have

74  Ibid.


built-in biases, yet are effectively unchallengeable; and all the evidence – indeed,
mathematics – suggests that this will not, and cannot, be remedied.
● To the extent that cross-border law enforcement actions can be said to constitute
‘cyber interventions’, as would certainly appear to be the case if they relate to the
investigation of alleged cybercrimes, the question of whether the states concerned
use formal, rule-of-law-based MLAT procedures, or bypass them, is a matter to be
taken into account in assessing whether those interventions cause ‘cyber harm’.
● The extra-territorial use of powers by law enforcement agencies under unclear
and in their application unforeseeable domestic laws, and outside of clear, agreed
international legal frameworks (which the Cybercrime Convention in this regard
does not provide) is inherently incompatible with the rule of law.

Any attempt to assess the efficiency and efficacy, i.e. the ‘maturity’, of state ‘cyber
interventions’ in these areas should take these serious risks fully into account. Specifically:

● If one wants to measure the ‘maturity’ of the cybersecurity capacities of states in
relation to cyberwarfare, the serious potential harms of any policies of retaliation
and escalation adopted by them, and their consequent potential of serious harm to
civilian (enemy) populations, should be part of the calculation.
● ‘Algorithmic filter’ tools should not be regarded as ‘cyber capabilities’, let alone
promoted as strengthening such capabilities in the fight against cybercrime. On the
contrary, their indiscriminate use for ‘generalized’ monitoring of entire populations
and vast swathes of Internet users should be seen as undermining the rule of law
and fundamental rights in cyberspace.
● If predictive remote-sensing and even killing technologies were to be classified as
‘cyber capabilities’, it should not be concluded that their further development is
to be welcomed. On the contrary, given the serious doubts about the legality and
morality of such systems, their development should not be promoted until those
issues have been fully addressed.
● Data mining technologies and algorithms cannot be seen as appropriate tools to
counter ‘cyber threats’ and cybercrime and should also not be presented as a posi-
tive ‘cyber capacity’, but rather, again, as technologies that pose serious risks to the
rule of law in the digital environment.
● States that have strong MLAT-supporting systems in place, and use those effec-
tively, should be assessed as more ‘mature’ in cybersecurity terms in this area than
those that do not have them, or deliberately bypass them. Attempts to undermine
(rather than improve) MLATs, such as the proposals by the Council of Europe’s
Cloud Evidence Group and the US CLOUD Bill, should be seen as weakening, not
as increasing, ‘cyber capacities’.
● The granting of extra-territorial powers to law enforcement agencies under unclear
laws and outside of clear, agreed international legal frameworks should never be
regarded as enhancing the ‘maturity’ of the cybersecurity capacities of the state in
question.

To date, the above matters are not sufficiently taken into account in assessing the ‘matu-
rity’ of states’ cybersecurity capacities. Taking them into account may lead to significant


marking down of states that consider themselves to have a high level of respect for the
rule of law (including international law) and highly-developed cybersecurity capacity. But
that is no bad thing. It is certainly to be preferred to exporting some of their practices in
this regard which, on the above assessment, should be regarded as weakening the rule of
law and the security of the Internet and the wider digital world.



9.  Access to the Internet in the EU: a policy priority,
a fundamental, a human right or a concern for
eGovernment?
Lina Jasmontaite and Paul de Hert*

1. INTRODUCTION
Figures reporting the take-up of the Internet in the European Union (EU) are
impressive,1 but the significance of the Internet lies not in the number of its users. It
lies in the wide range of possibilities that the Internet has opened up for individuals and
organizations in both the private and public sectors. It is widely acknowledged that the Internet
facilitates the enjoyment of individuals’ rights and that, at the same time, it challenges
the role of various institutions, including public administrations (or governments). In
the EU, the vastly increased accessibility of Internet services has allowed public
administrations and governments to redefine themselves, both internally and externally.2
Internally, meaning that traditional functions and services of government have moved
to the online sphere (eGovernment), and externally, meaning that governments employ
the Internet in order to facilitate citizens’ participation in policy-making (e-democracy).3
Despite the empowerment of citizens and the transformative nature of the Internet,
the EU legislators measure the impact of the Internet in terms of its economic benefits.
In the EU, Internet access is considered to be a policy objective, which can accelerate the
completion of the Digital Single Market. For example, the European Commission (EC)
estimates that ‘a 10% increase in broadband penetration could yield a 1–1.5% increase in

*  The authors would like to thank Foivi Mouzakiti, a Ph.D Student at Queen Mary University,
London, for numerous discussions and intellectual exchanges on the topic. Additionally, the
authors are grateful to Prof. Wolfgang Benedek for constructive comments and suggestions that
helped to improve the quality of this chapter.
1  For example, according to the estimates provided by the European Commission, 97 per cent
of European Union (EU) citizens have access to fixed broadband connections at the speed of at
least 2 Mbps at end-user level; 70.9 per cent of users can enjoy Next Generation Access (NGA)
connectivity level in the EU; the percentage of households with access to the Internet reached an
impressive 83 per cent in 2015. For more details see EC, Staff Working Document Accompanying the
Communication ‘Connectivity for a Competitive Digital Single Market: Towards a European Gigabit
Society’, SWD(2016)300 final (Brussels, 14 September 2016) 7, and Eurostat, ‘Information Society
Statistics – Internet, Level of Access, Use and Activities: Level of Internet Access – Households’,
http://ec.europa.eu/eurostat/statistics-explained/index.php/Digital_economy_and_society_statistics_-_households_and_individuals
(accessed 21 November 2017).
2  P. Dunleavy, H. Margetts, J. Tinkler and S. Bastow, Digital Era Governance: IT Corporations,
the State and e-Government (Oxford University Press, 2006).
3  Michael Margolis, ‘E-Government and Democracy’ in R. Dalton and H.D. Klingemann
(eds), The Oxford Handbook of Political Behavior (Oxford University Press, 2007).

157
Lina Jasmontaite and Paul de Hert - 9781785367724
Downloaded from Elgar Online at 12/18/2020 12:50:34AM
via New York University




annual GDP or could raise labour productivity by 1.5% over the next five years’.4 This
means-to-an-end approach is well reflected in the foreword by Jean-Claude Juncker
to the EC Communication titled A Digital Single Market Strategy for Europe:

By creating a connected Digital Single Market, we can generate up to EUR 250 billion of
additional growth in Europe in the course of the mandate of the next Commission, thereby
creating hundreds of thousands of new jobs, notably for younger job-seekers, and a vibrant
knowledge-based society.5

Building on the observation that economic incentives are the main driver for adopt-
ing measures promoting the deployment of Internet infrastructure and content in the
EU, this chapter aims at contributing to the wider debates on Internet governance. The
authors explain the role that the EU plays by reflecting on the EU policies and measures
applicable to Internet access. The EU as an intergovernmental body can be considered
to be a key stakeholder of Internet governance, as its achievements go further than most
of the multi-stakeholder platforms, such as the United Nations World Summit on the
Information Society (WSIS) or the International Telecommunications Union (ITU). The
EU regulatory measures addressing the digital environment have brought tangible results
within and beyond the territory of its Member States.6 Indeed, the EU not only sets a
policy agenda to attain certain objectives but it provides for extensive rules governing
digital networks (i.e. Internet architecture) and services (i.e. content). Even though the
authors deem that the EU policies and measures embody transferable knowledge that
could potentially shape the Internet’s development on the global scale, they challenge the
current EU regulatory approach and strategy promoting Internet take-up and usage.7
After outlining the relevant regulatory provisions governing access to the Internet in
the EU (section 2) and its Member States (section 3), and after summarizing arguments
supporting the introduction of the right to Internet access, the authors seek to broaden
the scope of social and legal debates on Internet access in the EU. In particular, they ques-
tion (a) whether the Internet is a vital element to achieve a decent standard of living in the
Gigabit society (section 4); and (b) whether it deserves a place alongside the fundamental
rights or human rights (section 5) and under what conditions it could be incorporated
among the EU fundamental rights (section 6). The following sections of the chapter
reflect on the potential scope of a right to Internet access (sections 7 and 8) and how
eGovernment could facilitate the introduction of such a right (section 9). Considerations
about limitations of a right to Internet access are addressed in section 10.
The authors believe that extending EU debates on Internet access beyond the telecom-
munications sector that owns the infrastructure layer (e.g. copper or fibre optic cables)
would be beneficial. The debate on Internet access could become more open and inclusive
in terms of the stakeholders and the definitions that are used. The current EU regulatory
framework is shaped by the private sector, EU institutions and national governments,

4  European Commission, The Digital Agenda for Europe: Driving European Growth Digitally, COM(2012)748 final, 8.
5  European Commission, A Digital Single Market Strategy for Europe, COM(2015)192 final, 16.
6  For example, consider the global reach of the EU Data Protection Directive 95/46/EC.
7  For example, see European Commission, The Digital Agenda for Europe, n. 4 above, 8.

Lina Jasmontaite and Paul de Hert - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:34AM
via New York University

WAGNER_9781785367717_t.indd 158 13/12/2018 15:25


Access to the Internet in the EU  159

whereas the views and needs of civil society and citizens are often a minor consideration.
Access to the Internet is inherently an Internet governance matter and therefore its regula-
tion should entail a multi-stakeholder debate.8 Access to the Internet then would be seen
not only in a technical way as a communication service but as ‘the set of devices, services,
facilities and skills that allow people to connect to and use Internet services, applications
and content’.9 Perhaps, this shift in approach could strengthen the EU’s role within the
broader context of Internet governance.
The authors suggest that the EU debate on Internet access should employ a human
rights-based approach to Internet access because the social benefits brought by the
Internet cannot be defined by numbers. The authors conclude that acknowledgment
or recognition of Internet access as a fundamental right would be valuable as it would
encourage policy- and law-makers, as well as civil society, to reconsider the scope and
limitations imposed on this right.

2.  INTERNET ACCESS REGULATION IN THE EU

The EU, on several occasions, has emphasized that Internet access has opened up an
endless array of possibilities for cost-efficient provision of services, active citizenship,
as well as transparency and accountability of government.10 The main view expressed in
EU policy documents is instrumental: Internet access can contribute to improving the
functioning of the internal market by generating economic wealth and it also can provide
some social benefits to citizens.11
Internet access regulation falls under the scope of the EU telecom rules. Directive
2002/22/EC on universal service and users’ rights relating to electronic communications
networks and services (Universal Service Directive) as amended in 2009 by Directive
2009/136/EC seeks primarily to ensure the availability of ‘good-quality publicly available
services through effective competition and choice and to deal with circumstances in which
the needs of end-users are not satisfactorily met by the market’.12 The Universal Service
Directive requires Member States to guarantee their citizens access to at least one Internet
service provider at a fixed point.13 In the pursuit of this ambitious goal set out by the

8  United Nations World Summit on the Information Society (WSIS), Tunis Agenda for the Information Society (2005): 'A working definition of Internet governance is the development and application by governments, the private sector and civil society, in their respective roles, of shared principles, norms, rules, decision-making procedures, and programmes that shape the evolution and use of the Internet.'
9  Global Commission on Internet Governance (GCIG), One Internet (Centre for International Governance Innovation and Chatham House, 2016).
10  For example, European Commission, The European eGovernment Action Plan 2011–2015: Harnessing ICT to Promote Smart, Sustainable & Innovative Government, COM(2010)743 final; European Commission, The Digital Agenda for Europe, n. 4 above.
11  Directive 2002/22/EC on universal service and users' rights relating to electronic communications networks and services (Universal Service Directive), as amended in 2009 by Directive 2009/136/EC, recital 56.
12  Directive 2002/22/EC (Universal Service Directive), art. 1.1.
13  This objective is further specified in Universal Service Directive, art. 4.


Directive, Member States have been given some leeway; they are only required to take
measures facilitating ‘reasonable requests for connection at a fixed location to a public
communications network’.14 The reference to ‘reasonable requests’ implies that in order
to evaluate whether a request is reasonable, one needs to consider the overall context and
circumstances under which the request to access the Internet was submitted.
The Universal Service Directive also establishes a minimum quality standard for
Internet access, implying that the connection must have sufficient speed and capacity. In
particular, it requires Member States to ensure that:
the connection provided shall be capable of supporting voice, facsimile and data communica-
tions at data rates that are sufficient to permit functional Internet access, taking into account
prevailing technologies used by the majority of subscribers and technological feasibility.15

Only a few Member States (i.e. Finland, Spain and Malta) have gone so far as to include
minimum speeds for Internet access in their legislative frameworks.16
In contrast to several of its Member States, the EU does not include Internet access
among the fundamental rights and principles enshrined in its governing Treaties.17 The
legislative basis for EU action in the area of digital networks is found in Articles 170–172
of the Treaty on the Functioning of the European Union (TFEU). According to Article
170, the EU can adopt regulatory measures supporting the establishment and develop-
ment of trans-European networks in the area of telecommunications on the condition that
this will benefit EU citizens, economic operators and regional and local communities.18
Articles 171 and 172 further specify EU competence in this particular area.
One could suggest that the EU Charter of Fundamental Rights (‘EU Charter’) as the
most recent codification of fundamental rights could have integrated the right to access the
Internet among other EU fundamental rights.19 However, this would be too bold a claim
to make. The EU Charter compiles ‘fundamental rights, as guaranteed by the European
Convention for the Protection of Human Rights and Fundamental Freedoms [ECHR] and
as they result from the constitutional traditions common to the Member States’.20 In prac-
tice, this means that the EU has not considered including the right to Internet access into
the most modern codification of fundamental rights because it was neither embodied in the
constitutional traditions of its Member States nor endorsed by the ECHR framework.21

14  See the Universal Service Directive.
15  Ibid. art. 4.1.
16  European Parliament, Briefing: Broadband as a Universal Service (April 2016), available at www.europarl.europa.eu/RegData/etudes/BRIE/2016/581977/EPRS_BRI(2016)581977_EN.pdf (accessed 21 November 2017).
17  For example, the amendments to the Estonian and Greek Constitutions aim at facilitating access to the Internet. For more detail, see the next section.
18  Note, Treaty on the Functioning of the European Union (TFEU), Art. 170 also covers other key areas of infrastructure, namely, transport and energy.
19  The Charter was promulgated in 2000 and revised in 2007, together with the other provisions of the EU governing treaties.
20  Consolidated Version of the Treaty on European Union and the Treaty on the Functioning of the European Union (2010/C 83/01); Treaty on European Union (TEU), Art. 6.
21  Christopher McCrudden, The Future of the EU Charter of Fundamental Rights, Jean Monnet Working Paper No. 10/01 (18 March 2002) 9, available at http://ssrn.com/abstract=299639 (accessed 21 November 2017).


3.  INTERNET ACCESS REGULATION IN THE EU MEMBER STATES

As described in the previous section, the EU Member States are obliged to adopt domestic
measures implementing the objectives of the Universal Service Directive, such as provid-
ing access to a broadband connection at fixed points. However, a few Member States have
decided to further strengthen the right to Internet access by enshrining it in their national
legal frameworks. In 2000, the Estonian Parliament passed a law declaring that ‘every
person shall be afforded the opportunity to have free access to public information through
the Internet in public libraries’.22 A year later, the Greek Constitution was amended with
a provision recognizing that ‘all persons have the right to participate in the Information
Society’. According to the amendment, ‘facilitation of access to electronically transmitted
information, as well as of the production, exchange and diffusion thereof, constitutes an
obligation of the State’.23
Considering that the governments and courts of other Member States also support this
right, it may be suggested that there is, in fact, a tendency towards
integrating Internet access into the realm of human rights. For example, in 2010, the
Finnish government acknowledged that ‘a reasonably priced broadband connection
will be everyone’s basic right’.24 The UK Court of Appeal in R v. Smith and others
recognized access to the Internet as an 'essential part of everyday living' that can be
limited only under specific circumstances.25 In a somewhat similar vein, the French
Constitutional Council stated that access to online services is necessary in order to exercise
freedom of speech.26
In the following section, we will describe the EU vision of the Gigabit Society and
the growing policy targets. We will also introduce the concept of ‘universal services’ and
question to what extent services embodied within this concept can strengthen the claim
for the right to Internet access.

4.  EU VISION OF THE GIGABIT SOCIETY: AN OPPORTUNITY TO STRENGTHEN THE CLAIM FOR THE RIGHT TO INTERNET ACCESS?

In the EU, it is taken for granted that everyone should have access to the online environ-
ment. Access to the Internet is an economic enabler that can allow active participation in
economic, political and civil spheres. It is a means to an end that can lead to the underlying
EU objective: economic prosperity. In fact, the EC considers that today ‘it’s no longer

22  Public Information Act 2000, s. 33.
23  Greek Constitution, art. 5A(2).
24  Ministry of Transport and Communications, Press Release, 1 Mbit Internet Access a Universal Service in Finland from the Beginning of July (29 June 2010), available at www.lvm.fi/en/-/1-mbit-internet-access-a-universal-service-in-finland-from-the-beginning-of-july-782612 (accessed 21 November 2017).
25  R v. Smith and others (Rev. 1) [2011] EWCA Crim 1772 (19 July 2011).
26  Constitutional Council, Decision no. 2009-580 (10 June 2009).


about whether you are online or not, but whether you have a good quality connection at
a good price’.27
Somewhat ignoring the fact that a significant segment of the EU population may
remain unconnected, policy-makers have already shifted their focus towards the
universal availability of fast Internet access. In autumn 2016, the EC presented its
roadmap paving the way for the European Gigabit Society.28 According to the EU’s
vision, members of this society will actively participate in, and make use of, various
services and products available on very high-capacity networks.29 The active use of
very high-capacity networks will bring socio-economic benefits, such as new jobs
and the completion of the Digital Single Market. In the European Gigabit Society,
‘every EU citizen shall have the right to be connected’.30 This right to be connected
would entail access to ‘a functional internet connection, at least at a fixed location,
that is affordable and allows full engagement with the digital economy and society’.31
Following the EC strategy, the European Gigabit Society will be achieved by (1)
updating the legislative framework for electronic communications (i.e. developing a
European  Electronic Communications Code and the Body of European Regulators
for Electronic Communications (BEREC) Regulation 1211/2009); (2) setting an action
plan for 5G connectivity across Europe; and (3) policy and financial measures at all
levels (e.g. a 'Wi-Fi for Europe' initiative could be developed at an EU, a national or a
local level).
Based on the EU top-down regulatory approach and policy debates, it can be claimed
that the discussion on a fundamental or human right to access the Internet is moot in the
EU. The latest EU policy documents confirm this position as they further explore the pos-
sibility of recognizing access to the Internet as a ‘universal service’ and set aside the debate
on the recognition of Internet access as a fundamental right.32
In the EU, ‘universal services’ are those services that ‘are made available at the quality
specified to all end-users in their territory, independently of geographical location, and, in

27  European Commission, 'Visual Summary of Telecoms Reform', http://ec.europa.eu/newsroom/dae/document.cfm?doc_id=17180 (accessed 21 November 2017).
28  European Commission, Proposal for a Regulation of the European Parliament and of the Council amending Regulations (EU) 1316/2013 and (EU) 283/2014 as regards the promotion of Internet connectivity in local communities, Explanatory Memorandum.
29  According to the European Commission in Connectivity for a Competitive Digital Single Market: Towards a European Gigabit Society, COM(2016)587 final: '"Very high-capacity network" means an electronic communications network which either consists wholly of optical fibre elements at least up to the distribution point at the serving location or which is capable of delivering under usual peak-time conditions similar network performance in terms of available down- and uplink bandwidth, resilience, error-related parameters, and latency and its variation. Network performance can be considered similar regardless of whether the end-user experience varies due to the inherently different characteristics of the medium by which the network ultimately connects with the network termination point.'
30  European Commission, Explanatory Memorandum, n. 28 above.
31  Ibid.
32  European Commission, Connectivity for a Competitive Digital Single Market, n. 29 above. The concept of 'universal services' is present only in a few sectors, namely, energy, postal and telecommunications.


the light of specific national conditions, at an affordable price’.33 In general, the concept
of universal services within the scope of the telecommunications sector allows the EU to
regulate the extent to which its Member States intervene in the market. At the moment,
the following services provided by the telecoms sector are considered to be ‘universal
services’ in the EU: (1) access to a fixed line; (2) emergency calls (e.g. ‘112’ European
emergency number); (3) at least one comprehensive telephone directory of all subscribers;
(4) access to the publicly available telephone services for all; (5) availability of payphones
in public spaces.
The European Commission reviews the list of ‘universal services’ every three years.
During the revision process, the EC takes into account social, economic and technological
developments, including the affordability, availability and acceptability of new technolo-
gies.34 The proposed European Electronic Communications Code suggests recognizing
access to the Internet services on mobile networks as a universal service.35 The European
Parliament seems to be in favour of this proposal.36 It can be argued that if the text of the
proposal is approved as it is, it would allow for further take-up of Internet services.37
Another positive outcome of this would be a possibility for more diverse private entities
providing access to network infrastructure to apply for various funding schemes support-
ing broadband deployment at a national level.
It can be suggested that foreseeing two different ways to access the Internet (i.e. a
single narrowband network connection and a mobile broadband connection) in the
list of universal services would further strengthen the right to Internet access. It would
also require the Court of Justice of the European Union (CJEU) to revisit, in future case
law, the position it took in Base Company NV and Mobistar NV v. Ministerraad. The
CJEU has concluded that within the current legislative set-up, ‘subscriptions for mobile
communication services, including Internet subscription services provided by means
of those mobile communication services’ do not fall within the scope of the foreseen
‘universal services’.38 Finally, expanding the list of ‘universal services’ by including
broadband access to the Internet would require Member States to take appropriate
measures ensuring the same conditions to access to the service for all citizens.39 While
this policy choice may result in a more coherent approach to Internet access, it should
be noted that measures taken by the Member States may differ due to economic and
political conditions.40

33  Universal Service Directive, art. 3.
34  Ibid. art. 15.
35  Proposal for a Directive of the European Parliament and of the Council establishing the European Electronic Communications Code (Recast), COM/2016/0590 final, 2016/0288 (COD).
36  European Parliament, Broadband as a Universal Service, n. 16 above.
37  Currently, users are entitled to access the Internet only at fixed points, which may not be the preferred option for end-users.
38  C-1/14 Base Company NV and Mobistar NV v. Ministerraad, CJEU, Judgment of 11 June 2015.
39  The term 'citizens' in this context refers to consumers or end-users of the service.
40  Universal Service Directive, as amended in 2009 by Directive 2009/136/EC, recital 7.


5.  EU MEMBER STATES' RESPONSIBILITIES WITH REGARD TO INTERNET ACCESS: PROMOTION OF UNIVERSAL AVAILABILITY OF BROADBAND AND DIGITAL LITERACY

We discussed the current EU policy obliging Member States to take measures ensuring
that their citizens are able to access the communications infrastructure at a reasonable
price (section 3). This Internet access policy can be considered to be comprehensive as
it includes measures that both address access to the communications infrastructure and
encourage usage of online services.41 In the following two paragraphs we reflect on dif-
ferent policy objectives and consider how they shape EU Member States’ responsibilities
concerning Internet access. Additionally, we aim at establishing a link between broadband
penetration, eGovernment services and citizens’ online engagement.42
The first set of policy objectives, promoting the universal availability of broadband,
often defined as high-speed Internet access, stems from the EU regulatory framework,
including numerous policy documents.43 Both EC communications, the 'Digital Agenda
for Europe (DAE)' and the 'Digital Single Market Strategy', affirm the EU's commitment
to broadband development, which is considered to have a greater capacity to support
innovation and economic growth than its predecessor technologies.44 The
EC argues that ‘the success of e-commerce, the reliability of e-health applications, the
user experience of video and audio content in gaming and streaming all depend on the
quality of networks’.45 It is also crucial to recognize the dominant focus of EU policies

41  According to the Europe 2020 Strategy, these measures may include the development of operational high-speed Internet strategies, as well as a legal framework for coordinating the effort to reduce the costs of network deployment (European Commission, EUROPE 2020, A Strategy for Smart, Sustainable and Inclusive Growth, COM(2010)2020 final). Additionally, Member States should encourage the usage of online services ranging from applications in the fields of eGovernment and eHealth to smart cities and homes (European Commission, EUROPE 2020). See favourably about this comprehensive approach, Colin Rhinesmith, Digital Inclusion and Meaningful Broadband Adoption Initiatives (Evanston, IL: Benton Foundation, January 2016), available at benton.org/broadband-inclusion-adoption-report. This author suggests that to attain digital inclusion, policy-makers should consider the combination of different measures.
42  OECD, Measuring Innovation: A New Perspective (OECD, 2010) 88–89; Xavier Fernández-i-Marín, 'The Impact of e-Government Promotion in Europe: Internet Dependence and Critical Mass' (2011) 3(4) Policy and Internet, art. 2.
43  Broadband entails the use of various high-speed transmission technologies, including digital subscriber line (DSL), cable modem, fibre optics, wireless, satellite, and broadband over powerlines (BPL).
44  European Parliament, Resolution of 6 July 2011 on 'European Broadband: Investing in Digitally Driven Growth', 2013/C 33 E/09, para. A; International Telecommunication Union, Impact of Broadband on the Economy (2012) 3–27, available at www.itu.int/ITU-D/treg/broadband/ITU-BB-Reports_Impact-of-Broadband-on-the-Economy.pdf (accessed 21 November 2017); H. Gruber, J. Hätönen and P. Koutroumpis, 'Broadband Access in the EU: An Assessment of Future Economic Benefits', paper presented at 24th European Regional Conference of the International Telecommunication Society, Florence, Italy, 2013, available at www.econstor.eu/bitstream/10419/88492/1/773374760.pdf (accessed 21 November 2017). 'Very high-capacity networks' are a prerequisite for the European digital economy to flourish (European Commission, Connectivity for a Competitive Digital Single Market, n. 29 above).
45  European Commission, Explanatory Memorandum, n. 28 above.


promoting access to the Internet on economic benefits, which outweigh possible social
implications concerning the easier exercise of human rights and the EU fundamental rights
listed in the EU Charter.46 In this context, we note that the EU is aware that reaching its
ambitious targets depends heavily on the investment in broadband infrastructure.47 In
order to achieve its targets, the EU has laid down the framework for the liberalization of
the telecommunications markets and the introduction of exceptions to the strict state aid
rules. The EU intends to fuel the telecommunications sector through various funding schemes
(EUR 120 million in total). At the same time, the EU insists that its Member States and
private actors cooperate in the pursuit of very high-capacity networks.48
The second group of measures aims at promoting digital literacy. Enhancing media
literacy and digital skills is now perceived to be instrumental in ensuring meaningful
broadband adoption. Meaningful broadband adoption means that states not only
encourage the deployment of new and affordable networks but also proactively create
value from broadband access and services for their citizens.49
by connecting digital literacy training with relevant content and services or by adopting
policies focusing on the so-called ‘demand side’.50
Numerous EU policy documents recognize the need to address the demand side. In
particular, the New Skills Agenda for Europe invites Member States to devise national
digital skills strategies by 2017.51 Important ingredients of this policy are digital literacy

46  The liberalization of the telecommunications markets had a limited impact on the deployment of new communication infrastructures and therefore it was necessary to provide a possibility for Member States to interfere (European Commission Communication, European Broadband: Investing in Digitally Driven Growth, COM(2010)472 final, 3–4). In the EU, whether the liberalized telecoms market receives public funds for the establishment or upgrade of broadband networks is subject to state aid rules (TFEU, Art. 107). Indeed, the high investment cost of an upgrade from 'basic broadband' networks to the so-called 'Next Generation Access' networks has deterred service providers from the rollout of networks (Filomena Chirico and Norbert Gaál, 'A Decade of State Aid Control in the Field of Broadband' (2014) 1 EStAL 28, at 30). In order to address this 'market failure', which is a situation in which markets, when left to their own devices, are unable to achieve an efficient outcome for society (European Commission, EU Guidelines for the Application of State Aid Rules in Relation to the Rapid Deployment of Broadband Networks, 2013/C 25/01, para. 37), public intervention may be warranted.
47  Richard Cawley, 'The Influence of European Union Policies and Regulation' in Wolter Lemstra and William H. Melody (eds), The Dynamics of Broadband Markets in Europe: Realizing the 2020 Digital Agenda (Cambridge University Press, 2014).
48  European Commission, Connectivity for a Competitive Digital Single Market, n. 29 above.
49  Rhinesmith, Digital Inclusion and Meaningful Broadband Adoption Initiatives, n. 41 above.
50  Robin Mansell and W. Edward Steinmueller, 'Digital Infrastructures, Economies, and Public Policies: Contending Rationales and Outcome Assessment Strategies' in William H. Dutton (ed.), The Oxford Handbook of Internet Studies (Oxford University Press, 2013) 512. Rhinesmith suggests that meaningful broadband adoption entails a four-part digital inclusion strategy, including low-cost broadband, digital literacy training, low-cost computers and public access computing (Rhinesmith, Digital Inclusion and Meaningful Broadband Adoption Initiatives, n. 41 above). In a similar vein, Kongaut and Bohlin propose two types of demand-side policies: first, those seeking to increase the value of broadband services; and second, those seeking to reduce the costs of these services (Chatchai Kongaut and Erik Bohlin, 'Towards Broadband Targets on the EU Digital Agenda 2020: Discussion on the Demand Side of Broadband Policy' (2015) 17 Info 1, at 3–4).
51  European Commission, A New Skills Agenda for Europe: Working Together to Strengthen Human Capital, Employability and Competitiveness, COM(2016)381/2, 7.


programmes,52 awareness campaigns,53 and the promotion of online public services, such
as eGovernment.54 Indeed, an increase in the availability of useful and practical content
through online eGovernment platforms is expected to stimulate broadband adoption,
and vice versa.55 Of course, simply reducing the cost of broadband services is arguably a
more straightforward task for states, not to mention one that is likely to deliver significant
changes in broadband adoption levels. Indeed, affordability has been identified as one
of the central impediments to broadband take-up.56 A standard solution, which has
been implemented in a number of EU Member States, is the provision of subsidies for
the acquisition of the necessary equipment.57 This may take the form of tax breaks, or
more commonly, the provision of computers to low-income households and schools.58
These initiatives may be welcome but they only partially remedy the issue of affordability:
a broadband connection entails a recurring cost, which some individuals may not be able
to support. To address this issue, Public Internet Access Points in easily accessible public
places, such as libraries or squares, have been established in many Member States.59

52  See ECORYS UK for the Departments for Culture, Media and Sport, and Business, Innovation and Skills, Digital Skills for the UK Economy (January 2016) 58, available at www.gov.uk/government/publications/digital-skills-for-the-uk-economy; European Commission, ISA, Joinup, Communities, 'Promoting Digital Literacy in Portugal Through a Multi-Stakeholder Initiative', https://joinup.ec.europa.eu/community/epractice/case/promoting-digital-literacy-portugal-through-multi-stakeholder-initiative (accessed 21 November 2017). Digital literacy programmes, especially those targeted at segments of the population with limited or no digital skills (e.g. the 'Grandparents & Grandchildren' initiative, www.geengee.eu/geengee/ (accessed 21 November 2017)), are likely to act as an incentive for broadband adoption.
53  The example that comes to mind is the United Kingdom's 'Do More Online' campaign, aiming to inform citizens and businesses of the benefits of broadband access. See www.greatbusiness.gov.uk/domoreonline/ (accessed 21 November 2017).
54  European Parliament, Resolution of 6 July 2011 on 'European Broadband: Investing in Digitally Driven Growth' [2011] OJ C33 E/89, para. 46.
55  European University Institute, Florence School of Regulation, Broadband Diffusion: Drivers and Policies (2011) 64–66, available at http://fsr.eui.eu/Documents/CommunicationsandMedia/FSRStudyonBBPromotion.pdf; World Bank Group, Broadband Strategies Toolkit: Driving Demand (May 2013) 187–94, available at http://broadbandtoolkit.org/6 (accessed 21 November 2017). Ultimately, the extent to which eGovernment services are able to boost broadband demand depends on the government's ability to meet its citizens' expectations. The more indispensable eGovernment becomes for citizens, the more one is likely to seek a broadband connection that would facilitate participation in online government. In other words, eGovernment services that fail to make a concrete difference in peoples' lives are unlikely to encourage demand for broadband. The same applies to other public services, such as eHealth and eEducation.
56  Tim Kelly and Maria Rossotto (eds), Broadband Strategies Handbook (World Bank Publications, 2012) 259–67.
57  Ibid.
58  Ibid. In Hungary, in 2015 it was suggested to reduce the VAT rate on Internet services from 27 per cent to 18 per cent, see http://hungarytoday.hu/news/internetkon-hungarian-government-consider-lowering-vat-rate-internet-services-24722 (accessed 21 November 2017).
59  See e.g., European Commission, ISA, Joinup, Communities, Vaiva Nemanienė, Rural Internet Access Points in Lithuania (2007), available at https://joinup.ec.europa.eu/community/epractice/case/rural-internet-access-points-lithuania (accessed 21 November 2017); Public Internet Access Points (PIAPs) in Kortrijk (2007), available at https://joinup.ec.europa.eu/community/epractice/case/public-internet-access-points-piaps-kortrijk; Citizen Spots Extend Online Services to Thousand-desk Network (2015), available at https://joinup.ec.europa.eu/elibrary/case/citizen-spots-extend-online-

Lina Jasmontaite and Paul de Hert - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:34AM
via New York University

WAGNER_9781785367717_t.indd 166 13/12/2018 15:25


Access to the Internet in the EU  167

Clearly, on the long path towards achieving universal access to the Internet at high-speed levels, governments encounter significant regulatory and financial obstacles. Some researchers suggest that governments could overcome these obstacles by focusing on demand-side policies, which also promote broadband take-up and can attain desirable digital inclusion targets.60 Even though this suggestion is based on a thorough study of 30 OECD countries, it remains controversial to the extent that it shifts emphasis away from improving infrastructure availability.61

6.  SCENARIOS FOR RECOGNIZING INTERNET ACCESS AS AN EU FUNDAMENTAL RIGHT

Our analysis of policy documents and the applicable legislative framework suggests that in the EU, Internet access is associated less with the 'civil and political rights' spectrum than with that of 'economic, social and cultural rights'. Access to the Internet may facilitate the enjoyment of the fundamental rights to freedom of expression, assembly and political participation, but it is predominantly regarded as a medium enabling economic participation. We are inclined to believe that the strong emphasis the EU puts on the economic benefits of the Internet may reduce the overall value that access to the Internet could have for European society. Therefore, in the paragraphs that follow we consider three possible scenarios for the recognition of Internet access as a fundamental right in the EU.
possible scenarios for the recognition of Internet access as a fundamental right in the EU.
The first scenario concerns policies and actions taken by Member States. Following the reasoning embedded in Article 6 of the TEU, access to the Internet could be incorporated among the EU fundamental rights if the majority of Member States recognize it as a fundamental right at the domestic level. Article 6 of the TEU foresees that general principles of EU law result from 'the constitutional traditions common to the Member States'. A similar message is included in the Preamble to the EU Charter, which has been legally binding since the entry into force of the Lisbon Treaty. The Preamble to the EU Charter notes that
the EU is founded on common and ‘universal values of human dignity, freedom, equality
and solidarity’.62 Furthermore, the EU should act proactively and ensure:

the preservation and the development of . . . [the] common values while respecting the diversity
of the cultures and traditions of the peoples of Europe as well as the national identities of the
Member States and the organisation of their public authorities at national, regional and local
levels.63

services-thousand-desk-network (accessed 21 November 2017); Free Internet Zones are Available in Major Public Spaces in Lithuanian Cities, available at www.lithuania.travel/en-gb/attractions/wi-fi-zone/17151 (accessed 21 November 2017).
60
  Girish J. Gulati and David J. Yates, ‘Different Paths to Universal Access: The Impact of
Policy and Regulation on Broadband Diffusion in the Developed and Developing Worlds’ (2012)
36(9) Telecommunications Policy 749–61, ISSN 0308-5961, available at http://dx.doi.org/10.1016/j.
telpol.2012.06.013 (accessed 21 November 2017).
61
  Ibid.
62
  Consolidated Version of the Treaty on European Union and the Treaty on the Functioning
of the European Union (2010/C 83/01); EU Charter, Art. 391.
63
  Ibid. The latter implies that in a situation where many Member States developed measures recognizing access to the Internet as a fundamental right, the EU, in response to such a



168  Research handbook on human rights and digital technology

The European Parliament resolution on the Open Internet and Net Neutrality in Europe could be considered an example of such proactive action. The resolution encouraged 'the competent national authorities to ensure that traffic-management interventions do not involve anti-competitive or harmful discrimination' and that 'specialized (or managed) services should not be detrimental to the safeguarding of robust "best effort" internet access'.64
When considering the probability of this scenario, which relies on political developments in Member States, one should also take into account the impact and breadth of the term 'constitutional traditions common to Member States'. The CJEU, without examining the meaning of this term at length, recognized it as a source of inspiration for its case law, together with international treaties for the protection of human rights on which the Member States have collaborated or to which they are signatories.65 The Advocate General, in the request for a preliminary ruling in the Omega case, suggested that 'constitutional traditions common to Member States' provide 'the general legal opinion of all of the Member States [which] is essential to the particular evaluation of fundamental law'.66
According to the second scenario, the two European Courts may trigger recognition of the fundamental-rights character of Internet access. The Preamble to the EU Charter notes that the EU fundamental rights derive, among many other sources, from the case law of the CJEU and of the European Court of Human Rights (ECtHR).67 The former has been described as 'filling the void left by the legislative branch', also in the area of fundamental rights.68 The latter has a well-established role in shaping

development, would be obliged to integrate the newly emerged right or principle among its fundamental rights. EU top-down-driven regulation, as seen in the case of personal data protection, can have a significant impact on Member States' constitutional rights. For example, Hungary, Slovakia and the Czech Republic have recognized such protection; see Gloria González Fuster, The Emergence of Personal Data Protection as a Fundamental Right of the EU (Springer International Publishing, 2014) 175.
64
  European Parliament, Resolution of 17 November 2011 on ‘The Open Internet and Net
Neutrality in Europe’, 16.
65
 C-36/02 Omega, CJEU, Judgment of 14 October 2004, para. 33.
66
  Ibid. Opinion of Advocate General Stix-Hackl, 18 March 2004.
67
  While the two courts operate within different legal set-ups, the case law of both courts not only provides legal interpretation upon applicants' requests but often entails new and innovative solutions in response to the challenges of implementing prescribed legal obligations.
68
  Oreste Pollicino, ‘Legal Reasoning of the Court of Justice in the Context of the Principle
of Equality Between Judicial Activism and Self-restraint’ (2004) 3 German Law Journal 288. See
Francesca Ferraro and Jesús Carmona, Fundamental Rights in the European Union: The Role of
the Charter after the Lisbon Treaty (2015) 6, available at www.europarl.europa.eu/RegData/etudes/
IDAN/2015/554168/EPRS_IDA(2015)554168_EN.pdf (accessed 21 November 2017); Gabriel N.
Toggenburg, ‘“LGBT” Go to Luxembourg: On the Stance of Lesbian Gay Bisexual and Transgender
Rights before the European Court of Justice’ (2008) European Law Reporter 5. The jurisdiction of
the Court of Justice of the European Union is subject to certain limitations. The CJEU interprets
and reviews the legality of legislative acts issued by the EU institutions and agencies that have legal
effects on third parties. The CJEU was the first EU actor to explain that fundamental rights as
they are recognized in Member States constitute and are part of the EU (at that time the European
Community) general principles (Case 11/70 Internationale Handelsgesellschaft, CJEU, Judgment
of 17 December 1970). Following the Stauder case, the CJEU has been expanding the list of EU general principles by introducing fundamental rights on a case-by-case basis. The growing case law




the human rights landscape,69 and has paved the way for the recognition of new human
rights in treaties, such as the right to data protection.70 With an increasing number of
legal measures regulating the online environment, both courts are confronted with cases raising questions about legal issues involving the Internet. Many cases concern the EU's and states' attempts to regulate Internet content, its access and use. Freedom of expression and the right to privacy are the most frequently contested rights. For example, the CJEU in Scarlet Extended SA v. SABAM concluded that requiring Internet Service Providers (ISPs) to use systems that filter and block electronic communications may impinge on the fundamental rights of that ISP's customers to protection of their personal data and
their freedom to receive or impart information.71 The ECtHR on several occasions has
recognized that access to the Internet and its capacity to store and communicate vast
amounts of information are crucial ‘in enhancing the public’s access to news and facilitat-
ing the dissemination of information in general’.72
As ICT mediate a growing portion of societal interactions between different actors,
it can be anticipated that both courts will have to address questions arising from the
use of the Internet in the near future. Perhaps, in one of these cases it may be necessary
to address the meaning of access to the Internet. Through legal interpretation, new and innovative elements can be introduced into legal practice. To avoid criticism of 'judicial activism', however, uncontrolled extensions and innovations are not warranted. For
example, in Jankovskis v. Lithuania, the ECtHR recognized that ‘Internet access has
increasingly been understood as a right’ but the court has not further elaborated on

of the CJEU resulted in a set of principles that were applicable in a particular area. This posed a
potential risk of fragmented legal protection awarded to citizens in different policy areas. It can be
suggested that judicial activism by the CJEU provoked political discussions on the limitations of
the EU law which subsequently paved the way for the creation of the EU Charter. The EU Charter
now provides a comprehensive protection system of fundamental rights in all policy areas falling
within the EU competences. Furthermore, the CJEU via its case law often advances debates about
societal issues, such as the recognition of lesbian, gay, bisexual and transgender rights.
69
  The European Court of Human Rights (ECtHR) reviews acts issued by the participating
states in light of the rights and freedoms enshrined in the European Convention on Human Rights
(ECHR) and its amending protocols. The ECtHR judgments are adopted by a majority vote but
a judge or a group of judges who participated in the judicial process can provide a concurring or
dissenting opinion. In cases where such opinions are attached to the main judgment, heated public
debates tend to break out. For example, following the observations of the ECtHR that the right to privacy entailed certain limitations with regard to the protection of individuals' data, the Council of Europe adopted the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (No. 108).
70
  Paul De Hert and Serge Gutwirth, ‘Data Protection in the Case Law of Strasbourg and
Luxemburg: Constitutionalisation in Action’ in Reinventing Data Protection? (Springer, 2009) 3–45.
71
 C‑70/10 Scarlet Extended SA v. Société belge des auteurs, compositeurs et éditeurs SCRL
(SABAM), CJEU, Judgment of 24 November 2011, para. 50.
72
  Ahmet Yıldırım v. Turkey, Application no. 3111/10, ECtHR, 18 December 2012; Delfi v.
Estonia, Application no. 64569/09, ECtHR, 16 June 2015; Magyar Tartalomszolgáltatók Egyesülete
(MTE) and Index.hu Zrt v. Hungary, Application no. 22947/13, ECtHR. The ECtHR so far has
relied on the margin of appreciation doctrine, which neither requires setting some minimum crite-
ria for Internet content suppression nor invokes proportionality or necessity tests, which go beyond
the requirement of the rule of law (Delfi v. Estonia, Application no. 64569/09). Additionally, see W.
Benedek and M.C. Kettemann, Freedom of Expression and the Internet (CoE, 2013) 75–78.




what such a right could mean.73 Instead, the ECtHR limited its findings to the context of the case, concluding that 'Article 10 cannot be interpreted as imposing a general obligation to provide access to the Internet, or to specific Internet sites, for prisoners'.74 At the same time, it is widely recognized that both the CJEU and the ECtHR function in a particular legal system, where they may have to rely on high-level documents representing political agreements between countries.75 This point is closely
linked with the third scenario.
Thirdly, there is a possibility of formally recognizing Internet access as a fundamental
right in the EU through developments taking place at an international level.76 While at
first it may seem that legislative developments in Member States may carry the most
potential to introduce new fundamental rights into the EU Charter (see our first scenario,
discussed above), this third scenario may be similarly realistic. The 2011 UN report on the right to freedom of opinion and expression, prepared by the UN Special Rapporteur in this area, Frank La Rue, fuelled the debate on the recognition of Internet access as a fundamental right by urging states to work towards attaining the objective of universal Internet access in the name of fundamental rights.77 Following up on the report, some scholars considered the content of this right, and some even argued that access to the Internet is already protected by the current human rights framework.78 However, suggestions and observations made by academics had little impact on the perception of
access to the Internet at an international level. In June 2016, the United Nations Human
Rights Council adopted a resolution declaring states’ measures preventing or disrupting

73
  Jankovskis v. Lithuania, Application no. 21575/08, ECtHR, 17 January 2017, para. 62.
74
  Ibid. para. 55.
75
  Pollicino, ‘Legal Reasoning of the Court of Justice in the Context of the Principle of
Equality Between Judicial Activism and Self-restraint’, n. 68 above.
76
  According to the Preamble to the EU Charter, the EU fundamental rights are also comprised
of ‘international obligations common to the Member States, the European Convention for the
Protection of Human Rights and Fundamental Freedoms, the Social Charters adopted by the
Union and by the Council of Europe’. In other words, this list provides several examples of how
rights recognized at international level could eventually be incorporated in the EU fundamental
rights framework.
77
  UN General Assembly, Frank La Rue, Report of the Special Rapporteur on the Promotion
and Protection of the Right to Freedom of Opinion and Expression, A/HRC/17/27 (16 May 2011).
The Special Rapporteur acknowledged the Internet as a major facilitator of exercising the right to
freedom of opinion and expression, as well as other rights reinforced by the freedom of opinion
and expression (ibid. para. 22). The Special Rapporteur urged states to work towards attaining the
objective of universal Internet access, since only then would the Internet realize its full potential
as a medium enabling freedom of expression and opinion (ibid. para. 60). This position – that the
Internet enables the exercise of many other rights – has been reaffirmed by a recently issued, non-
binding United Nations Resolution. This Resolution builds on the idea ‘that the same rights that
people have offline must also be protected online’, but it does not suggest that access to the Internet
should be considered a new human right (United Nations, Human Rights Council, Resolution
on the ‘Promotion and Protection of All Human Rights, Civil, Political, Economic, Social and
Cultural Rights, including the Right to Development’, A/HRC/32/L.20 (27 June 2016)).
78
  Jason M. Tenenbaum, 503 Error, Service Unavailable: Access to the Internet as a Protected
Right (28 August 2013), available at SSRN: https://ssrn.com/abstract=2422566; Stephen Tully, ‘A
Human Right to Access the Internet? Problems and Prospects’ (2014) 14(2) Human Rights Law
Review 175–95.




access to or dissemination of information online to be infringements of the human rights framework.79 By affirming 'that the same rights that people have offline must also be protected online', this resolution indicates that the perception of access to the Internet has been evolving at the global level.80 States are now required not only to increase their efforts to facilitate access to the Internet but also to ensure that any limitations, including those imposed due to security concerns, respect obligations stemming from the international human rights framework.81
In 2014, the Parliamentary Assembly of the Council of Europe went a step further
than its previous recommendations or La Rue’s report. In the Resolution on the Right to
Internet Access, underpinned by a logic similar to La Rue’s report, the Council of Europe
recommended that its Member States should ‘ensure the right to Internet access’ on the
basis of 12 principles.82 The Council of Europe further elaborated its position in the
Internet Governance Strategy for 2016–2019 in which the Internet is considered to be not
only an enabler of individuals’ rights, but also an enabler of democracy.83 It is important
to note that the ECtHR in its case law often refers to soft-law measures adopted by the Council of Europe and other international organizations, as these measures signify a growing consensus on the perception of Internet access.
In sum, predicting the likelihood of each scenario is difficult. Nonetheless, mapping
scenarios is a useful thought exercise that helps formalize situations which can lead to a
policy change. After describing actions that would take place in each scenario, we have a
strong sense that these scenarios should not be considered in isolation. For example, the
ECtHR may refer to a resolution adopted by the Council of Europe. In this situation, the
second and third scenarios may be interlinked. Similarly, the first scenario could relate
to the third one, if discussions and obligations stemming from international instruments shaped policy actions at a domestic level. We deem that the possibility of two scenarios interacting with each other demonstrates legal complexity and increases the probability that access to the Internet will be recognized as a fundamental right in the EU.

7.  ADDED VALUE OF INTERNET ACCESS AS A FUNDAMENTAL RIGHT IN THE EU

The main argument for granting human rights status to Internet access is based on the
‘political’ conception of human rights.84 According to this conception, ‘the distinctive
nature of human rights is to be understood in light of their role or function in modern

79
  UN General Assembly, Human Rights Council, Thirty-second Session, ‘The Promotion,
Protection and Enjoyment of Human Rights on the Internet’, A/HRC/32/L.20 (27 June 2016).
80
  Ibid.
81
  Ibid.
82
  Council of Europe, Parliamentary Assembly, Resolution 1987, ‘The Right to Internet Access’
(2014).
83
  Council of Europe, Internet Governance: Council of Europe Strategy, CM(2016)10-final,
para. 6.
84
  Rowan Cruft, S. Matthew Liao and Massimo Renzo, Philosophical Foundations of Human
Rights (2015).




international political practice’.85 This in practice means that access to the Internet
should be recognized as a human right because the Internet is now deeply embedded in
contemporary life and it has justifiably gained the status of a driving force of personal,
social and economic growth. It is widely recognized that Internet access facilitates the
exercise of various rights, such as freedom of expression and information, freedom of
assembly and of association, and the right to education. It is an unpredictably strong
force, since technological innovations powered by the Internet are rapid. Undoubtedly,
the future of the Internet is difficult to anticipate. This unpredictability could be used as
an argument against the creation of a new right to Internet access. At the same time, it is reassuring that the concept of Internet access as a human right has emerged gradually. The investigation into the validity of such a new right needs
to be further advanced86 as any newly proposed measure to govern Internet access should
be able to withstand the test of time.
Our discussion above reveals that recognizing access to the Internet as a human right
on an international scale is presently attainable only within limits. At the same time,
developments at the Council of Europe indicate an emerging consensus on Internet
access recognition as a self-standing right in the European region. Here, the Internet is
no longer seen as a medium determining the effectiveness of individuals' rights but is considered valuable for its positive impact on modern society.87 The European developments demonstrate that over time the perception of access to the
Internet has evolved: from access to the Internet understood as a static structure (i.e. a
medium to exercise certain rights) to a more dynamic notion. According to the current
understanding, access to the Internet is a condition to enjoy benefits of democratic
societies, in particular, the enjoyment of cultural, civil and political rights.88 We think
it is useful to understand access to the Internet as (1) a right to access information; (2)
access to expression; and (3) an extension of other existing access rights, such as the
right of access to public spaces or government documents.89 Internet access is more
than just an instrument serving the right to exercise freedom of expression and other
interrelated rights; it is also a right to participate in public life. This conceptualization
has implications when one questions the scope of state obligations with regard to such
a right.

85
  Ibid.
86
  Michael L. Best, ‘Can the Internet be a Human Right?’ (2004) 4 Human Rights and Human
Welfare 23; Brian Skepys, ‘Is There a Human Right to the Internet?’ (2012) 5(4) Journal of Politics
and Law 15; Paul De Hert and Dariusz Kloza, ‘Internet (Access) as a New Fundamental Right:
Inflating the Current Rights Framework?’ (2012) 3(3) EJLT 9–11; Nicola Lucchi, ‘Freedom of
Expression and a Right of Access to the Internet: A New Fundamental Right?’ in E. Monroe,
E. Price, Stefaan Verhulst and Libby Morgan (eds), Routledge Handbook of Media Law (Routledge,
2013); Tully, ‘A Human Right to Access the Internet?’, n. 78 above.
87
  Council of Europe, Parliamentary Assembly, Resolution 1987, n. 82 above.
88
  Ibid.
89
  Perhaps the best example of such public space could be eGovernment portals which provide
diverse content and services.




8.  POTENTIAL SCOPE OF THE RIGHT TO INTERNET ACCESS

Whether Internet access merits recognition as an autonomous human right may be the
core dilemma for policy-makers and academics, but additional and more critical questions
remain unanswered. What would access to the Internet entail? Typically, the Internet can
be understood as ‘1) a network infrastructure, 2) access at the transport layer and services,
or 3) access to digital content and applications’.90 Which of these three options should be
placed at the core of the right to Internet access? Delineating the characteristics and scope
of such a right is a complex task and any entitlements of individuals pursuant to such a
right are far from clear.91 Clarifying these matters becomes even more crucial if one also
considers the likelihood of understanding Internet access not only as a fundamental right,
but also as a fundamental right creating positive legal obligations.92
Perhaps the extensive case law developed by the ECtHR, with its reliance on concepts and principles such as proportionality, reasonable effort and the margin of appreciation, could offer some guidance at this point.93 According to the ECtHR, positive human
rights duties include an obligation to take active measures, such as the adoption of an
enforceable legislative framework that ensures the protection of a particular right.94
However, the scope of the positive obligations can be limited by the principle of the margin of appreciation.95 It is not known how this margin of appreciation will play

90
  Nicola Lucchi, ‘Internet Content Governance and Human Rights’ (2014) 16 Vanderbilt
Journal of Entertainment and Technology Law 809.
91
  Tully, ‘A Human Right to Access the Internet?’, n. 78 above. For example, the European
Commission is of the view that Member States should provide access to the communications
infrastructure in ‘outdoor spaces accessible to the general public’, such as public offices, libraries,
health centres and outdoor public spaces. See European Commission, Explanatory Memorandum,
n. 28 above.
92
  De Hert and Kloza, ‘Internet (Access) as a New Fundamental Right’, n. 86 above. Most
human rights instruments, both at an international and at a European level, recognize negative (do
not interfere) and positive (make possible) human rights obligations. Recognizing Internet access
not only as a negative human rights obligation for states (i.e. ‘do not interfere with my Internet
access’), but also as a positive human rights obligation would oblige a state to guarantee that
everyone within its jurisdiction enjoys Internet access by taking positive actions towards achieving
that goal. The question becomes, what would be the scope of such actions?
93
  Margin of appreciation is believed to prevent the ECtHR from inflating the human rights
framework, Robin C.A. White, Clare Ovey and Francis Geoffrey Jacobs, The European Convention
on Human Rights (5th edn, Oxford University Press, 2010) 20.
94
  Council of Europe, Health-related Issues in the Case-law of the European Court of Human
Rights (June 2015).
95
  For example, in the 2003 judgment in Sentges v. The Netherlands, the ECtHR concluded
that although ECHR, Art. 8 entails a positive duty to protect a person’s physical and psychological
integrity, these positive duties under Art. 8 did not extend to the state’s obligation to provide a
severely disabled person with a robotic arm. The state that had rejected the claim had therefore
acted within the scope of ‘the margin of appreciation afforded to it’ (Sentges v. The Netherlands,
Application no. 27677/02, ECtHR). The ECtHR decision explains that Art. 8 can be invoked only
in exceptional cases. These exceptional cases would include situations where the ‘State’s failure
to adopt measures interferes with that individual’s right to personal development and his or her
right to establish and maintain relations with other human beings and the outside world’ (ibid.).
Furthermore, an individual should be able to demonstrate ‘a special link between the situation
complained of and the particular needs of his or her private life’.




out in conflicts related to Internet access. At the same time, it can be anticipated that positive measures taken by states will be constrained by available financial resources.

9.  IMPACT OF EGOVERNMENT POLICIES ON THE SCOPE OF THE RIGHT TO INTERNET ACCESS

It seems that the EU approach focusing on enabling widespread access to the Internet
(discussed above) aligns closely with Sandra Fredman’s point of view. According to
her, an assessment of whether a positive duty is fulfilled need not be based on a set of
predetermined steps, but rather on the principles of effectiveness, participation, accountability and equality of the measures taken.96 An important element in this discussion is whether governments are doing enough, and whether possible positive human rights obligations in this area are fulfilled, as governments increasingly rely on ICT to deliver eGovernment services. Questions such as the following could be asked: Do individuals have access to or possess a device enabling Internet access? Have they subscribed to an Internet service provider? Do they have the skills to browse the web and find relevant information?
With various aspects of human activity gradually shifting into the online sphere, many
government functions and services have become available online. The Internet, embedded in government functions, carries the promise of efficiency, increased convenience, flexibility and transparency that could reshape not only the provision of public services but also the public administration's relationship with its citizens.97 eGovernment is said to
benefit both sides of the governance spectrum: citizens and government.98 On the supply
side, governments have improved the quality of their public services by moving a portion
of their activities to cost-efficient online applications.99 At the same time, the emergence
of Web 2.0100 and the associated social media platforms have generated an endless array of
possibilities for interaction in the virtual sphere and thus increased the expectations of

 96
  Sandra Fredman, Human Rights Transformed: Positive Rights and Positive Duties (Oxford
University Press, 2008) 77.
 97  H. Margetts, ‘Transparency and Digital Government’ in C. Hood and D. Heald (eds),
Transparency: The Key to Better Governance? (Oxford University Press, 2006); John C. Bertot, Paul
T. Jaeger and Justin M. Grimes, ‘Using ICTs to Create a Culture of Transparency: E-government
and Social Media as Openness and Anti-corruption Tools for Societies’ (2010) 27 Government
Information Quarterly 264, at 265–66. Consequently, the familiar ‘e’ prefix, already partnered with
a handful of traditional notions (‘e-commerce’ being the most prominent example), has joined the
term ‘government’. ‘eGovernment’ therefore refers to the use of the Internet, and of information
and communication technologies (ICT) more broadly, to carry out governmental functions.
 98  P. Dunleavy, ‘Governance and State Organization in the Digital Era’ in C. Avgerou,
R. Mansell, D. Quah and R. Silverstone (eds), The Oxford Handbook of Information and
Communication Technologies (Oxford University Press, 2007) 410–25.
 99  P. Henman, ‘Government and the Internet: Evolving Technologies, Enduring Research
Themes’ in W. Dutton (ed.), The Oxford Handbook of Internet Studies (Oxford University Press,
2013) 294–98.
100  Tim O’Reilly, ‘What Is Web 2.0: Design Patterns and Business Models for the Next
Generation of Software’ (2007) Communications and Strategies 17.

Lina Jasmontaite and Paul de Hert - 9781785367724




Access to the Internet in the EU  175

the demand side (i.e. the citizens).101 That is a positive development for the EU, since its
Member States are among the world’s eGovernment leaders.102
In the EU, a significant segment (47 per cent) of the overall population makes use of
eGovernment services.103 Therefore, we consider that a new argument supporting the
recognition of access to the Internet as a universal service and a fundamental right in the
EU could be developed on the basis of the number of individuals who use eGovernment
services. All EU citizens should be entitled to access eGovernment services developed by
public administrations without undue restrictions. Indeed, when digitalizing public
services, governments should consider creating accessible online services, providing access to
equipment (e.g. Wi-Fi access points and user-friendly software) and teaching digital skills,
which would allow society to make the most of the truly impressive research and policy-
making effort that has been developed in pursuit of the EU’s eGovernment vision.104

10.  LIMITATIONS TO THE RIGHT OF INTERNET ACCESS

In general, restrictions limiting access to the Internet are perceived to be similar to restrictions
on freedom of expression, and they can therefore be divided into two groups. The first
group includes Internet content suppression practices that hinder the possibility to access,

101  eGovernment is therefore expected to enhance transparency and the active participation of
citizens in policy-making. See S. Coleman and J. Blumler, The Internet and Democratic Citizenship
(Cambridge University Press, 2009) 90–116; A. Meijer, B. Koops, W. Pieterson, S. Overman and
S. ten Tije, ‘Government 2.0: Key Challenges to Its Realization’ (2012) 10 Electronic Journal of
E-Government 59–60.
102  United Nations, E-Government Survey 2014: E-Government for the Future We Want (2014)
15. At the time of writing, 15 Member States of the EU are ranked as ‘World E-Government
Leaders’ by the UN; as early as 2001, the United Kingdom, the Netherlands, Denmark and
Germany were among the ten highest-ranking countries globally with regard to eGovernment
capacity. While the prerogative of developing eGovernment services rests with the Member States,
the EU has been setting the agenda for eGovernment. For more than a decade, the EU has been
encouraging Member States to launch high-quality eGovernment services that would serve the
interests of citizens and businesses. The EU has assumed the role of coordinator to secure a level
of homogeneity among different national policies. This coordination has proved necessary to
facilitate the functioning of the Single Market. See Francesco Amoretti and Fortunato Musella,
‘Toward the European Administrative Space: The Role of e-Government Policy’ (2011) 3 European
Political Science Review 35, at 36–37.
103  European Commission, A Digital Single Market Strategy for Europe, n. 5 above.
104  Ron Davies, e-Government: Using Technology to Improve Public Services and Democratic
Participation (European Parliamentary Research Archive, 2015) 9–13; for an overview of the
various reports and studies, see https://ec.europa.eu/digital-single-market/en/newsroom/reports-
studies-etc/public-services (accessed 21 November 2017). Since 2006, the Commission has pre-
sented three eGovernment Action Plans, each outlining the steps to be taken over a five-year span.
The third Action Plan covers the period 2016–2020, principally aimed at promoting the adoption
of eGovernment practices across Member States. It focuses on the development of better and more
cost-efficient public services for citizens and businesses through a spirit of cooperation among
Member States. In line with its predecessor, the current Action Plan determines policy objectives in
order to accommodate the objectives of the Digital Single Market Strategy. In particular, the third
Action Plan aims to ‘modernise public administration, achieve cross-border interoperability and
facilitate easy interaction with citizens’.


receive or impart information.105 The second group includes measures imposed on certain
individuals, such as prison inmates or sex offenders.106
Human rights instruments, both at an international and a European level, specify
the possibility of legitimate restrictions on content in order to protect the rights and
interests of others.107 Content that can be legitimately restricted includes (but is not
limited to) child pornography,108 hate speech109 and incitement to commit genocide,110
discrimination, hostility or violence.111 In practice, any measure targeting the restriction
of content must meet the criteria of legality, legitimacy and necessity, in order to be
justifiable.112
In some cases, courts may go beyond their well-established practice and principles
when addressing issues related to the online environment and introduce new concepts
to explain the normative content of existing rights. For example, the ECtHR noted that
the legal grounds for limiting individuals’ Internet rights must satisfy several requirements. In
Ahmet Yıldırım v. Turkey, concerning court orders (in third party cases) resulting in the
generic blocking of access to online services, the ECtHR, when considering the formal
and material criteria for blocking an online service, concluded that the wording ‘prescribed
by law’ requires a regulatory measure to be of a certain quality.113 In particular, the
ECtHR ruled that a regulatory measure limiting individuals’ rights in the online environment
should be precise, accessible to the public and foreseeable in its effects for the person
concerned. Additionally, such a measure should produce a predictable outcome and be
compatible with the rule of law.
In Jankovskis v. Lithuania, the ECtHR clarified its stance on the limitation of the rights
of prisoners, who are deprived of certain liberties. The ECtHR found that although
‘Article 10 cannot be interpreted as imposing a general obligation to provide access to the
Internet, or to specific Internet sites, for prisoners’, in this particular case Mr Jankovskis
should have been given access to the Internet. In particular, he should have been
provided with the possibility to access a website that contained

105  Paul de Hert and Lina Jasmontaite, ‘Internet Content Suppression’ in Andreas J. Wiesand,
Kalliopi Chainoglou, Anna Śledzińska-Simon and Yvonne Donders (eds), Culture and Human
Rights: The Wroclaw Commentaries (ARCult Media, 2016). Internet content suppression, sometimes
referred to as Internet censorship, may result from legislative measures (e.g. notice and take-down
requirements) or technical measures (e.g. filtering or blocking an Internet Protocol address).
106  Alisdair A. Gillespie, ‘Restricting Access to the Internet by Sex Offenders’ (2011) 19(3)
International Journal of Law and Information Technology 165.
107  Ibid.
108  Directive 2011/92/EU of 13 December 2011 on combating the sexual abuse and sexual
exploitation of children and child pornography, and replacing Council Framework Decision
2004/68/JHA [2011] OJ L335/1, art. 25.
109  Council Framework Decision 2008/913/JHA of 28 November 2008 on combating certain
forms and expressions of racism and xenophobia by means of criminal law [2008] OJ L328;
European Commission, Code of Conduct on Countering Illegal Hate Speech Online, available at http://ec.europa.
eu/justice/fundamental-rights/files/hate_speech_code_of_conduct_en.pdf (accessed 21 November
2017).
110  Convention on the Prevention and Punishment of the Crime of Genocide (1948), Art. 3(c).
111  International Covenant on Civil and Political Rights, Art. 20.
112  Tully, ‘A Human Right to Access the Internet?’, n. 78 above, 192.
113  Ahmet Yıldırım v. Turkey, Application no. 3111/10, ECtHR, 18 December 2012, para. 57.


information about study programmes in Lithuania.114 The ECtHR based its ruling on
the finding that the public has a right to receive information of general interest that is in
the public domain.115 The ECtHR also deemed that accessing this particular website was
closely related to his right to pursue education while in prison, a right provided for
by Lithuanian law. This judgment is in line with the earlier case of Kalda v. Estonia,
which concerned access to three specific webpages providing legal advice.116 Both cases
demonstrate that the ECtHR does not accept the justification provided by the states that
the restriction of Internet access by prisoners is necessary to deprive them of the ability
to commit new crimes. At the same time, the ECtHR is reluctant to consider access to
the Internet in a broader meaning. In fact, the ECtHR limits itself to the specific circum-
stances of a case and it does not take an opportunity to elaborate on questions of societal
relevance, such as a right to access the Internet or a right to access official information
held by public administration or governments.
Finally, the ECtHR case law demonstrates that questions concerning access to the
Internet are currently addressed through the lens of freedom of expression. This in turn means
that the limitations that may be imposed on access to the Internet are similar to the
restrictions on freedom of expression. These limitations have to be legal, proportional,
justified and necessary in a democratic society; otherwise, they risk infringing
fundamental human rights. But would these limitations be similar if the discussion on a
right to Internet access were decoupled from the freedom of expression that is the touchstone
of many rights?

11. CONCLUSION

In this chapter, we scrutinized the EU and its Member States’ regulatory approaches to
Internet access. We refer to Internet access as an element of the universal service idea,
as a constitutional right, as a fundamental right and as a human right, without indicating
a preference for any of these options. With this chapter, we aimed to contribute to the
societal and legal debates on the recognition of Internet access as a fundamental right in
the EU.
In spite of the techno-centric EU regulatory approach to Internet access, building
upon the notion of access rooted in the telecoms sector, numerous policy documents and
developments in the case law point to Internet access in EU Member States being more

114  Jankovskis v. Lithuania, Application no. 21575/08, ECtHR, 17 January 2017, para. 59.
115  Ibid. paras 52–53.
116  Kalda v. Estonia, Application no. 17429/10, ECtHR, 19 January 2016. The ECtHR in
Kalda v. Estonia sheds some light on this complex issue. Mr Kalda, an Estonian national who was
sentenced to life imprisonment, was refused access to three websites due to security and economic
considerations. Two of the websites were state-run and one belonged to the Council of Europe. Mr
Kalda wished to access them in order to carry out legal research for the court proceedings in which
he was involved. Given that Estonian law granted limited Internet access to prisoners, the Court
found that the refusal in question violated Mr Kalda’s freedom of expression. However, the Court
stressed that ECHR, Art. 10 could not be interpreted as obliging contracting states to provide
Internet access to prisoners. By analogy, a potential right to Internet access would be subject to
similar restrictions.


than a technical issue – it is an essential part of daily life. We deem that the recognition
of a right to access the Internet as a universal service or a fundamental right may lead
to positive policies, expand infrastructure and improve digital literacy. Whilst focusing
on a satisfactory level of broadband coverage within the EU territory can foster the
creation of a Digital Single Market, it does not supply reasons for a right
to Internet access that resonate with the human rights framework and that could
be transferred to other regions and countries not driven by economic incentives.
Access to the Internet may facilitate the enjoyment of fundamental rights and therefore
it should not be regarded only as a medium enabling economic participation. A strong
emphasis on the economic benefits of the Internet may reduce the overall value that
access to the Internet could have on European society. An ambition to connect everyone
who does not have access to the Internet should not be limited to the development of
better infrastructure. We pointed out that while the objective of having ‘[a]ccess to an
inexpensive, world-class communications infrastructure and a wide range of services’ set
by the Lisbon Strategy served well, allowing Internet take-up to advance at
a remarkable pace for almost two decades, more attention could be paid to policies and
measures addressing the demand side.117 Measures focusing on the demand side have
only recently surfaced in several policy documents and therefore access to the Internet is
still primarily considered to be an enabler of economic growth. Building on the observa-
tions of Girish J. Gulati and David J. Yates, we suggest that instead of focusing on the
potential economic benefits, the EU should emphasize the importance of demand-side
policies promoting access, participation in societal debates and use of eGovernment
services.118
In order to improve the current situation, the EU could consider, in line with a
2014 Council of Europe Resolution, recognizing access to the Internet as an
autonomous fundamental right.119 The Council of Europe position may be seen as a sign
of sufficient political will among the European countries to include access to the Internet
among other rights and recognize it as a self-standing right. We found that EU Member
States, the Council of Europe and both European Courts in Luxembourg and Strasbourg
are well equipped under EU primary law to recognize access to the Internet as a fundamental
right. In section 6 we discussed three plausible scenarios for recognizing access
to the Internet as a fundamental right in the EU. The scenarios are based on the reading
of the primary EU law, namely Article 6 of the TEU and the Preamble to the EU Charter.
Finally, we observed that currently, limitations to Internet access are similar to the ones
imposed on the freedom of expression. Perhaps, the recognition of access to the Internet
as a separate right would allow law- and policy-makers to reconsider the limits and scope
of such a right. It would then be possible to discuss the following questions: What do
we mean by access to the Internet, and what kind of Internet do we strive for? What kind of
Internet access would serve our society as a medium capable of enabling the exercise of
human rights? How could a fair balance be struck between different rights,

117  Lisbon Strategy, European Council, Presidency Conclusions, available at www.europarl.
europa.eu/summits/lis1_en.htm (accessed 21 November 2017).
118  Gulati and Yates, ‘Different Paths to Universal Access’, n. 60 above.
119  Council of Europe, Parliamentary Assembly, Resolution 1987 (2014), ‘The Right to Internet
Access’, para. 5.4.


such as the right to access the Internet and the rights to privacy, security and freedom of
expression?
The EU is a significant global player and it is reasonable to expect that its regulatory
approach may have ripple effects on a global scale. We strongly believe that the EU can
have a positive impact on Internet governance and the further development of the right to
access the Internet at an international level, if it broadens its perception of the implications
of Internet access. A reasonable way forward could be to reconsider the role
that eGovernment could play within the scope of the debates on the right to access the
Internet. We suggest that the EU could strengthen its push for the further development and
take-up of broadband services if it recognizes that access to the Internet is essential for
the use of eGovernment services.



10.  Reflections on access to the Internet in Cuba as a
human right
Raudiel F. Peña Barrios

1.  RIGHT TO THE INTERNET AS A HUMAN RIGHT


The first fundamental premise for considering the Internet1 a human right is that it is
humankind’s most important source of information. Although some refuse to accept it,
the Internet is today the main source of information available. Access to it has not
developed in a homogeneous way in all countries, for various reasons (cultural, social,
economic, technological and political, among others), yet it is now essential to have this
valuable tool. The construction of transparent and democratic societies without all the
possibilities that the network of networks provides is, today and for the foreseeable
future, a mere utopia.
One hypothesis that some defend is that citizens’ free use of the Internet can be
considered a power derived from the right to information, without prejudice to its
independent normative recognition. If we consider how interrelated these are, and what
is necessary for their proper exercise, we can understand that they arise from a common
concept: information. The progress of the communicative and informative media during
the twentieth century, whose peak was the development of the Internet, expanded and
modified people’s ability to communicate. This technological advance makes it necessary
to reformulate certain legal concepts, initially conceived to protect the issuer of
information against an eventual third-party intervener (authority or individual), so as to
extend their scope of protection to the recipients of information.
The traditional freedoms of expression and of the press, undoubtedly two of the most
important political-juridical formulations inherited from the European and American
revolutions of the late eighteenth century, have now been reformulated to reach new
perspectives.2 That individuals can freely express opinions, ideas or thoughts is
complemented by the truthful and timely reception of those opinions by others. At the same
time, contemporary societies need a media sphere that is as heterogeneous as possible. It
thus becomes possible to assess the information received by each person, and to allow the
consumer of the information to take a critical position, both with regard to the medium itself and

1  Throughout this chapter we will use the expressions ‘right to the Internet’ and ‘right of access
to the Internet’ as synonyms. Similarly, the terms ‘global network’ or ‘network of networks’ will be
used to refer to the Internet.
2  As regards freedom of expression on the Internet, see L. Cotino (ed.), Freedom of Expression
and Information on the Internet and Social Networks: Exercise, Threats and Guarantees (Servei de
Publicacions de la Universitat de València, 2011).


with regard to the materials it transmits. All this is impossible without access to the
Internet, because only the Internet provides a constant flow of information. As a
source of information and knowledge, Internet use becomes an essential vehicle of
freedom of expression. It serves to implement, exercise and enjoy more fully the
freedoms of expression and opinion and the other human rights that underpin the democratic
ideal of any contemporary society.3 This last argument is in line with the transversality of
the right to information, which enables the enjoyment of other rights, particularly civil
and political ones, such as the rights to participation,4 transparency and the casting of votes.
We need only consider, by way of example, that it is now possible to vote via the Internet
following the development of so-called electronic ballots.5
This issue has reached such relevance that the United Nations recognized the Internet
as a human right, following a report by the Office of the High Commissioner for Human
Rights in May 2011. In Resolution A/HRC/20/L.13 of the Human Rights Council (HRC),
adopted on 5 July 2012 and based on the reports of the Special Rapporteur on the
promotion and protection of the right to freedom of opinion and expression, presented
to the Council and the General Assembly, it was pointed out that the rights of individuals
should also be protected in cyberspace.6
At the same time, several states have incorporated this right into their legal systems,
with corresponding actions to guarantee it in practice: for
example, Greece (2001), Spain and Finland (2003), Germany (2004) and Turkey (2010).
A further step was taken by Switzerland (in 2006, with implementation commencing
in January 2008), Finland (2009) and Spain (2011), which recognized not just a right of
access to the Internet by any means, but access by broadband, with a minimum speed
of 1 Mb per second downstream. The Finns extended this to 100 Mb per second in
2015.7 Meanwhile, other international organizations which have established the use of
the Internet as a right are the European Union (EU) in 2002, and the Organization for
Security and Cooperation in Europe (OSCE) in 2011.

3  N. Lucchi, ‘Access to Network Services and Protection of Constitutional Rights: Recognizing
the Essential Role of Internet Access for Freedom of Expression’ (2011) 19(3) Cardozo Journal of
International and Comparative Law (JICL) 10, available at www.cjicl.com/uploads/2/9/5/9/2959791/
cjicl_19.3_lucchi_article.pdf (accessed 4 April 2017).
4  We refer to the explicit recognition of this right in some Latin American constitutions
adopted at the end of the twentieth century and the beginning of the twenty-first, e.g. in
Bolivia, Ecuador and Venezuela. Yuri Pérez, ‘Political Participation as a Functional
Requirement of Democracy: Analysis from the Constitutional Design of the Bolivarian Republic
of Venezuela’ in R. Viciano (ed.), Studies on the New Latin American Constitutionalism (Ed Tirant
lo Blanch, 2012) 241–68.
5  María Elizondo, ‘Ballot’ in E. Ferrer et al. (eds), Dictionary of Constitutional and Conventional
Procedural Law (Institute of Legal Research of the National Autonomous University of Mexico,
2014) vol. I, 126.
6  J. García, ‘Internet (right to)’ in E. Ferrer et al. (eds), Dictionary of Constitutional and
Conventional Procedural Law (Institute of Legal Investigations of the National Autonomous
University of Mexico, 2014) vol. II, 721.
7  José Carbonell and Miguel Carbonell, ‘Access to the Internet as a Human Right’ in J. Gómez
(ed.), Selected Themes of Private International Law and Human Rights: Studies in Homage to Sonia
Rodríguez Jiménez’ (Institute of Legal Investigations of the National Autonomous University of
Mexico, 2014) 35–36.


In some countries, Internet access has been recognized as an instrumental right, as a
consequence of freedom of expression. The most relevant examples are Portugal, Russia
and Ukraine. However, the most eloquent case is France, where this position was adopted
following a judgment of the Constitutional Council in 2009.8
Within the Latin American context, Mexico stands out. The constitutional reform
published in the Official Gazette of the Federation on 11 June 2013, among other items,
states in article 6, third paragraph, that the state will guarantee the right of access to
information and communication technologies, as well as to broadcasting and telecom-
munications services, including broadband and the Internet. The state’s goal is stated to
be to integrate Mexicans into the information and knowledge society through a universal
digital inclusion policy, with annual and six-year goals. In addition, it is provided that at
least 70 per cent of all households and 85 per cent of all micro-, small and medium-sized
enterprises (SMEs) at the national level must be able to use the Internet to download
information, with a real speed set in accordance with the average registered in the member
countries of the Organisation for Economic Co-operation and Development (OECD).
Finally, actions are to be taken to guarantee the use of broadband in premises of the
agencies and entities of the federal public administration, and the federative entities are
to do the same in the scope of their competence.9
A recent UN pronouncement came from the HRC, which adopted a resolution in 2016
stating that human rights must also be protected on the Internet.10 This Resolution gives
special importance to freedom of expression, since it is exercisable regardless of frontiers
and through any chosen procedure, in accordance with Article 19 of the Universal Declaration
of Human Rights and of the International Covenant on Civil and Political Rights. It
should be recalled that both international legal instruments refer not only to freedom of
expression but also to the possibility to seek and receive information and opinions, as well
as to disseminate them, and that these freedoms must be enjoyed independently of sex, skin
colour, religion, language, political opinion, etc.
Based on these arguments, the HRC recognizes the global and open nature of the
Internet as a driving force in accelerating progress towards development in its various
forms. In this regard, it urges states to promote and facilitate access to the Internet and
international cooperation aimed at the development of the media and of information and
communication services in all countries. It also affirms that quality education promotes
digital literacy and facilitates access to the Internet; and it encourages the analysis of
security concerns in cyberspace in accordance with states’ international human rights
obligations, so as to ensure freedom of expression, association and privacy.

8  P. García, ‘The Right of Access to an Open Internet’, available at http://tecnologia.elderecho.com/tecnologia/internet_y_tecnologia/derecho-acceso-Internet-abierto_11_953305005.html
(accessed 17 April 2017).
9  García, ‘Internet (right to)’, n. 6 above, 722.
10  Human Rights Council, Thirty-second session, Agenda item 3: ‘Promotion and protection
of all human rights, civil, political, economic, social and cultural rights, including the right to
development. The promotion, protection and enjoyment of human rights on the Internet’, A/
HRC/32/L.20 (27 June 2016), available at http://thehill.com/sites/default/files/a_hrc_32_l.20_english-Or-30-june.pdf (accessed 15 March 2017).


Additionally, and in what undoubtedly constitutes the strongest element of its content,
the Resolution condemns torture, extra-judicial executions, enforced disappearances
and arbitrary detentions, as well as the expulsion, intimidation, harassment and gender
violence committed against persons for exercising their human rights and fundamental
freedoms on the Internet. It is clear that this is intended to require sanctioning of any
such behaviour, since it is common in some national contexts for individuals who use the
network to disseminate ideas not accepted by the authorities to be persecuted. Related to
this last aspect, states are urged to ensure accountability on this issue. Similarly, states are
urged to take measures aimed at preventing Internet content suppression practices, while
stressing the importance of combating hate speech, which constitutes an incitement to
discrimination and violence on the Internet.
From the above, we can understand the current relevance of the Internet as a human
right. Should it be accepted as part of the right to information; as a stand-alone right
wholly independent of any other; or only as a right instrumental or accessory to
freedom of expression? It is clear that the inclusion of such rights in the legal system
contributes to making our societies more democratic. If the relevance of the Internet
to the most effective exercise of fundamental freedoms is accepted, then what really
matters is the state’s commitment and the corresponding legal protection of such rights,
rather than which theoretical position is accepted as the valid formulation.
It should be noted that the guarantees of such a right should be not only legal but
also practical. The latter should be understood as the actions that the state develops,
in conjunction with civil society and the international community, to ensure full
access to the Internet. They range from the development of literacy campaigns to
the implementation of an adequate communication infrastructure, complemented by
adequate legal regulation of every aspect linked to this theme. However, it is worth
clarifying that while the active use of the Internet has contributed to the collapse of some
authoritarian political regimes, such as those that fell during the Arab Spring, this did
not automatically lead to properly functioning democratic states. For that reason, it can
be affirmed that the communicative and informative advantages of the Internet are no
more than tools for the achievement of diverse ends, political ones among them, which
may, or may not, lead to the construction of democratic states and societies.

2.  CUBA AND THE GLOBAL NETWORK: POLITICAL-LEGAL AND SOCIAL ISSUES

Cuban political discourse has generally identified the infrastructure problems that the
country faces, given its economic limitations, as the main obstacle to access to the
global network. It is widely known that in terms of Internet access Cuba has one
of the lowest penetration rates in the world, especially from home. The International
Telecommunication Union (ITU, the United Nations agency of which the country is a
member) consistently places it at the bottom rungs of its ICT Development
Index (IDI), which integrates three dimensions: access, utilization and capabilities.
According to the 2015 edition, Cuba ranked 129th out of 167 countries, placing it in a

Raudiel F. Peña Barrios - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:38AM
via New York University

WAGNER_9781785367717_t.indd 183 13/12/2018 15:25


184  Research handbook on human rights and digital technology

worse position than five years previously, and irrespective of Cubans' income level or
educational attainment.11
To these problems we must add other limitations that have prevented recognition of
the use of the Internet as a human right, limitations associated with inconsistencies
inherent in the national legal system, the lack of direct applicability of the Constitution,
and other factors. In the first place, the negative interpretation made by the Council of
State of the Republic of Cuba of the precepts of the Constitution, understood as a law,
should be emphasized. The Constitution gives the Council of State (in article 90 incise
ch)12 the power to give the existing laws a general and obligatory interpretation, but to
date, at least in public, no agreements have been issued that establish the content and
scope of the rights recognized by the UN. Hence, there are no pronouncements that
would allow recognition of the use of the Internet as a right based on the interpretation
of a constitutional postulate.
Another aspect to consider is the lack of legislative development regarding freedom of
expression and the press. Since 1976, the Constitution has recognized both freedoms
under the premise that a law will regulate their exercise.13 However, such a law has never
been adopted, and practice in this matter remains highly mediated by the information
policy set by the Cuban Communist Party (CCP). In our opinion, this is not mere
legislative neglect (there are no coincidences in politics) but a clear stance at odds with
what is established by Cuban law. At the same time, this situation can be explained on
the basis of successive – and erroneous – interpretations of article 5 of the Constitution,
which states that the CCP is 'the superior governing force of society and the State', and
which has allowed the imposition of party guidelines outside the institutional boundaries
of this organization.14 In the case of the press, for example, the Granma newspaper is
the 'Official Organ of the CCP Central Committee', but other media, such as television
and radio, usually follow the same information lines. This complex panorama results,
among other things, in the lack of a coherent and harmonious legal system for the
exercise of rights associated with the free expression of ideas, including the use of the
Internet for such purposes.
The Cuban courts have likewise not yet ruled on the recognition of access to the Internet
as a human right, and this has been influenced by several factors. What stands out most
of all is the lack of direct applicability of constitutional rules in the resolution of cases
submitted to the judicial bodies. It should be borne in mind that the Cuban text was
conceived more as a political manifesto or declaration of principles than as a legal
provision per se. Hence, article 53, where the freedoms of expression and of the press
are now included,15 has not been interpreted judicially, which would have allowed the
use of the Internet to be established as part of these freedoms.

11  R. Torres, Internet and the Future of the Cuban Economy (21 February 2017), available at http://progresosemanal.us/20170221/Internet-y-el-futuro-de-la-economía cubana/ (accessed 5 May 2017).
12  M. Prieto and L. Pérez, Legislative Selection of Cuban Constitutional Law (1st edn, Félix Varela (ed.), 1999) 37.
13  The full text of the Cuban Constitution can be found in (1976) 5(11) Cuban Journal of Law (January–June) 135–81. The article on freedom of expression and of the press is at ibid. 52, see 153.
14  Prieto and Pérez, Legislative Selection of Cuban Constitutional Law, n. 12 above, 14.
15  Note that the regulation of the exercise of these rights continues to be referred to law. Prieto and Pérez, Legislative Selection of Cuban Constitutional Law, n. 12 above, 55.



Access to the Internet in Cuba  185

In this same sense, two other issues that hinder jurisdictional intervention are evident.
One relates to the fact that in Cuba there are no special procedures for defending before
the courts the rights that the Constitution regulates, while the Civil, Administrative,
Labour and Economic Procedure Act excludes constitutional matters from the
jurisdiction of the Civil and Administrative Chambers.16 The other has to do with the
absence from the Constitution of an open rights clause, one which accepts the
progressivity of rights and allows new rights to be incorporated into the constitutional
framework through interpretation, without the need for specific reform.17
In light of all this, it seems obvious that the problems of Internet access in Cuba are
not merely economic or infrastructural. Independently of these factors, which should
not be overlooked, there are particular anomalies that have prevented adequate legal
protection of the freedoms of expression and of the press, as well as of other information
rights such as Internet access. That is why these issues should be taken into account
with a view to the constitutional reform that is expected to take place in 2018 or
thereafter.
Despite the complexity of the national context, the public services provided directly
by the state include the Internet, although its development as such is still in progress.
The Ministry of Communications (MINCOM) has absolute control over this sector.
This control is exercised through the Telecommunications Company of Cuba, SA
(ETECSA), a monopolistic entity in that it faces no competition from any other national
or foreign company. The Company offers mobile telephony services with 2G technology
through another company, Cubacel, although some 3G networks are already in
operation and there are plans to reach the 4G standard in the short term. To date,
therefore, only voice transmission and short messages (SMS) are provided, but not data.
However, Cubacel offers its users an email service accessible from the mobile phone,
called Nauta, whose cost is equivalent to US$1 per megabyte. This service requires
prior registration of the user and offers no minimum guarantees of privacy of
communications. Although it is common in many countries for a single company to
provide all these services, in the Cuban case this has an impact on costs, since the tariffs
are fixed in one currency, the Cuban convertible peso (CUC), whose value is 25 times
higher than the currency in which salaries are paid, the Cuban peso (CUP).
The Company also enables use of the Internet through so-called navigation rooms
(a kind of cybercafe) and other venues. In June 2013, MINCOM issued Resolution
No. 197,18 which allowed citizens to use the global network (through Nauta ETECSA).
For the first time it was possible to have extensive access, depending on individuals'
economic capacity and on connectivity. As an initial step, 118 navigation rooms were
established in the country, at very high prices relative to

16  Law no. 7, 19 August 1977, art. 657, s. 4, in National Organization of Collective Bodies, Law of Civil, Administrative, Labor and Economic Procedure (ONBC Publishing, 2012).
17  Daniela Cutié, 'Rights in the Cuban Constitution of 1976: A Re-reading Needed for 40 Years of Effectiveness' in Andry Matilla (ed.), The Cuban Constitution of 1976: Forty Years of Effectiveness (UNIJURIS, 2016).
18  Available at www.mincom.gob.cu/sites/default/files/marcoregulatorio/R%20197 13%20Condiciones%20Generales%20areas%20de%20Internet.pdf (accessed 5 May 2017).


average wages. Two years later, the Wi-Fi service was expanded to different public
spaces in the country: there were 35 initial points, and a cost reduction was established.
By the end of 2016, there were 611 rooms in the country with 2,005 booths, and 346
Wi-Fi access areas, which at peak hours carry traffic exceeding 200 megabits.19 In
January 2018, the first steps were taken, through a so-called pilot plan, towards
connectivity for people in their homes.
According to research carried out in the country on the impact of opening up the
Internet for private use, interesting issues stand out from opinions gathered among the
consumers of this service. For many, it has become a source of information for
professional or student tasks. Only for a minority have these spaces served to cover
particular information agendas that do not harmonize with those of the traditional
media.20 This means, in our opinion, that those who access the Internet in this way do
not use it, at least not in the majority, to break the monopoly of the official press. Two
elements among others, one objective and the other subjective, could be influencing this
situation (although it should be clarified that it is not the purpose of the present chapter
to delve into this subject). First, we should consider the high cost of the connection,
which is very often slow and unstable. Secondly, there is little or no development of an
information culture that might lead to the diversification of Cubans' information
sources. Making Internet services available to the Cuban people, at prices within the
average economic possibilities of the island's inhabitants, will be a step forward. On the
one hand, it will place society in a more favourable position to exercise better control
over the work of the government, while facilitating transparency and accountability. It
should also serve to facilitate the full enjoyment of certain fundamental freedoms, such
as those of expression and the press, contributing to a much-needed democratization
and opening up, especially after the re-establishment of diplomatic relations with the
United States of America.
In addition, from the legal point of view, the limitations imposed on the provisions of
the International Covenant on Civil and Political Rights, in relation to the provisions of
the Constitution, should be pointed out. When signing the aforementioned international
treaty, the Cuban government declared that it was the revolution that allowed its people
to enjoy the rights enunciated in it, and that the civil and political rights of Cuban
citizens are those enshrined in the internal legal system. For this reason, it affirmed that
the policies and programmes of the state guarantee the effective exercise and protection
of these rights for all Cubans. With respect to the scope and implementation of some of
the provisions of the Covenant itself, the Cuban government stated that it would make
such reservations or interpretative declarations as it considered appropriate.21

19  J. Domínguez, Blessed Wi-Fi that I Do with You (20 April 2017), available at http://progresosemanal.us/20170420/bendita-wi-fi-hago-contigo/ (accessed 5 May 2017).
20  Ibid.
21  See the declaration of the Cuban government, available at www.jornada.unam.mx/2008/02/29/index.php?section=mundo&article=031n1mun; and at https://treaties.un.org/doc/Treaties/1976/03/19760323%2006-17%20AM/Actions/declaration%20by%20Cuba.pdf.


3.  CONSTITUTIONAL AND POLITICAL GROUNDS FOR RECOGNIZING INTERNET ACCESS AS A HUMAN RIGHT: WHERE ARE WE AND WHERE COULD WE GO?

The Cuban Constitution dates from 1976 (amended in 1978, 1992 and 2002) and, as a
result of the historical moment in which it was adopted, bears a strong influence of
so-called socialist constitutionalism, emanating from the Union of Soviet Socialist
Republics (USSR) and other countries of Eastern Europe. This means that, from a
philosophical and ideological point of view, human rights were not conceived according
to the liberal logic that allows them to be asserted against the state. On the contrary, the
vision that has been followed to the present day considers human rights as specific
faculties that citizens may enjoy, but always within the limits that the state imposes,
without the possibility of placing them before the state's goals or objectives. This
position finds support in article 62 of the Constitution, which makes explicit the
impossibility of exercising a constitutional right against the existence of the socialist
state, or against the decision of the Cuban people to build socialism and communism;
violation of this principle is declared punishable.22
In spite of all this, in our opinion it is possible to find in the Constitution provisions
that underpin the need to recognize the right of access to the Internet in the current
circumstances. The first of these is article 1, which establishes Cuba as a socialist state
of workers, constituted on the basis of the Martí maxim 'with all and for the good of
all', for the enjoyment of political freedom, social justice, individual and collective
well-being and human solidarity.23 It is precisely these values that can be considered
the starting point for recognizing the right of access to the Internet within the catalogue
of fundamental rights that derive from the historical constitutional reform.
Given the rapid development of information technology, the extent to which the
Internet can serve as a tool for political freedom and for the well-being of society and
individuals in the political, economic and social order is unquestionable. It should be
noted that Cuba enjoys a considerable advantage, compared to countries in Latin
America and other countries of similar economic development, as regards full use of
the Internet, since a basic element in this sense is a high level of educational achievement.
Since the literacy campaign of 1961, and for more than 55 years, the country has
consolidated an education system highly valued at the international level, including by
international organizations of which it is not a member, such as the World Bank (WB)24
and the International Monetary Fund (IMF). This places it in a very favourable position
for effective use of the Internet in terms of economic, political and social development,
both collective and individual.
As part of the so-called updating of the Cuban model, the country's management
should be assessed in terms of the imprint of the global network on both economic
development and political participation, and its relation to the satisfaction of individual

22  Prieto and Pérez, Legislative Selection of Cuban Constitutional Law, n. 12 above, 31.
23  Ibid. 13.
24  S. Lamrani, World Bank Admits Cuba has the Best Education System in Latin America (4 March 2015), available at www.resumenlatinoamericano.org/2015/03/04/banco-mundial-admite-que-cuba-has-the-best-educational-system-of-america latina/ (accessed 4 May 2017).


and collective interests. In the first of these fields, it should be highlighted that the
Internet has made it possible, among other factors, to increase the transportability of
services (a strategic sector of the Cuban economy), making cross-border provision
viable. These same developments could facilitate the coordination of internationally
dispersed activities. Similarly, use of the network would reduce the cost of decentralized
data processing and distribution, favouring access to information and its use for
productive purposes.25
As for citizens' participation in public life, the Internet could be an effective tool.26
Where people currently have no possibility to participate, because spaces for that
purpose cannot be organized without disrupting today's social dynamics, the Internet
could help to make consultation of the people less exceptional than it has been to date.
In this way, citizens could comment on draft laws under discussion, suggesting solutions
different from those proposed by officials.27
In addition, it is worth noting the provisions of article 9 of the Constitution,28 since it
establishes a state obligation to ensure the educational, scientific, technical and cultural
progress of the country. The Internet can undoubtedly contribute in these fields,
considering the breadth and diversity of the sources of knowledge that come together
on the web.
Consequently, the Constitution itself states (article 39) that the state 'directs and
promotes education, culture and science in all its manifestations'. In its educational and
cultural policy, it maintains, among other postulates, that creative and investigative
activity in science must be free. It therefore stimulates and facilitates scientific
investigation, prioritizing research directed at solving the problems that concern the
interests of society and the benefit of the people.29 Nevertheless, the Internet, viewed
politically as a platform for the distribution of material originating in the United States,
was not always regarded positively by the Cuban leadership. The then President of the
Councils of State and of Ministers, Fidel Castro Ruz, in 1995 called it a 'Trojan horse',
destined to promote subversion and division – an instrument for the diffusion of
imperialist propaganda and the manipulation of consciences. Years later, in 2012, he
would refer to the global network as 'a revolutionary instrument that allows us to receive
and transmit ideas, in both directions, something that we must know how to use'.30
Since the country is in a process of economic reform, and since for many, including
myself, it is irrefutable that transformations are also necessary in the political and legal
order, I consider the creative use of the Internet essential in both respects. The most
recent congress of the CCP, held in 2016, adopted the 'Conceptualization

25  Torres, Internet and the Future of the Cuban Economy, n. 11 above.
26  On the potential of the global network to enable political participation, see D. Shah, 'Information and Expression in a Digital Age: Modeling Internet Effects on Civic Participation' (2005) 32(5) Communication Research (October) 531–65, available at http://crx.sagepub.com/cgi/content/abstract/32/5/531 (accessed 5 February 2017).
27  J. Fernandez, The 'Hot Zone' of Cuban Democracy (11 May 2017), available at http://oncubamagazine.com/columnas/la-zona-caliente-de-la-democracia-cubana/ (accessed 11 May 2017).
28  Prieto and Pérez, Legislative Selection of Cuban Constitutional Law, n. 12 above, 14.
29  Ibid. 24.
30  O. Perez, Fidel Castro: 'Internet is a Revolutionary Instrument' (7 March 2012), available at https://lapupilainsomne.wordpress.com/2012/03/07/fidel-castro-internet-es-un-Instrument-revolutionary (accessed 4 May 2017).


of the Cuban Economic and Social Model of Socialist Development' and the 'National
Development Plan until 2030: Proposal of Vision of the Nation, Axes and Strategic
Sectors'. These documents, whose content according to the authorities constitutes
the programme of planned actions to boost national development in the medium and
long term, establish some issues associated with the Internet. On the one hand, it is
recognized that information technologies, communications and automation should be
developed so that they contribute to active citizen participation, especially by young
people; to the elevation of knowledge and of the level and quality of life; to innovation;
to the perfection of the state; and to the performance of the national economy and the
social sphere. The documents also describe information, communication and knowledge
as public assets and citizens' rights which must be exercised responsibly.31 However,
they do not refer to Internet access as a human right, failing to acknowledge that the
right to information cannot be conceived as such without use of the information flows
that the network provides.
At the international level, the government's position on this issue has not varied. The
most recent example came during the 2016 vote on a resolution of the UN Human
Rights Council affirming the need to protect and promote human rights in the digital
sphere to the same extent, and with the same commitment, as in material space.32
Along with Cuba, those voting against the resolution included China, Russia, Saudi
Arabia, Qatar, the United Arab Emirates, Bolivia, Venezuela, Ecuador, Burundi,
Kenya, South Africa, the Republic of Congo, Indonesia, India and Bangladesh.
However, this UN body committed itself to further examining the promotion, protection
and enjoyment of human rights, including the right to freedom of expression, on the
Internet and other technologies, as well as the way in which the Internet can be an
important instrument for the development and exercise of human rights. We believe
that as the country progresses in implementing public policies to promote access to the
global network, it will have the opportunity to change its position on this issue within
the multilateral agenda. The contrary would be, in our judgement, a gross political
error, given the fundamental nature of the question and taking into account the
recognition of information as a citizens' right in the political documents cited above.
In Cuba today, a positive vision of the Internet is indeed being advanced from the
apparatus of power. However, this change of position has not so far (as an example of
what has been commented on above) resulted in a change in the policies and regulations
of this field. The Internet is regarded as a good mechanism for the dissemination of
content, as long as that content does not contravene a set of rules and principles aimed
at maintaining the current socio-political and socio-economic regime. Any use of the
Internet for other purposes, particularly open criticism of the system, runs the clear
risk of being removed from the network by the authorities, without prejudice to actions
that may be taken directly against the author of such content.
In addition, we consider that the actions that are being carried out by the state are

31  Cuban Communist Party (CCP), 'Conceptualization of the Cuban Economic and Social Model of Socialist Development; National Development Plan until 2030: Proposal of Vision of the Nation, Strategic Axes and Sectors', GRANMA Newspaper (Havana, 2016) 8.
32  F. Ravsberg, Cuba Votes Against Considering the Internet as a Human Right (28 June 2016), available at http://cartasdesdecuba.com/cuba-vota-en-contra-de-considerar-a-internet-as-un-ddhh/ (accessed 19 April 2017).


largely motivated by the intention to show some flexibility in the management of the
Internet. Note that the expansion of the service from June 2015 was coordinated with the
first actions of the process of normalization of diplomatic relations between Cuba and
the United States. Once former US President Obama announced, as one of the measures
aimed at easing the embargo, the start of investment in the Cuban telecommunications
sector, the government began to implement measures to expand connectivity capabilities.
As a significant example of the current vision in this area, mention should be made
of the speech by Miguel Díaz-Canel Bermúdez, member of the Political Bureau of the
Central Committee of the Communist Party of Cuba and First Vice-President of the
Councils of State and of Ministers (and successor to Raúl Castro at the head of the
country), at the closing of the First National Workshop on Computerization and
Cybersecurity in Havana, on 20 February 2015. First, he affirmed the commitment to
carry out a process of computerization of society by increasing the use of information
technologies to raise the well-being of the population and accelerate economic and
social development, as well as to 'publicize the arguments of Cuba' and 'our truth in
the network'. This last idea, together with others expressed in the speech, emphasizes
the instrumental character of the Internet and its interrelation with the use of the new
technologies for the fulfilment of the political ends of the state. Likewise, he stated that
the right of access to the Internet is accompanied by duties of citizens, organizations
and institutions towards society. This means that their use of the Internet must be
adequate and consistent with the Constitution and the laws, and implies the
responsibility to ensure the defence of the country and its integrity, a task that must
fall to the control bodies that watch over the defence of the state.
In this sense, Díaz-Canel emphasized that socialism gives a preferential place to the
right to information in order to achieve the full exercise of criticism and the participation
of the people, so that as we all define the country we want to see, it will become clearer
how the Internet can be put at our service. Hence, regulations governing access to the
network should be consistent with social norms, principles and policies, as well as
transparent for all citizens. Finally, emphasis should be placed on the speech's references
to close cooperation with China and Russia on cybersecurity.33
Linked with this, he recalled plans to spy on governments and people using these
technologies in a perverse way. These latter ideas are relevant in two senses. On the one
hand, they highlight the notorious conceptions that these countries hold in this area,
which deviate from the international standards accepted within the framework of the
UN and other organizations. On the other hand, the reminder of recent espionage via
the Internet can be interpreted as the manifestation of a defensive posture towards the
network and its potential uses. In fact, the speech under discussion emphasizes the
tactical change represented by the flexibilization of US policy in favour of Cuba, which
nevertheless retains the aims of destroying the Revolution and penetrating society
subversively. This is why the need to advance the use of the global network as a kind of
battlefield is emphasized.

33  R. Elizalde, Diaz-Canel: There is the Will to Put Informatization and the Internet at the Service of All (+ Video) (20 February 2015), available at www.cubadebate.cu/noticias/2015/02/20/diaz-canel-existe-la-voluntad-del-partido-y-el-gobierno-de-poner-la-internet-al-servicio-de-todos/ (accessed 4 March 2017).


The rest of the principles defined by Díaz-Canel can be summarized in the following
ideas:34

● The strategy of access to the Internet must become a fundamental weapon of
revolutionaries to achieve social participation in the construction of the project of
the society that we want to build, based on a comprehensive country design. The
strategy of using the Internet for sustainable human development, in accordance
with the model of Cuban society, must be led by the CCP and must involve all
institutions and society in order to make the fullest use of its potential for
development. On this point, it is clarified that the whole system of work is directed
by the highest instance of the party organization, the state and the government,
through the Council of Computerization and Cybersecurity created in 2013, with
the mission of protecting, coordinating and controlling the policies and integral
strategies of this process.
● The Internet as a means of access to information and communication poses
challenges to the prevailing forms of organization and social participation.
● The Internet poses challenges to traditional forms of social communication, to
the use of the media, to the role of individuals in the public space and requires
the existence of new policies, rules and ways of functioning, which should align
infrastructures, services and content to guarantee that right.
● The promotion and universalization of access and use of the Internet should be
part of the national cultural development process in its broadest sense and should
be accompanied by the promotion of national cultural production, promotion of
its values and the widest national and international dissemination.
● It is part of the basic infrastructure for the development of the economic and
business activities of the country and of the national capacities in this field. At the
same time, it is an economic activity with high development potential.
● In this context, the creation of an Internet infrastructure according to Cuba's
possibilities should be encouraged, as a basis for the development of economic
activities at all levels: state, cooperative and self-employed sectors.
● The Internet is a potential generator of services and economic activities that are
themselves sources of employment, resources and economic growth.
● The Internet is a platform for national development that is subject to social control.
● It is imperative to ensure effective management of IT resources and to establish
accountability mechanisms to verify the extent to which use of the Internet serves
the country's development goals and the improvement of social standards.
● It is an administrative duty and responsibility to ensure that resources allocated
to social goals are used in that direction, and that the available resources are
deployed to support the nation's priority goals.

As we said earlier, this speech by the First Vice-President of the Councils of State and
of Ministers was delivered at the First National Workshop on Computerization and

34  See ibid.


Cybersecurity, held from 18 to 20 February 2015 at the Center for Integrated Technologies
Research (CITR), located at the José Antonio Echeverría Polytechnic Institute.35 In this
space, discussions addressed, among other issues, the problems presented by Internet
access in the country and potential solutions to them. In addition, ideas were gathered
about the bases for an information society policy and the national priorities in this
sector. In short, the existence of a fragmented, sectionalized and disintegrated regulatory
framework, the result of incoherent legislation, was recognized as the root of the current
problems of Internet access in the country: inefficiency of services; the existence of
illegalities; lack of transparency in the use of Cuban Internet resources; and complexity
in the approval of Internet access for individuals and institutions, among others.
Officially, this meeting adopted a document entitled Bases and Priorities for the
Improvement of the Computerization of Society in Cuba,36 the principles set out in which
indicated those areas in which, according to the official vision, it would be strategic
to make use of the network of networks. One of these refers to the legal framework
without mentioning the possibility of recognizing the Internet as a human right. It only
acknowledges the need to update existing legislation; to create a regulatory framework
that guarantees to promote and develop the use of information technologies; as well as
to reform the regulations on intellectual property and copyright in those matters related
to such technologies.
Also from the legal point of view, it is worth noting that, during the Workshop, it was decided to create a social organization grouping together professionals from the computing field under the name of Union de Informáticos de Cuba (UIC). Its main objective is to unite the professionals of this sector in a single entity, maintaining a more direct and close link with the authorities.
In practice, this translates into the exercise of the strictest control from the structures
of power over the new organization, taking into account that particular limitations to the
right of association have been constitutionally recognized. According to article 7 of the
Cuban Constitution,37 the state recognizes and encourages mass and social organizations
that bring together different sectors of the population, represent their specific interests
and incorporate them into the tasks of building, consolidating and defending socialist
society. The existence of these organizations is limited by the state’s interest in accepting
into the socio-political dynamics a certain entity that adheres to the ends of the state itself.
According to article 54 of the Constitution,38 the rights of assembly, manifestation and
association are exercised by manual workers, intellectuals, peasants, women, students and
other sectors, which have set up the necessary means to that end. The mass and social
organizations have all the facilities for the development of these activities, in which their

35  Available at www.cubadebate.cu/noticias/2015/02/19/reanudan-los-debates-del-i-taller-nacional-de-informatizacion-y-ciberseguridad/#.WOKT97hOm1s/ (accessed 4 May 2017).
36  This document is available in PDF format at www.cubadebate.cu/noticias/2015/02/19/reanudan-los-debates-del-i-taller-nacional-de-informatizacion-y-ciberseguridad/#.WOKT97hOm1s/ (accessed 4 May 2017).
37  Prieto and Pérez, Legislative Selection of Cuban Constitutional Law, n. 12 above, 14. On the official website of the UIC there is an express reference to this article of the Constitution, see www.mincom.gob.cu/?q=paneluic.
38  Prieto and Pérez, Legislative Selection of Cuban Constitutional Law, n. 12 above, 29.



Access to the Internet in Cuba  193

members enjoy the widest freedom of speech and opinion, based on the unrestricted right
to initiative and criticism.
In correspondence with these limitations, the Act of Associations, in force since 1985,
develops this precept, establishing two fundamental requirements for the existence of
an association. First, the presence of the so-called relationship bodies within the state
structure is required, which are responsible for interacting with the organizations that
are created, and in the case of the UIC is the MINCOM. In addition, there can only
be one organization for each sector of society, so in the case of specialists in the area of
information technology and communications, the only one that is accepted as a legitimate
association for them is the UIC, and it is not possible to create another similar entity.39
More recently, in March 2017, a meeting of the Council of Ministers took place,
which approved a comprehensive policy for improving the computerization of society
in Cuba.40 At that meeting, it was determined that an integral policy was required
to define the computer technologies as a strategic sector for the nation, due to what
they could offer for the growth of the economy, and the development of society.
Returning to an instrumental view of the Internet, it was established that the network should become a weapon for the defence of the Revolution, one that would guarantee the adequate security of cyberspace in the face of threats, risks and attacks of all kinds. As a consequence, it was considered essential for the
evolution of this sector to preserve and continue the development of associated
human capital, while increasing the access of citizens to the use of new technologies.
Implementing this policy entails further actions, such as the establishment of a national platform that encourages the generation of content and guarantees the possibility of sharing it. According to the official position,
they will be aimed at strengthening the identity and preserving the values of Cuban
society, as well as developing and modernizing the technological infrastructure, paying
special attention to the deployment of broadband in Cuba.

4.  CONCLUDING REMARKS

The contributions of Internet access to transparency and the right to information are undeniable. Transparency demands are essential to crystallize the links between citizens and their rulers, becoming a means to control the latter’s management and to demand accountability for their actions. Considering the
global context, there is no doubt that the openness to knowledge is of extraordinary value
in all orders, both for individuals and for society as a whole. Therefore, on these grounds, it
is logical that juridical doctrine, in the properly normative order, has recognized access to
the Internet as a human right, considering it either as part of the right to information and

39  On the limitations imposed by the Constitution and the relevant law on the right of association in Cuba, see R. Peña, Do We Need a New Law of Associations? (28 March 2017), available at http://progresosanalanal.us/20170328/we-need-a-new-law-associations/ (accessed 28 March 2017).
40  Y. Puig, Meeting of the Council of Ministers (2 March 2017), available at www.granma.cu/cuba/2017-03-02/efectuada-reunion-del-concilio-de-ministros-02-03-2017-21-03-10-policy-on-internet/ (accessed 2 March 2017).




to education, or as an autonomous right. In the Cuban case, there is no explicit recognition of the right to information and still less to the Internet, either as part of such right
or as a constitutional right of its own content. Given the characteristics of the political
and judicial systems, as well as the praxis in both fields, there have been no interpreta-
tions of the constitutional provisions which have made possible the recognition of the
aforementioned rights. This omission has carried over into the current Conceptualization of the Cuban
Economic and Social Model of Socialist Development, and the National Development
Plan until 2030: Proposal of Vision of the Nation, Axes and Strategic Sectors, in which,
despite referring to the potential of access to the Internet for the economic, political and
social development of the country, there is no recognition of it as a fundamental right.
The government’s position remains focused on controlling the use of the global network
by citizens, and allowing its use only as long as it is in accordance with state interests.
Hence, the current political discourses regarding the Internet, as well as the decisions
that have been taken in this field, denote a defensive posture with respect to the material
that might be circulated. Clearly, this constitutes a very strong obstacle to the potential
recognition of access to the Internet as a human right.
However, we do think that in the future, a wider opening up of the country in its
relations with the world will take place, something that will inevitably be mediated by the
complete normalization of its relations with the United States; and with the generational
change in the government, more progress can be achieved in this area. It is probable that in
the future in Cuba, under certain circumstances, economic incentives could shape political
debates on Internet access in a more efficient manner than arguments derived from the
logic of civil and political rights or economic, social and cultural rights. The stimulation
of access to the Internet could come hand in hand with specifically economic interests.
It will have to be recognized that it is pointless to try to contain the vast amount of information circulating on the Internet by building political and legal dykes.
The nation would be the winner if these resources were instead put into teaching Cubans
how to use the Internet, giving them tools that allow them to draw upon even the most
misleading of the current content; legitimizing its use as a space to disagree about, dissent
from and question the most complicated issues of national reality. This would be part of
the achievements of a country that thinks about its future, a similar result to what it meant
to universalize public education and health in the distant 1960s. It could be considered
an achievement of the new generation to promote the legal formalization of access to the
network of networks as a human right; an issue that will inevitably have to form part of
the necessary democratization of Cuba.



11.  Surveillance reform: revealing surveillance harms
and engaging reform tactics
Evan Light and Jonathan A. Obar

1. INTRODUCTION
In the years since the September 11, 2001 attacks on the World Trade Center and the
Pentagon, systems of indiscriminate state surveillance have targeted private communica-
tions, eroding human rights to privacy and expression. While state surveillance existed
previously, governments in Canada, the United States and the United Kingdom have,
since 2001, ‘reinforced and intensified’ existing surveillance systems, methods1 and
legislative frameworks.2 As David Lyon notes, ‘surveillance has become algorithmic,
technological, pre-emptive, and classificatory, in every way broadening and tightening the
net of social control and subtly stretching the categories of suspicion’.3 Digital forms of
community and sociality are now coupled with an unfettered digital surveillance, defined
here as the systematic and indiscriminate capture and analysis of digital communications
(telephonic or online). This process treats us as perpetual suspects, not because we engage
in illegal activities, but because we communicate and exercise our right to freedom of
expression. As a result, mass surveillance by the state is harmful to the relationships that
undergird both the infrastructure of digital communication4 and democratic society. In
societies premised on not simply concepts of human rights, but the everyday enactment
of these rights, this is problematic. While a certain level of surveillance in society may
be deemed necessary for the basic maintenance of the rule of law,5 it can and should be
undertaken in a manner that safeguards the right to privacy. Though the right to privacy
was enshrined in the United Nations Universal Declaration of Human Rights6 in 1948,
championed by subsequent national and international declarations7 and legislative

1  David Lyon, Surveillance After Snowden 15 (2015).
2  Lawyers Committee for Human Rights (US), Fiona Doherty and Eleanor Acer, Assessing the New Normal: Liberty and Security for the Post-September 11 United States 15–30 (2003), available at www.humanrightsfirst.org/pubs/descriptions/Assessing/AssessingtheNewNormal.pdf (last accessed November 23, 2016).
3  Lyon, Surveillance After Snowden, n. 1 above, 142.
4  Fen Osler Hampson and Eric Jardine, Look Who’s Watching: Surveillance, Treachery and Trust Online 8–15 (2016).
5  Access et al., Necessary and Proportionate: International Principles on the Application of Human Rights to Communications Surveillance (2014).
6  United Nations, The Universal Declaration of Human Rights (1948), available at www.un.org/en/documents/udhr/ (last accessed August 10, 2011).
7  Ritu Khullar and Vanessa Cosco, Conceptualizing the Right to Privacy in Canada (2010), available at www.cba.org/cba/cle/PDF/adm10_khullar_paper.pdf; Colin J. Bennett and Charles D. Raab, The Governance of Privacy: Policy Instruments in Global Perspective 4 (2nd and updated edn 2006).

Evan Light and Jonathan A. Obar - 9781785367724



efforts,8 technological development coupled with a concomitant lack of advancement in law perpetuates a void of ungovernability, or what Dan McQuillan terms an ‘algorithmic
state of exception’.9 Technology simply advances more quickly than law does,10 making it
difficult if not impossible for individuals to maintain these rights today. The notion that
privacy is a privilege, rather than a right, has also been blithely perpetuated by industries
that increasingly generate revenue based on the collection, aggregation, and analysis of
personal data, and by governments seeking to ensure our collective security through similar practices.11 Privacy, however, remains a human right, and state mass surveillance is
an indiscriminate violation of this right on a global scale. In order for privacy to be a right
that is actionable, serious reform is required both in the legislative and the technological
realms. Adopting a diversity of tactics, surveillance reform efforts and policy-making
models of the future must be at once nimble and responsive, bold and proactive. These
efforts must seek to address current issues and to future-proof privacy safeguards by
making them integral to technological and legislative design.
We seek here to discuss state surveillance harms and to propose surveillance reform tac-
tics with the aim of supporting efforts to ensure fundamental human rights are respected,
and the legal, technical and social mechanisms of state surveillance are transparent and
accountable, governed and governable. Crafting policy and law is a community undertak-
ing and we hope for these tactics to be usable by policy-makers and civil society alike.
While the majority of examples we cite are Canadian, we believe they may be replicable
in other international contexts.
Reform, by definition, refers to a process or end defined by improvement. Thus, the
type of reform to be achieved is relevant to the perceived challenge to be addressed as
well as the desired outcome. For instance, when discussing surveillance reforms, one
might suggest that the US signals intelligence agency, the National Security Agency
(NSA), is engaging in reforms as it completes construction of a billion-dollar ‘spy
center’ in Bluffdale, Utah, capable of housing in its ‘near-bottomless databases’ all of the
communications that pass through US Internet infrastructure, ‘including the complete
contents of private emails, cell phone calls, and Google searches, as well as all sorts of
personal data trails – parking receipts, travel itineraries, bookstore purchases, and other
digital “pocket litter”’.12 Similarly, north of the border, reforms could refer to govern-
ment investment in Canada’s own billion-dollar signals intelligence facility in Ottawa,13

8  Rolf H. Weber and Dominic N. Staiger, Privacy Versus Security: Identifying the Challenges in a Global Information Society, in Cybersecurity and Human Rights in the Age of Cyberveillance (Joanna Kulesza and Roy Balleste eds., 2015).
9  Dan McQuillan, Algorithmic States of Exception, 18 Eur. J. Cult. Stud. 564–76 (2015).
10  Jonathan A. Obar and Steven S. Wildman, Social Media Definition and the Governance Challenge: An Introduction to the Special Issue, 9 Telecommunications Policy 745 (2015).
11  Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (2015).
12  James Bamford, The NSA is Building the Country’s Biggest Spy Center (Watch What You Say), WIRED (2012), available at www.wired.com/2012/03/ff_nsadatacenter/all/1/ (last accessed November 30, 2016).
13  Greg Weston, Inside Canada’s Top-Secret Billion-Dollar Spy Palace, CBC News (2013), available at www.cbc.ca/news/politics/inside-canada-s-top-secret-billion-dollar-spy-palace-1.1930322 (last accessed November 30, 2016); Communications Security Establishment, The Edward



Surveillance reform  197

or the 2015 passage of the Anti-Terrorism Act (C-51) expanding the digital surveillance
capabilities of the Canadian government.14 Our conceptualization of surveillance reform,
however, is quite the opposite, and parallels definitions associated with historical and
ongoing media reform efforts throughout the world, which aim to shine a light on the
threats to civil liberties associated with the concentration of communicative resources
and power.15 Battles for media and now surveillance reform are, inherently, battles for
control over communication rights, and as a result, an extension of ‘part of a wider
challenge to social and economic inequalities and an essential component of a vision for
a just and democratic society’.16 One difference between media reform and surveillance
reform is the former often involves battling neoliberal attempts to deregulate that promote
corporatization and consolidation of the media, whereas surveillance reformers often
oppose regulatory efforts, like C-51, that aim to expand the surveillance capabilities of
governments. It is worth noting that there are examples where surveillance reform efforts
do champion regulatory expansion, such as calls in Canada (discussed further on) for
government investment in network infrastructure that might impede ‘boomerang routing’
of domestic Canadian Internet traffic through the United States, subjecting Canadian
transmissions to NSA surveillance.17 Where the greatest similarity exists, however, is that
both aim to advance:

a field in which multiple actors use a range of techniques and capacities to restructure communi-
cation systems in the interests not of privileged elites (whether state or market) but of ordinary
users, viewers, readers, and listeners so that they might participate better in and make informed
choices about the world in which they live.18
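The boomerang-routing concern mentioned above can be illustrated with a small sketch. The `is_boomerang` helper and the hop data below are hypothetical illustrations, not drawn from Obar and Clement’s methodology; in practice the hop list would come from a traceroute, with each hop’s country resolved via an IP geolocation database.

```python
# Sketch: detecting "boomerang routing" from a traceroute-style hop list.
# A route between two domestic endpoints "boomerangs" when any
# intermediate hop is geolocated outside the home country.

def is_boomerang(hops, home_country="CA"):
    """Return True if a route between two domestic endpoints
    transits a foreign country.

    hops: ordered list of (ip_address, country_code) pairs, one per hop.
    """
    if len(hops) < 2:
        return False
    src_country = hops[0][1]
    dst_country = hops[-1][1]
    # Only a route that both starts and ends at home can boomerang.
    if src_country != home_country or dst_country != home_country:
        return False
    # Any foreign intermediate hop makes it a boomerang route.
    return any(country != home_country for _, country in hops[1:-1])


# Hypothetical Toronto -> Montreal route that transits the United States:
route = [
    ("206.248.0.1", "CA"),   # origin ISP, Toronto
    ("24.156.144.1", "CA"),  # regional carrier
    ("4.69.140.1", "US"),    # transit provider, Chicago
    ("4.69.141.1", "US"),    # transit provider, New York
    ("216.113.0.1", "CA"),   # destination ISP, Montreal
]
print(is_boomerang(route))  # -> True
```

Such a domestic-looking route would expose the traffic to foreign interception at its US hops, which is why reformers frame repatriating this traffic as a network sovereignty issue.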

This chapter is divided into three sections. Surveillance harms are discussed first in
the context of journalism, lawyers and law-makers, the general public and the corporate
world. The second section addresses a number of tactics reformers can engage to address
these and other surveillance harms. The tactics include: digital mobilization against regu-
latory efforts to expand state surveillance; building a popular non-partisan movement;
advocating for repatriation of domestic Internet traffic through network sovereignty

Drake Building Government of Canada (2016), available at www.cse-cst.gc.ca/en/accommodation-installation (last accessed November 30, 2016).
14  Legislative Services Branch, Consolidated Federal Laws of Canada, Anti-terrorism Act (2003), available at http://laws-lois.justice.gc.ca/eng/acts/a-11.7/ (last accessed November 30, 2016); Canadian Civil Liberties Association, Understanding Bill C-51: the Anti-terrorism Act, 2015 (2015), available at https://ccla.org/understanding-bill-c-51-the-anti-terrorism-act-2015/ (last accessed November 30, 2016).
15  Des Freedman and Jonathan A. Obar, Media Reform: An Overview, in Strategies for Media Reform 3–18 (2016), available at www.jstor.org/stable/j.ctt1ctxqc9.4.
16  Ibid. 5.
17  Jonathan A. Obar and Andrew Clement, Internet Surveillance and Boomerang Routing: A Call for Canadian Network Sovereignty, in Proceedings: Technology and Emerging Media Division, Canadian Communication Association Conference (Philippe Ross and Jeremy Shtern (eds.), Victoria, BC: University of Victoria, 2013), available at www.acc-cca.ca/resources/Documents/TEM_proceedings/TEM_2013/OBAR-CLEMENT-TEM2013.pdf (last accessed August 17, 2016).
18  Freedman and Obar, Media Reform, n. 15 above, 5.




efforts; financial activism; and the design of an international policy standard. The chapter
concludes with reflections on the potential for surveillance reform in Canada.
Just as mass surveillance has become a diversified enterprise, surveillance reform efforts
too must engage varied tactics at community, state and international levels. This demands
that reformers innovate both in terms of their thinking about the range of surveillance
harms, and concurrently, about the necessary tactics for reform.

2.  SURVEILLANCE HARMS

Systems of surveillance, accounting for the acts and movements of individuals, have
existed as tools for social control for centuries.19 The expansion of digital technology
capable of monitoring our everyday actions has, in part, led to a vast expansion of
corporate and governmental surveillance activities. In this section we discuss surveillance
harms that threaten journalism practice, lawyers and law-makers, the general public and
the corporate world.

2.1  Mass State Surveillance and Journalism

The human rights of privacy, freedom of expression and freedom of assembly are
closely interlinked, especially when considered as bedrocks of democratic society. In the
American case, these rights are enshrined in the First and Fourth Amendments of the
US Constitution;20 in Canada they are included in the country’s Charter of Rights and
Freedoms.21 The ability to exercise these rights on an ongoing basis contributes to the
development of critical journalism, and the social organizing and political innovation
that may follow. The threat that one’s communications may be constantly observed and
judged – whether or not this is truly the case – significantly alters the ways in which
individuals conduct themselves. This is the key to the classic concept of a self-regulating prison (Jeremy Bentham’s Panopticon): that people behave differently when they believe
they may be watched at any moment.22 Systems of state mass surveillance extrapolate
this model of social control to the level of society itself and have demonstrable effects
on freedom of expression and democratic governance. For journalists playing the role
of the watchdog of industry and government, privacy and the ability to protect one’s
sources is crucial. Journalistic sources provide society with insight into worlds otherwise
off-limits, including government surveillance processes23 as well as political corruption.24

19  Robin Tudge, The No-Nonsense Guide to Global Surveillance 22–47 (2010).
20  National Archives, The Bill of Rights: What Does it Say? (2016), available at www.archives.gov/founding-docs/bill-of-rights/what-does-it-say (last accessed November 16, 2016).
21  Legislative Services Branch, Consolidated Federal Laws of Canada, Constitution Act, 1982 (2015), available at http://laws-lois.justice.gc.ca/eng/Const/page-15.html (last accessed November 10, 2016).
22  Emily Horne and Tim Maly, The Inspection House: An Impertinent Guide to Modern Surveillance (2014).
23  G. Alex Sinha, With Liberty to Monitor All: How Large-Scale US Surveillance is Harming Journalism, Law and American Democracy (2014).
24  Quebec Government Didn’t Care About Corruption Report, Says Whistleblower, Toronto




A 2014 Human Rights Watch report quotes numerous journalists observing the new
post-Snowden reality wherein they understand that surveillance can be used to determine
the sources of leaked information.25 In November 2016, it was revealed that the Montreal
police and Quebec provincial police had surveilled nine journalists from a number of
media organizations in order to determine the sources of multiple leaks concerning
political corruption and abuses by the judiciary. The phone communications of some of
these journalists had been surveilled for five years at a time.26 None of these journalists has
been charged, but they and their sources have been directly intimidated. A recent study
of 129 writers and journalists in Canada showed that ‘mass surveillance is prompting
writers and journalists to self-censor’ both the content of their stories and the research
they undertake.27 The threat or enactment of surveillance directly impedes the ability
of journalists to work with unofficial sources such as whistle-blowers and to ask critical
questions without the threat of reprisal.

2.2  Mass State Surveillance, Lawyers and Law-Makers

Client-attorney privilege is seen to be a ‘principle of fundamental justice and a civil right of supreme importance in Canada’,28 the United States29 and Europe.30 Since the
Snowden revelations, members of the legal communities in these areas have begun to
assess the impact of these surveillance activities on their profession. According to the Law
Society of British Columbia,31 the Council of Bars and Law Societies of Europe32 and
Human Rights Watch,33 surveillance of client-attorney communications is a threat to the
rule of law. Attorneys cannot necessarily defend their cases against the state if the state is
party to supposedly confidential communications. Surveillance is also a threat to the abil-
ity of individuals and civil society organizations to engage in legal challenges pertaining to

Star, available at www.thestar.com/news/canada/2012/06/14/quebec_government_didnt_care_about_corruption_report_says_whistleblower.html (last accessed November 11, 2016).
25  Sinha, With Liberty to Monitor All, n. 23 above, 27–28.
26  Evan Light and Stephane Couture, In the Words of Snowden, Spying on Journalists in Quebec ‘Is a Radical Attack on Journalism of the Free Press’ (2016), available at www.glendon.yorku.ca/communications/2016/11/03/words-snowden-spying-journalists-quebec-radical-attack-journalism-free-press/ (last accessed November 11, 2016).
27  Centre for Free Expression, Ryerson University, Chilling Free Expression in Canada: Canadian Writers’ and Journalists’ Views on Mass Surveillance 3–6 (2016), available at https://cfe.ryerson.ca/sites/default/files/Chilling_Free_Expression_in_Canada_FINAL_NOV_9_2016.pdf.
28  Law Society of British Columbia, Ethics Committee, Surveillance of Electronic Communications 1 (2015), available at www.lawsociety.bc.ca/docs/about/Surveillance.pdf (last accessed November 11, 2016).
29  Sinha, With Liberty to Monitor All, n. 23 above, 4–5.
30  Conseil des Barreaux Européens, CCBE Comparative Study on Governmental Surveillance of Lawyers’ Data in the Cloud 2 (2014), available at www.ccbe.eu/NTCdocument/EN_04042014_Comparat1_1400656620.pdf (last accessed November 11, 2016).
31  Law Society of British Columbia, Ethics Committee, Surveillance of Electronic Communications, n. 28 above.
32  Conseil des Barreaux Européens, CCBE Comparative Study on Governmental Surveillance, n. 30 above.
33  Sinha, With Liberty to Monitor All, n. 23 above, 47–50.




mass surveillance and other matters. For instance, one attorney at Privacy International
informed us that he assumed his web searches were being surveilled as he prepared for legal
challenges against the United Kingdom’s Government Communications Headquarters
(GCHQ).34 There have also been documented reports of state surveillance targeting
lawyers involved in cases as diverse as terrorism and trade disputes.35 The American Bar
Association has, for close to ten years, recommended its members be cautious when communicating electronically with clients.36 While the possibility of surveillance has contributed to an environment wherein many legal professionals cannot trust that their communications will be secure, some privacy advances have occurred. In the Netherlands in 2015, the
Council of Bars and Law Societies of Europe was able to successfully challenge the Dutch
government’s surveillance of lawyers’ communications.37
State surveillance also affects law-makers and their ability to consult in private. In light
of the Snowden revelations that unveiled the majority of UK telecommunications were
being surveilled by GCHQ, two sitting members of the UK Parliament and one former
member challenged this data collection programme at the Investigatory Powers Tribunal
(IPT), the judicial authority overseeing the UK intelligence services. The MPs claimed
that an agreement from 1966, called the Wilson Doctrine, protected sitting members of
Parliament from state surveillance. The IPT, however, ruled that this doctrine had no basis
in law and was, instead, a ‘gentleman’s agreement’.38 The parliamentarians contested the
ruling, stating that it impedes their abilities to be trusted law-makers capable of meeting
privately with constituents and working to hold government bodies accountable.39 While
similar situations may exist elsewhere, legislators in other countries have not yet made
such legal challenges, as judicial bodies such as the IPT rarely exist. Canadian parliamen-
tarians are unsure of their situation with regard to state surveillance, as members of the
Standing Committees charged with crafting and overseeing legislation rarely have the security clearances necessary to acquire the relevant information.40 In addition, Canada
has no judicial oversight bodies to which parliamentarians or any other citizen can apply
for relief.

34  Eric King, Interview with Eric King, Deputy Director of Privacy International, London, England, 2015.
35  Law Society of British Columbia, Ethics Committee, Surveillance of Electronic Communications, n. 28 above; Sinha, With Liberty to Monitor All, n. 23 above, 56–58.
36  Steve Thomas, The Expectation of Surveillance: Can Lawyers Ethically Continue Using Unencrypted Email? (McGuire, Craddock & Strother, 2014), available at www.mcslaw.com/firm-news/expectation-surveillance-can-lawyers-ethically-continue-using-unencrypted-email/ (last accessed November 14, 2016).
37  Delissen Martens, Surveillance of Lawyers, available at www.delissenmartens.nl/nl/nieuws/surveillance-of-lawyers (last accessed November 11, 2016).
38  Mr Justice Burton, Caroline Lucas Judgment (2015), available at www.ipt-uk.com/judgments.asp?id=29.
39  Tom Whitehead, GCHQ Can Spy on MPs, Tribunal Rules, Daily Telegraph (2015), available at www.telegraph.co.uk/news/uknews/defence/11930721/MPs-do-not-have-special-protection-from-being-spied-on-tribunal-rules.html (last accessed November 16, 2016).
40  Interview with Senator Grant Mitchell, Deputy Chair of Standing Committee on National Security and Defence, Parliament of Canada, 2015; Kent Roach, Permanent Accountability Gaps and Partial Remedies, in Law, Privacy and Surveillance in Canada in the Post-Snowden Era 163–203, at 164–65 (Michael Geist ed., 2015).

Evan Light and Jonathan A. Obar - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:42AM
via New York University

WAGNER_9781785367717_t.indd 200 13/12/2018 15:25


Surveillance reform  201

2.3  Mass State Surveillance and the General Public

Indiscriminate mass surveillance can affect what people are willing to say and do publicly.41 Research on Internet search behaviour in 11 different countries has shown
‘empirical documentation of a chilling effect, both domestically and internationally
in the longer term, that appears to be related to increased awareness of government
surveillance’.42 Other empirical findings suggest a similar effect on the use of Wikipedia.43
A 2015 Pew Center study suggested heightened public awareness within the United States
concerning surveillance. Of 534 respondents, 25 per cent claimed to have changed their
digital communication behaviour ‘a great deal’ or ‘somewhat’.44 Studies conducted in
the United Kingdom and the European Union found public awareness of state surveil-
lance increasing, with some noting that up to 82 per cent of respondents felt concerned
about their online communications.45 Similarly, a pair of surveys of 24,000 respondents
around the world conducted by CIGI and Ipsos has shown increasing levels of concern
about state censorship.46 Finally, PEN International’s 2015 study of writers suggests
that those living in liberal democratic countries have begun to engage in self-censorship
at levels approaching those seen in non-democratic countries. Writers are concerned
that expressing certain views even privately or researching certain topics may lead to
negative consequences.47 As the case of Maher Arar demonstrates, the negative effects
of mass surveillance on individuals are neither theoretical nor limited to self-censorship.
This Canadian-Syrian citizen was detained in a New York airport and renditioned to
Syria by the US government, tortured and imprisoned based on ‘faulty intelligence’.48

41  Christopher Parsons, Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance, 3 Media Commun. 1, 7 (2015).
42  Alex Marthews and Catherine Tucker, Government Surveillance and Internet Search Behavior (SSRN 2412564, 2015), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2412564 (last accessed November 30, 2016).
43  Jon Penney, Chilling Effects: Online Surveillance and Wikipedia Use, 31 Berkeley Technol. Law J. 117 (2016).
44  Lee Rainie and Mary Madden, How People are Changing Their Own Behavior (Pew Research Center, Internet, Science & Tech, 2015), available at www.pewinternet.org/2015/03/16/how-people-are-changing-their-own-behavior/ (last accessed October 5, 2016).
45  Ibid.; Vian Bakir et al., Public Feeling on Privacy, Security and Surveillance (2015), available at www.dcssproject.net/files/2015/11/Public-Feeling-on-Privacy-Security-Surveillance-DATAPSST-DCSS-Nov2015.pdf; Maria Grazia Porcedda, Martin Scheinin and Mathias Vermeulen, Surprise: Surveillance, Privacy and Security: A Large Scale Participatory Assessment of Criteria and Factors Determining Acceptability and Acceptance of Security Technologies in Europe (2013).
46  Fen Osler Hampson and Eric Jardine, Look Who’s Watching: Surveillance, Treachery and Trust Online 76–78 (2016).
47  PEN America, Global Chilling: The Impact of Mass Surveillance on International Writers 5 (2015), available at https://pen.org/sites/default/files/globalchilling_2015.pdf.
48  Colby Itkowitz, From Tortured Terrorist Suspect to Entrepreneur: How This Canadian Father Got His Life Back, The Washington Post (April 27, 2016), available at https://www.washingtonpost.com/news/inspired-life/wp/2016/04/27/from-accused-terrorist-to-canadian-entrepreneur-maher-arar-is-finally-getting-his-life-back; Jeff Sallot, How Canada Failed Citizen Maher Arar, The Globe and Mail (September 19, 2006), available at https://www.theglobeandmail.com/news/national/how-canada-failed-citizen-maher-arar/article1103562/.



202  Research handbook on human rights and digital technology

A Canadian federal inquiry into his case absolved him of any wrong-doing.49 Numerous
other examples have been compiled in recent reports documenting the surveillance-related
endangerment, harassment, imprisonment or assassination of journalists, activists,
religious leaders and other citizens around the world.50 As harms emerge and members of
the general public feel threatened, strategies are being discussed for individuals to protect
themselves. For example, after the November 2016 election of Donald Trump as president
of the United States, a number of newspaper and web publication articles addressed state
capacity for surveillance as well as how citizens might protect themselves when engaged
in everyday acts of free speech51 and public protest.52

2.4  State Mass Surveillance and Harm to the Corporate World

The corporations that develop, own and maintain the world’s telecommunications
infrastructure, in large part, make state surveillance of digital communications pos-
sible. This occurs either through direct collaboration or through state hacking of
infrastructure.53 Corporations thus bear some responsibility for surveillance threats and
outcomes.54 The potential effects of state surveillance on public sentiment, in particular,
a distrust of digital communications and of its facilitators, suggest the potential for

49  Commission of Inquiry into the Actions of Canadian Officials in Relation to Maher Arar, Report of the Events Relating to Maher Arar: Analysis and Recommendations (2006).
50  Privacy International and Amnesty International, Two Years After Snowden, Final Report (2015), available at www.privacyinternational.org/sites/default/files/Two%20Years%20After%20Snowden_Final%20Report_EN.pdf (last accessed September 4, 2015); International Network of Civil Liberties Organizations, Surveillance and Democracy: Chilling Tales from Around the World (2016), available at https://ccla.org/cclanewsite/wp-content/uploads/2016/11/Surveillance-and-Democracy-INCLO-report.pdf.
51  Micah Lee, Surveillance Self-Defense Against the Trump Administration, The Intercept (2016), available at https://theintercept.com/2016/11/12/surveillance-self-defense-against-the-trump-administration/ (last accessed November 14, 2016); Charlie Savage, Harsher Security Tactics? Obama Left Door Ajar, and Donald Trump Is Knocking, New York Times, November 13, 2016, available at www.nytimes.com/2016/11/14/us/politics/harsher-security-tactics-obama-left-door-ajar-and-donald-trump-is-knocking.html (last accessed November 14, 2016); Joshua Kopstein, Signal Downloads Spiked After Election Results, Motherboard (2016), available at http://motherboard.vice.com/en_ca/read/signal-downloads-spiked-after-election-results (last accessed November 14, 2016); Kirstie Ball, Sally Dibb and Sara Degli Esposti, Citizen Summits on Privacy, Security and Surveillance: Country Report United Kingdom, available at http://surprise-project.eu/wp-content/uploads/2015/04/SurPRISE-D6.9-Country-report-United-Kingdom-v1.1.pdf.
52  Daniel Therrien, Journalistes surveillés: tous les citoyens sont à risque [Journalists under surveillance: all citizens are at risk], La Presse, November 12, 2016, available at www.lapresse.ca/debats/votre-opinion/201611/10/01-5039924-journalistes-surveilles-tous-les-citoyens-sont-a-risque.php (last accessed November 14, 2016); Tristan Péloquin, Des outils pour déjouer la surveillance électronique [Tools to evade electronic surveillance], Applications La Presse, available at http://techno.lapresse.ca/nouvelles/applications/201611/06/01-5038297-des-outils-pour-dejouer-la-surveillance-electronique.php (last accessed November 16, 2016); Savage, Harsher Security Tactics?, n. 51 above; Lee, Surveillance Self-Defense Against the Trump Administration, n. 51 above.
53  Schneier, Data and Goliath, n. 11 above; Glenn Greenwald, No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State (1st edn 2014).
54  Mike Zajko, Telecom Responsibilization: Internet Governance, Surveillance, and New Roles for Intermediaries, 41 Can. J. Commun. 75–93 (2016).




harm to business. In the early months of the Snowden revelations, financial analysts
were already noting that sales of American networking products were in decline on
the international market and predicted that the American cloud computing industry
would be dramatically affected.55 Researchers in Germany have shown more precisely
how, between 2013 and 2015, security breaches attributed to the NSA negatively impacted
the stock market values of affected firms.56 This pattern was also observed using other
methods.57 Research has also demonstrated a heightened use of privacy-oriented search
engines and anonymous web browsing tools, implying a loss of revenue for the online content providers whose services would otherwise have been used.58 As the advocacy group
Access Now explains,59 this may be the tip of the iceberg as state surveillance revelations
make carriers susceptible to legal action on the part of citizens and organizations seek-
ing remedy under the UN Guiding Principles on Business and Human Rights.60 The
case of multinational telecommunications provider TeliaSonera provides a telling example. TeliaSonera was shown to have collaborated in human rights abuses in the
Baltic region by providing network access to intelligence agencies. This resulted in both
independent and governmental investigations in Denmark and significant corporate
restructuring.61

3.  TACTICS FOR SURVEILLANCE REFORM

This section presents five tactics for surveillance reform. Each has proven successful to varying extents in different cases and thus, depending on the unique context of concern (e.g. surveillance reform at the local, national or global level), we propose that reformers consider different tactical combinations. The tactics discussed
include: (1) digital mobilization against regulatory efforts to expand state surveillance; (2)
building a popular non-partisan movement; (3) advocating for repatriation of domestic
Internet traffic through network sovereignty efforts; (4) financial activism; and (5) the
design of an international policy standard.

55  Kashmir Hill, How the NSA Revelations are Hurting Businesses, Forbes, available at www.forbes.com/sites/kashmirhill/2013/09/10/how-the-nsa-revelations-are-hurting-businesses/ (last accessed November 16, 2016).
56  G. Sinanaj et al., NSA Revelations of Privacy Breaches: Do Investors Care?, in Proceedings of the 21st Americas Conference on Information Systems (2015).
57  Peter Micek and Jeff Landale, Forgotten Pillar: The Telco Remedy Plan 6–7 (2013), available at www.accessnow.org/cms/assets/uploads/archive/docs/Telco_Remedy_Plan.pdf.
58  Hampson and Jardine, Look Who’s Watching, n. 46 above, 84–88.
59  Micek and Landale, Forgotten Pillar, n. 57 above.
60  John Ruggie, Report of the Special Representative of the Secretary-General on the Issue of Human Rights and Transnational Corporations and Other Business Enterprises 224 (2011), available at http://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.journals/nethqur44&section=14 (last accessed November 16, 2016).
61  Micek and Landale, Forgotten Pillar, n. 57 above, 11–12.




Tactic One: Digital Mobilization Against Regulatory Efforts to Expand State Surveillance

Case study: digital activism and the defeat of Bill C-30 in Canada
In February 2012, the Canadian government, led by then Prime Minister Stephen Harper,
introduced a ‘lawful access Bill’ into the Canadian House of Commons. Bill C-30 was
intended to enable the expansion of various surveillance, collection and decryption capa-
bilities of security and law enforcement entities in Canada. Provisions aimed to remove
impediments to various forms of digital search and seizure, with the goal of enhancing
and expanding the digital surveillance capabilities of the Canadian government. Elements
of the Bill included: (a) forcing telecommunication companies to install, at their own cost,
hardware and software to facilitate data flows to law enforcement; (b) access allowances
for real-time information gathering, including warrants for transmission data and preser-
vation orders forcing entities involved in collection to preserve data for later inspection; (c)
an enhanced capability to generate warrantless disclosures of the personal data of users;
and (d) a ‘gag order’ to counter Canadian privacy law, making it more difficult for carriers
to inform the public about the surveillance activities of the Federal government.62 A year
after introducing the Bill, in February 2013, the Canadian government removed C-30
from consideration. Canadian Justice Minister Rob Nicholson described the decision,
noting, ‘We will not be proceeding with Bill C-30’.63 He added that:

any attempts . . . to modernize the Criminal Code will not contain the measures contained in
C-30, including the warrantless mandatory disclosure of basic subscriber information or the
requirement for telecommunications service providers to build intercept capability within their
systems.64

When he was asked to offer a justification for the decision, Minister Nicholson sug-
gested that the strong opposition to the surveillance plans, as expressed by the Canadian
public, had considerable influence over the government’s decision, noting ‘We’ve listened
to the concerns of Canadians who have been very clear on this’.65
The finding that the general public, in this instance, was apparently able to affect plans
for enhancing and expanding the surveillance apparatus requires reflection and analysis.
While there are some issues addressed by government that are commonly debated in a
transparent fashion, decisions associated with national security and the surveillance appa-
ratus are often addressed in more secretive proceedings and far from processes associated
with democratic deliberation. The finding that the general public had something to say
about Bill C-30, and may have influenced the Canadian surveillance establishment, at
least in this instance, suggests that a democratically determined surveillance apparatus is

62  Parliament of Canada, House Government Bill – C-30, First Reading (41-1), available at www.parl.gc.ca/HousePublications/Publication.aspx?Language=E&Mode=1&DocId=5380965 (last accessed November 30, 2016).
63  L. Payton, Government Killing Online Surveillance Bill: Justice Minister Rob Nicholson Says Controversial Bill C-30 Won’t Go Ahead, CBC News (2013), available at www.cbc.ca/m/touch/news/story/1.1336384.
64  Ibid.
65  Ibid.




a possibility, and that reform efforts should strive for similar outcomes. What remains is
to ask how Canadians were able to achieve this influence, especially since the Canadian
government did not call for public comment or hold public hearings about C-30 (sug-
gesting that the government was attempting to move forward with its expansions without
public involvement in the process).
Those who have studied the defeat of C-30 suggest that while a number of factors likely contributed to the outcome, including some public relations blunders on the part of the Federal government,66 it appears that the general public was able to express a clear message of dissent via a digitally-mediated watchdog function.67,68 This means that through
various forms of digital activism, some facilitated by activist groups, and some crowd-
sourced, members of the Canadian public were able to shift the direction of a surveillance
Bill being advanced by a Conservative Federal government in Canada.
Obar and Shade69 emphasize that three overarching digital activism strategies were
utilized to engage members of the general public and affect the C-30 outcome: (1) the
mobilization of a digitally-mediated community of individuals; (2) preparing digital
platforms to facilitate targeted user-generated content; and (3) creating anti-C-30 content
to be shared digitally.

(1) Mobilizing a digitally-mediated community of individuals    As Obar and Shade note,70 even before Bill C-30 was introduced, a number of activist groups, academics and engaged
citizens were already speaking out against attempts by the Canadian government to
enhance and expand the surveillance apparatus in Canada. Around the time C-30 was
introduced, the group OpenMedia began mobilizing an online community to further the
fight for civil liberties. Beginning with their email list of more than 600,000 individuals,
OpenMedia launched the Stop Online Spying Campaign. The campaign connected
OpenMedia’s followers with an eventual coalition of 42 organizations that comprised
the official Stop Online Spying Coalition. Along with a variety of unofficial members
including academics, journalists, lawyers, media reformers and other individuals and
organizations, the coalition was able to engage in the fight against Bill C-30 through
various digital activism tactics, as discussed in the following sections.

(2) Preparing digital platforms to facilitate targeted user-generated content    Once the
online community began to coalesce, various digital activism tactics were implemented to
raise awareness and express dissent about C-30.71 Online petitions were circulated, allow-
ing the contribution of user-generated content in the form of a petition signature. More

66  Online Surveillance Critics Accused of Supporting Child Porn, CBC News (2012), available at www.cbc.ca/news/technology/story/2012/02/13/technology-lawful-access-toews-pornographers.html.
67  Jonathan A. Obar and Leslie Regan Shade, Activating the Fifth Estate: Bill C-30 and the Digitally-Mediated Public Watchdog, in Strategies for Media Reform: International Perspectives (Des Freedman et al. eds., 2016).
68  E. Dubois and W.H. Dutton, The Fifth Estate in Internet Governance: Collective Accountability of a Canadian Policy Initiative, 4 Revue Française D’études Américaines 81–97 (2013).
69  Obar and Shade, Activating the Fifth Estate, n. 67 above, 72.
70  Ibid.
71  Ibid.




than 150,000 people signed OpenMedia’s petition. OpenMedia also developed a digital
form letter that made it easy for individuals to email their Member of Parliament about
C-30. The digital form letter could be accessed via the web, helped individuals identify
their government representative, and facilitated submission of an anti-C-30 form letter
to that representative.
Two tactics combining humour with dissent were also utilized to draw attention to the Bill
and to allow individuals to express concerns. The first was the ability, through OpenMedia,
to send then Public Safety Minister Vic Toews (part of the public face of the Bill) a valentine.
As the Bill was introduced around Valentine’s Day, this was a humorous and pointed way to
engage and dissent. On its website, OpenMedia called individuals to action:

This month, send Hon. Vic Toews a Valentine informing him that you oppose the Online Spying
legislation. Follow the link, print out the Valentine, and send off your most intimate feelings
regarding the legislation. No postage required!72

The text of the Valentine’s card read:

Dear Minister Toews: I oppose mandatory Internet surveillance. This scheme is poorly thought out, costly, and will leave my personal information less secure. Unchecked mass surveillance is a breach of my fundamental right to privacy.73

One other tactic combining humour with dissent was a Twitter attack referred to as
#TellVicEverything. This particular strategy was not facilitated by OpenMedia, but
the viral nature of the hashtag was likely amplified by the coalition’s network. The idea
behind the hashtag was that contributors would flood Minister Toews’ account with messages while also expressing public disdain for the government’s interest in mass
surveillance of even the mundane. Some of the tweets included:

Jason: Morning . . . I’ve showered, shaved, and made toast. The peanut butter was chunky . . .

Others were more pointed:

Steve: ‘Hey @ToewsVic I have 2 confess, I was a over speed-limit a bit on 404 today. U won’t tell
the @OPP_GTATraffic will you?
Kevin: ‘Hey @ToewsVic, I lost an email from my work account yesterday. Can I get your
copy?’74

Each of these tactics first involved the preparation of a digital platform to facilitate targeted
user-generated content, and then the crowdsourcing of content from both the Stop Online
Spying Coalition and the general public.

(3) Creating anti-C-30 content to be shared digitally    Similar to the second strategy, Obar
and Shade emphasize75 that various forms of targeted digital content were created by the

72  Ibid. 48.
73  Ibid. 48–49.
74  Ibid. 48.
75  Ibid.




coalition, journalists, activists and other individuals to be shared for the purpose of both
raising awareness and expressing dissent. For example, a number of academics in Toronto
produced a video entitled ‘(Un)Lawful Access: Experts Line Up Against Online Spying’,76
which was shared online and shown at various screenings. OpenMedia later produced
a number of pseudo-public service announcements presenting humorous-yet-critical
scenarios aimed at highlighting the surveillance practices that could become normalized
with the power granted by Bill C-30. The videos typically involved an individual engag-
ing in something that could be considered private – opening a letter or speaking on the
telephone – followed by what looked like a police officer spying on the activity. A caption
would then appear saying ‘You wouldn’t let a police officer do this without a warrant . . .
So why should your online communication be any different?’.77
Digital art, sometimes in the form of memes and other times in the form of political cartoons, was also created and shared, often by individuals who were not directly associated with the coalition.78 For example, an image of Vic Toews being attacked by Twitter birds, published in 2012 in the Edmonton Journal, analogized a Hitchcockian horror to public dissent in the form of a Twitter attack and was shared widely online.
Another Twitter attack, referred to as ‘Vikileaks’, combined the use of Twitter with the
strategy of creating content to be shared, namely the Vikileaks story. After the Bill was
introduced, the Twitter account @vikileaks30 was created with the description ‘Vic wants
to know about you. Let’s get to know Vic’.79 The account then tweeted out a variety of
embarrassing personal details about the Minister, including details about his divorce, an
alleged affair with a babysitter and how much he spent at restaurants. This tactic, which employed an aggressive message combining irony and dissent and was later revealed to be the act of a Liberal Party staff member (who eventually resigned), drew considerable attention from the Canadian media, which in turn drew attention to the deliberations over Bill C-30.
When Canada’s then Conservative government, led by Prime Minister Stephen Harper, decided to remove Bill C-30 from consideration, Canadian Justice Minister
Rob Nicholson famously noted, ‘We’ve listened to the concerns of Canadians who have
been very clear on this’.80 Indeed, the digital activism efforts, led by OpenMedia’s Stop
Online Spying Coalition and enhanced by the efforts of a digitally-mediated community,
demonstrate that tactics capable of spreading awareness and dissent online can contribute
to policy influence and policy outcomes, even when a government is involved in secretive
efforts to advance forms of state surveillance.

Tactic Two: Building a Popular Non-Partisan Movement

Case study: constructing and mobilizing for the human right to water in Uruguay
Environmental movements have succeeded in developing and implementing a multiplicity
of tactics for advancing their causes through both legal and commercial avenues. The

76  See www.unlawfulaccess.net/.
77  See e.g., www.youtube.com/watch?v=QwqIYHwRcxY.
78  Obar and Shade, Activating the Fifth Estate, n. 67 above.
79  Ibid. 52.
80  Payton, Government Killing Online Surveillance Bill, n. 63 above.




successful campaign to make access to potable water and to sewage and sanitation services
a human right in Uruguay serves as an example for other movements around
the world. An important tool that emerged from this campaign and its international
context is the concept of the human right to water. Campaign organizers successfully
reconceptualized a common thing central to human existence – water – as a human right
in need of a political apparatus. They did so in a way that crossed political and economic
boundaries, challenges that are important to address in organizing a successful surveil-
lance reform movement.
An integral part of the way we live, water (like privacy) ‘is indispensable stuff for
human bodies, but also for the social fabric’.81 It is also the focus of complicated sets
of power relations that ultimately decide, in part, what sort of environment we live
in.82 In Uruguay, water has long been culturally regarded as a common good, a status
attributed to the creation of the federal water company Obras Sanitarias del Estado
(OSE) in 1952.83 By the latter part of the twentieth century, this centralized state
enterprise had succeeded in extending water infrastructure to more than 95 per cent
of the population and sewage services to between 50 and 60 per cent. Given this high rate
of accessible clean water, ‘the general population considered that Uruguay had no
water problems. It was a naturally (common) good, accessible, well organized and well
administered’.84 Thus, access to clean drinking water was taken for granted. In 1992,
Uruguay encountered the wave of neoliberal policies then sweeping Latin America and an attempt was made to privatize most state services. A plebiscite managed to counteract this, making Uruguay ‘the only country in the world that was consulted
on full-scale privatization and which has rejected the possibility by referendum’.85
Discussions on the privatization of the water system proceeded and in the late 1990s
water services were privatized in a small area and sold to French multinational Suez
Lyonnaise.86 Public workers attempted to resist privatization but were unable to design an argument that transcended their rights as unionized workers, and their efforts failed.87 A further concession was made in 2000, effectively granting a 30-year contract
for water services in the department of Maldonado to Spanish multinational Aguas de
Bilbao Vizcaya.88

81  E. Swyngedouw, Social Power and the Urbanization of Water: Flows of Power 1 (2004).
82  Ibid. 23.
83  Semi-structured Interview with Marcel Achkar, Professor of Geography, Universidad de la República del Uruguay, Montevideo, Uruguay, 2010; Semi-structured Interview with Martín Ponce de León, President of Obras Sanitarias del Estado (Uruguay), Director of Antel (Uruguay), 2010; Javier Taks, ‘El Agua es de Todos/Water for All’: Water Resources and Development in Uruguay, 51 Development 17–22, at 18 (2008).
84  Interview with Achkar, n. 83 above.
85  The New Latin American Left 101 (Patrick Barrett, Daniel Chavez and César Rodriguez-Garavito eds., 2008).
86  Carlos Santos and Alberto Villareal, Uruguay: Direct Democracy in Defence of the Right to Water, in Reclaiming Public Water 173–79, at 173–74 (2005).
87  Interview with Achkar, n. 83 above.
88  Semi-structured Interview with Adriana Marquisio, Member of Executive Committee, Funcionarios de Obras Sanitarias del Estado, Uruguay, 2010.




In 2001, the government signed a letter of intent with the International Monetary
Fund which further advanced the proposition of extending water and sewage privatiza-
tion to other regions of the country.89 The following year, privatization of water services
became a growing topic of debate in various sectors of Uruguayan society as the sale of
the Guaraní aquifer was proposed and private water prices rose, in some cases, by 1000
per cent.90 That same year, actors from the water company union (FFOSE) and various
social organizations began to organize, ultimately creating the Comisión Nacional en
Defensa del Agua y de la Vida (the CNDAV or National Committee for the Defense of
Water and Life).91
The water movement in Uruguay was one of several developing in Latin America,
initially with no coordination between them. Through the World Social Forum in Brazil,
members of different water movements came to understand that their governments were
using similar privatization tactics and developing similar legislation.92 Such coordination
by governments would suggest that similar coordination could be undertaken by civil
society groups opposing privatization. The result was a development of discursive and
political tactics at the international level and the subsequent reformulation and application
of these tools according to the unique characteristics of individual countries. The most
important tool to emerge from the international space was the notion of the human right
to water. At the heart of the proposition is an interpretation of the 1976 United Nations
International Covenant on Economic, Social and Cultural Rights93 by the United Nations
Committee on Economic, Social and Cultural Rights, known as General Comment 15.
A non-binding interpretation of the Covenant, General Comment 15 lays out numerous
legal arguments articulating that the human right to water exists according to both the
Covenant and various other human rights declarations and treaties.94 With this weighty
tool in hand, local movements were able to initiate conversations at the grassroots level.
The International Principles on the Application of Human Rights to Communications
Surveillance, explored in Tactic Five below, could serve as a similar international starting
point for nationally-focused surveillance reform.
The members of the CNDAV were already involved in activism around water rights.
Learning from the earlier failure of unionized workers to involve a broader public, the
CNDAV sent an invitation to all political sectors, social movements and social organizations
in the country.95 The coalition embarked on a campaign that aimed to cut through

89  Ibid.; Santos and Villareal, Uruguay: Direct Democracy, n. 86 above, 173–74.
90  Interview with Achkar, n. 83 above.
91  Ibid.; Interview with Marquisio, n. 88 above; Exploratory Interview with Maria Selva Ortiz
on Uruguayan Water Movement, 2009.
92  Interview with Marquisio, n. 88 above.
93  La Iniciativa MERCOSUR, Agua: Construcción Social de un Derecho Humano 5–6
(2007); Office of the United Nations High Commission for Human Rights, International Covenant
on Civil and Political Rights (1976).
94  La Iniciativa MERCOSUR, Agua, n. 93 above, 6; United Nations, Economic and Social
Council, Committee on Economic, Social and Cultural Rights, Substantive Issues Arising in the
Implementation of the International Covenant on Economic, Social and Cultural Rights, General
Comment No. 15: The Right to Water (Arts. 11 and 12 of the International Covenant on Economic,
Social and Cultural Rights) (2002).
95  Interview with Achkar, n. 83 above.



210  Research handbook on human rights and digital technology

partisan politics and to create a multiplicity of spaces for debate and popular education.
The ultimate goal was to collect 250,000 signatures in order to hold a plebiscite on a
constitutional amendment creating the human right to water provided for by the state.
The question would be asked at the polls during the October 2004 election. A significant
organizational strength was that there was at least one member of the FFOSE union in
every city, town and village in the country. Tactics were diverse and creative. Teachers
opened up their classrooms to organizers and campaigns were built up around World
Water Day and Earth Day. One group of activists ‘rode on horseback for 23 days through
the middle of the countryside’ to spread the word to remote communities. They held
plenaries in town squares, workshops in the streets and at the weekly outdoor markets,
and went door-to-door.96 A final step in the campaign was to create ‘Casas del Agua’ or
‘Water Houses’ whereby individuals offered their homes as neighbourhood organizational
centres for distributing information and working with the national coalition. Each Casa
del Agua was autonomous and able to take ownership of its own campaign. Members of
the public thus gained ownership of the campaign and came to understand its ultimate
goal: guaranteeing the human right to water by guaranteeing popular ownership of
political and regulatory processes.97 Ultimately, 300,000 signatures were collected98 and
65 per cent of the population voted in favour. The right to water and sanitation services
is now article 47 of Uruguay’s Constitution.99
The process of recognizing water as a human right created a number of new political
spaces. During the campaign, spaces emerged where this idea was debated, strengthened
and rebuilt in the local context. Following the plebiscite, spaces emerged that were
designed to regulate and ensure continuous public participation in water governance. The
plebiscite created the National Directorate of Water and Sewage (DINASA, now called
DINAGUA) and planted the seeds of another body – the Assessorial Commission on
Water and Sewage (COASAS). DINAGUA is the federal body charged with overseeing
the use of water policy, water resources and sewage infrastructure in Uruguay.100 Following
the plebiscite, law-makers and civil society groups spent five years collaboratively develop-
ing new environmental legislation, which includes the national water policy.101 Adopted
unanimously by all political parties, the law created COASAS, which provides an official
venue through which civil society can ostensibly take part in the oversight, design and
implementation of water policy.
The Uruguayan water movement was successful because traditional barriers were
broken down. In their place, the movement built organizational and ideological links
based on ‘solidarity, the free exchange of ideas, reciprocity, and non-monetary value’.102

96  Interview with Marquisio, n. 88 above; Semi-structured Interview with Maria Selva Ortiz,
Comision Nacional para el derecho a agua y vida, 2010.
97  Interview with Achkar, n. 83 above; Interview with Ortiz, n. 96 above.
98  Interview with Marquisio, n. 88 above.
99  Parlamento del Uruguay, Constitución de la República del Uruguay (2004), available at
https://parlamento.gub.uy/documentosyleyes/constitucion.
100  Semi-structured Interview with José Luis Genta, Director of Direccion Nacional de Aguas
y Saneamiento, 2010.
101  Interview with Achkar, n. 83 above.
102  Interview with Marquisio, n. 88 above.


Water, like privacy, previously taken for granted, had become something of profound
social and cultural value upon which all individuals depended no matter their vocation,
economic status or political stripe. The ability to preserve water as a common good is
directly linked to the ability of Uruguayan society – the community – to exercise owner-
ship of the political spaces, institutions and processes connected to it. Reform was a
collaborative process between the public, civil society and Members of Parliament and
occurred on a human scale. Facilitating processes where these sets of actors can collabo-
rate as co-citizens is an important tactic for building a surveillance reform movement that
aims to have long-term effects.
The water movement provides a number of tactics that are appropriate for application
in surveillance reform at the domestic level: simultaneously making use of international
human rights frameworks; working with similar movements in other countries; engaging
in grassroots activism at the local level; and appealing to all members of society. The
Uruguayan example was the first success for an international movement that has contin-
ued to develop in countries around the world.103

Tactic Three: Advocating for Repatriation of Domestic Internet Traffic Through Network
Sovereignty Efforts

In 2006, one-time AT&T technician Mark Klein revealed that AT&T had allowed the US
National Security Agency (NSA) to install splitter technology in its switching centre at
611 Folsom Street in San Francisco.104 As presented in Figure 11.1, the splitter technology
had the ability to mirror all Internet traffic passing through the centre.
NSA whistle-blower Edward Snowden later expanded on Klein’s revelation, highlight-
ing that the splitter operation was a part of the NSA’s ‘upstream’ signals intelligence
strategy (see Figure 11.2). Snowden’s revelations, coupled with academic scholarship,
suggest that the NSA likely has the ability to collect components of all Internet
transmissions that cross into US jurisdiction.105
Why should this form of signals-intelligence surveillance matter to Canadians? Among
the Snowden revelations was the suggestion, from a top-secret NSA PowerPoint slide,
that ‘much of the world’s communication flows through the US’ and that ‘your target’s
communication could easily be flowing into and through the US’ (see Figure 11.3).
Political economic relationships developed between Internet carriers, coupled with
the extent of American Internet infrastructure, likely contribute to the reality that
Internet traffic from other countries often enters US jurisdiction. It should come as no
surprise that accessing an American Internet service from Canada results in packets
entering the United States, passing through US Internet exchange points (IXPs), and
other infrastructure. This could potentially subject Canadian online behaviours to US

103  Right to Water, The Rights to Water and Sanitation at the National Level, Rights to Water and
Sanitation, available at www.righttowater.info/why-the-right-to-water-and-sanitation/the-rights-to-
water-and-sanitation-at-the-national-level/ (last accessed November 25, 2016).
104  M. Klein, Wiring up the Big Brother Machine . . . and Fighting It (Charleston, SC:
BookSurge Publishing, 2009).
105  A. Clement, Ixmaps: Tracking Your Personal Data Through the NSA’s Warrantless
Wiretapping Sites, in Proceedings of the IEEE – ISTAS Conference (2013).


[Figure 11.1 diagrams the interception of communications at the AT&T Folsom Street facility: millions of communications from ordinary Americans (AT&T customers) pass through a splitter at 611 Folsom Street, San Francisco, which copies the traffic into the NSA-controlled Room 641A and onward to a secret government network.]

Source:  EFF. Creative Commons Attribution License (CC BY 3.0 US).

Figure 11.1  NSA Splitter Operation Visualization from 611 Folsom Street

surveillance. Not as obvious, however, is that considerable domestic Canadian Internet
communication also crosses the border due to a process known as ‘boomerang routing’ (or
‘tromboning’). Boomerang routing occurs when a packet’s path originates and terminates
in the same country (i.e. Canada), but along the way, the packet transits another country
(i.e. the United States) before arriving at its destination.106 Research by the IXmaps

106  Obar and Clement, Internet Surveillance and Boomerang Routing, n. 17 above; Andrew
Clement and Jonathan A. Obar, Canadian Internet ‘Boomerang’ Traffic and Mass NSA Surveillance:


Source: Snowden Digital Surveillance Archive, https://snowdenarchive.cjfe.org.

Figure 11.2 Snowden NSA Upstream and PRISM Slide

team in Toronto107 suggests that approximately 22–25 per cent of all Canadian domestic
Internet transmissions may boomerang through the United States.108 The IXmaps project
has identified boomerang transmissions associated with Canadian government services,
Canadian banks, Canadian cultural institutions and a variety of other organizations and
services, suggesting that each of these could be subject to NSA surveillance.
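The boomerang-routing analysis described above can be approximated in a few lines: geolocate each hop of a traceroute and flag any path that starts and ends at home but transits a foreign jurisdiction. The sketch below is an illustration of the idea only, not IXmaps code; the function name, sample IP addresses and their country labels are hypothetical assumptions.

```python
# Sketch: flag "boomerang" routes - domestic paths that transit a foreign country.
# Input is a list of (hop_ip, country_code) tuples, e.g. assembled from a
# traceroute plus an IP-geolocation lookup. Data below is illustrative only.

def is_boomerang(path, home="CA"):
    """True if a path starts and ends in `home` but transits another country."""
    if len(path) < 2:
        return False
    countries = [country for _, country in path]
    starts_and_ends_home = countries[0] == home and countries[-1] == home
    transits_abroad = any(c != home for c in countries[1:-1])
    return starts_and_ends_home and transits_abroad

# A hypothetical Toronto-to-Montreal transmission routed via Chicago and New York:
toronto_to_montreal = [
    ("206.248.155.1", "CA"),   # origin ISP, Toronto
    ("154.54.31.9", "US"),     # transit carrier, Chicago
    ("154.54.44.86", "US"),    # transit carrier, New York
    ("199.19.116.2", "CA"),    # destination ISP, Montreal
]
print(is_boomerang(toronto_to_montreal))  # True: subject to US jurisdiction en route
```

Applied over a large sample of traceroutes, the fraction of paths flagged this way is what yields estimates such as the 22–25 per cent figure reported above.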
Boomerang routing raises network and data sovereignty concerns. In the context of
nation-states, network sovereignty is ‘the authoritative quality or process whereby an
entity (such as the state) or set of entities distinguishes the boundaries of a network and
then exercises a sovereign will or control within those boundaries’.109 Similarly, data
sovereignty refers to the coupling of data control and national sovereignty,110 meaning

Responding to Privacy and Network Sovereignty Challenges, in Law, Privacy and Surveillance in
Canada in the Post-Snowden Era (Michael Geist ed., Ottawa, ON: University of Ottawa Press,
2015).
107  See IXmaps website, www.ixmaps.ca/.
108  See n. 106 above.
109  Obar and Clement, Internet Surveillance and Boomerang Routing, n. 17 above.
110  Z.N. Peterson, M. Gondree and R. Beverly, A Position Paper on Data Sovereignty:
The Importance of Geolocating Data in the Cloud (HotCloud, 2011).


Source: Snowden Digital Surveillance Archive, https://snowdenarchive.cjfe.org.

Figure 11.3 Snowden NSA ‘Monitoring a Target’s Communication’ Slide

that data created, used and/or housed within specific geographic boundaries would be
subject to the sovereign will of the entity that sets those boundaries. Indeed, Canadian
network sovereignty means that Canadians have the ability to control and contain the
transmissions over their network infrastructure, and Canadian data sovereignty means
that Canadians have the ability to control and contain the data created, used, modified
and stored by and for Canadians within Canadian geographic borders.
Any instance where Canadian Internet traffic crosses into the United States is subject
to the surveillance provisions of American law. Beyond the network and data sovereignty
concerns associated with the US monitoring of Canadian domestic communications – a
process generally invisible – a Canadian’s ability to raise privacy and other civil liberties
objections in this context is severely limited due to what Austin refers to as a ‘Constitutional
black hole’.111 In the United States, the courts have determined that Canadians are not
protected by the Fourth Amendment of the US Constitution, as ‘the Fourth Amendment

111  L.M. Austin, Technological Tattletales and Constitutional Black Holes: Communications
Intermediaries and Constitutional Constraints, 17(2) Theoretical Inquiries in Law 451–85
(2016).


does not apply to nonresident aliens’.112 At the same time, the Supreme Court of Canada
has determined that:

the Charter [i.e. the Canadian Constitution] has no application to a search or seizure undertaken
in a foreign territory, has no application to a Canadian request that foreign authorities initiate a
search in a foreign territory, and most likely will not trigger a Charter remedy at trial if a search
or seizure undertaken in a foreign territory is not Charter compliant.113

These threats provide the latest iteration of long-standing concerns over the sovereignty
of Canada’s ICT infrastructure, services and products. Indeed, the aforementioned poten-
tial for US surveillance of Canadian networks and data suggests that both Canadian
network and data sovereignty are being threatened by boomerang routing and as a result,
so too are the civil liberties of the users involved.

Promoting repatriation of Canadian domestic Internet traffic to reduce the US
surveillance threat
To promote repatriation of Canadian domestic Internet traffic and reduce the surveil-
lance threat from our neighbours to the south, at least two issues must be addressed: (1)
the infrastructure challenges that contribute to international routing; and (2) the political
economic relationships developed between Internet carriers.
The Canadian government can contribute to the repatriation of domestic Internet
traffic by investing in Internet infrastructure in Canada. One clear area for improvement is
the development of additional Canadian Internet exchange points (IXPs). The numerous
organizations that facilitate Internet transmissions often connect to each other at various
IXPs along the geographic path between sender and receiver. One of the contributors
to boomerang traffic is the large number of IXPs in the United States, compared with
Canada. As of February 2018, Canada has ten IXPs, while the United States has 88.114
Due to congestion as well as political economic concerns, the lack of IXPs in Canada,
compared to the relative abundance in the United States, increases the likelihood of inter-
national routing, even for domestic traffic. The Canadian Internet Registry Authority
(CIRA) argues that investing in the development of additional IXPs in Canada would
address this issue, noting:

Canadian Internet access is heavily and unnecessarily dependent upon foreign infrastructure,
especially US infrastructure . . . The provision of additional IXPs in Canada would address a
long-standing failure of coordination among Canadian networks. By all indications, Canada’s
dearth of IXPs results in large part from Canada’s proximity to the United States, where IXPs
are widespread.115

112  Ibid. 22.
113  Ibid. 474.
114  Internet Exchange Point Growth, Packet Clearing House (2013), available at https://prefix.
pch.net/applications/ixpdir/summary/growth/.
115  B. Woodcock and B. Edelman, Toward Efficiencies in Canadian Internet Traffic
Exchange 1, 4 (Canadian Internet Registration Authority, September 2012) available at https://
cira.ca/sites/default/files/attachments/publications/toward-efficiencies-in-canadian-internet-traffic-
exchange.pdf.


CIRA’s president, Byron Holland, added:

There is one way to protect ourselves, to some degree, from having our data fall under the juris-
diction of a foreign country. We must ensure more of it travels to its destination via Canadian
routes . . . By building a robust Canadian Internet infrastructure, including a nation-wide fabric
of IXPs, we can ensure more Canadian traffic stays in Canada, and is therefore only subject to
Canadian law.116

Investment in IXPs is not enough. Political economic relationships developed between
carriers determine the extent to which peering takes place at IXPs, and at which IXPs. As
noted by CIRA:

An individual Canadian network may find it easier to connect to an IXP in the United States,
or simply to buy transit, than to coordinate with its competitors to form additional IXPs in
Canada.117

The statement that Canada-United States connections are easier to develop than Canada-
Canada connections suggests that among the reasons for these connections are financial
incentives to connect internationally, as opposed to domestically. This is curious, as CIRA
suggests that Canada-Canada transmissions should be less expensive.118 While it is likely
that congestion concerns complicate this comparison, the likelihood, as CIRA notes,
that large Canadian carriers would prefer to develop relationships with American as
opposed to Canadian counterparts suggests that the reasons for boomerang routing are
more complicated than the congestion argument assumes. What furthers this argument is
the lack of peering connections between Canadian carriers at Canadian IXPs, evidenced
by the peering lists on various IXP websites and materials presented in carrier privacy
policies.119 This suggests that Canadian infrastructure is not being fully utilized, and as a
result, congestion concerns might be overstated.
Therefore, to address the surveillance threat imposed by boomerang routing,120 in
addition to investment in Canadian Internet infrastructure, the Canadian government
must address the political economic relationships that contribute to boomerang routing,
discouraging Canada-United States connections where Canada-Canada connections are
possible. Encouraging carriers to peer at Canadian IXPs would ensure that the infrastruc-
ture that exists is fully utilized, which would also push the government to pursue further
infrastructure expansion.
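The peering-list analysis invoked above can be sketched as a simple cross-tabulation: given records of which carriers peer at which exchanges, compute what share of each carrier's peering points sits on domestic infrastructure. The carrier names, IXP names and records below are hypothetical placeholders, not real peering data; real inputs would come from IXP participant lists or database exports.

```python
# Sketch: measure each carrier's utilization of domestic IXPs from peering
# records. All names and records are hypothetical placeholders for illustration.

from collections import defaultdict

peering_records = [
    # (carrier, ixp, ixp_country)
    ("CarrierA", "TorIX", "CA"),
    ("CarrierA", "Equinix-Chicago", "US"),
    ("CarrierB", "Equinix-Chicago", "US"),
    ("CarrierB", "Equinix-Ashburn", "US"),
    ("CarrierC", "TorIX", "CA"),
]

def domestic_peering_share(records, home="CA"):
    """Fraction of each carrier's peering points located in `home`."""
    totals, domestic = defaultdict(int), defaultdict(int)
    for carrier, _, country in records:
        totals[carrier] += 1
        if country == home:
            domestic[carrier] += 1
    return {carrier: domestic[carrier] / totals[carrier] for carrier in totals}

print(domestic_peering_share(peering_records))
# CarrierB peers only abroad, so its domestic share is 0.0
```

A carrier with a low domestic share despite available Canadian IXPs is the kind of evidence, cited above, that boomerang routing reflects political economic choices rather than congestion alone.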

116  B. Holland, PRISM, Internet Exchange Points and Canada, Public Domain (Blog), June 24,
2013, http://blog.cira.ca/2013/06/prism-internet-exchange-points-and-canada/.
117  Woodcock and Edelman, Toward Efficiencies in Canadian Internet Traffic
Exchange, n. 115 above.
118  Ibid.
119  Andrew Clement and Jonathan A. Obar, Keeping Internet Users in the Know or in the Dark:
An Analysis of the Data Privacy Transparency of Canadian Internet Carriers, 6(1) J. Information
Policy 294–331 (2016); Andrew Clement and Jonathan A. Obar, Keeping Internet Users in the
Know or in the Dark: Data Privacy Transparency of Canadian Internet Service Providers
(2013), available at www.ixmaps.ca/transparency/2013-report.php; Jonathan A. Obar and Andrew
Clement, Keeping Internet Users in the Know or in the Dark: Data Privacy Transparency of
Canadian Internet Carriers (2017), available at www.ixmaps.ca/transparency.php.
120  See n. 106 above.


Tactic Four: Financial Activism

Tactics for promoting surveillance reform often rely on judicial avenues to pressure gov-
ernment to abide by recognized norms respecting the human right to privacy.121 However,
numerous examples demonstrate government to be in conflict of interest when it comes to
being both the guarantor of this right and the regulator of infrastructure providers who
have been, purposefully or not, enablers of state mass surveillance. Edward Snowden’s
2013 revelations of mass surveillance documented how and to what extent the National
Security Agency (NSA) in the United States has infiltrated servers at Google, Microsoft,
Yahoo, Apple and a host of other trusted communications providers.122 In response,
many of these companies have become outspoken privacy advocates, with Apple notably
fighting a very public court battle in 2016 around being forced to access the locked iPhone
of a mass murderer.123 Privacy has become good for business, yet to date it is only service
providers, rather than infrastructure providers, that have been forced to reckon with the
power of concerned consumers. Amongst other revelations contained in Snowden’s docu-
ments are details of how the NSA and GCHQ conduct mass surveillance on the digital
data flowing over telecommunications infrastructure, with and without the cooperation
of infrastructure providers. For example, the United Kingdom’s Channel 4 has, with
the help of these documents, shown how Vodafone collaborated with GCHQ to provide
access to an estimated 85 per cent of digital communications traffic transiting through
the United Kingdom.124 Investigative reporting has also shown how AT&T and Verizon,
the two largest telecommunications providers in the United States, were active partners
in mass surveillance in and outside of the United States.125 Addressing such a situation,
where the state is able to force its mass surveillance imperative onto telecommunications
providers, demands new and creative reform strategies. While other chapters in this
volume deal with technological tools for safeguarding human rights, proposed here is
the beginning of a strategy for introducing reforms from within the corporate realm and,
thus, attempting to create new political spaces outside the immediate purview of the state.
If you are reading this, there is a high chance that you, as an academic researcher,
government employee or other sort of labourer, are indirectly linked (financially) to

121  Colin J. Bennett, The Privacy Advocates: Resisting the Spread of Surveillance (2008).
122  Barton Gellman and Laura Poitras, U.S., British Intelligence Mining Data from Nine
U.S. Internet Companies in Broad Secret Program, Washington Post, June 7, 2013, available
at www.washingtonpost.com/investigations/us-intelligence-mining-data-from-nine-us-internet-
companies-in-broad-secret-program/2013/06/06/3a0c0da8-cebf-11e2-8845-d970ccb04497_story.
html (last accessed November 25, 2016).
123  Laura Hautala, The Snowden Effect: Privacy is Good for Business, CNET, June 3, 2016,
available at www.cnet.com/news/the-snowden-effect-privacy-is-good-for-business-nsa-data-
collection/ (last accessed November 25, 2016).
124  Geoff White, Spy Cable Revealed: How Telecoms Firm Worked with GCHQ, Channel 4
News, November 20, 2014, available at www.channel4.com/news/spy-cable-revealed-how-telecoms-
firm-worked-with-gchq (last accessed November 25, 2016).
125  Julia Angwin, Charlie Savage, Jeff Larson, Henrik Moltke, Laura Poitras and James Risen,
AT&T Helped U.S. Spy on Internet on a Vast Scale, New York Times, August 15, 2015, available at
www.nytimes.com/2015/08/16/us/politics/att-helped-nsa-spy-on-an-array-of-internet-traffic.html
(last accessed November 25, 2016).


one or many telecommunications providers. This often occurs through investments
made either by one’s pension fund or one’s employer. For example, the Universities
Superannuation Scheme (USS) is a pension fund for all higher education educators in
the United Kingdom. In 2017, the fund held ₤275.01 million of shares in Vodafone.126
Maintaining such a direct investment enables direct engagement by the investor in the
activities of the corporation, a practice called shareholder activism. As a tool for
challenging state mass surveillance, it is a tactic in its infancy, but one successful example
is worth noting. In 2013, the American Civil Liberties Union (ACLU) Northern California
collaborated with the New York State Common Retirement Fund and Trillium Asset
Management LLC, filing a shareholder proposal demanding that AT&T and Verizon
both begin to issue transparency reports detailing the number of times that government
bodies have requested personal information concerning customers.127 Both companies
have issued such reports.
Further, the Principles for Responsible Investment (PRI) is a UN-hosted set of
investor principles aimed at integrating environmental, social and governance issues
into investment decisions. In November 2016, it counted close to 1,500 signatories, from
over 50 countries, representing over US$60 trillion.128 This is a three-fold increase since
2010, demonstrating a significant entrenchment of socially responsible investment as
an ideal practice.129 As suggested throughout this chapter, the safeguard of the human
right to privacy is an important social and governance issue with serious impacts on the
industries that facilitate digital communication, be they related to commerce or simple
communication. The rise of the PRI has been accompanied by an equal rise in shareholder
activism activities.130 Shareholder activism, either undertaken through large institutional
investors such as the USS or in the small-scale financial-hacking models of the Robin
Hood Asset Management Cooperative131 (playing the market to fund social movements)
and Rolling Jubilee132 (playing the debt market for the purpose of forgiving educational or
healthcare-related debt), is a tactic that could be used to bring about greater transparency
on the part of telecommunications providers with regard to their relationships with state
mass surveillance.

126  Universities Superannuation Scheme Ltd, Universities Superannuation Scheme,
Report & Accounts for the Year Ended 31 March 2017 (2017).
127  Abdi Soltani, NSA and Phone Companies: Can You Hear Us Now? (ACLU of Northern
California, 2013), available at www.aclunc.org/blog/nsa-and-phone-companies-can-you-hear-us-
now (last accessed August 28, 2015).
128  UNPRI, About the PRI, Principles for Responsible Investment, www.unpri.org/about (last
accessed November 25, 2016).
129  Nathan Cummings Foundation, Changing Corporate Behavior thru Shareholder
Activism 3–4 (2010), available at www.nathancummings.org/sites/default/files/Changning%20
Corporate%20Behavior%20thru%20Shareholder%20Activism.pdf (last accessed September 4,
2015).
130  Arthur F. Golden, Shareholder Activism & Engagement, 2016 Harvard Law School
Forum on Corporate Governance and Financial Regulation (2016), available at https://
corpgov.law.harvard.edu/2016/03/14/shareholder-activism-engagement-2016/ (last accessed
November 25, 2016).
131  Robin Hood Asset Management Cooperative, About – Robin Hood Coop Robin Hood Coop,
http://robinhoodcoop.org/ (last accessed November 25, 2016).
132  Rolling Jubilee, http://rollingjubilee.org (last accessed November 25, 2016).


Tactic Five: Design of an International Policy Standard

The International Principles on the Application of Human Rights to Communications
Surveillance133 were developed between 2012 and 2014 by an international group of
non-governmental organizations and security and privacy experts. They have been endorsed by
over 400 civil society organizations, over 300,000 individuals and parliamentarians in
Canada, Germany and France. These Principles, along with their accompanying back-
ground and international legal analysis134 and implementation guide, constitute a holistic
set of tools and guidelines for state mass surveillance reform. State surveillance suffers
from a profound lack of transparency, accountability and oversight.135 These Principles,
if used in the course of law-making and public participation in legislative and judicial
processes, provide hope of remedy. The implementation guide, in particular, responds to
recognition by the United Nations of the importance of this project and its potential.136
The 13 Principles are: legality; legitimate aim; necessity; adequacy; proportionality;
competent judicial authority; due process; user notification; transparency; public oversight;
integrity of communications and systems; safeguards for international cooperation;
and safeguards against illegitimate access. These Principles need to be deployed into the
hands of law-makers by the people they represent and academic researchers can play
an important role in doing so. In our experience as researchers and policy advocates,
we have witnessed a thirst on the part of law-makers for expert information on which
to base their legislative design.137 In November 2016, Dr. Light invited the Canadian
House of Commons Standing Committee on Public Safety and National Security to
collaborate on a long-term public consultation on Canada’s surveillance laws and to
organize an event for sharing expert knowledge with the committee.138 Such a
collaborative approach, directed by the International Principles on the Application of Human
Rights to Communications Surveillance, may suggest an additional tactic for advancing
surveillance reform efforts.

133  Access et al., Necessary and Proportionate, n. 5 above.
134  Electronic Frontier Foundation and Article 19, Background and Supporting
International Legal Analysis for the International Principles on the Application of Human
Rights to Communications Surveillance (2014); Access, Universal Implementation Guide
for the International Principles on the Application of Human Rights to Communications
Surveillance (2015), available at www.accessnow.org/page/-/docs/Implementation_guide_-_
July_10_print.pdf (last accessed September 1, 2015).
135  Simon Davies, A Crisis of Accountability: A Global Analysis of the Impact of the
Snowden Revelations (2014).
136  Access, Universal Implementation Guide, n. 134 above, 4–5; United Nations Office
of the High Commissioner for Human Rights and Office of the High Commissioner and the
Secretary-General, The Right to Privacy in the Digital Age (2014).
137  Based on Evan Light’s experience interviewing law-makers and, most recently, on his
testimony to the House of Commons Standing Committee on Public Safety and National Security
in Toronto, Canada, October 19, 2016.
138  See further http://www.glendon.yorku.ca/communications.

Evan Light and Jonathan A. Obar - 9781785367724



4. CONCLUSION: REFLECTING ON THE CANADIAN EXAMPLE

As Canada is well positioned to take meaningful steps towards surveillance reform, some
concluding reflections are worthwhile. A primary concern for reformers – one that can be
seen throughout each of the five tactics explored in this chapter – is the ability to maintain
public awareness of the issue at hand. One of the reasons Canada is well-positioned for
reform is that critical analyses of surveillance tools, tactics and scandals regularly appear
in Canadian media. This means that reformers at civil society organizations benefit
from a more engaged audience that is already somewhat familiar with the issues. While
media reports concerning Canadian connections to the Snowden revelations have tapered
off,139 a number of recent events have served to maintain the presence of surveillance in
the public domain and draw particular public attention to surveillance harms. In 2016,
journalists in Quebec discovered they had been monitored for several years by multiple
police forces (see section 2 above). Also in 2016, a Canadian Federal Court ruled that the
Canadian Security Intelligence Service (CSIS) improperly retained surveillance metadata
for ten years when it had not been granted judicial authority to do so.140 Only months
later, in April 2017, Canada’s national broadcaster, the CBC, broke a story detailing
the use of IMSI catcher cellphone surveillance devices in Ottawa.141 This has been fol-
lowed142 by a series of journalistic investigations concerning the use of IMSI catchers143
by police forces across the country.144 Additional attention has been lent to the issue by
the federal government’s introduction of a revised anti-terrorism Bill, C-59, which, among other things, would grant expansive new powers to Canada’s intelligence agencies while
omitting the introduction of substantive oversight mechanisms.145 Canadian civil society
organizations have capitalized on the public visibility of surveillance issues in Canada,
tailored tactics accordingly, and enhanced opportunities for reform outcomes.

139  Snowden Digital Surveillance Archive, https://snowdenarchive.cjfe.org (last accessed March 2, 2018).
140  Jim Bronskill, CSIS Broke Law by Keeping Sensitive Metadata, Federal Court Rules, CBC News, November 3, 2016, available at www.cbc.ca/news/politics/csis-metadata-ruling-1.3835472 (last accessed March 2, 2018).
141  Catherine Cullen, Brigitte Bureau, Someone is Spying on Cellphones in the Nation’s Capital, CBC News, April 3, 2017, available at www.cbc.ca/news/politics/imsi-cellphones-spying-ottawa-1.4050049 (last accessed March 2, 2018).
142  Yvette Brend, Vancouver Police Admit Using StingRay Cellphone Surveillance, BCCLA Says, CBC News, August 9, 2016, available at www.cbc.ca/news/canada/british-columbia/vancouver-police-stingray-use-cellphone-tracking-civil-liberties-1.3713042 (last accessed March 2, 2018).
143  Matthew Braga, Spies More Free to Use Cellphone Surveillance Tech Without Warrant, Under Court Ruling, CBC News, November 28, 2017, available at www.cbc.ca/news/technology/csis-court-stingray-imsi-catchers-1.4423871 (last accessed March 2, 2018).
144  Cristina Howorun, Toronto Police Refuse to Acknowledge Use of Stingray Surveillance Devices, CityNews, September 14, 2017, available at http://toronto.citynews.ca/2017/09/13/toronto-police-refused-acknowledge-use-stingray-surveillance-devices/ (last accessed March 2, 2018).
145  Christopher Parsons, Lex Gill, Tamir Israel, Bill Robinson and Ronald Deibert, Analysis of the Communications Security Establishment Act and Related Provisions in Bill C-59 (An Act Respecting National Security Matters), First Reading (December 18, 2017) (The Citizen Lab, CIPPIC, 2017).

Surveillance reform  221

The Canadian example also reveals that sustained digital mobilization by civil society actors can contribute not only to grassroots policy influence but also to opportunities for direct intervention. As explored above, Canadian civil society organizations have developed expertise in grassroots organizing, which includes explaining complex issues to the general public, mobilizing a community of reformers, and translating this action into
policy effects. Most recently, similar to the organizing around Bill C-30, OpenMedia has organized an online campaign opposing C-59, asking that the federal government ‘address dangerous new powers being proposed for (Communications Security Establishment Canada)’.146 This sustained digital presence has arguably advanced OpenMedia’s positioning within the ongoing state surveillance debate in Canada. Since their actions in
2012, OpenMedia’s influence has grown to the extent that it was invited to appear before
the House of Commons Standing Committee on Public Safety and National Security.147
Whereas previously, OpenMedia’s reform efforts mobilized massive public dissent outside
formal governmental process,148 the organization is now offered a seat at the policy table.
Canadian actors are also actively engaged (to an extent) in the repatriation of Canada’s Internet traffic. Thanks to efforts by groups such as the Canadian Internet Registration Authority (CIRA), the number of Canadian IXPs grew from two in 2012 to eight in 2016, with plans for 11 in 2018.149 As civil society members win small victories, greater opportunities seem
possible. In unveiling the 2017 budget, the sitting Liberal government announced it
would undertake a review of Canada’s Telecommunications Act. Its proposal specifically
states that ‘the Government believes in an open and transparent Internet environment
that emphasizes freedom – freedom to innovate, freedom to connect with others, and
freedom of discussion’.150 Such a review would provide the ideal venue for addressing the
need to repatriate domestic Internet traffic.
As surveillance technologies, techniques and potential harms evolve, so too should
tactics for reform. This chapter aimed to make visible an array of tactics different actors
inside and outside government can engage when working towards surveillance reform.
While the examples are mostly Canadian, we believe they should be appropriated, remixed
and translated in different contexts. We believe that, as in the case of the Uruguayan water
movement, tactics for surveillance reform can be developed internationally and adapted
domestically. They may be used in isolation or in concert, depending on shifting political,
economic and social winds in a given context. To engage in policy reform of any sort is
to attempt to work with a constantly moving and shape-shifting target. As the harms of
surveillance become more egregious and obvious, unavoidable to the media, to the public
at large and to their political representatives, challenges to mass surveillance become

146  OpenMedia, https://act.openmedia.org/StopSpyPowers (last accessed March 2, 2018).
147  SECU, Bill C-59, An Act Respecting National Security Matters, available at www.ourcommons.ca/Committees/en/SECU/StudyActivity?studyActivityId=9807256#2018-02-08 (last accessed March 2, 2018).
148  Obar and Shade, Activating the Fifth Estate, n. 67 above.
149  Canadian Internet Registration Authority, Canada’s Internet Infrastructure: Made-in-Canada Internet Exchange Points (IXPs), https://cira.ca/canada%E2%80%99s-internet-infrastructure-made-canada-internet-exchange-points-ixps (last accessed March 2, 2018).
150  Minister of Finance, Building a Strong Middle Class, #Budget2017 106 (Government of Canada, March 22, 2017), available at www.budget.gc.ca/2017/docs/plan/budget-2017-en.pdf (last accessed March 2, 2018).


urgent. Like water, privacy is indispensable for human bodies and minds, as well as
for the social fabric. Collective, continuous and diverse responses to mass surveillance are
necessary, and will hopefully benefit from discussions of tactics that have, in some cases,
advanced reforms.



12.  Germany’s recent intelligence reform revisited: a wolf in sheep’s clothing?
Thorsten Wetzling

1. INTRODUCTION

This chapter seeks to inform readers about important recent changes to German intel-
ligence law. As will be elaborated further below, the revelations by Edward Snowden
and the Bundestag’s far-reaching intelligence inquiry provided the impetus for the most
substantial intelligence reform in decades. Given the broad consensus for significant
changes and the constant flow of media stories revealing irregularities and poor intel-
ligence governance by either the Chancellery or the Bundestag, it is remarkable how
the security and intelligence establishment managed nonetheless to drive home its
preferred outcome of more surveillance, modest restraints and ineffective controls. At a
time when the Bundestag inquiry was still in full swing, the executive used its initiative
and parliamentary acquiescence to secure a reform that extended the mandate of the foreign intelligence service. The Chancellery’s winning strategy may thus have been
inspired by the playbook that Harold Koh developed in Why the President (Almost)
Always Wins in Foreign Affairs.1 Much like in past episodes of intelligence reform, the
problem was less ‘an aberration, a failure on the part of certain individuals within the
administration, but a set of deeper systemic flaws in the current legal structure’. What would have been required was ‘a prospective, legislation-oriented inquiry, rather than
a retrospective, fault-allocating exercise’.2 As will be shown in this text, Germany’s past
failures to properly regulate, steer and review the practice of electronic surveillance by its
foreign intelligence agency would have required a more in-depth and open consultation
on suitable structural remedies within legislation and the institutional design of oversight
mechanisms.
This chapter begins by depicting the political context of the reform followed by a brief
summary of its main changes. Next, the chapter elaborates on the reform’s main achieve-
ments, its constitutionality and its substantial shortcomings. Thereafter, the focus turns
to unresolved problems and open questions concerning the future practice of German
foreign intelligence.

1  See Harold Koh, ‘Why the President (Almost) Always Wins in Foreign Affairs: Lessons of the Iran-Contra Affair’ (1988) 97 Yale Law Journal 1255, available at http://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=2949&context=fss_papers.
2  Ibid.

Thorsten Wetzling - 9781785367724

1.1  Codified and Uncodified Surveillance Powers

This section introduces a number of key concepts used in German signals intelligence law
and briefly accounts for the country’s main legislative framework and its basic intelligence
oversight architecture.
The Bundesnachrichtendienst (BND) and the other 18 German intelligence services
have a wide range of digital powers at their disposal.3 Some powers are based in statute,
while others are exercised by executive decree, i.e. without ‘the legal embodiment of the
democratic will’.4 Table 12.1 lists a few known powers that pertain to the interception of
communications data.
The 2016 intelligence reform focused primarily on the practice of strategic surveillance
(strategische Fernmeldeaufklärung). This term refers to the bundled collection of large
quantities of communications data without concrete individual probable cause. It needs
to be distinguished from ‘targeted’ surveillance measures that are directed at an individual
suspect and his/her contacts.
Furthermore, German SIGINT law distinguishes between different types of stra-
tegic surveillance measures depending on whether the communication is labeled as
international or foreign. As regards the strategic surveillance of telecommunication
data to and from Germany (referred to as international communication), this has
long been codified and subjected to judicial review.5 The strategic surveillance of
communications data that has both its origin and destination outside of Germany

Table 12.1  Some of the BND’s surveillance powers and their legal basis

Targeted surveillance of communications data of individual German citizens as well as residents and legal entities in Germany: Article 10 Law, section 3
Targeted surveillance of communications data of foreign individuals on foreign territory: not codified; secret executive decree
Strategic (untargeted) surveillance of ‘international communications data’, i.e. communications data with either origin or destination in Germany: Article 10 Law, section 5
Strategic (untargeted) surveillance of ‘foreign communications data’, i.e. communications data with neither origin nor destination in Germany: BND Law, section 6 (2016 reform)
Computer network exploitation: not codified; secret executive decree
Bulk data acquisition: not codified; secret executive decree

3  In addition to 16 intelligence services at the state level, Germany has two other federal intelligence services: the Bundesverfassungsschutz (domestic intelligence service) and the Militärischer Abschirmdienst (military intelligence).
4  Hans Born and Ian Leigh, ‘Making Intelligence Accountable: Legal Standards and Best Practices for Oversight of Intelligence Agencies’ (2002) Norwegian Parliament Press 17, available at www.dcaf.ch/sites/default/files/publications/documents/making-intelligence.pdf.
5  Hans-Jürgen Papier, ‘Strategische Fernmeldeüberwachung durch den Bundesnachrichtendienst’, Deutsche Richterzeitung 01/17, 17–23.


Table 12.2  Pre-reform framework for the BND’s strategic surveillance

Foreign-domestic strategic surveillance (Strategische Fernmeldeaufklärung):
  Law: Article 10 Law, section 5
  Surveillance orders: BND requests them through the Interior Ministry
  Review body and composition: G10 Commission (4 honorary members, 4 deputies)
  Warrants: default standard of ex ante authorization with full knowledge of search terms
  Oversight mandate: legality and necessity review; can prompt immediate end of measures deemed unlawful or unnecessary
  Investigation powers: full access to premises and documents
  Effective remedy procedure: default standard of ex post notifications
  Data minimization: DAFIS Filter System
  Quantity restriction: 20% rule in Article 10 Law, section 10.4

Foreign-foreign strategic surveillance (Ausland-Ausland-Fernmeldeaufklärung):
  Law: BND Law, section 2.1 and secret interpretations
  Surveillance orders: unregulated
  Review body and composition: only executive control (if at all)
  Warrants, oversight mandate, investigation powers, effective remedy procedure: n/a
  Data minimization: DAFIS Filter System
  Quantity restriction: none

(referred to as foreign communication) lacked a comparable legal framework up until December 2016.6 Included in this notion of foreign communication is data that
may be transitioning through Germany. For easier reference and comparison, Table
12.2 summarizes Germany’s legal and oversight framework for strategic surveillance
that existed prior to the 2016 reform, while Table 12.3 depicts the newly established
framework. The ­following sections will discuss this in greater detail. It is sufficient to
say here that German intelligence legislation consists of a ‘complicated, scattered, and
imperfect set of rules’.7

6  Notice also that the so-called foreign-foreign traffic may still be transitioning through German Internet hubs and, consequently, accessed by Germany’s foreign intelligence service on domestic territory.
7  Klaus Gärditz, ‘Legal Restraints on the Extraterritorial Activities of Germany’s Intelligence Services’ in Russell Miller (ed.), Privacy and Power: A Transatlantic Dialogue in the Shadows of the NSA-Affair (Cambridge University Press, 2017) 421.


Table 12.3  Post-reform framework for the BND’s strategic surveillance

Foreign-domestic strategic surveillance (Strategische Fernmeldeaufklärung):
  Law: Article 10 Law
  Surveillance orders: BND requests them through the Interior Ministry
  Review body and composition: G10 Commission (4 honorary members, 4 deputies)
  Characterization: judicial oversight by quasi-judicial body
  Review sessions: once a month
  Warrants: default standard of ex ante authorization with full knowledge of search terms
  Oversight mandate: G10 Commission can prompt immediate end of measures deemed unlawful or unnecessary
  Investigation powers: full access to premises and documents
  Effective remedy procedure: default standard of ex post notifications
  Data minimization: DAFIS Filter System
  Quantity restriction: 20% rule in Article 10 Law, section 10.4

Foreign-foreign strategic surveillance (Ausland-Ausland-Fernmeldeaufklärung):
  Law: BND Law
  Surveillance orders: BND requests them through the Chancellery
  Review body and composition: Independent Committee (UG) (3 members, 3 deputies)
  Characterization: restricted judicial oversight by administrative body
  Review sessions: once every three months
  Warrants: default standard of ex ante authorization with limited knowledge of search terms
  Oversight mandate: UG can prompt immediate end of measures deemed unlawful or unnecessary
  Investigation powers: not specified
  Effective remedy procedure: no notifications
  Data minimization: DAFIS Filter System
  Quantity restriction: none

1.2  Important Distinctions in German Intelligence Law

An important norm to guide the practice and judicial oversight of surveillance by the
intelligence services is article 10 of the German Constitution (Basic Law). It obliges
the state to refrain from interfering with private correspondence, post and telecom-
munications. It ‘protects the rights holder against tapping, monitoring and recording of
telecommunication contents . . . the analysis of their contents and the use of the data
thus gained’.8
Article 10 of the Basic Law is a fundamental right that primarily obligates the state to
refrain from interfering with private communication. When private telecommunication is
being monitored, ‘a deep intrusion into a fundamental right takes place. The infringement
is particularly severe given that the imperative secrecy of these measures means that the
targeted individuals are excluded from the authorization procedure’.9 Due to this, the

8  Bundestagsdrucksache 18/3709, 2 and the jurisprudence of the German Constitutional Court, BVerfGE 110, 33.
9  Ibid.


German Constitution demands a clear legal basis for all such derogations. The so-called
Article 10 Law provides the required legal basis for this. Established in 1968, it defines the
cases and scope for the three federal intelligence services to engage in different forms of
communication surveillance and sets the legal framework for judicial oversight.
Another statute discussed in this chapter is the BND Law.10 It provides the mandate for
the BND and was substantially reformed in 2016 to include provisions on the practice,
authorization and oversight of strategic foreign-foreign communications data surveil-
lance as well as international SIGINT cooperation.
As shown by Tables 12.2 and 12.3, the authorization and oversight process for strategic
surveillance in German intelligence law differs significantly depending on whether the
surveillance measures are deemed to affect German citizens or not. Prominent constitu-
tional scholars such as the former President of the German Constitutional Court argued
in 2014 before the members of the NSA-inquiry committee of the Bundestag that the
BND’s strategic foreign-foreign communications data surveillance practice infringes
upon the right to private communication guaranteed by article 10 of the Basic Law.11
This right, they argued, protects not just German citizens but every person. According
to their view, neither nationality of the communicating participants nor country of
residence are decisive criteria for the protection of civil rights.12 Rather, they argue, the
key aspect is that German public authorities are bound by the provisions of the Basic
Law at all times.
The German government and the 2016 reform did not adopt this position. Instead,
the government argued that the right guaranteed by article 10 of the Basic Law can be
territorially restricted so as to protect only German citizens at home and abroad as well
as residents and legal entities in Germany. This aspect, as well as questions regarding the
government’s ability to distinguish clearly between national and non-national data and
the effectiveness of its data minimization procedures will also be discussed later in the text.
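The practical difficulty of cleanly separating national from non-national data can be illustrated with a deliberately simplified sketch of an IP-geolocation-based filter. The snippet below is purely illustrative and not a description of the BND's actual DAFIS system; the `GEO_DB` table and the `lookup_country` and `must_discard` functions are hypothetical stand-ins. It shows why any such filter is only as accurate as its underlying geolocation data, which is why the expert reports commissioned during the Bundestag's NSA inquiry concluded that 100 percent success can only be approximated.

```python
import ipaddress
from typing import Optional

# Hypothetical, tiny geolocation table mapping network prefixes to country
# codes. Real databases hold millions of prefixes and still misattribute
# traffic (VPNs, proxies, roaming, stale allocations), so any filter built
# on them is approximate by construction.
GEO_DB = {
    ipaddress.ip_network("192.0.2.0/24"): "DE",    # documentation range, assumed German here
    ipaddress.ip_network("198.51.100.0/24"): "FR",
    ipaddress.ip_network("203.0.113.0/24"): "US",
}

def lookup_country(addr: str) -> Optional[str]:
    """Return the country code for an address, or None if unknown."""
    ip = ipaddress.ip_address(addr)
    for net, country in GEO_DB.items():
        if ip in net:
            return country
    return None

def must_discard(src: str, dst: str, protected: str = "DE") -> bool:
    """Discard traffic whose endpoints touch the protected jurisdiction.

    Returns True when either endpoint resolves to `protected`, or when an
    endpoint cannot be resolved at all: treating unknown endpoints as
    protected errs on the side of discarding.
    """
    countries = (lookup_country(src), lookup_country(dst))
    return protected in countries or None in countries

# A foreign-foreign flow passes the filter...
assert must_discard("198.51.100.7", "203.0.113.9") is False
# ...while a flow with a German or unresolvable endpoint is dropped.
assert must_discard("192.0.2.5", "203.0.113.9") is True
assert must_discard("8.8.8.8", "203.0.113.9") is True   # prefix not in GEO_DB
```

The design choice encoded in `must_discard`, treating unresolvable endpoints as protected, makes visible the trade-off any such filter faces: erring toward discarding sacrifices collection coverage for legal safety, while the reverse choice sacrifices the protection that section 6.4 of the BND Law is meant to guarantee.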

1.3  Oversight Institutions and the Authorization Process

Important German institutions of intelligence oversight are the Bundestag’s permanent intelligence oversight committee (Parlamentarisches Kontrollgremium, PKGr) and the
Trust Committee (Vertrauensgremium). The former performs mainly ex post review of
intelligence policy whereas the latter’s sole task is budget control. The G10 Commission,
a quasi-judicial body of the Bundestag, performs judicial oversight on communications
interception by the federal intelligence agencies. In addition, the German Federal Data
Protection Authority (BfDI) performs reviews on the handling of data by the federal
intelligence services. With the 2016 reform, the oversight landscape grew with the addition

10  Other existing statutes such as the Act on Parliamentary Intelligence Oversight (PKGr Law) as well as the Acts on the domestic and the military intelligence service and additional laws regulating the vetting and classification procedures are of minor relevance for this chapter.
11  For the documentation of the first public session of the inquiry committee, see www.bundestag.de/dokumente/textarchiv/2014/nsa_untersuchungsausschuss/279296.
12  Matthias Bäcker, ‘Erhebung, Bevorratung und Übermittlung von Telekommunikationsdaten durch die Nachrichtendienste des Bundes’, Stellungnahme zur Anhörung des NSA-Untersuchungsausschusses (May 2014) 19, available at www.bundestag.de.


of the Independent Committee (Unabhängiges Gremium). It is tasked with performing reviews of the BND’s strategic foreign-foreign communications data surveillance as to
its legality and necessity. In addition, the reform created the institution of a permanent
intelligence oversight commissioner (Ständiger Bevollmächtigter) and further intelligence
oversight staff within the Bundestag’s administration.
The typical authorization process for strategic surveillance begins with the BND
requesting permission to intercept communications data from either the German Interior
Ministry (BMI)13 or the Chancellery (Bundeskanzleramt, BKAmt).14 The government
then prepares surveillance orders and presents them to either the G10 Commission
or the Independent Committee for ex ante judicial review, depending on whether the
envisaged interception concerns international or foreign communication data (see Table
12.1). Following their legality and necessity assessment, the G10 Commission or the
Independent Committee can then either accept these orders or call for their immediate
termination.15 Whether both commissions have sufficient mandates and resources to be
effective in their review function will also be discussed in the sections below.
What can be said from the outset is that neither the G10 Commission nor the
Independent Committee are judicial bodies sui generis. Thus, unlike in the United States
or in Sweden, ‘German intelligence law does not entail a preventive judicial control, this
blunt desideratum remains a gaping wound in the institutional body of the German
intelligence architecture’.16

2.  CONTEXT ON THE BND REFORM

2.1  A Compelling Case for Reform

The revelations by Edward Snowden, resulting in the Bundestag’s far-reaching intelligence inquiry, provided the impetus for intelligence reform. The so-called NSA inquiry brought
to light major legal gaps, poor executive control and grave democratic deficits concerning
the governance of signals intelligence in Germany. This has caused harm to German
and European strategic interests and has led to unjustifiable spying on German and EU
citizens, EU Member States and EU institutions, as well as international organizations.
More specifically, it emerged in 2014 that Germany’s foreign intelligence agency
performed its single most important surveillance activity for decades without a clear legal
framework, let alone independent authorization and oversight. The activity in question is
the collection of communications data with its origin and destination outside of Germany
(Ausland-Ausland-Fernmeldeaufklärung), referred to in this chapter as strategic foreign-
foreign communications data surveillance.
It is estimated that this practice accounts for up to 90 percent of the BND’s overall strategic

13  For individual and strategic measures under the Article 10 Law.
14  For strategic surveillance measures under the BND Law.
15  See Article 10 Law, s.15.6, for foreign-domestic strategic surveillance or BND Law, ss.9.4, 9.5, for foreign-foreign strategic surveillance, respectively.
16  Gärditz, ‘Legal Restraints on the Extraterritorial Activities of Germany’s Intelligence Services’, n. 7 above, 431.


surveillance activities.17 Yet, despite being so important and despite regularly infringing
upon the privacy rights of millions, German intelligence legislation lacked provisions
concerning the authorization of collecting and handling of personal data. Instead, as
shown in Table 12.2, prior to the 2016 reform, the BND intercepted, analyzed, stored
and transferred most of its strategic surveillance data solely on the basis of a very broad
provision in the BND Law and additional secret legal interpretations.18 When questions
regarding the legality of the BND’s strategic foreign-foreign communications data
surveillance became pressing in the NSA inquiry, the government revealed a number
of wide-ranging and underwhelming legal interpretations that it had used vis-à-vis the
intelligence services and private companies.19
Prior to 2016, a significant part of Germany’s SIGINT practices were also exempt from
any form of independent oversight: no parliamentary oversight body, no judicial review
commission and no data protection authority had any say on the BND’s strategic foreign-
foreign communications data surveillance. Instead, a very small circle within the executive
single-handedly ordered the collection of datasets, with rights infringements numbering in
the millions. The expenditure of public money notwithstanding, the BND’s strategic sur-
veillance practice had also never been independently evaluated for its effectiveness. Finally,
the government has yet to show that it provides a sufficiently robust protection for data that
is not meant to be subjected to strategic foreign-foreign communications surveillance.20
All these deficits emerged during the Bundestag’s NSA inquiry. As a result, the
already tarnished public trust in the German security and intelligence establishment,
the Bundestag’s oversight mechanisms and the Chancellery’s grip on executive control,
eroded even further.21 To make things worse, Germany had also introduced a resolution at

17  Markus Löffelmann, ‘Regelung der Routineaufklärung’ (2015) 6 Recht + Politik 1, available at www.recht-politik.de/wp-content/uploads/2015/06/Ausgabe-vom-22.-Juni-2015-Regelung-der-Routineaufkl%C3%A4rung-PDF-Download.pdf.
18  Prior to the 2016 reform, the German government justified the legality of the BND’s foreign-foreign strategic surveillance practice with a broad provision in the BND Law according to which ‘the Federal Intelligence Service shall collect and analyze information required for obtaining foreign intelligence, which is of importance for the foreign and security policy of the Federal Republic of Germany’ (s.1.2).
19  For more information on these legal theories, including the so-called Weltraumtheorie (space theory) and Funktionsträgertheorie (functionary theory), see Kai Biermann, ‘Die Anarchos vom BND’, Zeit Online (14 November 2014), available at www.zeit.de/politik/deutschland/2014-11/bnd-bundesnachrichtendienst-gesetz-grundrecht, and Konstantin von Notz, ‘The Challenges of Limiting Intelligence Agencies’ Mass Surveillance Regimes: Why Western Democracies Cannot Give Up on Communication Privacy’ in Russell Miller (ed.), Privacy and Power: A Transatlantic Dialogue in the Shadows of the NSA-Affair (Cambridge University Press, 2017).
20  Note that BND Law, s.6.4, explicitly excludes national data from strategic foreign-foreign surveillance. Due to widespread doubts as to the technical feasibility of ensuring this protection in practice, the NSA inquiry committee summoned expert opinions from an IT-security professor at the University of the Armed Forces and from the Chaos Computer Club. They elaborated on the accuracy of modern geolocation filtering, and both reports indicate that a 100 percent success may only be approximated. See the reports available at https://cdn.netzpolitik.org/wp-upload/2016/10/gutachten_ip_lokalisation_rodosek.pdf and www.ccc.de/system/uploads/220/original/beweisbeschluss-nsaua-ccc.pdf.
21  In every single legislative period over the last decade, the Bundestag established an ad hoc inquiry into allegations of intelligence governance malfeasance. While those proceedings were

the United Nations for better privacy protection and intelligence oversight which, seen in
conjunction with its own deficits in this realm, cast doubt on the credibility of Germany’s
foreign (cyber) policy at that time.

2.2  Driving Factors

The NSA inquiry, as well as the parliamentary inquiries on intelligence matters in the two
preceding legislative periods, contributed to the widespread sentiment that intelligence
might be important but that the oversight structures in place were not fit for purpose and
needed to be reformed. This became a consensus view that was expressed also by con-
servative politicians, such as Clemens Binninger, former Chairman of the Parliamentary
Intelligence Oversight Body.22 These weaknesses of oversight and executive control
mechanism were acknowledged by all major stakeholders, including, at least in part, the
Chancellery, which admitted operational and technological deficits.23
Facing sustained negative media coverage and calls for a comprehensive reform of
intelligence governance from various sections of civil society and facing increasing
pressure for legal certainty from within the services and the telecommunication sector
concerning the constitutionality of foreign intelligence collection practices, intelligence
reform became a pressing dossier in the Chancellery towards the end of 2015.24 The
government needed a suitable response to growing calls for a radical overhaul. This
being said, voices not just within the intelligence community constantly warned that, if
anything, the security threats Germany faces have grown in severity and complexity and
that the powers of the BND need to be expanded rather than curtailed.25 To them, it was
essential that the BND’s powers remained untouched or better yet that the BND received
significantly more technical and human resources to professionalize its electronic surveil-
lance practice.26
In the end, the key players within the Chancellery realized that it was wishful thinking
to believe that they could get away with no reform. The intelligence sector’s actions – on
a weekly basis – illustrated that this storm would simply not pass. The German public
had also become much more aware of the Chancellery’s central role in the governance

doubtlessly politicized, they did provide enough material to show that the permanent intelligence
oversight mechanisms were neither sufficient nor fit for purpose. See Thorsten Wetzling, ‘Aufklärung
ohne Aufsicht? Über die Leistungsfähigkeit der Nachrichtendienstkontrolle in Deutschland’, vol.
43 Demokratie (Berlin: Heinrich-Böll-Stiftung, 2016), available at www.boell.de.
22  Julian Heißler, ‘Kontrolle der Geheimdienste: Was bringt der verlängerte Arm?’, Tagesschau (19 November 2016), available at www.tagesschau.de/inland/pkgr-reform-101.html.
23  ‘Kanzleramt fordert Reform des BND’, Die Zeit (23 April 2015), available at www.zeit.de/digital/datenschutz/2015-04/bnd-nsa-spionage-kanzleramt-reaktionen/.
24  Ibid.
25  See e.g. this public expression of that sentiment: ‘Merkel entmachtet BND: USA kontrollieren Spionage in Deutschland’, Deutsche Wirtschaftsnachrichten (7 June 2016), available at https://deutsche-wirtschafts-nachrichten.de/2016/06/07/merkel-entmachtet-bnd-usa-kontrollieren-spionage-in-deutschland/.
26  Their lobbying was strong and briefly put the entire reform effort on hold in the spring of 2016: Stefan Aust, Manuel Bewarder and Florian Flade, ‘Kanzleramt legt BND-Reform vorerst auf Eis’, Welt (18 March 2016), available at www.welt.de/politik/deutschland/article153455819/Kanzleramt-legt-BND-Reform-vorerst-auf-Eis.html.



Germany’s recent intelligence reform revisited  231

of signals intelligence. Given that there was hardly any regulatory framework, let alone
public scrutiny regarding its role in the formulation of the National Intelligence Priority
Framework (Aufgabenprofil BND), the authorization of foreign intelligence collection
and international intelligence cooperation, the Chancellery may also have found it increasingly difficult to refer to these practices in public without any reform.27
In May 2014, three of the country’s most renowned constitutional experts publicly
rebuked the government’s argument that the BND Law provided a sufficient legal
basis for the BND’s foreign intelligence collection practice. This aggravated existing concerns among members of the intelligence and security sector and national telecommunication providers. Upset by the legal limbo and out of genuine fear of litigation, they pushed hard for a modern legal basis for the BND’s surveillance powers. Different cases brought to the constitutional and administrative courts by ISPs, the G10 Commission, the opposition parties within the inquiry committee, as well as several NGOs may also have propelled the Chancellery to seek a reform prior to the next federal election and the conclusion of the parliamentary investigation in 2017.28
When the Chancellery publicly acknowledged ‘operational deficits’ in the BND’s
foreign intelligence collection programs,29 it became clear that an intelligence reform
had to be prepared. The following minimal consensus quickly emerged among the ruling
government coalition in the Bundestag: the BND’s mandate needed an update; future
data collection on European partners ought to be limited; and intelligence oversight was
currently not fit for purpose.
However, actual recommendations on how to reverse engineer an established signals
intelligence machinery so as to better accommodate human rights, oversight and accountability standards were in very short supply.30 The press and the Bundestag were primarily
concerned with the ongoing investigation of past malfeasances and the reporting on the
BND’s methods. This created a vacuum that the Chancellery took advantage of. It became
the real driver behind intelligence reform, albeit operating from quite a different vantage
point. From the fall of 2015 onward, a small circle of key players within the Chancellery,
the BND, the German foreign, interior and justice ministries, as well as a handful of

27  ‘Kanzleramt fordert Reform des BND’, Die Zeit (23 April 2015), available at www.zeit.de/digital/datenschutz/2015-04/bnd-nsa-spionage-kanzleramt-reaktionen/.
28  A detailed review of each of those cases goes beyond the scope of this chapter. The biggest Internet hub in Germany (DE-CIX) sued the government in a pending case over the legality of surveillance orders. The G10 Commission lost a case against the government for access to the so-called NSA selectors. The qualified minority lost a case against the government on an access to information request. NGOs such as Reporters Without Borders, Amnesty International and Gesellschaft für Freiheitsrechte have also sued the government over the constitutionality of its surveillance practices.
29  The government referred to ‘technical and organizational deficits at the BND that the Chancellery identified as part of its executive control’ (Frankfurter Allgemeine Zeitung, 23 April 2015). Given that the Chancellery failed to provide clear briefings for SIGINT staffers on German strategic interests and the risks of too credulous intelligence cooperation, a more critical self-assessment of executive control would have been in order, too.
30  This has also been publicly discussed in a position paper by the Social Democratic Party which summarized the party’s original, pre-negotiation points on intelligence reform, see www.spdfraktion.de/system/files/documents/2015-06-16-spd-eckpunkte_reform_strafma-r-endfassung.pdf.


Members of Parliament and their key advisors, worked behind closed doors on the new
rules for the BND and its overseers.31
Unlike the United Kingdom or the Netherlands, which staged public pre-legislative scrutiny proceedings and Internet consultation processes on draft versions of their intelligence reforms, the German government kept the matter very close to its chest and did not allow any form of scrutiny before a very swift legislative process. The draft Bill was presented to Parliament on the day before its summer recess in July 2016. Only one public hearing followed, in September, where experts suggested a whole range of recommendations on how to improve the envisaged changes to the BND Law. Those recommendations had virtually no effect on the reform. In October 2016, the Bundestag and the Bundesrat (the Federal Council) adopted the Bills without any significant changes.
While the reform entered into force in December 2016, it will take time for the new
institutions to become fully operational.32

2.3 Summary

The 2016 reform introduced a number of significant changes to existing German intel-
ligence law. The summary below focuses on aspects deemed particularly relevant for
international readers.33

2.3.1  New rules for strategic foreign-foreign communications data surveillance


The BND Law now includes several new provisions on the authorization, collection,
handling, transfer and oversight of strategic foreign-foreign communications data surveillance (ausland-ausland-fernmeldeaufklärung). In so doing, Germany sets an important
new international standard.
In addition to specifying the BND’s mandate to intercept such data from within Germany (for transiting traffic), the law now also includes a provision on the use of such data that the BND obtained abroad (section 7). The main takeaways from the new rules in sections 6–18 are as follows.

Unrestricted metadata collection    The BND’s collection of metadata by means of strategic foreign-foreign communications data surveillance remains unrestricted. The retention of metadata is limited to six months. By contrast, content data may be retained for up to ten years.34

31  The following references point to the limited public knowledge, let alone pre-legislative scrutiny, that accompanied the drafting process: ‘Abhören mit Auflagen’, available at www.sueddeutsche.de/politik/geheimdienst-reform-abhoeren-mit-auflagen-1.2736352, and ‘Kanzleramt will BND beim Abhören bremsen’, available at www.sueddeutsche.de/politik/geheimdienste-kanzleramt-will-bnd-beim-abhoeren-bremsen-1.2823233.
32  At the time of writing this has not been achieved and important positions have yet to be filled. For example, the new Secretariat of the parliamentary intelligence oversight body (PK1-Bundestagsverwaltung) has yet to appear on the organization chart of the Bundestag administration and new positions (e.g. Leitender Beamter, PKGr Law, s.12.1) remain unfilled.
33  For a comprehensive list of individual expert reports on the draft BND reform, see statements by Gärditz, Graulich, Wetzling, Schindler, Töpfer, Wolff and Bäcker on the draft intelligence reform at Deutscher Bundestag 2016, Ausschussdrucksachen 18(4)653 A-G, available at www.bundestag.de/inneres.

Unrestricted acquisition and restricted collection of content data    As before, the BND will continue to acquire raw data without any de jure restrictions when intercepting foreign-foreign communications data in bulk. However, section 6.2 now obligates the foreign service to use search terms when operationalizing (‘collecting’) content data from its data pool.35 Yet the law defines neither ‘search term’ nor ‘telecommunication nets’ any further. Obviously, this leaves significant operational latitude for the intelligence community. In addition, section 12 (Eignungsprüfung) provides for an important exception to the general search term provision. ‘Telecommunication nets’, according to this rule, may be temporarily tested in order to assess the quality of their output and to generate new search terms.

New content data protection hierarchy    With regard to the collection of content, the BND
Law distinguishes between four different groups for which different authorization proce-
dures, data protection standards and oversight provisions apply (see Tables 12.4, 12.5).36
In terms of prioritization, these groups are:

● Beneficiaries of G10 protection (i.e. German nationals, domestic legal entities and
persons on German territory):37 the amended BND law stipulates that the collection of content and metadata from this group by means of strategic foreign-foreign
communications data surveillance is not permissible (section 6.4). Any electronic
surveillance on this group is subject to the provisions of the Article 10 Law which
entails stricter rules for the authorization, handling, judicial oversight, as well as
notification procedures.
● Public institutions of the European Union and its Member States: the use of selectors that target public bodies of EU Member States or EU institutions is restricted
to 12 warranted cases and requires orders that mention the individual search terms
(section 9.2).
● EU citizens: the use of selectors that target EU citizens is restricted to 21 warranted
cases. Interception orders are not required to mention the individual search terms
(section 9.2).
● Non-EU data: the least restrictive regime governs the steering of search terms
that aim at non-EU data. This is justifiable (a) to identify and respond to threats
to Germany’s domestic and external security; (b) to maintain Germany’s capacity
to act; and (c) with respect to other information relating to the government’s
secret national intelligence priority framework (Aufgabenprofil). The strategic

34  BNDG, s.20.1 in conjunction with BVerfSchG, s.12.
35  Unless otherwise indicated, all references to sections in this text refer to the BND Law.
36  Note: the purely ‘foreign’ strategic surveillance practice by the BND, i.e. the collection of
foreigners’ data on foreign soil, remains unregulated. This will be further explained in the analysis
section.
37  Kurt Graulich, ‘Reform des Gesetzes über den Bundesnachrichtendienst’ (2017) 2(1)
Kriminalpolitische Zeitschrift 43, at 49.


Table 12.4  Different authorization criteria for targets of the BND’s strategic surveillance

Group A (German citizens at home and abroad, all persons on German territory and domestic legal entities): may not be subjected to strategic surveillance of foreign-foreign communications data; any surveillance must be done in accordance with the Article 10 Law (except for incidental collection). Oversight: G10 quasi-judicial oversight and a general notification requirement to allow effective remedy.

Group B (public institutions of EU bodies and Member States): may be targeted; requires a collection order that must identify the search terms. Search terms may only be used if necessary for information related to 11 + 1 warranted cases: eight circumstances under Article 10 Law, section 5, + three broad justifications (BND Law, section 6.1) if needed for third country information of particular relevance to Germany’s security, + data collection under BND Law, section 12. Oversight: ex ante authorization with knowledge of search terms and a Chancellery notification requirement; no notifications to surveillance targets.

Group C (EU citizens): may be targeted; requires a collection order but no need to mention search terms therein. Search terms may only be used if necessary for information related to 20 + 1 warranted cases: eight circumstances under Article 10 Law, section 5.1, + three broad justifications (BND Law, section 6.1) if needed for third country information of particular relevance to Germany’s security, + nine justifications under Article 10 Law, section 3.1, + data collection under section 12. Oversight: ex ante authorization without knowledge of search terms; no notifications to surveillance targets.

Group D (rest of the world): may be targeted; requires a collection order but no need to mention search terms therein. Search terms can be used if necessary for information related to 3 + 1 very broad warranted cases: three broad justifications (BND Law, section 6.1) without the third country relevance caveat, + data collection under section 12. Oversight: ex ante authorization without knowledge of search terms; no notifications to surveillance targets.


Table 12.5  Different justifications for strategic surveillance measures in German intelligence law

Three warranted cases of BND Law, section 6.1


● Risks to the internal or external security of the Federal Republic of Germany (FRG)
● FRG’s ability to act
● Information on developments of foreign and security policy significance that relate to the
National Intelligence Priority Framework
Eight warranted cases of Article 10 Law, section 5.1
● An armed attack against the nation
● Intent to carry out acts of international terror
● International proliferation of military weapons
● Illegal import or sale of narcotics
● Counterfeiting
● International money laundering
● Smuggling or trafficking of individuals
● International criminal, terrorist or state attack by means of malicious programs on the
confidentiality, integrity or availability of IT systems
Nine warranted cases of Article 10 Law, section 3.1
● Crimes of treason
● Crimes that are a threat to the democratic state
● Crimes that threaten external security
● Crimes against national defense
● Crimes against the security of NATO troops stationed in the Federal Republic of Germany
● Crimes against the free democratic order as well as the existence or the security of the country
● Crimes under the Residence Act
● Crimes under Criminal Code, sections 202a, 202b and 303a, 303b, in so far as they are
directed against the internal or external security of the Federal Republic of Germany, in
particular against security sensitive bodies of vital institutions
● Crimes under Criminal Code, section 13

foreign-foreign communication data surveillance must be administered on ‘telecommunication nets’ the Chancellery identified in its interception orders. There is no requirement for search terms to be listed in such orders.

Ban on economic espionage    The BND Law introduced an explicit ban on the use of
foreign-foreign communication surveillance for the purpose of economic espionage. It
does not, however, define economic espionage or provide a list of practices that could be
categorized as such.

2.3.2  A separate authorization and oversight regime


The reform created the Independent Committee (Unabhängiges Gremium, UG), a second
German authorization body for strategic surveillance. Situated at the Federal Court of
Justice in Karlsruhe, the UG provides ex ante authorization of strategic foreign-foreign
communications data surveillance by the BND. It consists of three members plus three
deputies. Its president and one member must be judges at the Federal Court of Justice. The


third member must be a federal public prosecutor at that court. The executive appoints the members of the Independent Committee (section 16.2). It meets at least every three months and has the power to order the immediate termination of measures it finds unlawful or unnecessary.

2.4  Rules for International Intelligence Cooperation

With regard to SIGINT cooperation between the BND and its foreign intelligence partners, the BND Law now contains a number of provisions that also stand out by international comparison. Sections 13–15 mark the first specific provisions on international intelligence cooperation in German intelligence law.

● Section 13.3 states that any new cooperation between the BND and foreign intelligence partners requires a prior written administrative agreement on the aims, the nature and the duration of the cooperation. This also includes an appropriations clause stipulating that the data may only be used for the purpose for which it was collected and that its use must respect fundamental rule of law principles. Agreements also require a consultation among the foreign cooperation partners in order to comply with data deletion requests by the BND.
● Section 13.4 defines seven broad permissible aims for new international SIGINT cooperation involving the BND. These range from ‘information on political, economic or military developments abroad that are relevant for foreign and security policy’ to ‘comparable cases’.
● SIGINT cooperation agreements with EU, EFTA and NATO partners require the
approval of the Chancellery. Agreements with other countries require approval by
the head of the Chancellery. The executive is required to inform the parliamentary
intelligence oversight body about all such agreements.
● Sections 26–30 introduce new provisions on SIGINT databases. The BND can run joint databases (section 27) or contribute to foreign-run databases (section 30). The BND’s cooperation with foreign partners on databases is only permissible when (a) it is deemed particularly relevant for Germany’s foreign and security interests; (b) basic rule of law principles are upheld within the partnering states; and (c) all partners agree to honor the reciprocity principle (section 26.2). The Chancellery’s authorization and parliamentary oversight notification obligations are the same as those for the SIGINT cooperation agreements. There is a similar requirement that the aims and forms of cooperation on joint databases are documented in writing.
● Section 28 further requires the BND to keep detailed, separate file arrangement documentation for each database that it uses with foreign intelligence partners and for which it is responsible. The German Federal Data Protection Authority (BfDI) must be consulted prior to the installation of a new database file arrangement. It may review the creation of new databases by the BND as well as the data that the BND contributes to joint databases.

2.5  New Rules and Institutions for Parliamentary Intelligence Oversight

The reform also introduces significant changes to the law on and future practice of
parliamentary intelligence oversight (PKGr Law). Most notably:


● It created the new institution of a permanent intelligence oversight coordinator. As the nine members of the parliamentary intelligence oversight committee often lack the time, resources and knowledge to perform their important mandate, the coordinator can now perform investigations on their behalf. He or she can also be tasked with additional budget control by the Bundestag’s Trust Committee. The coordinator prepares the public reports by the intelligence oversight body and takes part in the monthly G10 sessions, the meetings of the parliamentary oversight body and the Trust Committee.
● It further clarified the reporting obligations of the executive. The executive has to report not only on the general activities of the three federal intelligence services but also on developments of particular relevance. The amended law now provides three examples of the latter: (a) notable changes to Germany’s foreign and domestic security situation; (b) internal administrative developments with substantial ramifications for the pursuit of the services’ mandate; and (c) singular events that are subject to political discussions or public reporting (PKGr Law, section 4.1).
● The reform will also create more than a dozen full-time positions for intelligence
oversight within the Bundestag’s administration.38

3. ANALYSIS

The following section provides a critical analysis of Germany’s recent intelligence reform.
The discussion begins with what the author sees as true improvements in the reform.
Especially when compared to recent surveillance reforms in other countries, Germany’s
expansion of the authorization procedure to non-national data and its new requirements
for SIGINT cooperation stand out as progressive.

3.1  Improvements in Intelligence Reform

3.1.1  Democratic legitimacy for a key SIGINT practice


The reform now provides a specific legal footing for a significant part of the BND’s
SIGINT activities. Given the magnitude of past deficits and the ubiquitous calls for
a better legal framework, this may not seem like a major achievement. Yet despite the
reform’s many shortcomings, it is a fact that many European countries, let alone nations
throughout the world, operate according to intelligence laws that do not contain detailed
provisions on the practice of strategic communications data surveillance, let alone
restrictions and democratic oversight on the collection of foreigners’ data by national
intelligence services.39 At the very least, by means of this reform, the German Parliament

38  See Bundesdrucksache 18/9040, available at https://dip21.bundestag.de/dip21/btd/18/090/1809040.pdf.
39  For a good comparative overview, see the study on European SIGINT laws by the European Fundamental Rights Agency, Surveillance by Intelligence Services: Fundamental Rights Safeguards and Remedies in the EU (November 2015), available at http://fra.europa.eu/en/publication/2015/surveillance-intelligence-services. In addition, one can refer to the conclusion of another comparative study: ‘Not only are legal surveillance frameworks on “international communications” very weak in the US and the UK, but the laws and practices in many other countries are just as bad, and in some cases, worse. These frameworks are so feeble that they allow governments to interfere arbitrarily with the right of confidentiality of communications of hundreds of millions of people worldwide by collecting data in bulk without proven cause for suspicion’; see Douwe Korff, Ben Wagner, Julia Powles, Renata Avila and Ulf Buermeyer, Boundaries of Law: Exploring Transparency, Accountability, and Oversight of Government Surveillance Regimes, Global Report, University of Cambridge Faculty of Law Research Paper No. 16/2017 (5 January 2017), available at https://ssrn.com/abstract=2894490.

has now democratically legitimized a much bigger part of its strategic communications data surveillance.

3.1.2  Measures’ legality and necessity can be challenged

Furthermore, the reform provides ex ante authorization and some ex post control provisions. Thus, de jure, the reform makes it possible for jurists who are not bound by instructions from the executive to challenge these measures on grounds of legality or necessity.

3.1.3  Rules on international intelligence cooperation

By international comparison, the reform includes detailed provisions governing the BND’s future SIGINT cooperation with foreign intelligence partners. The BND leadership must seek written agreements from foreign partners covering a number of aspects which, by and large, aim to restrict the BND’s participation in measures that would be deemed unlawful if performed solely under German jurisdiction. Moreover, next to attaching a number of broad conditions to future SIGINT cooperation agreements, the reform also introduces specific rules on joint databases, particularly those run by the BND. For the latter, the German Federal Data Protection Authority’s review mandate covers all the data that the BND contributes to joint databases with foreign partners.

3.1.4  New ministerial responsibilities

The reformed BND Law now requires more documentation for individual decisions and includes a number of accountability provisions for the Chancellery’s or (as the case may require) the Head of the Chancellery’s steering of SIGINT measures. For example, future interception orders must refer to telecommunication nets determined by the Chancellery. The use of selectors targeting EU institutions or EU Member States requires prior notification of the Chancellery, and new SIGINT agreements and the maintenance of joint databases must be approved by the Chancellery. These and other provisions further reduce the risk of plausible deniability by the executive vis-à-vis its foreign service.

3.2  Reform’s Contested Privacy Discrimination

The reform of the BND Law is based on the premise that the right to private communication as guaranteed in the German Constitution (Basic Law, article 10) can be territorially restricted so as to protect only German citizens at home and abroad, residents and domestic legal entities in Germany. In other words, the reform presumes that the privacy infringements for non-Germans caused by these surveillance measures can be administered with significantly fewer safeguards compared to those afforded to Germans (see Table 12.4).


The constitutionality of this restricted interpretation of Basic Law, article 10, is highly contested, and the question has yet to be answered by the Constitutional Court. Clearly, the drafters of
the intelligence reform took a significant risk: in case their interpretation of the limited
territorial reach of article 10 of the Basic Law fails to convince the Constitutional Court,
they will have to revise almost the entire 2016 reform. This is because the reform and
its explanatory memorandum carefully avoid any specific reference to article 10 of the
Basic Law, as well as to the existing regime for the authorization and judicial oversight of
strategic surveillance (Article 10 Law).
Instead of adding new rules for strategic foreign-foreign communications data surveil-
lance into the existing Article 10 Law, and instead of strengthening the G10 Commission’s
mandate, the government created a whole new parallel legal and oversight framework with
the amendments to the BND Law. The ensuing discrimination against privacy protections
and the fragmentation of the German oversight landscape could have been avoided. It
would have been possible to extend the basic right to private communication under article
10 of the German Constitution to foreigners abroad without necessarily extending the
ex post notification practice to them. But this would have come at the cost of extending
the territorial reach of article 10 and this was an option the government tried to avoid at
all costs.
Whereas the German Constitutional Court has not unequivocally positioned itself on the question of the territorial reach of Basic Law, article 10, in the past, it will soon have
to take a stance. Litigation currently being prepared by the Society for Civil Rights (Gesellschaft für Freiheitsrechte, GFF) will require a definitive position by the court.

3.3  Some of the Reform’s Many Deficiencies

3.3.1  Insufficient judicial oversight


The reform created a second authorization body for strategic communications data surveillance by the BND. Despite being staffed by professional jurists and despite its proximity to the Federal Court of Justice in Karlsruhe, the Independent Committee (UG) is neither independent nor a court. Not only are its three members and deputies appointed by the executive; one of the three members will also be a public prosecutor from the Federal Public Prosecutor’s Office. This raises potential conflicts of interest. Instead of a court, the UG may best be described as an administrative body tasked with the ex ante authorization of the newly codified surveillance measures.
While the BND reform created the UG, the new provisions say very little about its actual
oversight powers. By comparison, the G10 Commission is not only tasked with authorizing
surveillance measures but also has the authority to review the collection, subsequent
data handling and use of all personal data related to the surveillance measures. In order
to do so, the G10 Commission has guaranteed access to all documents, saved data and
data management programs used in conjunction with surveillance measures, as well as
access to any premises used for SIGINT purposes by all three federal intelligence agencies
(Article 10 Law, section 15.5). By contrast, the BND Law makes no mention of such judi-
cial oversight powers for the UG. Clearly, the UG is not meant to engage in any in-depth
judicial oversight. Interestingly, however, the law does grant the UG the authority to
conduct random checks as to whether the search terms used by the BND for the targeting
of EU data correspond to the restrictions articulated in section 6.3.

Thorsten Wetzling - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:47AM
via New York University

WAGNER_9781785367717_t.indd 239 13/12/2018 15:25


240  Research handbook on human rights and digital technology

Apart from the missing provisions on the UG’s actual oversight powers, one can also
express serious concerns regarding the authorization procedure. More specifically, when
the UG assesses the legality and necessity of a surveillance measure it may do so on the
basis of interception orders that do not list the search terms. Any legality and necessity
assessment it makes without knowledge of the search terms is likely to lack credibility
and substance.

3.3.2  Further fragmentation of German intelligence oversight system


Instead of streamlining the existing oversight landscape, the reform has fragmented it
further. Next to the Trust Committee (budget oversight), the permanent parliamentary
oversight body, the G10 Commission and the Federal Data Protection Authority, demo-
cratic intelligence oversight will now also be administered by two new institutions: the
Independent Committee and the Parliamentary Oversight Commissioner. As indicated,
the fact that Germany now hosts two separate authorization bodies for strategic surveil-
lance, one in Berlin (G10 Commission) and one in Karlsruhe (UG), can only be explained
when seen as a corollary to the government’s position on the restricted territorial reach
of article 10 of the Basic Law. It would have made much more sense simply to extend the
mandate of the existing G10 Commission. However, the G10 Commission not only has
deficits of its own (see next section); it had also displayed a newly acquired audacity
to publicly challenge the government.40
While the reform has introduced limited measures to facilitate the exchange of
information among oversight bodies (for example, the oversight commissioner’s right
to attend the different meetings of the Trust Committee, the parliamentary oversight
body and the G10 Commission), many members still refer to their respective institutions
as silos. Individual members of the G10 Commission are not regularly in touch with
the members of the parliamentary oversight body, and the reform does not provide for
any specific exchange between the Independent Committee and the G10 Commission. Given
the similarity of interception orders and the similar role of telecommunication providers
compelled to assist the government, it would certainly be useful for both bodies to be
more systematically aligned, possibly even conducting joint assessments and visits on the
premises of the foreign service.41

3.3.3  Soft restrictions


As previously shown, the new provisions in the BND Law use unduly broad definitions
when regulating aspects that are meant to restrict surveillance. For example, consider
the minimal requirements for search terms used to collect information on non-EU data
(section 6.1) or the list of permissible goals for new international SIGINT cooperation
agreements (section 13.4). What, one may ask, falls under information required to secure

40  In October 2015, the G10 Commission sued the government over access to NSA selectors.
Irrespective of the merits of this unsuccessful case for the G10 Commission, it is safe to assume
that the very fact that the Commission turned to the Constitutional Court has tarnished the
Chancellery’s trust in the Commission’s four honorary fellows.
41  Recently, the German Federal DPA has publicly proposed the idea of joint control
visits together with the G10 Commission. See Tätigkeitsbericht der Bundesbeauftragten für den
Datenschutz und die Informationsfreiheit (2015–2016) 134.



Germany’s recent intelligence reform revisited  241

the Federal Republic’s capacity to act (section 6.1) or what defines ‘comparable cases’ that
may give rise to new international SIGINT cooperation (section 13.4)? Also, the rules
on any future international SIGINT cooperation agreement need only be unilaterally
declared once at the outset; no follow-up procedures to monitor adherence to those
rules are provided for.
Equally concerning is the fact that the law places no restrictions on the extent of data
acquisition from ‘telecommunication nets’ (section 6.1). Hence,
Germany’s foreign service may acquire as much raw data as it likes or its resources
allow. Compare this, for example, with Article 10 Law, section 10.4, which stipulates that
interception orders for strategic communication that may also target national data must
identify the geographical area for which information is being sought and also the com-
munication channels (transmission paths). Furthermore, for any strategic surveillance
on foreign-domestic communication there is the requirement that the data subjected to
surveillance must not exceed 20 percent of the overall capacity of the communication
channels identified in the order (Article 10 Law, section 5).42
The BND Law also does not clarify what may count as a ‘search term’ or how such
terms may be used in practice. While it may be understandable that an intelligence law
does not provide detailed information on operational procedures, which can also change
rapidly over time, the mere reference to ‘search term’ provides ample latitude for the
intelligence sector to use the most powerful regular expressions, especially if oversight and
review bodies lack the knowledge and resources to review the use of regular expressions
in surveillance programs.
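Why this matters can be shown with a small, purely illustrative sketch: a selector interpreted as a regular expression can sweep in far more traffic than a literal keyword. The messages and patterns below are invented; nothing public specifies how BND selectors are actually implemented.

```python
import re

# Invented sample traffic; no real selector lists are public.
messages = [
    "meeting at the embassy on friday",
    "re: invoice 4711 attached",
    "flight LH123 to istanbul confirmed",
]

literal = "embassy"  # a narrow, literal search term
# The same 'search term' slot filled with a regular expression:
broad = re.compile(r"embassy|flight \w+|invoice \d+")

literal_hits = [m for m in messages if literal in m]
regex_hits = [m for m in messages if broad.search(m)]

print(len(literal_hits))  # 1 message matched by the literal term
print(len(regex_hits))    # 3 messages matched by the regex
```

Unless an oversight body can inspect the expressions themselves, random checks of ‘search terms’ reveal little about the actual breadth of collection.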

3.3.4  Abstract notification requirements instead of more transparency


Consider briefly the US government’s Implementation Plan for the Principles for
Intelligence Transparency and its various accompanying measures that seek to make
‘information publicly available in a manner that enhances public understanding of
intelligence activities’.43 For example, a Tumblr site (IC on the Record) features, among
other declassified documents, dozens of orders and opinions by the US Foreign Intelligence
Surveillance Court (FISC), the rough equivalent of Germany’s quasi-judicial G10
Commission.44 In Germany, the G10 Commission itself has no reporting obligations.
Instead, the parliamentary intelligence oversight body reports annually to the German
Parliament on the measures performed in relation to the Article 10 Law. Those reports are
public but they only provide rudimentary factual information about G10 authorizations
rather than the Commission’s underlying decisions and interpretations of the law. For
example, the 2015 G10 report states that in regard to the steering of search terms per

42  This particular provision stems from the pre-digital era and is highly problematic and open to
frequent abuse. See Thorsten Wetzling, The Key to Intelligence Reform in Germany: Strengthening
the G10-Commission’s Role to Authorize Strategic Surveillance, Policy Brief 02/16 (Berlin: Stiftung
Neue Verantwortung, 2016), available at www.stiftung-nv.de for a further elaboration. Given that
the reform steered clear of any changes to the existing Article 10 Law, this problem remains.
43  Implementation Plan for the Principles for Intelligence Transparency, available at www.dni.
gov/files/documents/Newsroom/Reports%20and%20Pubs/Principles%20of%20Intelligence%20
Transparency%20Implementation%20Plan.pdf.
44  See https://icontherecord.tumblr.com/.


threat category per six months, the G10 Commission authorized 949 search terms that fall
into the category of ‘international terrorism’.45
This points to a broader issue concerning procedures for the classification and declassification of such materials.
With regard to the foreign surveillance measures codified in the amended BND Law,
section 16.6 only requires that the Independent Committee informs the parliamentary
oversight body about its work at least every six months. There are no further requirements
in the law about the content and format, let alone public documentation of such reporting.

3.4  The Reform’s Important Omissions

3.4.1  No reform of the G10 Commission


Worse still, this also meant that the numerous legal and institutional deficits in the
system for the authorization and judicial oversight of strategic surveillance, as regulated
in the Article 10 Law, were left unaddressed by the reform. For example, the
G10 Commission (a) still lacks sufficient resources and does not review the handling of
data by the intelligence services; (b) remains staffed by four honorary fellows who come
together only once a month to authorize a stack of interception orders; (c) still operates
without adversarial proceedings (i.e. there is no-one within the commission to present the
views of those subjected to surveillance and argue for less infringing measures); and (d)
the abuse-prone and anachronistic rule that restricts the collection of foreign-domestic
surveillance data to a maximum of 20 percent of the capacity of the pertinent communi-
cation channels also remains in place.46

3.4.2  Other bulk powers not legislated


Despite valid criticism that the British intelligence reform granted too many surveil-
lance powers to the British intelligence and security services,47 Westminster did at least
provide a legal footing for a greater set of bulk powers.48 By contrast, the BND Law says
nothing about the foreign service’s hacking powers (computer network exploitation).
The BND’s acquisition of existing databases from the commercial sector also remains
unlegislated. Furthermore, German intelligence law continues to leave unregulated the
so-called ‘reine Auslandsaufklärung’ (pure foreign intelligence collection), i.e. the BND’s
strategic surveillance of foreigners on foreign soil, which is subject to no search term
requirements.49

45  See Parliamentary Intelligence Oversight Body, Annual G10 Report (2015) 8, available at
http://dip21.bundestag.de/dip21/btd/18/112/1811227.pdf.
46  See Wetzling, The Key to Intelligence Reform in Germany, n. 42 above, for further elaboration
on those deficits.
47  See e.g., the recent jurisprudence of the UK Appeal Court judges, referred to here: ‘UK mass
digital surveillance regime ruled unlawful’, Guardian, 30 January 2018, available at www.theguardian.
com/uk-news/2018/jan/30/uk-mass-digital-surveillance-regime-ruled-unlawful-appeal-ruling-
snoopers-charter. For detailed and nuanced discussion, see also Graham Smith’s writings on the
matter, e.g. his blogpost at www.cyberleagle.com/2016/09/a-trim-for-bulk-powers.html.
48  This refers to bulk interception, bulk acquisition, bulk equipment interference and bulk
personal datasets. See e.g., the discussion in David Anderson, Report of the Bulk Powers Review
(August 2016), available at https://terrorismlegislationreviewer.independent.gov.uk/wp-content/
uploads/2016/08/Bulk-Powers-Review-final-report.pdf.
49  Graulich, ‘Reform des Gesetzes über den Bundesnachrichtendienst’, n. 37 above, 47.


3.4.3  Effectiveness of strategic surveillance still not assessed


Do these privacy infringements, numbering in the millions, actually pay off and offer a
concrete security gain? Unfortunately, despite calls to do so by the Chancellor’s coalition
partner,50 the reform did not establish an independent evaluation procedure to
assess the effectiveness of strategic surveillance and the accuracy of the data minimization
programs. Thus, the following two important claims cannot be verified: (1) this huge sur-
veillance infrastructure that was built and operationalized in the shadows of democratic
oversight produces actionable intelligence; and (2) our agents only get to see lawfully
collected data. The current oversight institutions lack the mandate and the IT resources
to perform any such evaluation and some of the data minimization procedures that have
been put in place are operating without independent verification.51 Parliamentarians
lack the imagination and the political will to use their legislative and budgetary powers
to close this pressing gap. Next to the American example, where the Privacy and Civil
Liberties Oversight Board (PCLOB) performed effectiveness investigations, consider also
the Dutch independent oversight board, CTIVD. It has recently announced a new project
to review ‘the possibilities of systemic oversight on the acquisition, analysis and deletion
of large amounts of data’.52

3.4.4  No limit to metadata collection


A draft version of the BND Bill was at one point circulated which included restrictions
on the collection, use and transfer of metadata within the framework of strategic foreign-
foreign communications data collection. However, these restrictions were removed
from the Bill presented to Parliament. Given the importance of metadata for modern
surveillance and warfare, and given that metadata by itself is enough to construct highly
accurate personal profiles, it is regrettable that the reform did not introduce any limits on
the BND here.

3.5  Open Questions

3.5.1  Sufficient technical prowess to curb incidental collection?


The reform assumes that the BND’s technical prowess is great enough to neatly distinguish
between different data groups for which different authorization and oversight regimes
apply. Yet the tools used for modern communication and its transmission defy coarse
categorization. This brings up the question of data minimization. The BND uses the same
automated filter program (DAFIS) to sift through the acquired raw data. Even if – and
without the effectiveness review mentioned above this is pure guesswork – the filter does
reach an accuracy level of 98.5 percent, this would still mean that the BND incidentally

50  Eckpunktepapier SPD Party (June 2015), available at www.spdfraktion.de/system/files/
documents/2015-06-16-eckpunkte_reform_strafma-r-endfassung.pdf.
51  The author has elaborated on these points in a recent paper, Options for More Effective
Intelligence Oversight (November 2017), available at www.stiftung-nv.de/sites/default/files/options_
for_more_effective_intelligence_oversight.pdf.
52  For an English translation of this announcement, see https://blog.cyberwar.nl/2017/04/
dutch-review-committee-on-the-intelligence-security-services-ctivd-to-self-assess-effectiveness-of-
lawfulness-oversight-re-large-scale-data-intensive-a/.


Table 12.6  Selection of post foreign intelligence reform deficits in Germany

- Insufficient judicial oversight powers and resources
- Further fragmentation of the intelligence oversight architecture
- Weak restrictions on surveillance
- Lack of transparency and abstract reporting requirements
- No adversarial proceedings at either G10 Commission or the UG
- Severe deficits with the implementation of Article 10 Law remain unaddressed
- Many other surveillance powers remain unlegislated
- No independent evaluation of the effectiveness of foreign surveillance tools
- Unlimited metadata collection for measures under the BND Law
- Significant amount of incidental collection due to filter inaccuracies

collects, uses and transfers thousands of datasets daily without respecting the law that
requires G10 protections and notifications.53
Interestingly, section 10.4 already alludes to incidental collection of content and
metadata and requires the immediate deletion of wrongfully obtained data or the
notification of the G10 Commission in case the data is retained. Here, the drafters used a
somewhat twisted logic: in the pursuit of strategic foreign-foreign communications data
collection, the BND must not collect data from German citizens at home or abroad,
residents or legal German entities (section 6.4). If it does, it must delete that data or
inform the G10 Commission if it retains it (section 10.4). If the filters worked with 100
percent accuracy, there would be no need for this provision. Given that incidental
collection will continue to happen, it adds – at
least de jure – significantly to the workload of the understaffed G10 Commission.
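The scale of this problem can be sketched with the hypothetical figures from the scenario cited in n. 53; the traffic volume, unfiltered share and dataset size are illustrative assumptions, not official BND data.

```python
# Hypothetical figures from the scenario in n. 53; all values are illustrative.
total_volume_gb = 11_500_000_000  # ~11,500 million GB of German broadband traffic (2015)
unfiltered_share = 0.05           # assume 5% of traffic escapes the DAFIS-style filter
avg_dataset_gb = 0.00005          # assumed size of a single dataset (e.g. one article)

unfiltered_gb = total_volume_gb * unfiltered_share
unprotected_datasets = round(unfiltered_gb / avg_dataset_gb)

print(f"{unfiltered_gb:,.0f} GB escape filtering")    # 575,000,000 GB
print(f"{unprotected_datasets:,} datasets affected")  # 11,500,000,000,000 datasets
```

Even at much higher assumed filter accuracies, the absolute number of unprotected datasets remains enormous, which is why the de jure notification workload of the G10 Commission is bound to grow.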
Table 12.6 summarizes the wide range of shortcomings with the recent German intel-
ligence reform discussed in the previous section.

4. CONCLUSION

Democracies need strong and agile security services to guard against a number of increas-
ingly networked threats. International cooperation among national intelligence services is
fundamentally important for our security. Yet, given the invasiveness of modern surveil-
lance, intelligence services ought to be subjected to effective democratic oversight. This
promotes rights-based and legitimate intelligence governance that is vital to the social
fabric of any democracy.

53  By way of comparison, take a hypothetical scenario discussed in Stephan Martin, ‘Snowden
und die Folgen’ (2017) Infobrief 113/2017, RAV Berlin, available at www.rav.de/publikationen/
infobriefe/infobrief-113-2017/snowden-und-die-folgen/. In 2015, the data volume carried by
broadband connections alone in Germany amounted to roughly 11,500 million gigabytes. If only 5
percent of this data were not properly filtered, 575 million gigabytes of data would not be subject
to proper data minimization. A standard article may amount to 0.00005 gigabytes. Put differently,
and still referring to the hypothetical example, 11.5 trillion datasets would not be subjected to the
data protection standards required by law.


By and large, Germany’s recent intelligence reform did not pave the way toward meeting
this important objective. It was designed, first and foremost, to provide legal certainty
for intelligence service members and private companies involved in pursuit of strategic
surveillance.
Legal clarity and the rule of law were not the key objectives of this reform. In fact, the
reform created a number of new problems and left major deficits unresolved. Judicial
oversight of intelligence, which in theory is the most useful tool to rein in rogue elements
given its concrete and immediate sanctioning power, has been hollowed out with this
reform. Just take the new Independent Committee as an example. With its creation and
the missed opportunity to address the grave deficits of the G10 Commission, the reform
unduly fragmented the German oversight landscape and contributed to the retreat of
judicial intelligence control. A more fragmented, albeit more resourceful, system of
parliamentary intelligence oversight is hardly the appropriate response to Germany’s
problems with intelligence governance. The reform also did not address the urgent need
to provide for an independent evaluation of the effectiveness of the many surveillance
programs either. Besides, many key questions surrounding the use of data gained from
strategic surveillance remain within the sole and unchecked responsibility of the execu-
tive. Despite being in effect since December 2016, the reform is also far from being fully
implemented at the time of writing.54
However, especially with a view to recent intelligence reform in other countries,
Germany’s reform has also set a few benchmarks that deserve further recognition. Its
intelligence laws now cover a far greater spectrum of SIGINT activities than ever before.
The required authorization of foreign-foreign surveillance programs by a panel of jurists
also sets a new international standard. By comparison, the US FISA Court only reviews
surveillance programs that impact US nationals.55 While the terms used to restrict surveil-
lance on non-nationals are vague and the actual investigation powers of the Independent
Committee unclear, the BND Law, unlike other European intelligence laws, does give
special protection to EU citizens. Also, in regard to the new requirements for international
intelligence cooperation, Germany has gone further with its reform than many other
democracies.56 The reform created new documentation and authorization requirements
for the executive which may lead to more political accountability, the lack of which has
been a core problem identified by all intelligence inquiries in Germany over the past decade.

54  See n. 32 above.
55  To be fair, the US Privacy and Civil Liberties Oversight Board (PCLOB), currently operating its
important business with only one member, may also make recommendations designed to protect
the rights of non-US persons.
56  For a recent review of how domestic legal frameworks in several Western democracies
include provisions on international intelligence cooperation, see Hans Born, Ian Leigh and Aidan
Wills, Making International Intelligence Cooperation Accountable (Geneva: DCAF, 2015), available
at www.dcaf.ch/sites/default/files/publications/documents/MIICA_book-FINAL.pdf.



13.  Liability and automation in socio-technical systems
Giuseppe Contissa and Giovanni Sartor*

1. INTRODUCTION

Today, the main productive, administrative and social organizations may be described
as complex socio-technical systems (STSs), namely, systems that combine technological
artefacts, social artefacts and humans.1
Technical artefacts, which to some extent involve the use of automated tools and
machines, determine what can be done in and by an organization, amplifying and con-
straining opportunities for action according to their level of automation.
Social artefacts, like norms and institutions, determine what should be done, governing
tasks, obligations, goals, priorities and institutional powers. However, norms need to be
understood, interpreted, negotiated and actuated by humans. More generally, humans
play an essential role in the functioning of STSs, providing them with governance and
maintenance and sustaining their operation. The specific nature of STSs with respect to
other sorts of systems, where we can observe the interaction between man and machines,
is grounded in the fact that STSs involve humans not only in their role as users of the
system but also in their role as operators.2
For example, the functioning of an air traffic management (ATM) system may be
described as the result of the interplay of technical artefacts (aircraft, control towers,
airports, radars, etc.), human operators (pilots, air traffic controllers, airport operators,
safety operators, technicians, etc.), and social artefacts that coordinate behaviours
(including norms, such as aviation laws, technical procedures, manuals, and institutions,
such as air carriers, air navigation service providers, safety agencies).
These systems are increasingly reliant on computer technologies, and they operate by
interconnecting their information systems, as well as by employing automated technolo-
gies that sometimes replace humans, though they more often form part of organizational
procedures and structures based on human-machine interaction. The complexity of
automated socio-technical systems requires that the interaction between the technical
component and the behaviour of the users be coordinated by sets of norms and rules
applying to operators (legal norms and operative rules, determining what measures must

*  This chapter reflects ideas the authors have worked out in common. The Introduction was
jointly written by Giovanni Sartor and Giuseppe Contissa; all remaining sections, from 2 to 7,
were written by Giuseppe Contissa. The work has been developed in the context of the ALIAS &
ALIAS II projects, co-financed by Eurocontrol acting on behalf of the SESAR Joint Undertaking
and the European Union as part of Work Package E in the SESAR Programme.
1  Jan Kyrre Berg Olsen, Stig Andur Pedersen, and Vincent F. Hendricks, A Companion to the
Philosophy of Technology (John Wiley & Sons, 2012) 223–26.
2  Pieter Vermaas et al., ‘A Philosophy of Technology: From Technical Artefacts to Sociotechnical
Systems’ (2011) 6(1) Synthesis Lectures on Engineers, Technology, and Society 1, at 70.

247
Giuseppe Contissa and Giovanni Sartor - 9781785367724
Downloaded from Elgar Online at 12/18/2020 12:50:52AM
via New York University

WAGNER_9781785367717_t.indd 247 13/12/2018 15:25



be taken and in what situations) and to technical artefacts (standards and technical rules
describing how technologies should be designed and used).
In the context of STS, the allocation of functions, responsibilities and resulting
liabilities may be viewed as a governance mechanism making it possible to enhance the
functioning of the STS.
Although the introduction of automation in STSs usually ensures higher levels of effi-
ciency and safety, their internal complexity, and the unpredictability of the environment
in which they operate, often lead to accidents, which may be caused by different factors:
reckless behaviour by human agents (as in the Costa Concordia accident); technical
failures; inappropriate conduct (as in the Linate airport accident);3 and organizational
defects (as in the Überlingen mid-air collision).4
Although humans play a crucial role in STSs, the more automation is deployed, the
more operations are performed by technological tools, while humans are coupled with
them in order to carry out a mere control function.
In the case of malfunctioning, the human being is called on to take the place of the
machine, even where he has lost the experience needed to perform the task. Safety
studies5 show how STS risks can be reduced through proper technological and
organizational choices and a shared safety culture.6
The development, deployment and use of highly automated technologies in STSs give
rise to new legal questions, especially as concerns liability for accidents, calling for new
models of allocating decision-making tasks between humans and machines. This change
will make it necessary to critically revise the way tasks, roles and liabilities are allocated
and, as a consequence, to adapt this allocation to the applicable legal context. However,
the legal literature is still fragmented and has not yet addressed these problems through
a unified approach in the context of complex STSs.7
In this chapter, we will analyse the impact of automation on the allocation of liability
within STSs. We will first discuss the relation between responsibility for the execution
of a task and legal liability. Then, we will analyse how the introduction of automation
in STSs gives rise to a redistribution of tasks between human and artificial agents and
therefore a reallocation of the liability burden. In this regard, we will present an actor-
based analysis of liability allocation, taking account of the main types of liability, and
in particular liability for software failures. The analysis is based on common aspects of
contemporary Western legal systems, though we will use some examples concerning the
law of particular European countries. Next, we will present some remarks on final liability
allocation, taking account of issues such as the level of automation of the technology. In

3  C.W. Johnson, Linate and Überlingen: Understanding the Role that Public Policy Plays in the
Failure of Air Traffic Management Systems (ENEA, 2006).
4  Simon Bennett, ‘The 1st July 2002 Mid-Air Collision over Überlingen, Germany: A Holistic
Analysis’ (2004) Risk Management 31.
5  James Reason, Human Error (Cambridge University Press, 1990).
6  Charles Perrow, Normal Accidents: Living with High Risk Systems (New York: Basic
Books, 1984).
7  Among the existing literature, see Trevor O. Jones, Janet R. Hunziker and others, Product
Liability and Innovation: Managing Risk in an Uncertain Environment (National Academies Press,
1994).


the final section of the chapter, we will present the Legal Case, a methodology by which
to analyse liability allocation in automated STSs and to assess the resulting legal risk for
all actors involved.

2. TASK-RESPONSIBILITIES AND THE IMPACT OF AUTOMATION

In order to introduce the analysis of liability issues in STSs, we need to refer to the concept
of task-responsibility. According to Hart:

whenever a person occupies a distinctive place or office in a social organisation in which specific
duties are attached to provide for the welfare of others or to advance in some specific way the
aims or purposes of the organization, he is properly said to be responsible for the performance
of these duties, or for doing what is necessary to fulfil them.8

Thus, when saying that a person x is task-responsible for an outcome O,9 we mean that
x, given his role or task, has a duty to ensure that O is achieved. In particular, we shall use
the term ‘task-responsibility’ to cover both commitments linked to the agent’s permanent
position within the organization (e.g. in aviation, the controller’s task of taking care of a
sector of the airspace) and commitments linked to particular situations (e.g. a controller’s
task of taking care of a particular aircraft having accepted a request by an unavailable
colleague). Such responsibilities can concern the direct achievement of a certain outcome,
as well as supervision of the work of colleagues and collaborators. Task-responsibilities
may also be assigned to groups rather than to single persons.
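The definition above lends itself to a simple formalization: a mapping from outcomes to the persons or groups that hold the task-responsibility for them. The following sketch is a toy illustration under invented names (roles, outcomes and the mapping are not drawn from any real ATM organization chart, nor proposed by the chapter):

```python
# Toy sketch only: roles, outcomes and the mapping below are invented
# for illustration and do not come from any real ATM organization.
from dataclasses import dataclass

@dataclass
class TaskResponsibility:
    outcome: str        # the outcome O that must be ensured
    holders: list[str]  # the person(s) or group x responsible for O
    permanent: bool     # permanent position vs. situational commitment

responsibilities = [
    TaskResponsibility("separation in sector A", ["controller_1"], permanent=True),
    TaskResponsibility("handling flight LH123", ["controller_1"], permanent=False),
    TaskResponsibility("runway inspection", ["ops_team"], permanent=True),
]

def responsible_for(outcome: str) -> list[str]:
    """Answer the accountability question: who holds the duty to ensure `outcome`?"""
    return [h for r in responsibilities if r.outcome == outcome for h in r.holders]

print(responsible_for("separation in sector A"))  # ['controller_1']
print(responsible_for("runway inspection"))       # ['ops_team']
```

Making the mapping explicit is exactly what the next paragraph argues is often missing: when no such record exists, it may be unclear who was responsible for a given aspect of the system's functioning.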
A crucial problem in complex STSs is that technological risks, vulnerabilities and
failures often occur because responsibilities are inappropriately assigned. Moreover, as
Hollnagel and Woods have pointed out,10 when a high level of technology and automa-
tion is introduced in STSs, the increased system complexity often leads to increased task
complexity.
This may seem something of a paradox, since it is often assumed that technological
innovations tend to simplify human tasks. In practice, innovations may increase
the overall performance of the system while reducing the human workload, but they
rarely reduce task complexity, since they add supervisory functions to the tasks directly
discharged by humans. Therefore, without a way to effectively connect responsibilities to

 8  Herbert L.A. Hart, Punishment and Responsibility: Essays in the Philosophy of Law (Oxford University Press, 2008) 212. As Hart says, here we must understand the notion of a role in a very broad way, namely, as including any ‘task assigned to any person by agreement or otherwise’ (ibid.). The term ‘role responsibility’ suggests the idea that such duties are tied to fixed positions within a permanent organizational structure. We prefer to use the term task-responsibility to clarify that these positions can depend on particular contingencies or agreements between peers, not necessarily flowing from a given organization chart.
 9  In this context, ‘outcome’ should be understood as encompassing both situations where the person will have the responsibility to obtain a specific result, and situations in which the responsibility could be limited to the best efforts towards a goal.
10  Erik Hollnagel and David D. Woods, Joint Cognitive Systems: Foundations of Cognitive Systems Engineering (CRC Press, 2005).

Giuseppe Contissa and Giovanni Sartor - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:52AM
via New York University

WAGNER_9781785367717_t.indd 249 13/12/2018 15:25


250  Research handbook on human rights and digital technology

tasks and roles in the organization, accountability may be impossible to ensure, since it will not be clear who was responsible for a particular aspect of the system’s functioning.
However, mapping tasks and responsibilities (and assigning correlated liabilities) is not always sufficient to ensure safety and efficiency within STSs. In particular, it has been pointed out that a precise and univocal task mapping may be reasonable for systems that can be completely specified, but inadequate for systems that are underspecified.
Complex STSs, in which the technical component includes advanced technical devices and infrastructures with high levels of automation, require a broad and flexible allocation of tasks: the relative autonomy of such technical components means that their behaviour can be difficult to predict,11 so that the humans supervising or directing them must respond with flexible adjustments rather than mechanical responses.
On the one hand, the use (or misuse) of such components by human operators, with its associated failures, is relevant in assessing liability; on the other hand, the components may often be considered not merely tools in the hands of human operators but agents in their own right, actively contributing to carrying out the tasks within the STS, which may have an impact on the allocation of liability for failures.
According to several studies in the area of cognitive-systems engineering, when auto-
mation in an STS has more or less completely taken over, humans become controllers of
automated systems rather than operators. These systems perform cognitive functions,
acquire information from the environment, process it, and use the knowledge so obtained
to achieve the goals assigned to them, as specified by their users. It has been observed that
when one or several operators and one or several automated support systems interact for
the fulfilment of a task, it would be better to describe humans and technology not as two
interacting ‘components’, but as making up a joint (cognitive) system.
The term ‘joint cognitive system’12 means that control is accomplished by a combina-
tion of cognitive systems and (physical and social) artefacts that exhibit goal-directed
behaviour.
Several studies describe these fusions between humans and machines as ‘hybrids’.13 In
hybrids, the participating individuals or collective actors are not acting for themselves but
are acting for the hybrid as an emerging unit, namely, the association between humans and
non-humans. They do so in the same way as managers who are not acting on their own
behalf but are ‘agents’ or ‘actants’14 representing their ‘principal’, which is the corporation
as a social system. In these cases, agency pertains not only to humans or to machines but
also to the hybrid itself, such that human machine interaction and trust play a decisive
role in assessing and allocating liability. From this perspective, a relevant (and still open)
question is that of how to deal with cases in which, as in the Überlingen aviation accident,
conflicting information is provided to human agents (pilots) by other humans (controllers) and automated systems, and more generally what kinds of priorities should be given to different signals, and when humans may override automatic devices.

11  Vermaas et al., ‘A Philosophy of Technology’, n. 2 above, 78.
12  Erik Hollnagel, ‘The Human in Control: Modelling What Goes Right Versus Modelling What Goes Wrong’ in Human Modelling in Assisted Transportation (Springer, 2011).
13  Gunther Teubner, ‘Rights of Non-Humans? Electronic Agents and Animals as New Actors in Politics and Law’ (2006) 33(4) Journal of Law and Society 497.
14  Bruno Latour, Politics of Nature: How to Bring the Sciences into Democracy (Harvard University Press, 2004) 75 ff.

Liability and automation in socio-technical systems  251

3.  ACTOR-BASED ANALYSIS OF LIABILITY

Let us now analyse the impact on liability caused by the introduction of automation
and the resulting changes in the allocation of tasks. In order to analyse this impact, we
choose to adopt an actor-based approach, namely, to analyse the grounds for attribution
of liability for the main actors involved in an STS, and to assess the impact resulting from
the introduction of automation. The actors in an STS may be broadly classified into two
main categories:

(1) human operators working with the technologies, where the ground for attribution is
mainly negligence;
(2) enterprises, where the grounds for attribution are usually vicarious liability (when
employees are at fault) and product liability (for designing, manufacturing and
maintaining the technologies), but also organizational liability.

There may, of course, be other actors (certificators, standard setters, the government,
safety bodies, etc.), but we will focus on analysing the liability risk for humans and
enterprises working with technologies.

3.1  Liability of Individuals

Whenever there is a failure in an STS, we try to connect the failure with the missing or inad-
equate execution of a task, and so with the (natural or legal) persons who were responsible
for the task. As a consequence of the failure to comply with their task-responsibilities,
these persons are subject to blame, penalties, and/or the payment of damages.
This approach, namely, using individual schemes of responsibility and liability, may be
inadequate in complex STSs, since many people may have contributed to the failure at
different levels in the organization, and since liability should in some cases be placed on
the organization rather than (or in addition to) being placed on particular individuals.
First of all, it may not be possible to link the failure only to the inadequate performance
of a single person. This is the so-called problem of many hands:15 it may be difficult, or
even impossible, to find any one person who can be said to have independently and by
his own hands carried out the behaviour that resulted in a failure; on the contrary, the
failure results from the sum of a number of mistakes by different human or artificial
agents in different positions. This means that, as the responsibility for any given instance
of conduct is scattered across more people, the discrete responsibility of every individual
diminishes proportionately.
In complex organizations, there is an observed tendency to manage this problem by

15  M.A.P. Bovens, The Quest for Responsibility: Accountability and Citizenship in Complex Organisations (Cambridge University Press, 1998).


attributing most failures to the mistakes of human operators whose behaviour is nearest
to the proximate cause of the accident. As Perrow points out:

Formal accident investigations usually start with an assumption that the operator must have
failed, and if this attribution can be made, that is the end of serious inquiry. Finding that faulty
designs were responsible would entail enormous shutdown and retrofitting costs; finding that
management was responsible would threaten those in charge, but finding that operators were
responsible preserves the system, with some soporific injunctions about better training.16

The focus on operators’ liability is also reflected in the outcomes of many legal judgments concerning accidents in various activities and sectors (especially transportation, including trains and aircraft, and medicine, among others).
With the introduction of automation, as task-responsibilities are progressively del-
egated to technology, a different approach tends to prevail: liability for damages shifts
from human operators to the organizations that designed and developed the technology,
defined its context and uses, and are responsible for its deployment, integration and
maintenance within the STS.17
This new paradigm becomes important not only ex post, when liability needs to be allocated, but also ex ante, when prevention and precaution need to be applied: in this way it contributes to reinforcing a ‘just culture’ in the field of safety and security.
Advocates for ‘just culture’ have convincingly argued that sanctions on operators
alone cannot prevent accidents from happening. On the contrary, sanctions often induce
operators to take excessive precautions in order to avoid personal responsibility rather
than ensuring the efficiency and safety of the outcome.18 Consequently, according to the
‘just culture’ approach, front-line operators should not be subject to criminal sanctions
for actions, omissions or decisions taken by them that are commensurate with their
experience and training, while gross negligence, wilful violations and destructive acts
should not be tolerated.

3.2  Liability of Enterprises

Enterprise liability could be seen as quasi-strict, in the sense that it attaches to an enterprise regardless of its intentions or negligent behaviour. There are two main types of enterprise liability: vicarious liability and organizational (or ‘systemic’) fault liability.
(1) Vicarious liability is the liability of the employer (enterprise) for the wrongful act of
an employee, when the act is performed within the scope of the employment agreement.
The law of employer liability varies among legal systems. For example, common law
systems and many civil law systems (such as the Italian and French systems) acknowledge
a no-fault liability of employers for the wrongful act of their employees, while under the
German Civil Code (BGB, article 831) an employer is only liable if it is itself at fault (the
employer is presumed to be at fault, but this presumption is rebuttable).

16  Perrow, Normal Accidents, n. 6 above, 146.
17  Hanna Schebesta, ‘Risk Regulation Through Liability Allocation: Transnational Product Liability and the Role of Certification’ (2017) 42(2) Air and Space Law 107.
18  Sidney Dekker, ‘The Criminalization of Human Error in Aviation and Healthcare: A Review’ (2011) 49(2) Safety Science 121.


(2) Organizational (‘systemic’) fault19 liability emerges when the harm is directly related
to or caused by business activity.20 There should also be a ‘fault’ in the enterprise in its
behaviour, but this has to be understood in a special way: the enterprise is at fault when-
ever the harm it caused could have been prevented by reasonable investment to improve
the safety and security of the product or activity in question. This idea is captured to some extent by what is known as Learned Hand’s formula: one is negligent when the cost of taking adequate precautions to avoid the harm is less than the expected cost of the injury, obtained by multiplying the probability of injury by the cost of the injury should it occur.
the enterprise: it is up to the enterprise to exonerate itself by proving that there was no
reasonable measure of care that could have been taken to avoid the harm. Enterprise
liability can go beyond ‘fault’ liability, covering all insurable losses and injuries: those
that may be linked to quantifiable risks incident to the enterprise’s activity. In this way,
risk can be spread in the most effective way possible (in that the enterprise can distribute
the costs of expected injuries over its customers by increasing the cost of the products it
sells or the services it offers). As concerns employee liability, employees are liable to third
parties for intentional harm but are usually exonerated for negligent acts (except when
these are reckless). Organizational liability may be complemented by the personal liability
of individual managers responsible for faulty organizational choices.
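Learned Hand’s formula reduces to a one-line comparison: the enterprise is at fault when the burden B of adequate precautions is less than the expected injury cost, that is, the probability P of injury multiplied by the loss L. A minimal sketch, with invented figures:

```python
def negligent_under_hand_formula(burden: float, probability: float, loss: float) -> bool:
    """B < P * L: fault when precautions cost less than the expected injury."""
    return burden < probability * loss

# A 50,000 safety investment against a 1% chance of a 10,000,000 loss
# (expected injury cost: 100,000) should have been made.
print(negligent_under_hand_formula(50_000, 0.01, 10_000_000))   # True: at fault

# If the same precaution had cost 200,000, the formula finds no fault.
print(negligent_under_hand_formula(200_000, 0.01, 10_000_000))  # False
```

The formula thus turns the burden-of-proof rule described above into a quantitative question: the enterprise exonerates itself by showing that no precaution satisfying B < P × L was available.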
Enterprise liability, then, covers certain special cases in which the forms of liability
listed above do not apply. These special cases are (a) strict liability for technical risks; (b)
product liability; and (c) statutory liability.
(a) Strict liability for technical risks is a kind of liability specifically created to deal
with risks in transportation, where we have a ‘specific risk of operating these technical
transport means’,21 which may go out of control and lead to serious damage. This applies
in particular to trains, automobiles and aircraft. A recent trend has been to introduce a
general clause providing strict enterprise liability for highly dangerous activities, such as
those involving nuclear power production, while possibly imposing compulsory insur-
ance. Strict liability for technical risks can be avoided only (i) in case of force majeure,
covering all natural and human-caused events and phenomena which are unavoidable or
uncontrollable (such as earthquakes, floods, military activities, and power failures); or (ii)
if the damage was entirely caused by a deliberate third-party act or omission specifically
designed to bring about the damage in question; or (iii) if it was entirely caused by the
negligence or other wrongful act of a government entity. Liability caps are usually estab-
lished for strict liability. It is also possible to establish insurance pools, ensuring insurance
coverage for victims by having different insurers share the burden of compensation.

19  The term ‘systemic’ is used here because, unlike vicarious liability, where there needs to be a wrongful act in order for an employer to be held liable, organizational liability attaches regardless of any wrongful or negligent acts on the part of any individual within the organization, as it is a liability arising directly out of inadequate organization of the business activities of the enterprise itself.
20  Gert Brüggemeier, Common Principles of Tort Law: A Pre-statement of Law (British Institute of International and Comparative Law, 2006) 117–32.
21  Gert Brüggemeier, Modernising Civil Liability Law in Europe, China, Brazil and Russia: Texts and Commentaries (Cambridge University Press, 2011) 102.


(b) Another kind of special-case liability is product liability. This kind of liability is
grounded on a number of distinct doctrines and policy rationales under different jurisdic-
tions. In the EU legal framework, product liability has been harmonized by Directive
85/374.22 According to the Directive, the necessary conditions for this kind of liability
to emerge are a harm, a defect in a product, and a causal link between the harm and the
defect. A possible line of defence against claims concerning harm caused by defective
products is provided by the development-risk doctrine, according to which an enterprise may be exonerated from liability if it proves that the state of the art in the field at the time it developed the product did not enable it (or anyone else) to detect the defect. In other words, the enterprise must prove that the defect was undetectable given the state of knowledge then available in the particular field of business activity. Product liability also covers damages for pain and suffering inflicted on the victims, though the basis on which such damages are assessed may vary across jurisdictions; in the European Union, this is left for individual Member States to frame according to their national laws.
(c) The last type of special case in enterprise liability is statutory liability: if an enter-
prise breaches a regulation on how certain activities should be organized, performed and/
or managed, then that enterprise is liable for the harmful effects of such activities. In other
words, liability automatically follows from the violation of a standard of care established
by the law for certain activities posing a danger to people, property or the environment.
The difference between statutory liability and organizational liability is that the former concerns the violation of standards established by law, while the latter is assessed also with reference to technical and economic standards.

4.  SOFTWARE AND LIABILITY

Automated systems are usually the result of a combination of hardware and software
components. Liability for both hardware and software failures is therefore particularly
relevant in this context. While hardware failures usually fall under product liability, some specific issues related to software failures need to be analysed separately.
As is well known, the use of software always implies the possibility of a failure, since
not all software defects can be detected in the development and validation phases, and
so it is impossible to guarantee that a piece of software will be absolutely error-free,23
even though good development methods and skilful programming can keep mistakes
to a minimum. Thus, operators must be ready to mitigate the consequences of software
malfunctioning, and be capable of manually handling high-priority tasks when there is a
software failure. Competent effort in both preventing malfunctioning and mitigating its
impact is particularly important when software is the core component of a safety-critical
system (which is the case for many STSs), whose failure could result in death, injury or
illness, major economic loss, mission failure, environmental damage, or property damage.

22  Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products.
23  Frances E. Zollers et al., ‘No More Soft Landings for Software: Liability for Defects in an Industry that Has Come of Age’ (2004) 21 Santa Clara Computer and High Tech. L.J. 745.


The number and complexity of the roles and tasks involved in the development, implementation and use of software make the assignment of responsibility problematic: it is often very difficult to identify exactly what went wrong, who is responsible towards damaged third parties, and to what extent, whether the developer of the software, the implementer-customizer, or the operator.
There is currently much discussion as to whether strict (no-fault) liability should be
imposed on the producer/manufacturer of software for loss or injury caused by the soft-
ware. Those24 in favour of this proposition claim that producers are in the best position to
prevent defects in the products. Moreover, it may be argued that producers can best absorb
or spread losses in cases where damage was caused by a software defect, even though no
negligent behaviour took place during software development. Usually, to mitigate this
approach, the concept of misuse (or contributory negligence) is introduced, such that a
user might be held partly or fully responsible whenever he uses the software in an incorrect
or improper way, and as a consequence acts negligently. Moreover, software development
contracts and licences usually include strong liability limitations or even exemptions
from developers/providers’ liability for losses or injuries caused by their products. Such
a restriction, however, only applies to the parties to such contracts and licences, not to
damaged third parties.
Others25 claim, on the contrary, that making software producers liable for all damages caused by their software would drive many producers out of the market (in particular those delivering free or open-source software) and reduce the creativity of the software industry, thus stifling technological progress. Moreover, insuring risks for
software liability is very difficult, since these risks are very difficult to assess and quantify.
Finally, making software producers liable may decrease the incentive that users have to
identify defects and help to fix them in cooperation with developers.
The liability regime applicable to loss or injury resulting from defective software may
vary depending on (a) whether the software is deemed a service or a product; and (b)
whether liability-exclusion clauses have been included in the contractual relations between
the parties.
(a) A software product’s qualification as a product or as a service is significant because
only fault-based liability usually applies to services, while a stricter liability for damages
usually applies to producers of defective products, as established, for example, in the
European Union under Directive 85/374/EEC.
In particular, the issue of whether software may be viewed as a source of information
or as a technical device is relevant because the liability standards for providing erroneous
information are higher than those for providing a faulty device. Of particular interest in

24  See D.J. Ryan, ‘Two Views on Security Software Liability: Let the Legal System Decide’ (2003) IEEE Security and Privacy (January–February) 70; Zollers et al., ‘No More Soft Landings for Software’, n. 23 above, 778 ff.
25  See Seldon J. Childers, ‘Don’t Stop the Music: No Strict Products Liability for Embedded Software’ (2008) 19 University of Florida Journal of Law and Public Policy 166; Patrick E. Bradley and Jennifer R. Smith, ‘Liability Issues Regarding Defects in Software’ (2000) 19 Product Liability Law and Strategy (November 2000) 5; T. Randolph Beard et al., ‘Tort Liability for Software Developers: A Law and Economics Perspective’ (2009) 27 John Marshall J. Computer and Info. L. 199.


this context are the US ‘aeronautical charts’ cases,26 where the courts categorized charts
as products rather than as sources of information, assuming that a nautical chart or an
airline chart is similar to other navigation instruments, such as a compass or radar finder
which, when defective, can prove to be dangerous. Since the charts were considered to
be a product, the judges held their producer liable under a strict liability rule. Thus, the
judges considered charts to be different from books, newspapers, or other sources of
information, as well as from consultancy services, to which judges usually do not apply
strict liability standards (they do not consider authors, publishers or consultants liable
for providing, without any fault, wrong information reliance on which leads people to
harmful consequences). The chart cases, by analogy, support the view that software, too,
may be viewed as a product (rather than as a service) and may be subject to strict liability.
However, the debate on this matter is still open, as the case law is uncertain, all the more so when addressing new subject matter, such as the liability of GPS service providers for mistaken information supplied to their users. In the EU, moreover, the question of how software ought to be qualified under the product liability regime remains partly unresolved, since in many cases the conceptual distinction between faulty software and faulty instructions is unclear.27
(b) Also relevant are contractual relations between the parties, which may include
liability-limitation clauses. However, as briefly noted, the strong liability clauses written
into software development contracts and licences can only reallocate liabilities between
the parties to such contracts, without applying to third parties damaged by the software’s
malfunctioning (e.g. passengers and/or their relatives seeking damages). In addition, it
should be borne in mind that liability waivers are carefully reviewed by courts in light of
their (potentially diverging) national standards concerning the legal validity and enforce-
ability of such waivers in highly regulated sectors, such as ATM.
When the liability of software producers is addressed from a consumer-protection or a business-to-business perspective, with different liability rules applying to these distinct contractual relationships, uncertainties inevitably arise with regard to computer programs released under an open-source licence, whose source code is developed and shared within a community. Considering the nature of open-source software development, holding open-source developers liable for their code would
imply a form of collective responsibility that would be hard to define. In addition, the
developer and the user of open source software often have neither a business-to-business
nor a business-to-consumer relationship with each other, such that it would be hard for
the courts to decide what kind of liability rule should be applied in light of an uncertain
(and possibly non-existent) contractual relationship between them.

26  Aetna Casualty & Surety Co. v. Jeppesen & Co., 642 F.2d 339, 342–43 (9th Cir. 1981); Saloomey v. Jeppesen & Co., 707 F.2d 671, 676–77 (2d Cir. 1983); Brocklesby v. United States, 767 F.2d 1288 (9th Cir. 1985); Fluor Corp. v. Jeppesen & Co., 170 Cal.App.3d 468, 475, 216 Cal.Rptr. 68, 71 (1985).
27  Geraint G. Howells, Comparative Product Liability (Dartmouth Publishing Group, 1993) 34–35.


5.  LEVELS OF AUTOMATION AND ALLOCATION OF LIABILITY

As automated systems are increasingly introduced into STSs, the main effect is that liability for damage or harm is gradually transferred from human operators to the enterprises using the automated technology that replaced the human operator and/or to the technology developers (programmers, manufacturers) that created it.
While the trend of transferring liability from the individual to the enterprise has been
observed for quite a long time in STSs (in ATM, for example), new technologies currently
being developed and implemented will accelerate this trend, since they will deeply impact
the tasks of human operators, not only quantitatively but also qualitatively, replacing
human operators in their higher cognitive functions, ranging from the analysis of infor-
mation, to the selection of a decision or an action, to the fully automated implementation
of the chosen action.
Of course, not all advanced technological systems will possess all those cognitive func-
tions to the same extent. For example, many currently employed automated systems are
not designed to automatically implement the chosen actions, but only to suggest actions
to be executed by the human operator.
In order to evaluate the final liability allocation between different actors, it will be
necessary to assess each technology’s different levels of automation in performing differ-
ent cognitive functions (acquiring information, analysing information, making decisions,
and acting on them).
Different levels of automation for different cognitive functions will usually imply a differ-
ent distribution of the corresponding task-responsibilities of the actors involved, including
operators, as well as other actors involved in the technology (managers, producers, trainers
and maintainers). This, too, will have an impact on the final allocation of legal liability.
As a general rule, a gradual shift can be observed from personal liability to general
enterprise liability and product liability. Thus, as tools become increasingly automated,
liability will increasingly be attributable to the organizations that use such tools, and to
those that build them or are in charge of maintaining them, rather than to the operators
who interact with them.
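The shift described above can be caricatured with a toy allocation rule: as the average automation level of the four cognitive functions mentioned in the text rises, liability-risk weight moves from the operator to the enterprise and manufacturer. The linear rule, function names and figures below are our own illustrative assumptions, not a model proposed in this chapter:

```python
def liability_shares(automation: dict[str, float]) -> dict[str, float]:
    """Toy rule: the operator's share shrinks as average automation rises.

    Automation levels run from 0.0 (fully manual) to 1.0 (fully automated).
    """
    avg = sum(automation.values()) / len(automation)
    return {"operator": round(1 - avg, 2),
            "enterprise_and_manufacturer": round(avg, 2)}

# A decision-support system: it acquires and analyses information and partly
# selects actions, but implementation remains entirely with the human.
advisory = {"acquire_information": 0.8, "analyse_information": 0.6,
            "select_action": 0.2, "implement_action": 0.0}
print(liability_shares(advisory))
# → {'operator': 0.6, 'enterprise_and_manufacturer': 0.4}
```

A real allocation would of course not be linear, nor a two-way split; the sketch only makes visible the direction of the shift that the text describes qualitatively.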
However, intermediate levels of automation are sometimes those that create the higher levels of legal risk for certain actors. This is because at these levels task-responsibilities are highly fragmented between the automated technology and the operator, possibly leading in some circumstances to uncertainty in the assignment of tasks. In addition, intermediate levels of automation usually imply greater complexity in the human-machine interface, since fragmented tasks require more interaction between a technology and its operator.
In legal terms, this may translate into an increased duty of care, resulting in a higher
liability risk for (a) the operator; (b) the organization employing the operator, both for
vicarious liability and for organizational liability; and, finally, (c) the producer of the
technology, since higher complexity in the human-machine interface would increase the
risk of technological failure.28

28  Hanna Schebesta et al., Design According to Liabilities: ACAS X and the Treatment of ADS-B Position Data (Dirk Schaefer ed., 2015).


In order to limit liability risk, some legal measures may be introduced in the STS to
provide evidence that the operators’ actions were compliant with professional due care.
In particular, task allocation should be set forth in legally relevant documentation, such
as technical manuals for the use of the technology, ‘concepts of operations’, and training
manuals.
Manufacturers’ liability defences can be strengthened by adopting common industry standards: compliance with such standards ensures that at least the customary standard of industry practice is met, reinforcing the ‘state of the art’ defence.
the manufacturer’s discretion under a given standard. Generally, with less discretion for
manufacturers, their liability risk decreases, even though compliance with standards
and regulations does not necessarily exonerate a producer from liability. Therefore, in
order to limit the liability of manufacturers, design options for automated technologies
ought to be mandated or constrained for all manufacturers. Finally, in order to address
warning-defect risks, it is suggested that manufacturers provide adequate warning
information about the technology.

6.  ARGUMENTATION MAPS AND LIABILITY RISK ANALYSIS

In this section we will briefly present the Legal Case, a methodological tool intended
to support the integration of automated technologies into complex STSs. We will then
focus on a particular aspect of the Legal Case methodology, namely, the use of argument
maps to capture, present, and animate legal information, and to link it to other kinds of
information.29
The purpose of the Legal Case methodology is to address liability issues arising from
the interaction between humans and automated tools, ensuring that these issues are
clearly identified and dealt with at the right stage in the design, development, and deployment
process.30 Considering liability early in this process makes it easier, less costly, and less
controversial to address legal issues and to apply a more uniform approach to the
attribution of liability across technological projects.
The Legal Case involves a four-step process in which the output of each step serves as
input to the next. In the first step, background information on the technology is collected
and analysed in order to understand its level of automation and the risk of failures
associated with its use (Step 1).
Then the liability risks for all stakeholders are identified (Step 2). This step includes
in particular: (a) developing hypothetical accident scenarios involving the automated
technology; and (b) analysing the legal rules and arguments supporting the attribution of
liability for each of the actors involved.

29  The Legal Case has been developed as the main output of the ALIAS I & II projects (co-financed by Eurocontrol and the European Union), dealing with liability issues and automation in ATMs, see http://aliasnetwork.eu.
30  Giuseppe Contissa et al., ‘Classification and Argumentation Maps as Support Tools for Liability Assessment in ATM’ in Proceedings of the SESAR Innovation Days (Eurocontrol, 2018) 1–8.



Liability and automation in socio-technical systems  259

In the next step, legal measures to mitigate imbalances in the liability allocation are
proposed and then submitted to the stakeholders for acceptance (Step 3).
The results of the assessment of the technology and the associated liability risks are
collected in the Legal Case Report (Step 4).
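The four-step flow just described, in which each step's output feeds the next, can be sketched as a simple pipeline. This is an illustrative sketch only; the class names and fields are our own shorthand for the method's artefacts, not part of the Legal Case specification:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class TechnologyProfile:
    """Step 1 output: background information on the technology."""
    name: str
    automation_level: str            # e.g. 'intermediate'
    failure_modes: List[str] = field(default_factory=list)

@dataclass
class LiabilityRisk:
    """Step 2 output: a hypothetical accident scenario and the legal rules
    supporting a liability attribution for a given actor."""
    actor: str                       # e.g. 'pilot', 'manufacturer'
    scenario: str
    supporting_rules: List[str] = field(default_factory=list)

@dataclass
class Measure:
    """Step 3 output: a mitigation measure submitted to stakeholders."""
    description: str
    accepted_by_stakeholders: bool = False

@dataclass
class LegalCaseReport:
    """Step 4 output: the collected results of the assessment."""
    profile: TechnologyProfile
    risks: List[LiabilityRisk]
    measures: List[Measure]

def run_legal_case(
    profile: TechnologyProfile,
    identify_risks: Callable[[TechnologyProfile], List[LiabilityRisk]],
    propose_measures: Callable[[List[LiabilityRisk]], List[Measure]],
) -> LegalCaseReport:
    """Each step's output works as input for the next step."""
    risks = identify_risks(profile)                    # Step 2
    measures = propose_measures(risks)                 # Step 3
    return LegalCaseReport(profile, risks, measures)   # Step 4
```

Here `identify_risks` and `propose_measures` stand in for the expert analysis performed in Steps 2 and 3; the pipeline merely makes the dependency between the steps explicit.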
A key aspect of the methodology lies in the structuring of legal information so that it
can be used for the purpose of analysis and planning by the group of experts and stake-
holders engaged in the assessment of a new technology’s legal impact. The group’s first
task is to identify potential legal issues, and in particular potential liabilities. Its second
task is to identify what agreed measure may prevent or mitigate such problems. Possible
measures include legal arrangements (such as providing for liability exclusion clauses,
shifting liability, or requiring insurance) or recommendations for regulations concerning
the use of the technology in question or its deployment.
The Legal Case complements other methodologies (such as the Safety Case) to be
used when a significant innovation (a new concept) is introduced in aviation. This makes
it possible to proactively identify and remedy possible issues. The Legal Case provides
the ATM community with a methodological approach for identifying and managing in
a standard, structured, coherent, and pragmatic way the liability risks associated with
automated technologies.

6.1  Modelling the Law Through Argument Maps

The Legal Case provides, among other tools, a set of argumentation maps on which
basis to organize factual and legal knowledge. In constructing such maps, we have used
Rationale,31 a legal-argumentation software tool that provides a visual framework for
organizing information, structuring reasoning, and assessing evidence.
In Rationale, argument structures are represented as diagrams with boxes correspond-
ing to propositions put forward in the argumentation, with arrows indicating the relations
between them. The premises supporting a conclusion are tinted, while those attacking a
conclusion are dotted.
See, e.g., Figure 13.1, which is a partial view of the argument map laying out pilot
liability for negligence.
Reasons can be supported by cumulative claims (which must all be true in order for
the reason to support its conclusion) or by independent claims. By way of illustration,
consider again Figure 13.1: three claims (namely, ‘there is an injury’, ‘there is careless
behaviour’, and ‘there is a causal relation between the behaviour and the injury’) are linked
in the same reason and therefore jointly support the pilot’s liability. On the other hand, the
conclusion that the behaviour is careless may be independently supported by two reasons,
namely, ‘careless action’ or ‘careless omission’.
The argument ‘there is careless behaviour of the pilot’, tinted, is attacked by the
argument ‘the behaviour lacked will’ (dotted). However, this argument may be undercut

31  See www.reasoninglab.com/rationale/. For a presentation of Rationale see Steven van Driel, Henry Prakken, and A.Z. Wyner, ‘Visualising the Argumentation Structure of an Expert Witness Report with Rationale’ in Proceedings of the Workshop on Modelling Legal Cases and Legal Rules, in Conjunction with JURIX-10 (University of Liverpool, 2010) 1–8.



by another argument, namely, ‘the pilot brought himself to a state of loss of physical or
mental faculties’ (dashed).32

Figure 13.1  Pilot negligence [argument map: three cumulative claims (injury, careless behaviour, causation) jointly support the pilot’s liability; careless action and careless omission independently support carelessness, which is opposed by ‘the behaviour lacked will’, itself rebutted where the pilot brought himself to the state of incapacity]

Claims and their bases can receive three possible values: strong, weak, or nil. This
enables the model to represent claims of different strengths. For instance, authoritative
legal decisions (such as statutes and precedents) usually provide stronger support than
arguments derived from legal scholarship (the opinions of writers on law).
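The semantics just described (independent reasons, cumulative claims within a reason, and attacking arguments) can be captured in a toy model. The sketch below is ours, not the Rationale tool; the claim texts loosely follow Figure 13.1, and the evaluation assumes an acyclic map:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    text: str
    # Independent reasons; each reason is a cumulative set of claims that
    # must all hold for the reason to support this claim.
    reasons: List[List['Claim']] = field(default_factory=list)
    attackers: List['Claim'] = field(default_factory=list)  # rebutting arguments
    is_fact: bool = False            # established by evidence

    def holds(self) -> bool:
        """A claim holds if it is a fact or all claims of some reason hold,
        and no attacker holds (acyclic maps only)."""
        supported = self.is_fact or any(
            all(premise.holds() for premise in reason) for reason in self.reasons
        )
        defeated = any(attacker.holds() for attacker in self.attackers)
        return supported and not defeated

# Loosely following Figure 13.1:
injury = Claim('there is an injury to a legally protected interest', is_fact=True)
action = Claim('careless action', is_fact=True)
omission = Claim('careless omission')             # not established here
no_will = Claim('the behaviour lacked will')      # attacker, not established
careless = Claim('there is careless behaviour of the pilot',
                 reasons=[[action], [omission]],  # independent reasons
                 attackers=[no_will])
causation = Claim('causal relation between behaviour and injury', is_fact=True)
liable = Claim('the pilot is liable (negligence)',
               reasons=[[injury, careless, causation]])  # one cumulative reason

print(liable.holds())  # True: all three premises hold and the attack fails
```

A graded variant could attach the strong/weak/nil values mentioned above to each claim instead of the boolean used here.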

6.2  Legal Rules as Argument Maps

Argument maps reflect our understanding of liability law in the relevant domains: aviation
law, product liability, insurance and contract law. Some of the maps are extracted from
legal sources; in such cases each part of the map corresponds to sections of the legislative
or regulatory documents. Other maps correspond to legal doctrine or case law; in such
cases, when possible, a link to the source document is provided. Argument maps are used
in three ways:

● As connecting tools: the maps structure and connect information about the system
and its possible failures, on the one side, and the applicable legal framework, on
the other side: failures are represented on the maps and categorized according to
consolidated approaches33 adopted in the human factors domain34 and are con-
nected to the analysis of liabilities according to an actor-based framework.
● As communication tools: the maps support the process by which lawyers and other
stakeholders structure their legal knowledge, and also work as a communication
tool between stakeholders from different backgrounds, as visual representation
improves the ability to reason and analyse complex issues.
● As an assessment tool: the maps provide support for the legal-risk analysis35
carried out in the Legal Case because they help in identifying and evaluating legal
risks. By legal risk we mean the probability and severity of an unwanted legal
outcome as triggered by uncertain factual circumstances and/or uncertain future
legal decisions.
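The notion of legal risk defined above (the probability and severity of an unwanted legal outcome) can be operationalized, for instance, as an ordinal risk matrix. The scales and the mitigation threshold below are illustrative assumptions, not part of the Legal Case method:

```python
# Ordinal scales for the two components of legal risk (assumed labels).
PROBABILITY = {'remote': 1, 'possible': 2, 'likely': 3}
SEVERITY = {'minor': 1, 'serious': 2, 'catastrophic': 3}

def legal_risk_score(probability: str, severity: str) -> int:
    """Combine probability and severity into a single ordinal score."""
    return PROBABILITY[probability] * SEVERITY[severity]

def needs_mitigation(probability: str, severity: str, threshold: int = 4) -> bool:
    """Flag risks whose score meets the (assumed) mitigation threshold."""
    return legal_risk_score(probability, severity) >= threshold

# e.g. a merely possible but catastrophic liability outcome still warrants a measure
print(needs_mitigation('possible', 'catastrophic'))  # 2 * 3 = 6 -> True
```

Such a score would only rank risks for discussion among stakeholders; it does not replace the legal analysis itself.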

In the Legal Case, we use two kinds of argumentation maps: legal analysis maps and
legal design maps. The former are to be used for the purpose of understanding legislation
and anticipating possible legal issues; the latter are to be used for legal planning, that is,
for modelling remedies for such issues.

32  Although it aims to undermine the inferential link between the premise(s) and the conclusion, in Rationale this is displayed as a rebuttal. In the argumentation literature, such an attack is usually referred to as an undercutter or undercutting defeater.
33  Reason, Human Error, n. 5 above.
34  Scott A. Shappell and Douglas A. Wiegmann, A Human Error Analysis of General Aviation Controlled Flight into Terrain Accidents Occurring Between 1990–1998 (Technical Report, DTIC Document, 2003).
35  Tobias Mahler, ‘Tool-supported Legal Risk Management: A Roadmap’ in Law and Technology: Looking into the Future, Selected Essays (European Press Academic Publishing, 2009) vol. 7.




6.3  Legal Analysis Maps

Legal analysis maps help the analyst identify the legal arguments that could support
the attribution of liability, taking into account the applicable legal framework and the
factual circumstances of the accident resulting from the failures.
The issue to be addressed is whether there is a risk of a particular kind of liability:
this is established by checking whether the conditions for that kind of liability exist in
various hypothetical scenarios.
Initial hypotheses of liability attribution are validated by using a set of argument
maps that cover different types of liability (personal liability, enterprise liability, product
liability, special-case liability, such as air carrier liability, etc.). In such maps, arguments
supporting the attribution of liability are combined with first-level counterarguments
attacking the liability arguments (by rebuttals and undercutters), with further-level
counterarguments providing attacks against first-level counterarguments, and so on. In
this way a dialectical tree is built for each potential liability.
For example, Figure 13.2 shows a detail of the map for personal liability of the pilot
under an analysis of the pilot’s careless behaviour. Careless behaviour (1A-b) may consist
of a careless action or a careless omission.
In the first case (2A-a, careless action), the behaviour will be careless when (3A-a) the
pilot took action; and (3A-b) the action was careless. Carelessness is usually determined
by assessing whether (4B-a) the action violated the standard of due care applicable to
the pilot, that is, the proper behaviour a professional pilot would have been required
to follow in the given situation. Such expectations depend on the requirements for the
competent performance of tasks assigned to the pilot and on all applicable international
and national laws (such as navigation codes), public or private standards and regulations,
and customs. For example, the fact that the pilot acted without complying with a
resolution advisory (RA) issued by the automated Airborne Collision Avoidance System
(ACAS) (5D-a) will be a reason strongly supporting the conclusion that he violated the
standard of due care.
However, this argument may be defeated by the argument that the pilot had a good
reason not to follow the order (6A-a).36
According to a general exception, the pilot’s behaviour would not be considered careless
if (2D-a) it lacked ‘will’, i.e., if the pilot was not in possession of his physical or mental
faculties (3B-a) (for example, the pilot had a heart attack). However, this exception would
not apply if the pilot brought himself to such a state of loss of physical or mental faculties
(3C-a). For example, it would not apply if the pilot was under the effect of illegal drugs.
In such a case, the pilot’s behaviour would still be considered careless, as his incapacity
was due to his fault.

36  In the second case (2C-a, careless omission), not expanded in Figure 13.2 for lack of space, the behaviour will be careless when (a) the pilot failed to take action; (b) the pilot had a duty to act; and (c) the pilot’s action would have prevented the injury. The content of the duty to act will depend on the tasks assigned to the pilot and may be based on international and national laws (such as navigation codes), public or private standards and regulations, or even customs.



Figure 13.2  Pilot’s careless behaviour [argument map: the due-care standard applicable to the pilot is supported by international and national laws, ICAO rules, task-responsibilities (LOAT), contract, customs, and case law such as Veuve De Franceschi v. Hiller Helicopters, Tribunal de Versailles (1957), France; the conclusion that the pilot acted without complying with the ACAS RA is opposed by the argument that the pilot had reasons not to follow it]

6.4  Legal Design Maps

Legal design maps present possible remedies to liability issues identified within the legal
analysis phase. The legal design measures represent reasons why, as a consequence of a
particular precautionary measure, a certain liability is not triggered or is limited. In the
argument map, the remedy provided by the precautionary measure is represented as an attack
against the liability concerned, while the precautionary measure supports the remedy (the
attack against the liability).
For instance, assume that a liability-limitation clause has been agreed, according to
which the manufacturer does not owe any compensation to the airline for manufacturing
defects. In this case, the liability-limitation clause (presented in Figure 13.3) provides an
attack against the claim that the manufacturer is contractually liable to the airline.
Conversely, assume that a liability-extension clause has been agreed, according to
which the software producer is bound to cover also the damages for defects going beyond
the state of the art. In this case, the liability-extension clause (also presented in Figure
13.3) provides an attack against the claim for the state of the art liability exemption.
Other measures concern not liability itself but rather the financial burden it entails. For
instance, the design measure of mandating an insurance policy supports the conclusion
that the manufacturer will be indemnified, which attacks the conclusion that the
manufacturer should cover the cost.
Legal design maps help to represent and discover suitable design measures for dealing
with liability: they support the search for creative solutions to the problem of potential
liability (and the consequent financial burden) for any failure the technology in question
may cause.

6.5  Actor-Based Liability Maps

In argumentation maps, liability is considered from an actor-based view; that is, the
analysis of liability issues will be carried out on different maps, each assessing the liability
risk of a specific ATM actor, including the following: (1) individual operators directly
involved in the provision of air services (namely, pilots, air traffic controllers, and air
service managers); (2) air carriers; (3) air navigation service providers; (4) technology
manufacturers; (5) insurance companies; (6) governments; and (7) other enterprises and
public and private bodies involved in ATM (such as airport companies, maintenance
providers, standard setters, certifying bodies, and national oversight bodies).
Once an accident occurs, the final ‘net’ outcome of liability allocation has to be assessed
against the background of the liability chain arising out of the accident. A party against
which a damages claim is initially made may often be insured. In addition, the accident
causing the damage may be due to a third party’s behaviour, resulting in recovery claims
against the latter party. Therefore, our actor-based maps are linked, making it possible
to capture situations in which an accident would result in chains of liabilities involving
different actors, or in which one actor’s liability is a precondition for that of another.
For example, Figure 13.4 shows a small section of the enterprise liability maps, and in
particular the argument for vicarious liability: such liability rests on two conditions: that
(1) the employee has committed a tort (he or she could therefore be held personally liable);
and that (2) he acted within the scope of his employment. The argument 3D-a links the

enterprise-liability map to the personal-liability map, so as to make it possible to assess
the preconditions for an employee’s personal liability.

Figure 13.3  Legal design map: manufacturer’s liability [argument map: whether the manufacturer covers the cost of liability depends on product liability and on an insurance indemnity; a liability-limitation clause attacks the compensation claim, and a stricter liability clause agreed for the software producer rebuts the state-of-the-art exception]

Figure 13.4  Link between argumentation maps [argument map: vicarious liability of the enterprise (1B-a) is supported by the employee’s personal liability for negligence (2B-a) and by the employee having acted within the scope of employment (2B-b); node 3D-a links to the personal-liability map]
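The cross-map link shown in Figure 13.4 can be sketched as one function delegating to another: the enterprise map resolves one of its premises by following the link into the personal-liability map. The function and key names below are illustrative, not part of the methodology:

```python
def personal_liability(facts: dict) -> bool:
    """Personal-liability map, simplified to the three cumulative conditions
    of Figure 13.1: injury, careless behaviour and causation."""
    return (facts['injury']
            and facts['careless_behaviour']
            and facts['causation'])

def vicarious_liability(facts: dict) -> bool:
    """Enterprise map: premise 2B-a (the employee is personally liable) is
    resolved by following link 3D-a into the personal-liability map."""
    employee_liable = personal_liability(facts)   # cross-map link
    in_scope = facts['acted_within_scope_of_employment']
    return employee_liable and in_scope

facts = {
    'injury': True,
    'careless_behaviour': True,
    'causation': True,
    'acted_within_scope_of_employment': True,
}
print(vicarious_liability(facts))  # True: both conditions are met
```

Linking maps this way keeps each actor's analysis self-contained while still allowing one actor's liability to serve as a precondition for another's.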

6.6  Outcome of the Legal Analysis

Liability maps have been used in two test applications of the Legal Case method: one
application concerned collision avoidance systems (ACAS X),37 the other the Remotely
Piloted Aircraft System (RPAS)38 – two technologies currently being introduced in
aviation.
The use of the Legal Case and its supporting tools helped to identify the differences
between design options in terms of automation and the resulting liability risks for all
the actors involved. The outcomes of the test applications are being discussed and
used by the stakeholders involved in the design and employment of ACAS X and
RPAS technologies. In particular, the results of the legal analysis may support stand-
ardization bodies and policy-makers (e.g., EUROCAE WG75) in fully appreciating
the implications of introducing a new technology, while integrating such results with
those coming from other structured analyses, such as the safety analysis and the human
factor analysis.

37  Airborne Collision Avoidance System X, Concept of Operations, available at www.skybrary.aero/bookshelf/books/2551.pdf.
38  Remotely Piloted Aircraft Systems, Concept of Operations, available at http://rpasregulations.com/index.php/community-info/easa.




7. CONCLUSION

Within an STS, the distribution of functions, tasks, and liabilities has become a
governance mechanism for enhancing the functioning of the entire system, since that
functioning is determined by the complex interaction between human, technological,
and normative elements.
Thus, the STS paradigm makes it possible to understand the problems that need to be
(legally and technically) regulated in light of this complex interaction.
Furthermore, this paradigm addresses general aspects such as causality, subjective
states and risk. Currently, the legal debate on risk (especially in law and economics)
focuses on single policies rather than on a systemic approach. A systemic approach to
issues in STSs makes it possible to consider liability from a new perspective: no longer
as a static concept, focused exclusively on the application of legal norms to a particular
situation, but rather from a dynamic perspective that takes into account the distribution
of tasks and functions within the system.
The analysis we are proposing contributes to creating legal certainty in fields that are
de facto uncertain: by regulating risks that are untenable for society, and in particular for
the legal system, the law makes new technologies compatible with public health and safety
while responding to the needs of equity, deterrence, and efficiency.
The Legal Case method supports the analysis of the legal risks for the different
stakeholders involved in the process of designing and implementing an automated technology
in an STS. It is meant to make it possible to detect any issues pertaining to the allocation of
liabilities and to address such issues before the system is deployed, through suitable
technological adaptations or legal arrangements.



14.  Who pays? On artificial agents, human rights and tort law
Tim Engelhardt*

1. INTRODUCTION

Artificial agents1 are increasingly shaping our world. They may determine who receives
health insurance and at what price, steer cars and aeroplanes, trade on the stock market,
assist surgeons, advise on prescribing medication, analyse the spread of diseases and
recommend measures to take to mitigate outbreaks. On the individual level, personal
electronic assistants are becoming more and more popular, as their abilities continue
to grow. Robots are envisaged for taking care of children and the elderly. This is not
to mention inherently dangerous machines, such as robot soldiers, armed drones and
unmanned patrol vehicles. Oftentimes, the actions of these artificial agents may affect
the human rights of individuals. The example of the malfunctioning autonomous car
causing a grave accident continues to inspire debates among scholars, politicians and
the general public. But peering beyond this scenario, it is not difficult to imagine other
scenarios which may give rise to complex questions of the interplay between artificial
agents and human rights: the refusal of health insurance based on an opaque and faultily
designed automated data analysis may make urgently needed medical care impossible;
medication prescribed as a result of an expert program’s advice may cause unforeseen
serious side effects; or counter-disease measures implemented on the recommendation
of an epidemiological real-time analysis may be focused on the wrong region as a conse-
quence of a calculation error.
In all these scenarios, if damage occurs as a result of the direct or indirect involvement
of artificial agents, the following questions inevitably arise: Who is legally responsible
for the injury or damage? Who must pay? And how can the victim be compensated?
This chapter deals with these questions. After laying out the requirements established
under human rights law regarding the availability of remedies to victims of human
rights abuses, it focuses on how this right to remedy may be provided for under tort
law, as it is known in common law countries, or what other jurisdictions often refer
to as extra-contractual liability.2 Although potentially relevant, other fields of law
including contract, criminal and administrative law are not explored but would merit

*  The views expressed herein are those of the author and do not necessarily reflect the views, decisions or policies of his employer, the Office of the UN High Commissioner for Human Rights (OHCHR).
1  Other terms frequently used include electronic agent, intelligent agent and software agent. However, as Samir Chopra and Laurence F. White, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press, 2011) 27–28 note, the term ‘artificial agent’ is the broadest of them, being able to cover all kinds of computing processes and both embodied and disembodied agents.
2  In the chapter both terms are used interchangeably.

268
Tim Engelhardt - 9781785367724
Downloaded from Elgar Online at 12/18/2020 12:50:56AM
via New York University

WAGNER_9781785367717_t.indd 268 13/12/2018 15:25



further discussion. Moreover, the important and worrying legal aspects of autonomous
military weapons are also beyond the scope of this chapter. It should further be noted
that given the immensity of the topic and the myriad of legal challenges that may emerge,
even a superficial overview of all these areas of the law would require an entire book.3
Therefore, this author has decided to focus on one narrow field, and hopes that, in this
way, this chapter will still vividly illustrate the significant challenges the law faces when
dealing with the increasingly significant area of artificial agents and its intersection with
international human rights law.

2.  HUMAN RIGHTS LEGAL FRAMEWORK


Regional and international human rights instruments provide a strong, albeit not fully
developed, legal framework for standards on the legal responsibility and accountability
for algorithm-based acts. They not only determine the universal and inalienable rights
that domestic legal frameworks must uphold, but also contain clauses spelling out
obligations and responsibilities concerning the remedies necessary for addressing
violations and abuses of human rights. This aspect often appears to be overlooked in
the lively and growing discussion of legal responses to the ever-evolving use of digital
technologies and artificial intelligence.

2.1  Affected Human Rights

It goes without saying that algorithm-based decisions and acts may negatively affect
all kinds of human rights, be they civil, political, social or cultural rights. To name just
a few obvious instances, in addition to those briefly mentioned in the introduction,
software-based products or services, including autonomous agents, may injure people
or even kill them (think of a security guard robot), breach privacy by hacking into
computers, destroy data, or block political expression online, thereby touching upon the
rights to life,4 health5 and access to health care,6 privacy,7 property8 and freedom of expression.9
Moreover, algorithm-based decisions may be susceptible to discriminatory outcomes,

3  And there are already a few admirable works on the market, e.g. Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1; Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts (Springer, 2013) and Ryan Calo, Michael Froomkin and Ian Kerr (eds), Robot Law (Edward Elgar, 2016).
4  Universal Declaration of Human Rights (UDHR), Art. 3; International Covenant on Civil and Political Rights (ICCPR), Art. 6; European Convention for the Protection of Human Rights and Fundamental Freedoms (ECHR), Art. 2; American Convention on Human Rights (ACHR), Art. 4; and Charter of Fundamental Rights of the European Union (CFREU), Art. 2.
5  International Covenant on Economic, Social and Cultural Rights (ICESCR), Art. 12; see also UDHR, Art. 25, where health is mentioned as part of the right to an adequate standard of living.
6  CFREU, Art. 35.
7  UDHR, Art. 12; ICCPR, Art. 17; ECHR, Art. 8; ACHR, Art. 11; and CFREU, Art. 7. See also the right to the protection of personal data in CFREU, Art. 8.
8  UDHR, Art. 17; ECHR, First Protocol, Art. 1.
9  UDHR, Art. 19; ICCPR, Art. 19; ECHR, Art. 10; ACHR, Art. 13; and CFREU, Art. 11.




similar to human decisions, which raises questions of compatibility with a range of
anti-discrimination rights.10
This chapter does not purport to provide a comprehensive list or analysis of the variety
of human rights abuses that algorithm-based acts may lead to, now or in the future.
Rather, it will focus on the aspect of how victims of such abuses may be able to obtain
redress, and the corresponding question of remedies.

2.2 Remedies

2.2.1  Right to a remedy under human rights law


‘The principle of effective remedy, or effective reparation, is a golden thread that
runs through all modern international human rights treaties,’ the former UN High
Commissioner for Human Rights Navi Pillay stated in her intervention in one of the most
prominent cases on the corporate liability for human rights abuses.11 And indeed, most
modern human rights instruments contain language requiring that victims of human
rights abuses have access to adequate and effective remedies.
Article 8 of the Universal Declaration of Human Rights (UDHR) states: ‘Everyone
has the right to an effective remedy by the competent national tribunals for acts violating
the fundamental rights granted him by the constitution or by law’.
Article  2(3) of the International Covenant on Civil and Political Rights (ICCPR)
confirms this principle and elaborates further:

Each State Party to the present Covenant undertakes:


(a) To ensure that any person whose rights or freedoms as herein recognized are violated shall

10  Anti-discrimination provisions can be found in ICCPR, Art. 2(1); ICESCR, Art. 2(2); ECHR, Art. 14; ACHR, Art. 1(1); and CFREU, Art. 21. Furthermore, specialized treaties have added another layer of protection against discrimination. These include, in particular, the Convention on the Elimination of All Forms of Discrimination Against Women (CEDAW), the International Convention on the Elimination of All Forms of Racial Discrimination (CERD), the Convention on the Rights of Persons with Disabilities (CRPD), and the Convention on the Rights of the Child (CRC). Under these treaties, both formal discrimination on prohibited grounds in law or official policy documents and substantive discrimination, meaning discrimination in practice and in outcomes, are forbidden. For example, states must ensure, to a certain degree, that individuals have equal access to health services, adequate housing and water and sanitation; see Committee on Economic, Social and Cultural Rights (CtESCR), General Comment No. 20 on Non-discrimination and various General Comments on particular rights protected by the ICESCR. Moreover, a number of economic and social rights themselves address aspects of inequality and discrimination. These include, in particular, the rights to education, health and social security; see CtESCR, General Comment No. 13, para. 6(b)(iii); No. 14, para. 19; and No. 19, paras 16 and 25.
11  Kiobel v. Royal Dutch Petroleum Co., No. 10-1491 (US Supreme Court, 21 December 2011), Brief of Amicus Curiae Navi Pillay, the United Nations High Commissioner for Human Rights, in Support of Petitioners, at 5. As pointed out in the same brief, it is even a general principle of international law in particular, and of law in general, that breaches of duties are linked to an obligation to make reparation for that breach. See the famous Factory at Chorzów case of the Permanent Court of International Justice: ‘It is a principle of international law, and even a general conception of law, that the breach of an engagement involves an obligation to make reparation in an adequate form’. Factory at Chorzów (Germany v. Poland), 1928 P.C.I.J. (ser. A) No. 17, 29 (September 13), at 4–5.



Who pays? On artificial agents, human rights and tort law  271

have an effective remedy, notwithstanding that the violation has been committed by persons
acting in an official capacity;
(b) To ensure that any person claiming such a remedy shall have his right thereto determined by
competent judicial, administrative or legislative authorities, or by any other competent authority
provided for by the legal system of the State, and to develop the possibilities of judicial remedy;
(c) To ensure that the competent authorities shall enforce such remedies when granted.

Even though the other instrument of the so-called International Bill of Rights, the
International Covenant on Economic, Social and Cultural Rights (ICESCR), lacks an
equivalent to this provision, it is generally understood that the availability of remedies
is a crucial element of an effective application of that Covenant too. The Committee on
Economic, Social and Cultural Rights (CtESCR), in its General Comment No. 20 on
Non-discrimination, provides, under the heading ‘Remedies and Accountability’:

Institutions dealing with allegations of discrimination customarily include courts and tribunals,
administrative authorities, national human rights institutions and/or ombudspersons, which
should be accessible to everyone without discrimination. These institutions should adjudicate or
investigate complaints promptly, impartially, and independently and address alleged violations
relating to article 2, paragraph 2, including actions or omissions by private actors . . . These
institutions should also be empowered to provide effective remedies, such as compensation,
reparation, restitution, rehabilitation, guarantees of non-repetition and public apologies, and
State parties should ensure that these measures are effectively implemented.12

The CtESCR’s General Comment No. 3 on the nature of states parties’ obligations
further reiterates:

Among the measures which might be considered appropriate, in addition to legislation, is the
provision of judicial remedies with respect to rights which may, in accordance with the national
legal system, be considered justiciable. The Committee notes, for example, that the enjoyment
of the rights recognized, without discrimination, will often be appropriately promoted, in part,
through the provision of judicial or other effective remedies.13

Other international human rights instruments adopting express remedy requirements include the Convention on the Elimination of All Forms of Racial Discrimination
(CERD),14 the Convention on the Elimination of All Forms of Discrimination Against
Women (CEDAW),15 the Convention Against Torture and Other Cruel, Inhuman or
Degrading Treatment or Punishment (CAT)16 and the International Convention on the
Protection of the Rights of All Migrant Workers and Members of Their Families.17 On
the regional level, the European Convention on Human Rights (ECHR),18 the American

12  CtESCR, General Comment No. 20: Non-discrimination in Economic, Social and Cultural Rights (ICESCR, Art. 2(2)) para. 40.
13  CtESCR, General Comment No. 3: The Nature of States Parties’ Obligations (ICESCR, Art. 2(1)) para. 5.
14  CERD, Art. 6.
15  CEDAW, Art. 2(b) and (c).
16  CAT, Art. 14.
17  Migrant Convention, Art. 83.
18  ECHR, Art. 13; see also Art. 41, according to which the European Court of Human Rights may afford just satisfaction.
may afford just satisfaction.


Convention on Human Rights (ACHR),19 and the Protocol to the African Charter on
Human and Peoples’ Rights on the Rights of Women in Africa20 further codify the
principle of effective remedies. Article 47 of the Charter of Fundamental Rights of the
European Union also includes a right to an effective remedy.

2.2.2  Issues, in particular the treatment of private parties


Although these provisions collectively confirm that human rights abuses must give rise to
remedies, a number of issues should be addressed.
To start with, the term ‘remedy’ used in the above instruments focuses on the procedural
aspect rather than substantive domestic law.21 While the English word ‘remedy’ may cover
both aspects, other linguistic versions of the relevant provisions show that the drafters
had judicial remedies in mind. The French version of Article 2(3) of the ICCPR, for
example, speaks of ‘recours’, stressing much more clearly the procedural aspect. Similarly,
the German version of Article 47 of the Charter of Fundamental Rights of the European
Union refers to a ‘Rechtsbehelf’, an entirely procedural term.
This approach would suggest that the right to remedy established by the human rights
instruments referred to above would not necessarily give victims the right to demand
remedies such as damages, compensation, reparation, restitution, correction of false information or destruction of rights-violating objects, but would only require granting
access to the judicial system. However, access to the courts would be meaningless without
particular substantive claims a victim would be entitled to bring forward. Thus, at least
indirectly, an obligation to establish and maintain substantive remedies emanates from all
significant human rights instruments.22
The second issue calling for closer attention is the responsibility of non-state actors under human rights law. Whilst some human rights abuses have been specifically defined as occurring only when the state is the perpetrator,23 traditionalist approaches more generally understood that only states could be duty bearers under international human rights law.
Nevertheless, over the last decades, this approach has attracted more and more criticism, with increasing demands to hold private actors accountable.24 The discussion of the

19  ACHR, Arts. 25 and 63.
20  African Charter, Protocol on Women’s Rights, Art. 25.
21  Christian Tomuschat, Human Rights: Between Idealism and Realism (3rd edn, OUP, 2014) 403.
22  See Dinah Shelton, Remedies in International Human Rights Law (3rd edn, OUP, 2015) 16–19, with an extensive description of the dual meaning of remedies, referring to the aspect of access to justice, on the one hand, and substantive redress, on the other. See also Human Rights Committee, General Comment No. 31, The Nature of the General Legal Obligation Imposed on States Parties to the Covenant, para. 16, emphasizing the importance of reparations, a term encompassing the substantive side of remedies.
23  See e.g., the right to equality before courts and the right to a fair trial, ICCPR, Art. 14; and the right to vote and to be elected, ICCPR, Art. 25.
24  See e.g., Steven R. Ratner, ‘Corporations and Human Rights: A Theory of Legal Responsibility’ (2001) Yale LJ 443; Andrew Clapham, Human Rights Obligations of Non-State Actors (OUP, 2006); Nicola Jägers, Corporate Human Rights Obligations: In Search of Accountability (Intersentia, 2002). The latter identifies, inter alia, the following rights protected by the Bill of Rights as applicable to non-state actors: the right to life, equality and the prohibition of discrimination; the right to freedom of thought, conscience and religion; the right to privacy; the right to freedom of movement; the


responsibility of non-state actors under international human rights law gained particular
momentum as a result of the actions of multinational companies, who were seen as
themselves committing, or alternatively being complicit in, many of the human rights
violations committed by states; in certain contexts, it was also argued that such companies exercised as much power as, or even more power than, many of the states within which they were active.25 On this view, there should be a possibility of holding them accountable under international human rights law.26
However, the prevailing view in the international community is still that non-state
actors may not be regarded as duty bearers under international human rights law.27 This
does not mean that international human rights law is silent with regard to the actions
of individuals and corporations. Rather, the effect of human rights obligations on their
actions is indirect, through an obligation of the states to create a framework and take all
necessary measures that ensure that human rights are upheld, protected and promoted.
To illustrate, see the first sentences of paragraph 8 of the Human Rights Committee’s
General Comment No. 31:

The article 2, paragraph 1, obligations are binding on States [Parties] and do not, as such, have
direct horizontal effect as a matter of international law. The Covenant cannot be viewed as a
substitute for domestic criminal or civil law. However the positive obligations on States Parties
to ensure Covenant rights will only be fully discharged if individuals are protected by the State,
not just against violations of Covenant rights by its agents, but also against acts committed by
private persons or entities that would impair the enjoyment of Covenant rights in so far as they
are amenable to application between private persons or entities. There may be circumstances in
which a failure to ensure Covenant rights as required by article 2 would give rise to violations
by States Parties of those rights, as a result of States Parties’ permitting or failing to take
appropriate measures or to exercise due diligence to prevent, punish, investigate or redress the
harm caused by such acts by private persons or entities. States are reminded of the interrelation-
ship between the positive obligations imposed under article 2 and the need to provide effective
remedies in the event of breach under article 2, paragraph 3.

Similar principles have been expressed with regard to numerous human rights by several
other UN treaty bodies.28
This approach is echoed in what is to date the most comprehensive and most widely accepted set of rules dealing with the human rights responsibilities of private parties, the so-called ‘Guiding Principles on Business and Human Rights’ of the United Nations.29 These

right to family life; the right to property; the right to freedom of expression and information; the right to participation in cultural life; the right to health and the right to education.
25  See Tomuschat, Human Rights, n. 21 above, 133, sceptically summarizing this line of reasoning.
26  See Jordan J. Paust, ‘Human Rights Responsibilities of Private Corporations’ (2002) 35 Vanderbilt Journal of Transnational Law 801; Beth Stephens, ‘Expanding Remedies for Human Rights Abuses: Civil Litigation in Domestic Courts’ (1997) 40 GYIL 117, and the authors cited in n. 24 above.
27  Tomuschat, Human Rights, n. 21 above, 131–33; John H. Knox, ‘Horizontal Human Rights Law’ (2008) 102 AJIL 1, at 4–10.
28  An excellent overview can be found in Clapham, Human Rights Obligations of Non-State Actors, n. 24 above, 317–46.
29  Published as an Annex to John Ruggie, Report to the Human Rights Council of the Special Representative of the Secretary-General on the Issue of Human Rights and Transnational Corporations and Other Business Enterprises, A/HRC/17/31.


31 Principles, developed over a number of years in consultation with states and private
companies, outline a framework of responsibilities of states and business enterprises
which rests on three pillars.
The first pillar is the state duty to protect against human rights abuses by third parties.
As the first principle elaborates, ‘[t]his requires taking appropriate steps to prevent, inves-
tigate, punish and redress such abuse through effective policies, legislation, regulations
and adjudication’.
The second pillar is the corporate responsibility to respect human rights. This includes,
in the words of Principle 11, that business enterprises ‘should avoid infringing on the
human rights of others and should address adverse human rights impacts with which
they are involved’. Principle 12 specifies that this responsibility to respect human rights
refers to internationally recognized human rights, which includes, as a minimum, the
International Bill of Human Rights and the principles concerning fundamental rights set
out in the International Labour Organization’s Declaration on Fundamental Principles
and Rights at Work. However, it is important to note that the commentary to Principle
12 also clarifies that the responsibility of business enterprises to respect human rights
is ‘distinct from issues of legal liability and enforcement, which remain defined largely
by national law provisions in relevant jurisdictions’. The Guiding Principles thus do not
diverge from what has been stated in the General Comment above; consequently, a failure by business enterprises to honour the stipulated responsibility does not as such give rise to liability of the businesses concerned. Rather, as part of their duties under pillar one, states must create an appropriate legal framework, which needs to include appropriate remedies, including norms on liability and enforcement.30
The third pillar concerns precisely this question of remedies, as breaches of rights and obligations need to be met with appropriate and effective remedies. Principle 25, as
a foundational principle of the remedies pillar, provides:

As part of their duty to protect against business-related human rights abuse, States must take
appropriate steps to ensure, through judicial, administrative, legislative or other appropriate
means, that when such abuses occur within their territory and/or jurisdiction those affected have
access to effective remedy.

It is apparent that the emphasis on maintaining a framework that gives access to appro-
priate and effective remedies is on states’ duties, confirming the statement made in the
preceding paragraphs that liability and enforcement issues are matters of domestic law.
This also echoes the states’ obligations discussed with regard to pillar one. It should not,
however, be overlooked that Principles 28 to 31 lay the foundations for non-state-based
grievance mechanisms, albeit of a non-binding nature.
To summarize: under current international and regional human rights law, non-state actors, including individuals and business enterprises, generally do not face any binding
obligations. In particular, human rights law does not oblige these actors to directly
assume liability for breaches of human rights caused by their actions, nor to give access
to enforceable remedial actions. However, international human rights law does recognize

30  This is a general principle known as the principle of complementarity between international and domestic legal regimes.


that breaches of human rights must be matched by effective, appropriate remedies. Thus,
as the law currently stands, the primary duty under international human rights law to
ensure that such mechanisms exist, are accessible and sufficient falls to states.

2.2.3  Consequences for extra-contractual liability law


Any attempt to derive from the principles described above the precise contours that
domestic extra-contractual liability regimes should have in the context of damages or
injury caused by algorithm-based acts faces clear limits.
Most importantly, states have a broad margin of appreciation regarding the design
of their legal order. They can, in principle, freely decide whether tort law should apply
to certain cases or if other instruments are preferable, which claims should be allowed,
which evidentiary rules should apply, etc. However, the margin of appreciation ends
where the effectiveness of human rights protection is compromised in contravention of
the human rights obligations of the state concerned. For example, in certain cases, even
where remedies under private law exist, if they do not sufficiently protect or provide an
adequate remedy for serious human rights abuses, penalization under criminal law may
be warranted. In other cases, criminal law remedies may be disproportionate and remedy
under tort law a more appropriate response. As Berka has specified:

the threshold can only be concretised by evaluative analysis whereby comparative law can at least
deliver starting points for what the requirements of an altogether effective fundamental rights
protection constitute.31

One aspect of critical importance must be emphasized: private law deals with the rela-
tionships of individuals, and the rights of each of these individuals must be respected
and protected. As a consequence, human rights considerations can have two opposing
effects on tort law. On the one hand, human rights law may demand redress through
an award of damages, but it may also limit the range of cases where liability may occur
and the amount of damages that should be paid.32 For example, human rights law can
in certain cases require that those who caused an injury to someone through an abnor-
mally dangerous activity, such as blasting a structure as part of a lawful construction
project, should be held liable, while at the same time excluding punitive damages as
disproportionate.
To return to the topic of this chapter, the legal responsibility and accountability for
algorithm-based decision-making, the following can be concluded from the preceding
remarks: states have a duty to shape a legal environment where individuals are sufficiently
protected against algorithmic decision-making encroaching on their human rights. This
includes a duty to ensure that appropriate and effective remedies exist for cases where
individuals suffer violations of their human rights, as protected under international

31  Walter Berka, ‘Human Rights and Tort Law’ in Attila Fenyves et al. (eds), Tort Law in the Jurisprudence of the European Court of Human Rights (De Gruyter, 2011) 237, 249.
32  Consequently, many of the tort law-related cases brought before the European Court of Human Rights (ECtHR) were about situations where it was claimed that the domestic court had granted an extra-contractual claim in violation of the defendant’s human rights; see e.g., the recent case law on the liability of Internet intermediaries, Delfi AS v. Estonia, Application no. 64569/09, 16 June 2015, and MTE v. Hungary, Application no. 22947/13, 2 February 2016.


human rights law, as a result of such decision-making. The next part of this chapter is
devoted to an analysis of some of the legal options that extra-contractual liability can
provide in shaping such an adequate system.

3.  TORTS/EXTRA-CONTRACTUAL LIABILITY

In general terms, the law on extra-contractual liability deals with redressing harms caused
by wrongful acts not governed by an agreement between the wrongdoer and the victim.
While this field of law covers an extremely wide range of doctrines, this chapter focuses on the elements of the core common law tort doctrines of negligence and strict (products) liability from the perspective of US law, and on what could in a very simplified way be deemed their German equivalents, namely section 823(1) of the German Civil Code33 and the Products Liability Act.34, 35 This focus is necessary in the context of this brief chapter in
order to present a meaningful, yet sufficiently detailed overview of relevant issues arising
in the context of the widespread and ever-growing use of artificial agents in modern
societies.
The analysis starts by looking at the treatment of torts committed with the involve-
ment of embodied or disembodied software of a deterministic type, that is, software
built around a well-defined sequence of steps to achieve certain goals. Such software’s
behaviour is entirely predetermined by its developers, and for the purposes of tort law it
is in many respects very similar to any other products or services that have been offered
on the market for decades and centuries. Taking a closer look at how tort law regulates
embodied and disembodied deterministic software is therefore a way for introducing
the general principles of the law. On this basis, the analysis then turns to sophisticated
autonomous artificial agents, agents with developed learning, adaptation, communication and interaction abilities whose action and reaction patterns evolve and change over time, depending on the agent’s accumulated experience.36 Thus, by design, they develop behaviour that none of their developers and users could entirely predict. As will be shown, this
creates great challenges for the law of torts.
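The distinction drawn here between deterministic and learning software can be illustrated with a minimal sketch. The following Python fragment is purely hypothetical: the class names, the braking threshold and the ‘learning’ rule are invented for illustration and correspond neither to the chapter, nor to any real driver-assistance system, nor to any particular machine-learning technique.

```python
# Toy contrast between deterministic and adaptive software agents.
# All names and numbers are hypothetical, chosen only for illustration.

class DeterministicBrakingAssistant:
    """Behaviour fully fixed by its developers: same input, same output."""

    def __init__(self, threshold_m: float = 10.0):
        self.threshold_m = threshold_m  # brake whenever an obstacle is closer

    def should_brake(self, distance_m: float) -> bool:
        return distance_m < self.threshold_m


class AdaptiveBrakingAssistant:
    """Behaviour shifts with experience: two units with different histories
    react differently to the same input."""

    def __init__(self, threshold_m: float = 10.0):
        self.threshold_m = threshold_m

    def should_brake(self, distance_m: float) -> bool:
        return distance_m < self.threshold_m

    def learn_from_near_miss(self) -> None:
        # After a near-miss, brake earlier in future: the developer cannot
        # predict the eventual threshold without knowing the agent's history.
        self.threshold_m *= 1.2


det = DeterministicBrakingAssistant()
ada = AdaptiveBrakingAssistant()
ada.learn_from_near_miss()   # experience changes the adaptive agent's rule

# Same input, diverging behaviour once the adaptive agent has 'learned':
print(det.should_brake(11.0))  # False: 11 m is outside the fixed 10 m threshold
print(ada.should_brake(11.0))  # True: the learned threshold is now 12 m
```

The point for tort law is visible even in this toy: given identical inputs, the deterministic agent’s output is fixed at design time, while the adaptive agent’s output depends on a history of interactions that its developer cannot fully foresee.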

33  Bürgerliches Gesetzbuch (BGB) as promulgated on 2 January 2002 (BGBl. I 2, 2909; 2003 I 738), last amended by art. 3 of the Law of 24 May 2016 (BGBl. I 1190). It should be noted that this also means that this chapter does not touch upon a number of other provisions on special forms of extra-contractual liability, such as the violation of a protective law (s. 823(2)), endangering credit (s. 824) or intentional damage contrary to public policy (s. 826).
34  Gesetz über die Haftung für fehlerhafte Produkte (Produkthaftungsgesetz, ProdHaftG) of 15 December 1989 (BGBl. I 2198), last amended by art. 180 of the Regulation (Verordnung) of 31 August 2015 (BGBl. I 1474).
35  Referring to ‘elements’ implies that this chapter does not discuss the various forms of relief that could be sought under the doctrines, such as damages, punitive damages, compensation for immaterial damages, injunctions, provision of information, destruction of tools, restitution and payment of attorney’s fees.
36  Other terms frequently used include electronic agent, intelligent agent and software agent. However, as Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 27–28, stress, the term ‘artificial agent’ is the broadest of them, being able to cover all kinds of computing processes and both embodied and disembodied agents.


3.1  Software with Predetermined Behaviour: Basics of Tort Law

To start with a concrete example, let us imagine a car with a software-based braking
assistance system. Assume that the software contains a bug causing the braking assistance
system to react at times too slowly, and as a consequence, the car runs over a pedestrian
causing serious injuries. Possible actors who may be liable to the victim include: the
manufacturer, the car seller, the owner of the car and the driver. Or in more abstract
terms: the manufacturer, the distributor, the owner, and the user or consumer. Moreover,
insurance claims may play a role as well. However, for the sake of clarity this chapter will
focus entirely on the liability of users/operators and manufacturers as the key players in
the marketing and use of any goods and service.

3.1.1 User/operator

Fault-based liability    The liability of the user would first of all be assessed based on the
principles of fault-based liability. In this context, intentional and negligent behaviour of
the user as a source of liability can be distinguished. However, for the present purpose,
we will neglect intention, as it does not pose any particular challenges in the context of
algorithm-based decisions and acts, and will focus on negligence.
A plaintiff asking for damages based on negligence on the part of the defendant would
have to demonstrate that the defendant breached a duty of care, thereby causing the
damage at issue. Thus, there are at least two crucial elements: breach of duty (negligence)
and causation.37
The following is a typical definition of negligence under US law, which may apply more
broadly as an example of negligence under common law jurisdictions:

Negligence is lack of ordinary care. It is a failure to use that degree of care that a reasonably
prudent person would have used under the same circumstances. Negligence may arise from doing
an act that a reasonably prudent person would not have done under the circumstances, or, on the
other hand, from failing to do an act that a reasonably prudent person would have done under
the same circumstances.38

German law defines the same issue as follows: ‘[a] person acts negligently if [s]he fails to
exercise reasonable care’.39
In the example introduced above, the fact that the car’s software contained a bug
does not prove in and of itself that the driver breached a duty of care. The standard
for determining this is the behaviour of a reasonably prudent person in her situation, and a reasonably prudent person simply could not have known that her car contained a dangerous software bug. The law therefore does not expect her to refrain from driving her car. To find liability on the part of the driver, some

37  This is a vastly simplifying statement, guided by the goal of focusing on the questions most pertinent to the subject-matter of this chapter. All torts specialists may forgive the author.
38  New York Pattern Jury Instructions: Civil 2:10 (3rd edn, 2000), cited in John C.P. Goldberg and Benjamin C. Zipursky, Torts (OUP, 2010) 83.
39  German Civil Code, s. 276(2).


other form of faulty behaviour would be required. That could be the case if she had had reason to suspect a problem: for example, if the car had displayed a warning sign, if she had experienced braking problems in the past, or if she had been informed by the manufacturer that the brakes were faulty and needed to receive
an update. In those situations, it could easily be argued that continuing to drive the
car amounted to negligent behaviour, since a reasonably prudent person would have
refrained from driving the car. In other contexts, using a beta version of software for security-sensitive tasks, or using items for purposes for which they were not built, could constitute negligence.
Even if one can establish fault, finding liability would still require that the fault was
the cause of the harm. The core test for finding causation is some variation of the fol-
lowing question: would the plaintiff have suffered the harm if the defendant had acted
diligently?40 In the examples given in the preceding paragraph, the answer would be
without doubt ‘yes’.
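The counterfactual structure of this ‘but for’ test can be sketched in a few lines. The world model below is a deliberately crude, hypothetical stand-in for the braking-assistance example; real causation analysis is of course an evaluative legal exercise, not a computation.

```python
# Toy illustration of the 'but for' causation test: compare the actual world
# (defective brakes) with the counterfactual world (diligent conduct).
# The scenario model is entirely hypothetical.

def harm_occurs(brakes_respond_in_time: bool, pedestrian_in_path: bool) -> bool:
    """Hypothetical world model: harm happens only if a pedestrian is in the
    car's path and the (buggy) braking assistant fails to respond in time."""
    return pedestrian_in_path and not brakes_respond_in_time


def but_for_causation(pedestrian_in_path: bool) -> bool:
    """Was the defect a but-for cause of the harm? It is, if harm occurs in
    the actual world but would not have occurred in the counterfactual one."""
    actual = harm_occurs(brakes_respond_in_time=False,
                         pedestrian_in_path=pedestrian_in_path)
    counterfactual = harm_occurs(brakes_respond_in_time=True,
                                 pedestrian_in_path=pedestrian_in_path)
    return actual and not counterfactual


print(but_for_causation(pedestrian_in_path=True))   # True: defect caused the harm
print(but_for_causation(pedestrian_in_path=False))  # False: no harm either way
```

The sketch also makes the test’s weakness discussed next visible: the model says nothing about how remote or far-fetched the causal chain is, which is precisely why the law adds limiting doctrines on top of it.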
The obvious downside of this simple test is its broadness. Taken literally, it would be easy to establish liability for even the most far-fetched consequences of tiny mishaps.
Assume in the example above the only damage initially done was a little crack in a chair
the pedestrian was carrying. One week later, the pedestrian sits down on the chair in her
home while holding a candle, the chair collapses, the candle drops, the curtains catch fire
and ultimately the entire house burns down. In this case, the breach of the duty – driving
although knowing that the brakes are not fully functional – was the cause of the destruc-
tion of the house. However, it seems problematic to let the driver carry the financial
burden of that catastrophic outcome.
Courts and scholars have discussed these issues at length and devised various approaches to avoid these kinds of outcomes, partly by seeking to limit the scope of the duty of care (what is the purpose of the duty of care that was breached?), and partly by trying to limit the relevant causal link to some sort of natural, ordinary course of events,
excluding unexpected turns. What all these limitations have in common is that they ask
what outcomes of a breach of a duty would be reasonably foreseeable. In the example
above, the house burning down would not be a reasonably foreseeable result of the driver’s
breach of her duty. We will return to foreseeability several times and in particular in the
discussion of autonomous artificial agents.

Strict(er) liability    In its narrow sense, the term ‘strict liability’ refers to cases where a
person is held liable irrespective of any personal wrongdoing. That is, even if that person
has exercised utmost care, this would not negate liability. Laws impose this kind of
liability only for certain types of cases. Areas where we come across strict liability
regimes include ultra-hazardous activities,41 other activities linked to particular risks,42
certain parts of product liability and acts committed by animals or employees. Differences
between jurisdictions in this field can be enormous. It should be noted, however, that even

40  Goldberg and Zipursky, Torts, n. 38 above, 95; Hein Kötz and Gerhard Wagner, Deliktsrecht (10th edn, Wolters Kluwer, 2006) note 186.
41  Such as blasting with explosives.
42  Such as operating a car or an airplane.

Tim Engelhardt - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:56AM
via New York University

WAGNER_9781785367717_t.indd 278 13/12/2018 15:25


Who pays? On artificial agents, human rights and tort law  279

if understood in this narrow sense, the term 'strict liability' does not imply that victims
need not demonstrate other elements, such as the existence of a defect in product liability
cases and a causal link to the injuries suffered.
In a wider sense – and this is the meaning applied herein – the term can also refer
to situations where a breach of duty is still required, but with the important twist
that the defendant is presumed to have been at fault and must rebut this presumption
by showing that all duties had been properly discharged.
Users of items, in particular consumers, do not normally face any form of strict
liability. Sometimes, however, jurisdictions provide for strict liability for merely
owning or operating an item, usually in response to a perceived particular danger
emanating from the objects in question. In many jurisdictions, for example, operating
a car is deemed so risky as to justify obliging car owners to pay for the damages
caused by their cars, regardless of any fault.43 Similar laws exist for airplanes. As
mentioned above, liability
for injuries caused by animals often takes the form of strict liability.
In order to ensure that victims have sufficiently solvent debtors, jurisdictions imposing
strict liability often also require owners to obtain insurance covering a certain
minimum amount of damages. Moreover, putting the burden onto the owner
in these cases has the benefit that victims can bring their claims against a solvent debtor,
regardless of who drove the car.
Considering that software with predetermined behaviour does not create particular
risks, the law currently does not impose strict liability on users or owners of such
software. While this may be different when the software is embedded in goods that are
deemed dangerous, such as cars and airplanes, the imposition of strict liability in these
cases is not based on the software used but on the risk attributed to the entire physical
object.

3.1.2 Manufacturer
Under fault-based liability rules, victims injured by defective products face the problem
that defects as such are not conclusive of breach of duty, as injury may occur even if
all persons involved in the design and manufacturing exercise utmost diligence. Proving
that the manufacturer has neglected its duties is thus often close to impossible, which
would leave many victims without redress. Many jurisdictions, including the United
States and Germany, have responded to this problem by adopting (more or less) strict
liability regimes, supported and challenged by numerous commentators.44 Software and

43  The owners may have recourse against the person actually using the car and causing the damages.
44  Long before strict liability for defective products gained momentum in the 1960s and 1970s, Judge Traynor had already laid out important arguments for establishing strict products liability regimes. Most importantly, the manufacturer is generally better placed than anyone else to anticipate some hazards and guard against the recurrence of others. Moreover, the risk of injury can be insured by the manufacturer and distributed among the public as a cost of doing business. And it is in the public interest to discourage the marketing of products having defects that put the public at risk, Escola v. Coca Cola Bottling Co., 150 P.2d 436 (Cal. 1944), concurring opinion. Later, Guido Calabresi's works proved highly influential on supporters of strict liability, e.g. Guido Calabresi and Jon T. Hirschoff, 'Toward a Test for Strict Liability in Torts' (1972) 81(6) Yale Law Journal 1055, and Guido Calabresi, The Cost of Accidents: A Legal and Economic Analysis (Yale



280  Research handbook on human rights and digital technology

software-based products are, in principle, covered by these regimes.45 The following
paragraphs describe the fundamental principles of the US and the German regimes.
Reflecting decades of case law46 and scholarly discussion, section 1 of the Restatement
(Third) of Torts provides the following basic rule of US products liability: ‘One engaged
in the business of selling or otherwise distributing products who sells or distributes a
defective product is subject to liability for harm to persons or property caused by the
defect’. Similarly, the first sentence of section 1(1) of the German Product Liability Act,
which implemented an EC Directive,47 lays down the elements of strict product liability,
stating ‘[i]f because of the defect of a product someone is killed, his or her body injured or
health impaired or a thing damaged, the producer of this product is liable for the damages
that arise from this'.48 Both formulas share a few commonalities: there
needs to be a product, the product must have been defective, and the defect must have
caused personal injury or damage to property. These elements are addressed individually
in the following paragraphs.

Product    Both in the United States and under the EU system, the first question to be
asked is if a product was the cause of the injury giving rise to liability. Section 19 of the
Restatement (Third) of Torts defines ‘product’ as ‘tangible personal property distributed
commercially for use or consumption’. It goes on:

Other items, such as real property and electricity, are products when the context of their distribu-
tion and use is sufficiently analogous to the distribution and use of tangible personal property
that it is appropriate to apply the rules stated in this Restatement.

Section 2 of the German Product Liability Act states that products are ‘all movables even
though incorporated into another movable or into an immovable, and electricity’. For
the present purpose, it is important to note that both definitions require an element of
tangibility, either expressly, as in the Restatement, or impliedly, as the word 'movable'
in the German Act and the underlying Directive suggests.
This raises the question whether software can be considered a product within the
meaning of the US and EU products liability laws.49 The answer appears to be straight-
forward as long as the software is embodied in a device. In such cases, one can easily

University Press, 1970). A very critical voice can be found in Richard A. Posner, ‘Strict Liability: A
Comment’ (1973) 2 Journal of Legal Studies 205.
45  See below.
46  Beginning with the famous case Greenman v. Yuba Power Products, Inc., 377 P.2d 897 (Cal. 1963).
47  Council Directive of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L210/29.
48  The EU Directive provides in art. 1 even more simply: 'The producer shall be liable for damage caused by a defect in his product'. The elements 'producer', 'product', 'defect' and 'damage' are defined by arts. 2, 3, 6 and 9.
49  The European Commission in the past has answered this question in the affirmative: answer of the Commission of the European Communities of 15 November 1988 to Written Question No. 706/88 by Mr. Gijs de Vries (LDR, NL) (89/C 114/76) [1989] OJ C114/42. However, considering the date the answer was given, it is of limited value for the classification of downloads or Cloud-services.

speak of a product.50 The treatment of software itself, however, is subject to debate. One
line of reasoning distinguishes between software distributed on a data carrier (product)
and software available for download (service).51 Others argue that downloaded software
should be considered a product.52 One main argument supporting this latter position is
that even downloaded software has a tangible quality as it is, both before and after the
download, embodied in a physical data carrier.53 So far, there is no conclusive outcome
to this debate, although in the United States downloads appear rather to be treated as
a service,54 whereas one may speculate that there is a tendency within the EU to include
software distributed via download in the category of products.55 A further distinction
is drawn between customized and standard software, the argument being that the
former resembles a service rather than a good.56 Again, the US view appears to lean
towards denying customized software the quality of a product, whereas the situation in
the EU may be more open.57

Defect    US and German products liability law distinguishes three kinds of product defect
that may give rise to products liability claims.58
The first category is manufacturing defects. Here, claims are based on the fact that an

50  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 136; Jürgen Oechsler, in J. von Staudinger Kommentar zum Bürgerlichen Gesetzbuch mit Einführungsgesetz und Nebengesetzen, Buch 2, Recht der Schuldverhältnisse, §§ 826–829, ProdHaftG (Unerlaubte Handlungen 2, Produkthaftung) (Sellier, de Gruyter 2014), § 2 ProdHaftG, note 64.
51  Oechsler, n. 50 above, Einl zum ProdHaftG, note 65; Edwin Frietsch, in Hans Cl. Taschner and Edwin Frietsch (eds), Produkthaftungsgesetz und EG-Produkthaftungsrichtlinie (2nd edn, C.H. Beck, 1990) § 2 ProdHaftG, notes 22–23.
52  Gerhard Wagner, in Münchener Kommentar zum Bürgerlichen Gesetzbuch, vol. 5, Schuldrecht, Besonderer Teil III, §§ 705–853, Partnerschaftsgesetz, Produkthaftungsgesetz (5th edn, C.H. Beck, 2009) § 2 ProdHaftG, note 16; Friedrich Graf von Westphalen, in Ulrich Foerste and Friedrich Graf von Westphalen (eds), Produkthaftungshandbuch (3rd edn, C.H. Beck, 2012) § 47, note 44; Andreas Cahn, 'Produkthaftung für verkörperte geistige Leistungen' (1996) NJW 2899, at 2904; Rüdiger Krause, in Soergel, Bürgerliches Gesetzbuch mit Einführungsgesetz und Nebengesetzen, vol. 12, Schuldrecht 10, §§ 823–853, ProdHG, UmweltHG (13th edn, W. Kohlhammer, 2005) § 2 ProdHaftG, note 4.
53  Cahn, 'Produkthaftung für verkörperte geistige Leistungen', n. 52 above, 2899; Gerald Spindler, 'Verschuldensunabhängige Produkthaftung im Internet' (1998) MMR 119, at 121.
54  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 136.
55  The recent case law of the CJEU on the exhaustion of the distribution right in software under copyright law may indicate that the courts tend to liken immaterial forms of distribution to material forms, which may also indicate a certain openness to an equal treatment in other areas of the law, see Case C-128/11 Oracle v. UsedSoft [2012] OJ C287/10.
56  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 136; Wagner, n. 52 above, § 2 ProdHaftG, note 15; Cahn, 'Produkthaftung für verkörperte geistige Leistungen', n. 52 above, 2899.
57  See Jochen Marly, Praxishandbuch Softwarerecht (6th edn, C.H. Beck, 2014) note 1863; Oechsler, n. 50 above, § 2 ProdHaftG, note 69, and the numerous references therein.
58  Although the language of German Product Liability Act, s. 1(1) seems to suggest a unitary understanding of 'defect' – it determines that a product is defective when it 'does not provide the safety reasonably expected' – German courts and the majority of the commentators distinguish the three forms of defects discussed herein, see e.g., Wagner, n. 52 above, § 3 ProdHaftG, notes 29–37; Oechsler, n. 50 above, § 3 ProdHaftG, notes 103–21.


individual item of an otherwise safe product line fails to live up to safety standards. A
car whose manufacturer's employees forgot to install an important software update
before shipping it would be an example. If a manufacturing defect is at issue,
a strict liability rule in the narrow sense applies. In other words, it is irrelevant if the
manufacturer could have avoided the defect by taking particular precautions.59
The second category, which is particularly interesting for the purposes of this chapter,
is design defects. This includes cases where the design of a product line itself is not
sufficiently safe, making the products dangerous for their intended and reasonably
expected uses.60
Chopra and White point out that distinguishing between manufacturing and design
defects in the context of coding errors presents particular challenges. The question is of
great importance as the test used for examining manufacturing defects is different from
that used for assessing design issues. In the former case, as discussed above, strict liability
would apply. However, in the latter case, as we will see below, a test close to fault-based
liability would be used. One possibility for a distinction could be to treat only copying
errors as manufacturing defects, and all other errors as design defects.61 Another approach
would be to limit the realm of design defects to high-level design defects and consider
low-level defects as well as copying errors as manufacturing defects.62 Errors at the level
of the underlying algorithm would hence be design errors, implementation issues would
fall into the category of manufacturing errors.63
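The two proposed mappings can be summarized in a short sketch. This is only an illustration of the taxonomy discussed above; the function and its string labels are hypothetical, not legal terms of art:

```python
def classify_software_error(error_type: str, approach: str) -> str:
    """Map a software error type ('copying', 'implementation',
    'algorithm') to a defect category under the two approaches
    discussed above. Purely illustrative labels."""
    if approach == "copying-errors-only":
        # First approach: only copying errors count as manufacturing
        # defects; all other errors are design defects.
        return "manufacturing" if error_type == "copying" else "design"
    if approach == "high-level-design-only":
        # Second approach: only errors at the level of the underlying
        # algorithm are design defects; low-level implementation issues
        # and copying errors are treated as manufacturing defects.
        return "design" if error_type == "algorithm" else "manufacturing"
    raise ValueError(f"unknown approach: {approach!r}")

# The classification matters because manufacturing defects trigger
# strict liability, while design defects face a fault-like test.
print(classify_software_error("implementation", "copying-errors-only"))     # design
print(classify_software_error("implementation", "high-level-design-only"))  # manufacturing
```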
The question of which test to apply to determine design defect is one of the main bat-
tlefields of US tort law. Two tests have been discussed for decades. One is the consumer
expectation test, the other the so-called risk/utility test. The former asks if the product in
question is as safe as a reasonable consumer was entitled to expect. The risk/utility test
weighs the benefits gained and risks provoked by the specific design in relation to each
other. According to the test, if the benefits of the challenged design do not outweigh the
risk of danger inherent in such design, it is deemed defective.64 Both tests are applied by
the courts, with a strong tendency towards applying the risk/utility test in cases involving
complex products, leaving the consumer expectation test to defects of simple products.65
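The two tests can be contrasted in a minimal sketch. The numerical inputs are hypothetical; courts weigh these factors qualitatively, not arithmetically:

```python
def consumer_expectation_test(actual_safety: float,
                              expected_safety: float) -> bool:
    """Defective if the product is less safe than a reasonable
    consumer was entitled to expect. Illustrative only."""
    return actual_safety < expected_safety

def risk_utility_test(design_benefits: float, design_risks: float) -> bool:
    """Defective if the benefits of the challenged design do not
    outweigh the risk of danger inherent in it. Illustrative only."""
    return design_benefits <= design_risks

# A complex product whose design benefits are outweighed by its risks
# is deemed defective under the risk/utility test.
print(risk_utility_test(design_benefits=3.0, design_risks=5.0))  # True
# A simple product at least as safe as expected is not defective
# under the consumer expectation test.
print(consumer_expectation_test(actual_safety=0.9, expected_safety=0.8))  # False
```

The German balancing of danger against the cost of avoidance, discussed next, shares the comparative structure of `risk_utility_test`.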
At first glance, section 3(1) of the German Products Liability Act, which provides that
a product is defective when it ‘does not provide the safety reasonably expected’, seems to
lean towards the reasonable expectation test just mentioned. However, German courts
interpret this provision with regard to design defects as requiring a balancing between

59  Oechsler, n. 50 above, § 3 ProdHaftG, notes 104–6, § 1 ProdHaftG, note 78; Restatement (Third) of Torts, s. 2(a).
60  Oechsler, n. 50 above, note 108; Goldberg and Zipursky, Torts, n. 38 above, 284; Restatement (Third) of Torts, s. 2(b).
61  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 137.
62  Peter A. Alces, 'W(h)ither Warranty: The B(l)oom of Products Liability Theory in Cases of Deficient Software Design' (1999) 87 Cal. L. Rev. 269, at 300–1; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 137.
63  Ibid. 137.
64  See Barker v. Lull Eng'g Co., 20 Cal. 3d 413, 418 (1978).
65  Goldberg and Zipursky, Torts, n. 38 above, 295; Aaron Twerski and James A. Henderson Jr, 'Manufacturer's Liability for Defective Product Designs: The Triumph of Risk-Utility' (2009) 74(3) Brooklyn Law Review 1061, at 1069, 1106.


the dangers flowing from the product in question, on the one hand, and the cost of
avoidance, on the other, moving it close to the risk/utility test. The greater the danger, the
higher the expectation that comprehensive and possibly costly safety measures have been
taken in order to address the risk, and vice versa. A producer has to take only those safety
measures which are objectively necessary for preventing a danger in the circumstances of
the particular case, and objectively fair and reasonable.66 Several factors need to be taken
into account, including, inter alia, the price of the product in question, the risk linked to
the design used (taking into account the probability of occurrence and the severity of the
injuries possible) and the possibility and cost of alternative designs.67
One question arising in this context that has great relevance for highly innovative sec-
tors such as the software-based industries is whether advances in science and technology
occurring after the product in question was put into circulation should be taken into
account when determining if it suffered from a design defect. If the risks connected to
a product are to be assessed in hindsight, taking into account new insights which the
manufacturer could not possibly have had when marketing the product, the result would
be true strict liability. Otherwise, when taking the perspective of the manufacturer at the
time of putting the product into circulation, the focus shifts towards foreseeable risks,
moving the test towards a fault-based one.68 Both German69 and US law lean towards a
rejection of hindsight-based assessments,70 although there appears to be some division
among US courts.71
Finally, missing or insufficient warnings or instructions may also give rise to strict
products liability claims. This category of defects covers the lack of information that would
have made the ordinary use of a product safe. This kind of defect can have particular
importance when a design is dangerous without being defective, as this may give rise to

66  BGHZ 181, 253, note 24.
67  Oechsler, n. 50 above, § 3 ProdHaftG, note 87.
68  Goldberg and Zipursky, Torts, n. 38 above, 296; Michael J. Toke, 'Restatement (Third) of Torts and Design Defectiveness in American Products Liability Law' (1996) 5(2) Cornell Journal of Law and Public Policy 239, Article 5, at 266. For the German law on products with defective designs, see Hein Kötz, 'Ist die Produkthaftung eine vom Verschulden unabhängige Haftung?' in Bernhard Pfister and Michael R. Will (eds), Festschrift für Werner Lorenz zum siebzigsten Geburtstag (J.C.B. Mohr, Paul Siebeck, 1991) 110, at 114–19; Oechsler, n. 50 above, Einl zum ProdHaftG, note 35. However, this is only true when the analysis is limited to manufacturers designing and producing the products themselves. If the scope of the inquiry broadens so as to include retailers, importers and suppliers, the outcome may change. Retailers, for example, may act with utmost prudence when buying goods for sale and still obtain defective products. And a manufacturer may receive important parts of a product from a supplier without having the possibility to verify if the design used is appropriate.
69  Products Liability Act, s. 3(1)(c), provides that one of the factors for determining if a product is deemed defective is the time when it was put on the market. Moreover, s. 1(2) No. 5 excludes liability if 'the state of scientific and technical knowledge at the time when he [the manufacturer] put the product into circulation was not such as to enable the existence of the defect to be discovered'. However, it should be noted that the manufacturer carries the burden of proof for the latter, which creates a presumption that the manufacturer should have known about the risk and possible ways to avoid it that may have materialized post-sale.
70  Wagner, n. 52 above, § 3 ProdHaftG, notes 25–26; Goldberg and Zipursky, Torts, n. 38 above, 291, 295.
71  See Toke, 'Restatement (Third) of Torts and Design Defectiveness', n. 68 above, in particular 264–67.


the need to balance this out through additional warnings and instructions. For example, a car
may have a relatively sophisticated driving assistance system allowing the car to navigate
traffic in simple conditions mostly by itself. However, it may not be able to consistently
react appropriately in complex urban situations. The manufacturer has to give clear
instructions and warnings about such limits. For another example, imagine a personal
assistance robot whose development has not been completed, with the consequence that
its use could create unexpected risks. If the manufacturer would like to have it tested on a
larger scale, it needs to label it as an unfinished version and display clear and conspicuous
warnings when making it available to interested users.

Damages caused    Both US and German products liability law limit the range of dam-
ages that can be recovered under products liability law to personal injuries and damage
to property. Neither allows for the recovery of pure economic loss. However, there are a
number of crucial differences.
First, German law covers damage to property only if a good other than the defective product
itself was damaged, provided that the damaged good 'is of a type ordinarily intended for private
use or consumption, and was used by the injured person mainly for his own private use or
consumption’ (German Products Liability Act, section 1(1), second sentence). US products
liability law, while also only covering damage to items other than the defective product,
recognizes no general restrictions as to the purpose and use of the damaged property.
Second, under German law, several courts and commentators have recognized that
loss of data can amount to property damage.72 US legal doctrine appears to treat data
loss as pure economic damage, therefore excluding data loss from recoverable forms of
damage.73 This is a serious shortfall for the victims of many software-caused damaging
incidents, as data has become an extremely valuable asset whose availability and integrity
are of vital importance to those affected. However, the fact that German products
liability law only covers damage to property used for private purposes limits the practical
reach of its recognition of data loss as recoverable damage, as most cases of data loss with
important economic implications will affect businesses rather than individuals.

Burden of proof   A few words here on the burden of proof under products liability
principles, as it is an important and often decisive factor for the
practical effectiveness of a remedy. Focusing on the factors most relevant to the present
analysis, both under German and US law, a plaintiff has to prove that the product at issue
was defective, that she was injured and that the defect proximately caused her injury.74

72  See OLG Oldenburg, (2011) BeckRS 28832; OLG Karlsruhe, (1996) NJW 200, 201; Robert John, Haftung für künstliche Intelligenz: Rechtliche Beurteilung des Einsatzes intelligenter Softwareagenten im E-Commerce (Verlag Dr Kovac, 2007) 265; Klaus Meier and Andreas Wehlau, 'Die zivilrechtliche Behandlung von Datenlöschung, Datenverlust und Datenzerstörung' (1998) NJW 1585, at 1588; Gerald Spindler, 'IT-Sicherheit und Produkthaftung: Sicherheitslücken, Pflichten der Hersteller und der Softwarebenutzer' (2004) NJW 3145, at 3146; Gerald Spindler, in Heinz Georg Bamberger and Herbert Roth (eds), Kommentar zum Bürgerlichen Gesetzbuch, vol. 2, §§ 611–1296, AGG, ErbbauRG, WEG (3rd edn, C.H. Beck, 2012) § 823, note 55.
73  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 121–22.
74  Additionally, it must also be proved that the item in question actually qualifies as a product and that the defendant sold it.


However, whereas US law demands that the plaintiff prove that the defect existed when
the manufacturer put the product on the market, there is a rebuttable presumption under
German law that this was the case.75 As we have seen above, in manufacturing defect cases,
it is irrelevant if the defendant was at fault. However, in design defect cases, the plaintiff
has to show that the specific risks that have materialized in the harm caused to her had
been foreseeable and could have been avoided by implementing, at reasonable cost, an
available better design option; the situation is similar in warning and instruction defect
cases. This is quite a heavy burden, limiting the effective reach of products liability.

Manufacturer's liability under Civil Code, section 823(1) (Produzentenhaftung)    Intriguingly,
German courts, long before the advent of the Products Liability Directive, developed
a separate form of manufacturers' tort liability (Produzentenhaftung) based on general tort
law, namely section 823(1) of the Civil Code. While this section generally provides for
plain fault-based liability, the manufacturer’s liability regime is considerably less burden-
some for plaintiffs. It shares many traits with the regime of the Products Liability Act. A
number of differences exist, however.
To start with, in contrast to the regime of the Product Liability Act, manufacturer’s
liability principles under section 823(1) enable manufacturers to bring evidence that all
necessary precautions had been taken, with the defect consequently not being reasonably
avoidable, thereby raising the bar for victims to bring a successful claim. This may be
important not only for manufacturing defects but also for design defect cases where a
supplier was responsible for the faulty design.76
Other differences broaden the scope of liability. First, section 823(1) allows a victim to
recover damages for harm caused to a considerably broader range of so-called absolute
rights, which includes, inter alia, life, bodily integrity, health, property, the general right of
personality, the right of an established and operating business, and intellectual property.
Thus, section 823(1) would allow, for example, for damages claims based on privacy
breaches or the unauthorized use of intellectual property rights.77 Second, damage to
property, both personal and real, that is used for professional or commercial
purposes is recoverable under section 823(1). Third, section 823(1) (in conjunction with
section 253(2)) also allows for the recovery of intangible damages.
Finally, under section 823(1), certain market and product monitoring duties have been
developed that are not recognized under the Products Liability Act. The breach of those
duties can give rise to liability under section 823(1). Manufacturers have the duty to moni-
tor the market and collect information on incidents and risks related to the use of their
products. Notably, this applies even when the products were not defective when put into
circulation. The monitoring duty should incentivize the manufacturer to take appropriate
steps for protecting the public against dangers surfacing over time. Apart from the influ-
ence such information may and should have on future decisions on the design of products
and their manufacturing processes, information gathered through market monitoring

75  Product Liability Act, s. 1(2) No. 2.
76  However, it should be noted that manufacturers rarely succeed in proving that all necessary precautions had been taken.
77  However, in the fields of both privacy and intellectual property, special provisions exist.


may also lead to a duty to warn the public against new or newly discovered risks. If such
a duty arises, the scope of the measures to be taken depends on the probability that the
risk may materialize, the value of the interests at stake, and the possibilities to reach
the persons likely to be affected by the danger.78 While these duties are well established,
the question whether manufacturers could also face a duty to recall or repair products
in particular cases is far from settled. For cases where a defect existed when the product
was put on the market, such a duty can quite comfortably be established.79 When issues
arise without the product having been initially defective, however, it is far more difficult to find
a reasonable basis in tort law for imposing a duty upon a manufacturer to take back or
repair the products in question.80
In the United States, the same issues are hotly debated, mostly with inconclusive
results.81 However, the Restatement (Third) on Torts has addressed the issue of warnings
and recalls in its sections 10 and 11. Similar to the approach taken in Germany, warnings
appear to be warranted irrespective of the existence of a defect at the time of the sale of
the product in question (section 10(b)(1)). A duty to recall a product, however, requires
that a governmental directive requiring a recall had been issued or that the manufacturer
itself had committed to a recall (section 11(a)). Both warning and recall duties apply a
reasonable person standard, and are thus negligence-based.82
These duties, and the possibility to recover damages in case of breaches, are highly
relevant for a dynamic sector like the software and artificial intelligence business.
Computer programs, technological environments, hardware environments, forms of use,
etc. constantly evolve. New vulnerabilities and other risks surface every day. Bugs need to
be repaired on an ongoing basis. A duty to warn users, as well as to remedy severe issues
by providing updates (as in the car example given above), appears to be a reasonable way
of protecting the public's interest.83

3.1.3  Service providers


Services in which no physical object is produced and transferred, and no software is
installed on the customer's computer, are by contrast not covered by products liability regimes.84
In such cases, the general principles on fault-based liability apply. Thus, the victim has
to prove that the injury suffered was proximately caused by a breach of a duty of care

78  Wagner, n. 52 above, § 823, notes 646–47; Krause, n. 52 above, Anh III § 823, notes 25–27.
79  Wagner, n. 52 above, § 823, note 653; Krause, n. 52 above, Anh III § 823, note 28.
80  Idem.
81  See American Bar Association, Post-Sale Duty to Warn, A Report of the Products Liability Committee (2004), available at www.productliabilityprevention.com/images/5-PostSaleDutytoWarnMonograph.pdf (accessed 18 February 2018).
82  J. David Prince and Kenneth Ross, 'Post-Sale Duties: The Most Expansive Theory in Products Liability' (2009) 74 Brook. L. Rev. 963, at 968.
83  Although beyond the scope of this chapter, it should be noted that the field of duties to warn and to recall is to a large extent governed by regulatory requirements that may often require further-reaching actions than what the above standards demand in order to avoid tort liability.
84  Wagner, n. 52 above, § 2 ProdHaftG, note 16; Johannes Hager, in J. von Staudinger, Kommentar zum Bürgerlichen Gesetzbuch mit Einführungsgesetz und Nebengesetzen, vol. 2, Recht der Schuldverhältnisse, §§ 823 E-I, 824, 825 (Unerlaubte Handlungen 1 – Teilband 2) (Sellier, de Gruyter, 2009) § 823 F, note 6.

Tim Engelhardt - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:56AM
via New York University

WAGNER_9781785367717_t.indd 286 13/12/2018 15:25


Who pays? On artificial agents, human rights and tort law  287

on the part of the provider. For example, imagine an e-health application that assists
in diagnosing certain conditions and determining the appropriate medication. If this
application was installed on a doctor’s computer and the output was in a particular case
erroneous, causing severe harm to a patient, the patient could bring claims against the
producer of the application under products liability (and, in Germany, manufacturer’s
liability) principles. Assume now that the program, instead of being installed at the doctor’s, was
run on the servers of the application provider. Here, the applicable rules would be those
of negligence. These different outcomes may appear arbitrary and at times inadequate
considering that the underlying rationale for products liability law may apply to services
as well: the victim cannot know how the product or service has been designed, who was
involved, on what infrastructure it is run, with what safety precautions, how it was tested,
etc. However, the practical outcomes may often not be that different, as in negligence cases
courts can apply certain alleviation-of-proof doctrines.85 For example, under the res ipsa
loquitur doctrine, US courts can presume fault when the event that injured the plaintiff
was of the kind that typically does not occur without carelessness.86 In that case, it is up
to the defendant to show that other circumstances caused the event to happen. In the
e-health application case, for example, the provider could bring forward evidence of being
hacked despite the tightest safety precautions.
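Purely by way of illustration, the burden-shifting logic just described can be sketched as a small decision procedure. The class and function names, and the reduction of the doctrine to two boolean questions, are simplifying assumptions of this sketch, not a restatement of any court’s actual test.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NegligenceClaim:
    # Does the injuring event ordinarily not occur without carelessness?
    injury_typically_requires_carelessness: bool
    # Defendant's rebuttal evidence, e.g. proof of an external hack.
    defendant_rebuttal: Optional[str] = None

def fault_presumed(claim: NegligenceClaim) -> bool:
    """True if fault is presumed against the defendant and stands unrebutted."""
    if not claim.injury_typically_requires_carelessness:
        # No presumption arises: the plaintiff must prove breach of duty as usual.
        return False
    # The presumption arises; the defendant may rebut it by showing that other
    # circumstances caused the event (the e-health provider proving it was
    # hacked despite the tightest safety precautions).
    return claim.defendant_rebuttal is None
```

On this sketch, the provider’s evidence of having been hacked defeats the presumption, leaving the victim to prove negligence in the ordinary way.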

3.2  Autonomous Artificial Agents

When we turn to autonomous artificial agents, the matter becomes increasingly complex.
Their behaviour is predictable only to a limited degree. As they learn from
their experiences and react flexibly to their environment, every single autonomous arti-
ficial agent will develop different goals, different strategies, and consequently a different
behaviour both in terms of its actions and reactions. This puts pressure on the concept of
foreseeability, crucial for fault-related liability.

3.2.1 User/consumer

Fault-based    When looking at a potential fault-linked basis for the liability of users of
autonomous agents, the situation does not appear to be fundamentally different from
that discussed above. The mere fact of using an autonomous artificial agent should not,
in normal circumstances, give rise to a claim of negligent behaviour. In particular, it
cannot be argued that using an autonomous agent amounts per se to an act of negligence.
Additional factors must come into play for justifying such a conclusion.
Such factors could be the same as or similar to those discussed above.87 But in the
context of using autonomous artificial agents, other aspects may be relevant. First of
all, artificial agents may depend on their operators or users for training and configuration,
and even for the fulfilment of their social needs, as Pagallo puts it;88 they ‘are best not thought of as

85  See Wagner, n. 52 above, § 823, note 325; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 139–40.
86  Goldberg and Zipursky, Torts, n. 38 above, 153.
87  See section 2.1.1.
88  Pagallo, The Laws of Robots, n. 3 above, 123–24.



288  Research handbook on human rights and digital technology

complete when they begin their operational “careers”’.89 Thus, there are factors the
operators or users are in control of, such as provision of insufficient data, inadequate
training environments, premature productive use, etc., that could all contribute to faulty
behaviour of an agent and consequently be the basis of a negligence claim.90
Another aspect could be supervision. An agent that is expected to act autonomously
(and thus unpredictably) may require different and potentially more frequent supervision
than other tools routinely employed.91 This is true especially for the earlier stages of
use for particular purposes.
However, one may wonder about the precise nature and scope of a duty to supervise
and even interfere with an artificial agent’s choices and actions. After all, it is envis-
aged that artificial agents will be used not only for convenience, but also because we
consider their abilities potentially far superior to those of humans. While this problem
of interaction and co-action of humans and machines has existed for years now, it
gains in weight as agents develop skills and insights developers and users cannot be
fully aware of. This makes it far more difficult to judge the actions of such agents in
their environment and consequently to make an informed decision as to whether and
how to interfere. This affects fundamentally the capacity to clearly define duties of care
in specific situations.
Looked at from a different angle, the same issue of unpredictably evolving behaviour
also impacts the examination of causation. As discussed above, proximate causation
requires, inter alia, that it must have been foreseeable that the breach of duty would
eventually lead to the injuries at issue. Often enough, it will be possible to argue that due
to an agent’s constant learning, communicating and adapting, a particular chain of events
could not possibly have been foreseen.92
These difficulties lead us directly into the realm of strict liability and other forms of
expanded liability.

Strict(er) liability    When it comes to autonomous artificial agents, a number of scholars
discuss the idea of applying stricter liability standards, at least to some types.93 The
liability regimes for dangerous goods or activities and for acts committed by animals and
employees are often referred to as possible models that could be extended.

89  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 138; Pagallo, The Laws of Robots, n. 3 above, 72, 123–24.
90  See Pagallo, The Laws of Robots, n. 3 above, 124 and European Parliament, Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL), para 56: ‘the greater a robot’s learning capability or autonomy, and the longer a robot’s training, the greater the responsibility of its trainer should be.’
91  See Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 132–35.
92  See Curtis E.A. Karnow, ‘Liability for Distributed Artificial Intelligences’ (1996) 11 Berkeley Tech. L.J. 147, at 148; Curtis E.A. Karnow, ‘The Application of Traditional Tort Theory to Embodied Machine Intelligence’ in Ryan Calo, Michael Froomkin and Ian Kerr (eds), Robot Law (Edward Elgar, 2016) 73–74; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 137.
93  Pagallo, The Laws of Robots, n. 3 above, 121–22; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 127–28; Thomas Schulz, Verantwortlichkeit bei autonom agierenden Systemen: Fortentwicklung des Rechts und Gestaltung der Technik (Nomos, 2015) 146–47.


(1) Are autonomous artificial agents generally dangerous? As discussed above, users
of most kinds of items are not subject to any strict liability regime. As one exception,
we pointed to the strict liability of the owners of cars. The reason for imposing no-fault
liability regimes in these cases is the increased risk inherent in the operation of those
devices. It is worth considering if autonomous artificial agents could fall into this category
of dangerous products. Indeed, for new technologies, Posner has pointed out that:

[n]ew activities tend to be dangerous because there is little experience with coping with whatever
dangers they present. For the same reason, the dangers may not be avoidable simply by taking
care—yet the fact that the activities are new implies that there probably are good substitutes for
them. The best method of accident control may therefore be to cut back on the scale of the new
activity—to slow its spread while more is learned about how to conduct it safely.94

However, considering the breadth of possible autonomous artificial agents and the
broad variety of conceivable fields of operation, it appears unreasonable to claim that all
of them could be classified as particularly dangerous or even dangerous at all. Thus, only
for certain agents would it be justified to create a non-fault liability system based on the
heightened risk connected to their operation.95 In these cases, mandatory insurance cover-
age would be an important factor for ensuring that victims can have recourse against a
solvent party, while protecting owners or operators from ruinous consequences of harms
caused by their agents.
(2) Liability of an animal keeper. Several authors argue that the regime of liability
imposed on the keepers of animals could present another suitable model for addressing
liability issues arising from the use of artificial agents.96 The combination of a capacity to
cause serious harm with unpredictability and some sort of volition may indeed
look suggestive enough to draw parallels between animals and artificial agents.
Under US common law, strict liability is imposed on the owner of animals that are
likely to roam for any damage caused by their trespassing. Strict liability in other cases is
generally limited to cases where the keeper of the animal knows or has reason to know that
the animal in question has an abnormal tendency to specific dangerous behaviour and the
injury results from that danger.97 Thus, there is an element of foreseeability in the liability
rule, although the keeper is even liable if she was prudent in the extreme, once the knowl-
edge element is established. Importantly, liability is imposed only for harm resulting from
the known dangerous trait. The law thus imposes strict liability for keeping a domestic
animal once it has become apparent that the animal may act abnormally dangerously.

94  Richard A. Posner, Economic Analysis of Law (9th edn, Wolters Kluwer, 2014) 208.
95  Karnow, ‘The Application of Traditional Tort Theory to Embodied Machine Intelligence’, n. 92 above, 68; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 130–31. One could think of weapon systems or security guard robots.
96  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 130–31; see also Pagallo, The Laws of Robots, n. 3 above, 72, 125.
97  Marshall v. Ranne, 511 S.W2d 255 (Tex. 1974); Jividen v. Law, 194 W.Va 705, 461 S.E.2d 451 (1995); Restatement (Second) of Torts, s. 509, which reads: ‘Except as stated in § 517, a possessor of a domestic animal which he has reason to know has dangerous propensities abnormal to its class, is subject to liability for harm caused thereby to others, except trespassers on his land, although he has exercised the utmost care to prevent it from doing the harm.’ However, in many states, statutes and ordinances impose strict liability rules regarding any injuries caused by dog bites.


Keepers and harbourers of wild animals, by contrast, are strictly liable for any injuries
caused by those animals.98 The liability is, however, limited to injuries specifically caused
by the wild or abnormally dangerous character of the animal.99 Here, the law thus
addresses the particular ‘wild’ characteristic that creates an abnormally great danger.
The German approach is considerably different. Under the German Civil Code, section
833, the keeper of an animal is, in principle, liable for any damage her animal causes to
someone else’s body, health or tangible property. No negligence is required. However,
there are a number of carve-outs. First, section 833 only applies when the damage was
a consequence of an animal-typical danger.100 Therefore, damage caused by an animal
under the guidance of its keeper would not lead to strict liability under section 833.101
Second, section 833, second sentence, provides for an immunity for damage caused
by domestic animals used for professional or commercial purposes. In these cases, the
owner of the animal in question can avoid liability by showing either that she exercised
appropriate care or that the damage would have occurred even if such care had been
exercised.
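The structure of section 833 just described – no-fault liability, limited by the animal-typical-danger requirement and the second-sentence immunity for working animals – can be rendered, again purely for illustration, as a schematic decision rule. The field names and the reduction of the carve-outs to boolean conditions are simplifying assumptions of this sketch.

```python
from dataclasses import dataclass

@dataclass
class AnimalHarm:
    animal_typical_danger: bool       # harm flowed from the animal-specific risk
    commercial_domestic_animal: bool  # kept for professional/commercial purposes
    keeper_exercised_due_care: bool   # relevant only to the second-sentence defence

def keeper_liable(harm: AnimalHarm) -> bool:
    if not harm.animal_typical_danger:
        # E.g. damage caused by an animal acting under its keeper's guidance.
        return False
    if harm.commercial_domestic_animal and harm.keeper_exercised_due_care:
        # Second-sentence immunity for working animals; proof that the damage
        # would have occurred despite due care is treated the same way here.
        return False
    return True  # otherwise: no-fault (strict) liability
```

The contrast with the luxury animal is visible in the second branch: for a non-commercial animal, proof of due care is no defence.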
Chopra and White argue that the (US) principles of liability for domestic animals could
be applied to artificial agents that lack the level of autonomy, exposure and involvement
with their environment and their owner’s or user’s affairs that would justify applying
the principles of liability for employees, which we will tackle below.102 They refer to
particularly dangerous agents, that is, agents where harm is likely to materialize should
they ‘escape from physical or virtual boundaries’, listing as examples malware, such
as viruses, worms, Trojan horses, and robotic weapon systems.103 In these cases, strict
liability would be warranted. The same should apply in respect of agents which are normally
not dangerous, but show signs of risky behaviour. In such cases, they argue, the owner or
user should have a duty to take additional precautionary steps.104 However, under US law,
showing a propensity for harming others would lead to strict liability, regardless of any
actions the owner or user has taken to prevent harmful outcomes. Effectively, owners and
users of artificial agents would have only two options once their agent had given reason
to recognize dangerous tendencies: (1) giving up the use entirely; or (2) continuing the
use and accepting full liability. It is doubtful whether this would always create the right
incentives, as precautionary measures could not prevent being held liable.105
The German exception from strict liability for domestic animals kept for professional or
commercial purposes provides an interesting contrast. It was introduced in 1908 in order

98  See Irvine v. Rare Feline Breeding Ctr., Inc., 685 N.E.2d 120 (Ind. Appl. 1997); American States Ins. Co. v. Guillermin, 108 Ohio App.3d 547, 671 N.E.2d 317 (1996).
99  Restatement (Second) of Torts, s. 507(2).
100  BGHZ 67, 129, 130; Wagner, n. 52 above, § 833, note 9.
101  RGZ 50, 180, 181; BGH VersR 1952, 403; 1966, 1073, 1074; 2006, 416.
102  See section 2.2.1 (3).
103  Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 130–31.
104  Ibid. 131. Pagallo, however, argues that users in the foreseeable future will often not be in a position to recognize dangerous propensities of their agents, The Laws of Robots, n. 3 above, 126.
105  However, insurance schemes could provide incentives for taking precautions, if the insurance paid out only when reasonable precautions had been implemented. Moreover, the economic argument could support decreasing the level of use of such agents by imposing strict liability as a more efficient way of avoiding harm, see Posner, Economic Analysis of Law, n. 94 above.


to address the concerns of many small farmers who had no choice but to keep domestic
animals and therefore feared the strict liability regime of s. 833, first sentence.106 The law
thus distinguishes between animals that are necessary for earning a livelihood and other
animals that may be just a luxury. Similar distinctions could be made in the field of artificial
agents. Those who use artificial agents because their business requires it could face less strict
liability than those who decide to rely on artificial agents ‘just for fun’. However, considering
the availability of highly sophisticated and diversified insurance options, as well as the pos-
sibility to price in the risk, it is doubtful that privileging commercial forms of applications
of artificial agents would be an adequate way of shaping the liability landscape.107
(3) Employees. The US common law doctrine of respondeat superior imposes on
principals strict liability for their agents’ tortious acts. It arises, e.g., when the agent acts
within the scope of its actual authority (Restatement (Third) of Agency, s. 7.04); when the
agent is an employee and acts within the scope of her employment (s. 7.07); when the
agent is an independent contractor discharging non-delegable duties (s. 7.06); or when the
agent commits a tort while acting with apparent authority (s. 7.08). The principal in these cases has
no way of avoiding liability by proving that a sufficient level of care had been exercised.
Moreover, s. 7.05(1) establishes liability of the principal for an agent’s conduct ‘if the harm
was caused by the principal’s negligence in selecting, training, retaining, supervising, or
otherwise controlling the agent’.
German law takes a distinctively different approach. Under Civil Code, s. 831(1),108
the liability of the principal for wrongful acts of her employees still relies on an aspect of
breach of duty, albeit in a modified fashion. The relevant duty is here to appropriately
select and supervise the employees, a duty that s. 7.05 of the Restatement (Third) treats
as a basis for a principal’s liability that is independent from the others mentioned above.
An element of stricter liability is introduced by presumptions that (1) the principal in fact
failed to carry out the selection and supervision in a sufficiently diligent way; and (2) that
this caused the eventual damage. Thus, in order to avoid being held liable for wrongful
conduct of employees, a principal must prove that he had taken all reasonably necessary
steps to choose a prudent employee and to supervise her activities in an adequate fashion.
Under s. 831 of the Civil Code, the liability of the principal is thus construed as a liability
for the principal’s failure to adequately organize her own sphere, even if the presumptions
above may in practice lead to outcomes that are similar to those of a strict liability regime
in its narrow sense.109
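The contrast between the two regimes described above can be reduced, again purely for illustration, to a pair of schematic rules. The function names and boolean parameters are assumptions of this sketch, and the many nuances of both doctrines (scope of employment, the various Restatement bases, the presumed causation element) are deliberately omitted.

```python
def liable_respondeat_superior(employee_tort_in_scope: bool) -> bool:
    # US model: strict liability for in-scope employee torts; proof of
    # careful selection and supervision is no defence.
    return employee_tort_in_scope

def liable_s831(employee_tort: bool,
                proves_diligent_selection_and_supervision: bool) -> bool:
    # German model: fault in selection/supervision (and its causal role)
    # is presumed, but the principal may rebut the presumption by proving
    # diligence, escaping liability altogether.
    return employee_tort and not proves_diligent_selection_and_supervision
```

The rebuttal parameter marks the doctrinal difference: under s. 831 the principal can ‘get off the hook’, whereas under respondeat superior she cannot.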

106  See Detlev W. Belling and Christina Eberl-Borges, in J. von Staudinger, Kommentar zum Bürgerlichen Gesetzbuch mit Einführungsgesetz und Nebengesetzen, vol. 2, Recht der Schuldverhältnisse, §§ 830–838 (Sellier, de Gruyter, 2002) § 833, notes 3 and 6.
107  Indeed, German commentators are mostly sceptical about Civil Code, s. 833, second sentence; Belling and Eberl-Borges, n. 106 above, note 7; Wagner, n. 52 above, § 833, note 5.
108  Civil Code, s. 831(1) reads in its English translation: ‘A person who uses another person to perform a task is liable to make compensation for the damage that the other unlawfully inflicts on a third party when carrying out the task. Liability in damages does not apply if the principal exercises reasonable care when selecting the person deployed and, to the extent that he is to procure devices or equipment or to manage the business activity, in the procurement or management, or if the damage would have occurred even if this care had been exercised.’
109  For harm done in a contractual relationship, by contrast, Civil Code, s. 278, provides for a strict liability regime. This is based on the idea that contracts form special relationships


Treating artificial agents analogously to employees may be most appropriate and most
useful in cases where artificial agents have such a high level of autonomy that it makes
them comparable to persons employed to perform more or less clearly defined tasks on
behalf of the principal. A toy dog, mainly designed to play in the house with the owner’s
children, would fit this description far less well than, say, a personal assistance robot that
can freely move from place to place and interact with a variety of humans and other artificial
agents to organize the owner’s life and assist him in various tasks. In the latter case, one
could quite easily reactivate the rationale that initially informed the idea of more or less
strict liability for employees’ tortious conduct. If the stricter treatment of principals for
their employees’ actions was justified as a reaction to widespread division of labour, the
same justification could be valid as regards the acts of truly autonomous agents used for
the purposes of the principal.
One example to illustrate this: if we do not apply the principles of the liability for
employees’ torts, the liability of an agent’s operator depends on her own negligence.111 If
an operator has chosen her agent well, properly trained, configured and monitored it – in
short, taken all necessary care – no liability would be imposed on either the operator
or the manufacturer, if we assume that the agent was not initially defective. But this
would mean that even if the agent had caused harm by acting unnecessarily dangerously
in the particular circumstances, in a way that would be considered reckless even by human
standards, the victim would go uncompensated. Admittedly, under German law, the
outcome may look different; but even there, the application of Civil Code, s. 831 standards
would be advantageous to victims, who often will struggle to access any information on
the behaviour of the operator, let alone be able to prove any failure to take reasonable care.
However, difficult questions arise. If the principal’s liability rests on the condition that
the agent has committed a tortious act, what standard of care should be applied to artificial
agents? Should the traditional notion of a reasonably prudent person be the benchmark,
or would it not be necessary to adapt the standard to the fact that the actor was an artificial
agent whose abilities may go beyond those of humans? In the latter case, one could ask, for
example, how an agent designed for the same type of tasks, constructed and trained according
to the state of the art, applying all relevant safety guidelines, regulations and recommenda-
tions, would have acted in the situation in question (prudent state of the art agent). But
this approach would not be workable in cases where the relevant class of artificial agents
lags behind humans in terms of certain relevant skills. It would effectively establish a safety
standard falling short of the duty of care humans would have to exercise, thereby incentiv-
izing the use of badly designed agents.112 One way of designing a proper standard of care
could be to refer generally to the standard of a prudent state of the art agent, but add as a
minimum the reasonably prudent person standard well established in tort law.
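The composite benchmark proposed here – the prudent state of the art agent, floored by the reasonably prudent person – can be stated schematically. The numeric ‘care levels’ are a purely hypothetical device of this sketch for expressing that the stricter of the two standards governs.

```python
def required_care(state_of_the_art_agent_level: float,
                  reasonable_person_level: float) -> float:
    # The applicable standard is the higher of the two benchmarks, so that
    # immature agent technology cannot lower the safety bar below the human
    # baseline, while agents superior to humans raise it.
    return max(state_of_the_art_agent_level, reasonable_person_level)
```

Put differently: superior agents raise the bar, while lagging agents cannot lower it, which removes the incentive to deploy badly designed agents noted above.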

between the contractual partners, in which one side should not bear the risk of the other side’s
using third parties for discharging their obligations, see Detlev W. Belling, in J. von Staudinger,
Kommentar zum Bürgerlichen Gesetzbuch mit Einführungsgesetz und Nebengesetzen, vol. 2, Recht der
Schuldverhältnisse, §§ 830–838 (Unerlaubte Handlungen 3) (Sellier, de Gruyter, 2008) § 831, note 5.
110  See Pagallo, The Laws of Robots, n. 3 above, 129.
111  We neglect the option of strict liability for particularly dangerous things like cars.
112  However, the victim may still be able in such cases to bring products liability claims against the manufacturer.


This last observation draws attention to another crucial point. A principal’s liability
for employees rests on the fact that the employees are fully independent persons in their
own right, able to commit tortious acts. Thus, without specific rules attributing their
actions to their principals, the latter would generally not face liability for their conduct,
if their tortious acts were committed independently from any direct involvement of
their principal. This is different when it comes to artificial agents, as they are so far
treated only as things. Thus, the person in control of them would carry responsibility
for them if she failed to exercise due care, for example, when choosing the agent for
a particular purpose, maintaining, training or monitoring it. The principles of the
liability for employees’ torts may only meaningfully be applied if at the same time it
is recognized that the artificial agents in question are more than just things, namely
autonomous, independent actors with their own standards of behaviour. Whether this
means that they could or should be accorded legal personhood of some sort will be
discussed further below.113
Finally, the German approach of leaving the principal a chance to ‘get off the hook’ by
proving that she had exercised reasonable care when selecting, monitoring and supervising
the agent, could also be a fitting way of dealing with autonomous artificial agents, as
it would incentivize the owners and users of such agents to exercise reasonable care when
choosing, configuring, training and monitoring such agents.114

3.2.2 Manufacturer
Manufacturers’ liability regimes also face tremendous complications when attempting
to capture autonomous artificial agents’ harmful conduct. To start with, increasing
autonomy, with its loosening of human control, also entails an increased need for design
safety precautions in order to prevent certain risks arising and materializing. And at
the same time, the unpredictability of autonomous artificial agents’ development and
behaviour creates problems as regards the determination of defects and the assessment
of causation, both of which are compounded by obstacles to finding evidence.115
Judging whether an autonomous artificial agent was defective when put into
circulation poses particular challenges. With respect to manufacturing
defects, the self-modifying aspect of autonomous artificial agents can make it highly
difficult to show the initial state of the agent,116 which may result in serious problems in
demonstrating that the particular agent at the outset deviated from the specifications of

113  See Section 3.
114  As long as it does not seem preferable to decrease the level of adoption by imposing strict liability as a more efficient way of avoiding harm, see Posner, Economic Analysis of Law, n. 94 above.
115  See European Parliament, Resolution of 16 February 2017, n. 90 above, recital AL, according to which the current legal framework ‘would not be sufficient to cover the damages caused by the new generation of robots, insofar as they are equipped with adaptive and learning abilities entailing a certain degree of unpredictability in their behaviour’. One solution that has been suggested is the creation of a regime of joint strict liability of manufacturers and operators, e.g., Malte-Christian Gruber, ‘Gefährdungshaftung für informationstechnologische Risiken: Verantwortungszurechnung im “Tanz der Agenzien”’ (2013) 46(4) Kritische Justiz 356, 369.
116  The dependence of many artificial agents on training and configuration discussed above may also blur the contours of a causal link between an initial defect of the agent and the damage done, see Pagallo, The Laws of Robots, n. 3 above, 124.


the entire line. As far as design defects are concerned, both under the risk-utility test and
the German balancing test discussed above, the question is whether the benefits of the particular
design used outweigh the risk that eventually materialized in the damage. If we take
a constantly adapting and learning agent, possibly with its own strategic goal-setting,
it can become very difficult, if not impossible, to make such a determination. If the
self-modifications undergone have eventually resulted in damage, one would have to ask
whether another design could have prevented these modifications, but without losing
the benefits (or at least the main benefits) of the capacity of learning and adapting.
Furthermore, these tests take the perspective of the time of the initial distribution of the
agents, thus focusing on foreseeable risk, making it all the more difficult to demonstrate
that a reasonable alternative design existed.
This problem would probably have to be addressed by focusing on the measures a
manufacturer could have taken to prevent certain outcomes of self-modification, with the
idea in mind that whenever certain outcomes materialize, the design must have been defec-
tive, regardless of any self-modification, unless other causes can be proven.117 Karnow,
for example, suggests that robots should have an inbuilt common-sense buffer.118 This
could be useful, especially with respect to particularly severe outcomes, such as killing or
severely injuring a person or assuming unhealthy financial obligations or risks. Moreover,
precautionary measures should make it possible to interfere with and stop an artificial
agent, if needed. This, as well as the manufacturer’s duty to prevent, to a reasonable
extent, possible misuses of its products, stresses the need for adequately designed human-
machine interfaces.119 Also, ensuring interoperability with a large set of other agents
and technical infrastructure should gain in importance for designing safe autonomous
artificial agents.120
The interactivity of autonomous artificial agents and the prevalence of multi-agent
systems121 add another dimension of unpredictability and untraceability of
malfunctions. Tracing communications and reactions may often prove impossible,
making it unfeasible to determine which of the many agents was defective, if there was
a defect at all, and whether the defect was the proximate cause of the
damage suffered.122 The causation issue could in some cases potentially be addressed
in a way analogous to German Civil Code, s. 830(1), second sentence. According to
that provision, if it cannot be established which of several persons involved caused the
damage, each of them is deemed responsible for the damage.123 However, this approach

117  See Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 137.
118  Karnow, ‘The Application of Traditional Tort Theory to Embodied Machine Intelligence’, n. 92 above, 75–76.
119  Schulz, Verantwortlichkeit bei autonom agierenden Systemen, n. 93 above, 164.
120  Ibid.
121  For more on multi-agent systems, see Karnow, ‘Liability for Distributed Artificial Intelligences’, n. 92 above, and Mireille Hildebrandt, Smart Technologies and the End(s) of Law (Edward Elgar, 2015) 26–27.
122  See Karnow, ‘Liability for Distributed Artificial Intelligences’, n. 92 above, 181–92; and Karnow, ‘The Application of Traditional Tort Theory to Embodied Machine Intelligence’, n. 92 above, 73–74.
123  The textbook case for that is a pub brawl, after which nobody can tell with certainty which of the many fighters caused what damage.

Tim Engelhardt - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:50:56AM
via New York University

WAGNER_9781785367717_t.indd 294 13/12/2018 15:25


Who pays? On artificial agents, human rights and tort law  295

only creates a presumption of causation; all other elements of liability must have been
present. Thus, a victim would still have to prove that the agents involved were defective,
with the problems discussed above.124 More importantly, malfunctions of multi-agent
systems with disastrous consequences may often occur without any of the participating
agents being defective at all.125
One category of products liability that could help address the unpredictable nature
of autonomous artificial agents would be the doctrine of a breach of post-sale duties,
such as warning and possible recall and repair duties. As seen above, warning duties are
recognized for products regardless of their initial defectiveness. If a manufacturer learns
about certain unforeseen dangerous forms of behaviour of an autonomous artificial agent
it had brought into circulation, whether through active information gathering, as required under German law, or through passively received customer complaints, as suffices in the United States, it could have a duty to issue warnings in a reasonable, effective way,
provided the risk of damage is serious enough. If the behaviour was made possible by
an initial defective condition of the autonomous artificial agent, the manufacturer could
also be obliged to take the necessary steps to repair the defect and remedy the resulting
risky behaviour, for example, by updating the software or giving instructions on how to
reconfigure the autonomous artificial agent. On the other hand, if entirely new risks have
evolved that could not have been foreseen on the basis of the state of the art and science at the time the product entered the market, it is difficult under current tort and products liability law to construct a duty to mitigate those risks, for example with an
update. At least, the manufacturer would not be obliged to do so free of charge.126
Post-sale duties could have particular value in the context of multi-agent systems.
German law recognizes that monitoring duties may also encompass a duty to keep a
watchful eye on combinations of the product in question with other products on the
market and take the appropriate steps if new risks of some weight are perceived.127 This
duty translates quite easily into the multi-agent environment, obliging manufacturers to monitor how the product is being used, and whether there are any particular
compatibility or other problems resulting from the interaction with other products in the
market.128

124  An inspiring early analysis of liability issues posed by multi-agent systems was delivered by Karnow, 'Liability for Distributed Artificial Intelligences', n. 92 above.
125  See Karnow, 'The Application of Traditional Tort Theory to Embodied Machine Intelligence', n. 92 above, 73, quoting Richard I. Cook, How Complex Systems Fail, available at http://web.mit.edu/2.75/resources/random/How%20Complex%20Systems%20Fail.pdf: 'Because overt failure requires multiple faults, there is no isolated "cause" of an accident. There are multiple contributors to accidents. Each of these is necessary but insufficient in itself to create an accident. Only jointly are these causes sufficient to create an accident. Indeed, it is the linking of these causes together that creates the circumstances required for the accident. Thus, no isolation of the "root cause" of an accident is possible. The evaluations based on such reasoning as "root cause" do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes' (accessed 18 February 2018).
126  Schulz, Verantwortlichkeit bei autonom agierenden Systemen, n. 93 above, 172.
127  Wagner, n. 52 above, § 823, note 646; Hartwig Sprau, in Palandt. Bürgerliches Gesetzbuch mit Nebengesetzen (74th edn, C.H. Beck, 2015) § 823, note 175.
128  Schulz, Verantwortlichkeit bei autonom agierenden Systemen, n. 93 above, 172.



296  Research handbook on human rights and digital technology

3.2.3  Service providers


Service providers face very similar challenges when using autonomous artificial agents.
However, due to the continuous character of services, duties of monitoring and, if necessary, adapting or even removing agents carry greater weight than in the set of duties manufacturers must comply with. Also, the doctrines of liability for
animals or vicarious liability could play a role.129

4.  LEGAL PERSONHOOD OF ARTIFICIAL AGENTS

Many of the issues described in the course of this chapter originate in the independence of
many agents’ decision-making from any direct human control or interference. Indeed, new
forms of artificial agents are not comparable to any other things created by humankind
before, in that they sense and react to their environment, learn from their experiences, set
their own goals and strategies and change their environment, all free from human guidance. This may warrant not treating them like other things. Does this mean that
at least the most sophisticated form of autonomous artificial agents should be accorded
some sort of independent legal status, legal personhood?
The idea of establishing some form of legal personhood or independent legal status for
artificial agents has been circulating for a couple of decades now.130 Part of the motivation behind such suggestions lay in the frictions created by the use of networked computers with regard to contract formation,131 attribution of knowledge and related issues. But a
special status for autonomous artificial agents could possibly also help address problems
relating to extra-contractual liability.
Recall the example of a rogue agent briefly discussed above in the context of vicarious
liability for employees. We observed that applying vicarious liability principles would be
an adequate way to address the liability issue where the manufacturer and the operator
are without blame while the agent causes harm by acting in a way a prudent state-of-the-art agent would not. However, the very fact that one necessary element of vicarious liability is a tortious (that is, an intentionally or negligently wrongful) act of the employee,

129  See Sections 2.2.1 (2) and (3).
130  See Leon E. Wein, 'The Responsibility of Intelligent Artifacts: Towards an Automation Jurisprudence' (1992) Harvard Journal of Law and Technology 6, at 103–54; Karnow, 'Liability for Distributed Artificial Intelligences', n. 92 above, 189, 191; Steffen Wettig and Eberhard Zehendner, 'The Electronic Agent: A Legal Personality under German Law?' in LEA, The Law and Electronic Agents, Proceedings of the Second Workshop, 24 June 2003, in Connection with the ICAIL 2003 Conference (Ninth International Conference on Artificial Intelligence and Law), Edinburgh, Scotland (Oslo: Institutt for rettsinformatikk, 2003) 97–112; Malte-Christian Gruber, 'Gefährdungshaftung für informationstechnologische Risiken', n. 115 above, 370; Marlena Jankowska, Mirosław Pawełczyk and Marcin Kulawia, 'Legal Personality and Responsibility: A Tale About How a Robot Does It Alone in Life' in Marlena Jankowska, Mirosław Pawełczyk and Marcin Kulawia (eds), AI, Law, Philosophy and Geoinformatics (Wydawnictwo Polskiej Fundacji Praw Konkurencji i Regulacji Sektorowej Ius Publicum, 2015) 73–89; European Parliament, Resolution of 16 February 2017, n. 90 above, para. 59(f).
131  See Tom Allen and Robin Widdison, 'Can Computers Make Contracts?' (1996) Harvard Journal of Law and Technology 9, at 26; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 160.


demonstrates that there is a dividing line between the principal and the person (or artificial
agent) committing the tort – and that the latter should have a status that goes beyond
being merely a tool, a thing of the operator. Also, the principles of vicarious liability for
employees’ wrongful acts have been developed in order to address the division of labour
between fully independent persons shaping our economic and social world. And in the
same vein, vicarious liability for autonomous artificial agents would be a reaction to a
new paradigm of shifted work and activity patterns, built on artificial agents acting independently but on behalf of, and to some extent under the instruction of, their operators. Again, this seems to call for recognition of an independent legal status of some sort for the artificial agents in question.
Across jurisdictions, many types of dependent personhood exist, the most common being corporations in their various forms. Other examples include unincorporated associations,
partnerships, states, cities, intergovernmental organizations, and many kinds of other
bodies of public law. In some jurisdictions, even temples or ships have a form of legal
personality.132 The large variety of phenomena of legal personhood demonstrates that
conferring a similar status on artificial agents would not be a dramatic break with legal
tradition.133
However, if artificial agents could be accorded a legally recognized special status, several
other questions arise. Most importantly, would an independent legal status simply be a
mechanism for properly attributing acts and ultimate responsibility? If so, some tweaks to tort doctrine could potentially do the trick, too, and probably better. Or would artificial
agents become bearers of duties as well as holders of rights? Taking again the example of
the liability for acts committed by employees, the vicarious liability of the principal does not preclude the employee herself from facing claims brought by the victim.
When affording personhood to artificial agents, a number of other issues would inevi-
tably arise. To start with, the conduct of such agents could much more easily break the
chain of causation between human acts and harm done, as it would qualify as a so-called
novus actus interveniens.134 Also, recall the above discussion of the appropriate standard
of a duty of care. Moreover, treating an agent, on the one hand, as a person and, on the
other, as a product (for products liability purposes) may also seem contradictory.

132  Christopher D. Stone, 'Should Trees have Standing? Toward Legal Rights for Natural Objects' (1972) 45 Southern California Law Review 450, at 452; Lawrence B. Solum, 'Legal Personhood for Artificial Intelligences' (1992) 70 N.C. L. Rev. 1231, at 1239; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 159.
133  The range of examples of legal persons should also indicate that we discuss here only so-called dependent personhood. A dependent legal person cannot exercise all of its rights by itself but needs to act through another person. Typical examples are children, or adults who are not of sound mind. They clearly have rights but can, for example, enter contracts or sue only via their parents or guardians, Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 159. Similarly, corporations and other artificial legal entities need humans to act on their behalf. Independent personhood would mean having all the rights of a full citizen, with the power to exercise those rights. That would require some sort of general-purpose intelligence with the capacity to take discretionary decisions of all kinds, Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 163; according to Solum, 'Legal Personhood for Artificial Intelligences', n. 132 above, 1248, that still seems far from achievable.
134  See Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 135.


Furthermore, if autonomous artificial agents could cause harm and assume liability, the question remains how such agents could obtain and hold assets to satisfy liability claims. The owner would either have to issue a guarantee to cover the liabilities assumed
by the agent, or provide the agent with a budget. These models could be combined with a
limitation of liability. This again would require making the limitations public. Some com-
mentators suggest a register for this purpose.135 The register could contain information on
the agent’s owner, its financial backing, liability limits, etc.

5.  CONCLUSION

This chapter has demonstrated that international human rights law recognizes the critical
importance of ensuring that victims of human rights abuses have access to remedies.
States therefore have a duty to maintain an effective legal framework that provides
adequate redress. This includes a duty to ensure that appropriate and effective remedies
exist for cases where individuals suffer violations of their human rights, as protected
under international human rights law, as a result of algorithm-based decision-making.
Although international human rights law does not, in and of itself, mandate the specific
way in which states should provide appropriate redress within domestic legal regimes, tort
law can often play a key role in providing such remedies.
Yet, by analysing both US and German tort law as examples of common and civil
law jurisdictions, this chapter has demonstrated that sophisticated autonomous artificial
agents may create enormous challenges for the tort systems as they currently stand.
The root cause of this lies in the autonomy and adaptability of autonomous agents. In
circumstances where sensing the environment, communicating, interacting and learning
contribute to the continuous self-modification of those agents, the centrality of the
concept of foreseeability, which underpins both classical negligence and parts of products
liability doctrines, is shaken.
In order to ensure that legal systems do provide the necessary redress, as required under
human rights law, the law may have to react by recognizing the special character of (some of) those agents. For example, the liability of their owners or operators could be modelled on that of principals for their employees. Moreover, the proposal of
according a form of legal personhood to at least the most developed autonomous artificial
agents certainly deserves further attention and discussion.

135  Karnow, 'Liability for Distributed Artificial Intelligences', n. 92 above, 193–204; Wettig and Zehendner, 'The Electronic Agent', n. 130 above, 109; Chopra and White, A Legal Theory for Autonomous Artificial Agents, n. 1 above, 149; and also European Parliament, Resolution of 16 February 2017, n. 90 above, para. 59(e).



15.  Digital technologies, human rights and global trade? Expanding export controls of surveillance technologies in Europe, China and India
Ben Wagner and Stéphanie Horth

Historically, global trade has not had a strong value-based focus, whether on human rights or, for that matter, any other values. Trade-oriented institutions and legal mechanisms have been almost exclusively economically oriented, with human rights concerns either dismissed as non-tariff trade barriers or relegated to non-binding guidelines such as the UN Guiding Principles on Business and Human Rights. It should be noted, however, that despite their voluntary nature, many such non-binding mechanisms have developed considerable normative strength in recent decades.
Despite these challenges, concepts related to human rights have slowly been seeping into
international trade. One aspect has been the public debates about international investment
treaties such as the Transatlantic Trade and Investment Partnership (TTIP) and the Trans-Pacific Partnership (TPP) at a multilateral level, or in individual bilateral investment treaties (BITs) such as that between South Africa and Germany. Another important example
is the increase in human rights language in export control regimes. Export control regimes
such as the international Wassenaar Arrangement are typically focused on ensuring
international security by limiting the trade in certain goods and services. However, in
recent decades their scope has expanded from focusing on limiting the proliferation of
military goods to ensure international security, to also limiting the spread of goods used
for internal repression, a measure frequently justified by recourse to human rights or
human security. How can this increased focus on human rights be understood and how
does this shift impact export controls of digital technologies?
This chapter will provide an overview of the increasing role of human rights and digital
technology in export control regimes over the past two decades and how this has led to
the expansion of export controls of surveillance technologies. It will take a particularly
close look at the EU debate on export controls, human rights and digital technologies,
before looking at China and India, which implement similar restrictions on digital technologies. The EU is a particularly interesting case as it is the key norm-setter in
the area of export controls for surveillance technologies. China and India, by contrast,
are particularly interesting cases because they have spent decades implementing part
of the Wassenaar Arrangement, including controls of digital technologies, without any
meaningful ability to define the norms they were to a considerable extent abiding by. The
chapter will then discuss challenges with export controls in the areas of human security,
cryptography regulation, transparency, participation and governance, as well as the
appropriate size and scope of the relevant regime. In conclusion, the chapter will suggest
that while not a panacea for all challenges related to human rights in digital technologies,


export controls of surveillance technologies can provide an important element of such protection.
At a more fundamental level, this chapter will argue that while export controls may
not be the obvious mechanism of choice to govern digital technologies, they have become
a powerful way of inserting human rights into international trade. In contrast to claims
that the Internet is borderless and regulation fundamentally impossible, export controls
provide a key example of how human rights norms can be embedded in technical Internet
infrastructure by restricting the flow of surveillance technologies which are likely to have
a negative human rights impact.

1.  EXPORT CONTROLS AND HUMAN RIGHTS

1.1  Export Control Regimes After the End of the Cold War

Up until the end of the Cold War, there were only regional ‘block-oriented’ export control
mechanisms, but no 'broad-based international treaty'.1 Those export control mechanisms that did exist focused on a narrow set of weapons defined in treaties between a limited group of like-minded states. These regimes, such as the Cold War-era Coordinating Committee for Multilateral Export Controls (COCOM), were normally intended not just as a tool of arms control but also as a mechanism to keep access to key arms and technologies within a limited club of members.2
This changed rapidly after the end of the Cold War, when institutions such as COCOM morphed into organizations with a broader membership base drawn from both sides of the Iron Curtain.
The most notable organization here is the Wassenaar Arrangement, which grew out of
COCOM and is one of the largest organizations in the world for coordination of export
control regulations. In 1994 and 1995, the members of COCOM and several other states
came together in meetings referred to as the New Forum, in order to discuss new means
of controlling international exports. The New Forum envisioned its role as ‘custodian
for maintaining the emerging world order’.3 The Wassenaar Arrangement on Export
Controls for Conventional Arms and Dual-Use Goods and Technologies was the result of the New Forum, reflecting a consensus that a control list of goods should help ensure regional and international stability.
What is perhaps most notable about the Wassenaar Arrangement is that it is a non-
binding 'arrangement'. It controls exports through cooperation among participating states to establish a common List of Dual-Use Goods and Technologies that is then voluntarily implemented into national law. Participating countries also exchange information about specific denials and licences. Within the Arrangement, states

1  Mark Bromley, Neil Cooper and Paul Holtom, 'The UN Arms Trade Treaty: Arms Export Controls, the Human Security Agenda and the Lessons of History' (2012) 88(5) International Affairs (1 September) 1031, available at https://doi.org/10.1111/j.1468-2346.2012.01117.x.
2  Cecil Hunt, 'Multilateral Cooperation in Export Controls: The Role of CoCom' (1982) 14 U. Tol. L. Rev. 1285.
3  Samuel A.W. Evans, Revising Export Control Lists (Flemish Peace Institute, 2014).


simply develop joint sets of definitions of weapons or other dual-use items that they are
not legally bound to enforce. Despite this, all of the member states and even several non-
member states such as China enforce the majority of the restrictions on the Wassenaar
List, making it a fascinating example of international norm diffusion. Wassenaar is unusual not just because it is non-binding, but because even states entirely uninvolved in the rule-making are willing to accept being bound by its norms. The goal of the Wassenaar Arrangement is to contribute to international and regional security by promoting transparency and responsibility concerning
the transfer of arms and dual-use goods, and to complement and reinforce the control
regimes for weapons of mass destruction. Although other organizations, such as the
OSCE or the European Union (EU), developed their own sets of export controls, the
Wassenaar Arrangement has, since 1995, been the key international norm-setter in the
area of export controls.4
In sum, there has been a considerable increase in the number of multilateral export control regimes since the end of the Cold War. While
these were previously mainly regional arrangements, they are increasingly agreed at an
international level and thus encompass a far larger number of states.

1.2  Human Rights and Export Controls

The end of the Cold War coincided not just with an increase in the number and scope of export control regimes around the world; it also witnessed growing claims to ethical foreign and trade policy. Key international actors such as the EU, but also India and China, want to be seen as ethical and responsible actors within the international community, promoting international security in both foreign policy and trade.5
However, as has been discussed extensively in academic literature, there remains a
considerable distance between claims of ethical international policies and their actual
implementation in practice. This distance leads commentators to see claims of ethics in
foreign policy as simply ‘cheap talk’ with little grounding in state practices.6 Some even
go as far as describing such claims of ethics as ‘organized hypocrisy’7 when they are
simultaneously combined with massive transfers of arms. In particular when, as is the
case with many arms exporting countries, they have ‘not exercised export controls so as

4  Ron Smith and Bernard Udis, 'New Challenges to Arms Export Control: Whither Wassenaar?' (2001) 8(2) Nonproliferation Review 81–92.
5  Ian A.N. Manners, 'The Normative Ethics of the European Union' (2008) 84(1) International Affairs 45–60; Samuel J. Spiegel and Philippe Le Billon, 'China's Weapons Trade: From Ships of Shame to the Ethics of Global Resistance' (2009) 85(2) International Affairs (1 March) 323–46, available at https://doi.org/10.1111/j.1468-2346.2009.00796.x; Mira Sucharov, 'Security Ethics and the Modern Military: The Case of the Israel Defense Forces' (2005) 31(2) Armed Forces and Society 169–99.
6  David Chandler, 'Rhetoric Without Responsibility: The Attraction of "Ethical" Foreign Policy' (2003) 5(3) British Journal of Politics and International Relations (1 August) 295–316, available at https://doi.org/10.1111/1467-856X.00108.
7  Richard Perkins and Eric Neumayer, 'The Organized Hypocrisy of Ethical Foreign Policy: Human Rights, Democracy and Western Arms Sales' (2010) 41(2) Geoforum (March) 247, available at https://doi.org/10.1016/j.geoforum.2009.09.011.


to discriminate against human rights abusing or autocratic countries during the post-Cold
War period’,8 they are likely to face considerable criticism.
It is in this gap between the claims and the reality of ethical trade and foreign policy that numerous civil society groups have stepped in, pushing states to live up to their promises. One of the key areas of civil society pressure is for
governments to limit exports harmful to human rights, with a specific focus on the arms
trade. Here, non-governmental organizations (NGOs) play a crucial role both in drawing attention to arms deals that seem evidently unethical and in actively advocating changes to export control regulations around the world.9
Specifically, NGOs’ advocacy was instrumental in ensuring an increased consideration
of human rights concerns within the framework of export controls. They were most
successful within the EU, leading to a wide consideration of human rights in the ‘1998
EU Code of Conduct on Arms Exports (‘EU Code’), including language on preventing
exports of arms that might prolong armed conflicts or be used to violate human rights’.10
Despite its name, the EU Code not only covered the arms trade but also additional
dual-use goods and other military goods beyond arms themselves, thereby setting up a
framework for the consideration of human rights within EU export controls.
Civil society advocacy was also crucial in pushing for two of the most important
changes to the international export control regime: the UN Arms Trade Treaty (ATT), which was adopted in 2013 and came into force in 2014, and the 2016 EU Commission
proposal for a new EU Dual-Use Export Control Regulation.11 Both documents explicitly
recognize promoting and protecting human rights as a key policy goal, reflecting the
strong pressure from civil society to include these concerns in their language.
Even broad multilateral arrangements such as the Wassenaar Arrangement have increasingly reflected human rights concerns in their decision-making. While the original Wassenaar Arrangement from 1996 does not consider human rights, an additional paper acknowledging such concerns was approved by the Wassenaar Arrangement Plenary in 1998. This paper, titled Elements for Objective Analysis and Advice Concerning Potentially Destabilising Accumulations of Conventional Weapons, includes among its criteria for restricting arms exports that the 'weapons might be used for the violation and suppression of human rights and fundamental freedoms'.12
Thus, it is possible to suggest that multilateral export controls have significantly
expanded their focus on human rights since the end of the Cold War. Each of the
Wassenaar Arrangement, the European Union Code and the Arms Trade Treaty provide

8  Ibid. 247.
9  Lerna K. Yanik, 'Guns and Human Rights: Major Powers, Global Arms Transfers, and Human Rights Violations' (2006) 28(2) Human Rights Quarterly 357–88.
10  Bromley, Cooper and Holtom, 'The UN Arms Trade Treaty', n. 1 above, 1036.
11  European Commission, 'Proposal for a Regulation of the European Parliament and of the Council Setting up a Union Regime for the Control of Exports, Transfer, Brokering, Technical Assistance and Transit of Dual-Use Items (Recast)', COM/2016/0616 (2016), available at http://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2016:0616:FIN.
12  Federation of American Scientists (FAS), Destabilising Accumulations of Conventional Weapons (FAS, 1998), available at https://fas.org/asmp/resources/interntlorgs/Wassenaar/AccumulationCriteria.htm#paper.


a significant example of treating the safeguarding of human rights as a relevant policy objective within export control mechanisms.

2.  SURVEILLANCE TECHNOLOGIES, HUMAN RIGHTS AND EXPORT CONTROLS

It is within this context that the human rights impact of the trade in surveillance technologies began to be more widely discussed. The first public debates about the highly political role of surveillance technologies took place in Europe in 2009 and 2011, in the context of
protests in Iran, Tunisia, Egypt and Syria.13 While the claim that revolutionary uprisings
are in some way ‘caused’ by digital technologies is highly problematic,14 it does serve to
illustrate the considerable potential for human rights harms that surveillance technologies
pose. In particular, revelations that European companies had provided surveillance to the
governments of Iran, Syria, Egypt and Bahrain caused widespread public outrage and
condemnation. Specifically:

● Nokia Siemens had exported a monitoring system for telecommunications surveillance to Iran and was accused of complicity in Iranian human rights abuses after the uprisings in 2009;15
● Ethiopian activist groups were targeted by software sold by German company
Gamma International;16
● Ethiopian journalists were targeted by software sold by Italian company Hacking
Team;17
● Nokia Siemens had exported surveillance systems to Bahrain, which were used as
part of the ‘arrest and torture of political opponents’.18

This led to a stream of statements from European politicians in 2011, such as German Foreign Minister Guido Westerwelle calling for ‘technology for controlling the Internet to be

13  Ben Wagner, After the Arab Spring: New Paths for Human Rights and the Internet in European Foreign Policy (Brussels: European Union, 2012), available at www.europarl.europa.eu/activities/committees/studies.do?language=EN.
14  Ben Wagner, ‘The New Media and Social Networks: Pointers from the Arab Uprisings’ in Frédéric Volpi and Richard Gillespie (eds), Routledge Handbook on Mediterranean Politics (London: Routledge, 2016).
15  James Farrar, ‘Nokia Siemens Networks Respond to Iran Human Rights Abuses Claims’, ZDNet (2010), available at www.zdnet.com/article/nokia-siemens-networks-respond-to-iran-human-rights-abuses-claims/.
16  Morgan Marquis-Boire et al., You Only Click Twice: FinFisher’s Global Proliferation (Toronto: Citizen Lab, University of Toronto, 13 March 2013), available at https://citizenlab.ca/2013/03/you-only-click-twice-finfishers-global-proliferation-2/.
17  Bill Marczak, John Scott-Railton and Sarah McKune, Hacking Team Reloaded (Toronto: Citizen Lab, University of Toronto, 9 March 2015), available at https://citizenlab.ca/2015/03/hacking-team-reloaded-us-based-ethiopian-journalists-targeted-spyware/.
18  Vernon Silver and Ben Elgin, Torture in Bahrain Becomes Routine with Help from Nokia Siemens (2011), available at www.bloomberg.com/news/2011-08-22/torture-in-bahrain-becomes-routine-with-help-from-nokia-siemens-networking.html.


included in sanctions regimes’,19 and Member of the European Parliament Marietje Schaake calling for an ‘Inquiry into the role of European companies in human rights violations . . . and the export of dual-use technologies’.20
EU Member States were among the main proponents of stronger regulation of surveillance technologies at the level of the Wassenaar Arrangement, and in 2012 and 2013 several of them submitted modifications to the Arrangement to include:

(1) mobile telecommunications interception or jamming equipment (5.A.1.f);
(2) IP network communications surveillance systems (5.A.1.j);
(3) targeted surveillance technologies/intrusion software (4.A.5, 4.D.4, 4.E.1.c).

These controls, adopted by the Wassenaar Arrangement, entered into force in the EU on 31 December 2014, meaning that EU Member States were required to regulate the export of goods in these categories. While the participating states acknowledged considerable implementation challenges, in particular ensuring an accurate definition of the relevant software and, more broadly, regulating something as easily (re-)distributable as software, the controls were nevertheless implemented by numerous EU Member States across Europe.
The goal of adding the new entries to the Control List was to prevent such equipment or technologies from being used as means of repression and state surveillance resulting in human rights violations. However, the last control in particular, on targeted surveillance technologies, was heavily contested by the information security community as being too broad. Numerous experts suggested that it would also catch typical information security technologies, in particular 4.E.1.c, which is defined as ‘Technology’ for the ‘development’ of ‘intrusion software’.
The intrusion software clause was heavily criticised by Bratus et al.,21 Eleanor Saitta22
and Dullien et al.,23 as part of a public consultation by the US Department of Commerce
on the US implementation of the proposed changes to the Wassenaar Arrangement.
While part of the criticism stems from the very broad implementation of Wassenaar
proposed by the US Department of Commerce, which went far beyond the initial
controls proposed within the Wassenaar Arrangement, it was also related to the controls
proposed by Wassenaar itself. This negative response led the US delegation to Wassenaar

19  Guido Westerwelle, Auswärtiges Amt – Rede von Außenminister Guido Westerwelle bei der Konferenz Umbrüche in der arabischen Welt der SWP (englisch) (Auswärtiges Amt DE, 2011), available at www.auswaertiges-amt.de/de/newsroom/111108-bm-swp/247752.
20  Marietje Schaake, Parliamentary Question: VP/HR – Inquiry into Role of European Companies in Human Rights Violations (Part II) and the Export of Dual-Use Technologies (2011), available at www.marietjeschaake.eu/en/parliamentary-question-vphr-inquiry-into-role-of-european-companies-in-human-rights-violations-part-ii-and-the-export-of-dual-use-technologies?color=secondary.
21  S. Bratus et al., ‘Why Offensive Security Needs Engineering Textbooks’ (2014) 39(4) Login, available at www.usenix.org/system/files/login/articles/02_bratus.pdf.
22  See https://dymaxion.org/essays/wa-items.html.
23  See https://tac.bis.doc.gov/index.php/documents/pdfs/299-surveillance-software-security-and-export-controls-mara-tam/file.


to renegotiate this control successfully in December 2017, as a result of which the scope of the revision of existing controls was considerably limited.24 Notably, the United States has to this day not implemented this control, and it remains to be seen whether it will do so after the changes that have been negotiated.
In parallel to these developments at Wassenaar, the European Commission used the regulation of the trade in surveillance technologies as one of its central justifications for revising EU Regulation 428/2009 on export controls for dual-use technologies.
This development took place in several steps:

● 2013: the first European Commission report on the implementation of the existing
regulation only referred vaguely to ‘cyber tools’;25
● 2014: only a year later in the EU Commission’s review of export control policy, the
definitions used had developed considerably to encompass ‘cybertools for mass
surveillance, monitoring, tracking and interception’;26
● 2016: in the proposed new regulation of export controls, surveillance technologies
are extensively defined as ‘items specially designed to enable the covert intrusion
into information and telecommunication systems with a view to monitoring,
extracting, collecting and analysing data and/or incapacitating or damaging the
targeted system’.27

However, what is notable about the European Commission’s approach is not just the expanding definition, but also the insertion into the regulation of additional regulatory mechanisms that deal exclusively with surveillance technologies. These mechanisms are as follows.

(1) An EU autonomous list for surveillance technologies: in almost all cases of dual-use goods, the EU simply copies the regulatory categories of items to be controlled from the Wassenaar Arrangement. This ensures that the widest possible number of countries regulate the same items, rather than any one country ‘going it alone’. In the case of surveillance technologies, however, the EU has proposed to create an autonomous list specifically for surveillance technologies.
(2) Targeted catch-all control: almost all export controls are based on specific lists of
clearly defined goods, to ensure legal certainty for exporters. The only exceptions
that typically exist to this rule are items where there is a significant danger to inter-
national peace and security, such as items associated with chemical, biological or
nuclear weapons, national arms embargos, or items likely to be used by the military.
On top of these reasons for restricting the trade in dual-use items included in article 4
of the 2009 EU Regulation 428/2009, the new 2016 European Commission proposal
includes additional language covering ‘where there is evidence that the items may be

24  See www.lawfareblog.com/wassenaar-export-controls-surveillance-tools-new-exemptions-vulnerability-research.
25  See http://trade.ec.europa.eu/doclib/docs/2013/october/tradoc_151857.pdf.
26  See http://trade.ec.europa.eu/doclib/docs/2014/april/tradoc_152446.pdf.
27  See http://eur-lex.europa.eu/resource.html?uri=cellar:1b8f930e-8648-11e6-b076-01aa75ed71a1.0013.02/DOC_1&format=PDF.


misused by the proposed end-user for directing or implementing serious violations of human rights or international humanitarian law in situations of armed conflict or internal repression in the country of final destination’.28

These two additional mechanisms are designed to show that the EU is serious about regulating the international trade in surveillance technologies. If either or both of these measures are implemented in the final EU Regulation, they would go considerably beyond existing export control regimes elsewhere in the world in safeguarding human rights and limiting the spread of surveillance technologies.
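The combined effect of a list-based control and the proposed targeted catch-all can be illustrated with a minimal decision sketch in Python. The entry codes are taken from the Wassenaar additions discussed above; all field and function names are hypothetical, and the actual regulation's tests are of course far more nuanced than two boolean flags:

```python
from dataclasses import dataclass
from typing import Optional

# Wassenaar entries discussed in this chapter (descriptions abridged).
CONTROL_LIST = {
    "5.A.1.f": "mobile telecommunications interception or jamming equipment",
    "5.A.1.j": "IP network communications surveillance systems",
    "4.E.1.c": "'technology' for the 'development' of 'intrusion software'",
}

@dataclass
class ExportRequest:
    item_code: Optional[str]  # control-list code, or None if the item is unlisted
    is_surveillance: bool     # would fall under the proposed EU autonomous list
    misuse_evidence: bool     # evidence the end-user may commit serious violations

def licence_required(req: ExportRequest) -> bool:
    """Sketch of the two mechanisms: list-based control plus targeted catch-all."""
    if req.item_code in CONTROL_LIST:
        return True   # item appears on the (Wassenaar-derived) control list
    if req.is_surveillance and req.misuse_evidence:
        return True   # targeted catch-all for unlisted surveillance items
    return False      # unlisted, no misuse evidence: no licence needed

# A listed IP surveillance system always needs a licence; an unlisted
# surveillance product only becomes licensable once misuse evidence surfaces.
print(licence_required(ExportRequest("5.A.1.j", True, False)))  # True
print(licence_required(ExportRequest(None, True, True)))        # True
print(licence_required(ExportRequest(None, True, False)))       # False
```

The point of the sketch is that the catch-all shifts the licensing trigger from the item’s technical identity to evidence about the end-user, which is precisely what makes it unusual among export controls.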

3.  IMPLEMENTING EXPORT CONTROLS OUTSIDE THE ‘WASSENAAR CLUB’: INDIA AND CHINA

However, it is possible that much of what is developed in export control frameworks is just a rhetorical smokescreen.29 Indeed, it is frequently claimed that both arms and export control frameworks are not actually implemented in any meaningful way and thus serve simply as a fig-leaf to feign interest in international peace and security and human rights.30 While there are doubtless numerous deficits in existing export control and arms control frameworks, it is also too easy to dismiss them out of hand as playing no role whatsoever in safeguarding either international peace and security or human rights. This does not mean that claims of ‘organized hypocrisy’31 with regard to the frequent export of arms to heavily human rights-infringing countries are necessarily false, but rather that existing export control policies do not support claims of ethical foreign policy agendas. States continue to follow economic self-interest in how they operate; however, they may in some cases be willing to limit this self-interest in order to safeguard human rights. As Bromley et al. suggest:

[a]rms trade agreements do not have to be grand moral projects . . . they are more likely to
succeed when the security subjectivities of policy-makers and ethical ideals of campaigners are
complementary.32

An equally relevant question in this context is what international trade would look like without any form of export control at all. Here, one of the crucial issues is the implementation of export controls, i.e. the willingness of states to actively implement their widely stated commitments to safeguard both international peace and security and human rights. While the data is patchy, there is some indication that some states implement numerous export controls even though they are not explicitly required to restrict those items under international law.

28  See http://eur-lex.europa.eu/resource.html?uri=cellar:1b8f930e-8648-11e6-b076-01aa75ed71a1.0013.02/DOC_1&format=PDF.
29  This section was developed with support from Stéphanie Horth.
30  Chandler, ‘Rhetoric Without Responsibility’, n. 6 above.
31  Perkins and Neumayer, ‘The Organized Hypocrisy of Ethical Foreign Policy’, n. 7 above, 247.
32  Bromley, Cooper and Holtom, ‘The UN Arms Trade Treaty’, n. 1 above, 1034.


This even extends to non-members of the Wassenaar Arrangement, such as China and India, which have no control over the norms set within the Arrangement but still choose to implement them. As a result, these two countries provide a fascinating case for exploring the quality of implementation of export controls outside countries that have passed legally binding legislation, such as the EU Member States. The following section will look at the implementation of multilateral export control regimes in general before focusing on specific provisions for surveillance technologies.

3.1 India

Export control laws in India provide statutory authority to its licensing and enforcement departments. India’s current legal framework is on a par with global best practices in export controls; parts of the regulation are adapted from legislation worldwide, and many changes have been made in the last decade.33 The procedures are defined, implemented and regulated by the Directorate General of Foreign Trade (DGFT) within the Department of Commerce and Industry. The main law regulating export control is the Foreign Trade (Development and Regulation) Act No. 22 of 1992, as amended in 2010.34 The Act gives the DGFT the power to define items on the Indian Tariff Classification (ITC) list and to license the import and export of items found on the list. There is also a list specifically dedicated to dual-use technologies, referred to as the Special Chemicals, Organisms, Materials, Equipment, and Technologies (SCOMET) list.35 Exports of cybersurveillance technologies could fall either within the SCOMET list under ‘Category 7 Electronics, computers, and information technology including information security’,36 or within the regular export list. The list does not yet include the items recently added to the Wassenaar Arrangement, such as ‘intrusion software, telephone interception, jamming and monitoring equipment, etc.’.
Since India has expressed its interest in joining the Wassenaar Arrangement as well, it can be assumed that it will implement these changes, as it has done in order to adhere to other international export control regimes. If the Government of India wishes to join the multilateral export control regimes, it must adapt its own regime and align the national control list and the SCOMET list with all the multilateral regimes. Currently, Indian legislation does not explicitly address deemed technology exports; for the Wassenaar Arrangement, it could expand the scope of its technology transfer and brokering controls in order to apply them to conventional arms-related as well as dual-use activities.37

33  See www.kcl.ac.uk/sspp/departments/warstudies/research/groups/csss/pubs/India-export-control.pdf.
34  See http://dgft.gov.in/exim/2000/Foreign_Trade_(Development_&_Regulations)_Amendment_Act,_2010.pdf.
35  See http://cis-india.org/internet-governance/blog/export-and-import-of-security-technologies-in-india.pdf.
36  See http://cis-india.org/internet-governance/blog/export-and-import-of-security-technologies-in-india.pdf.
37  See www.securustrade.com/India’s%20Export%20Controls_Article__July_10_2011_FINAL.pdf.


In the summer of 2015, India officially applied to become a member of the Missile Technology Control Regime (MTCR), a group that currently counts 34 members. The goal of the regime is to limit the risks of proliferation of weapons of mass destruction through controls on technology transfers. Even though India is not yet a member, it has still pledged to adhere to the regime’s guidelines.38 India’s wish to join the MTCR and other international export control regimes was backed by a powerful ally: during a state visit to India in 2010, Barack Obama declared his support for India’s aim to join several important export control regimes, namely the Nuclear Suppliers Group, the Missile Technology Control Regime, the Wassenaar Arrangement and the Australia Group (AG).39 The United States, however, requires that for accession to the AG and the Wassenaar Arrangement, India must reflect the existing regimes in its own legislation in order to be considered for full membership; all items listed in those regimes should therefore be covered by Indian controls. This support was reiterated in January 2015,40 and in 2017 India applied to join the Wassenaar Arrangement; the application was accepted at the plenary meeting of 7 December 2017, and India formally joined the Arrangement, following all procedural changes, on 8 December 2017.
Regarding the addition of cyber-technology to the Wassenaar Arrangement’s dual-use control list, Indian officials have expressed concern that this could have a negative impact on India’s cybersecurity programme, both software and hardware, as such items would fall under the export control regime, which might result in products being denied to Indian organizations.41 Some have also worried that the recent modifications of the Wassenaar rules regarding cybersecurity could result in a difficult bilateral dialogue on issues concerning cybersecurity and Internet governance, and have questioned whether the political costs of adhering to these new regulations would be strategically worth it for India.42

3.2 China

Since the mid-1990s, China has undertaken several steps to regulate and limit the export of military-related products, chemicals, biological agents and missile-related technologies. This was a major change of direction compared with the early 1980s, when export controls on WMD (weapons of mass destruction) goods and technologies were severely lacking. Due in part to increasing international pressure, and a desire to improve its international reputation, the Chinese government decided to take steps to improve its export control regime. This resulted in new domestic decrees and regulations, as well as the strengthening of its administrative structures for non-proliferation and arms

38  See www.nti.org/learn/treaties-and-regimes/missile-technology-control-regime-mtcr/.
39  See www.whitehouse.gov/the-press-office/2010/11/08/joint-statement-president-obama-and-prime-minister-singh-india.
40  See www.whitehouse.gov/the-press-office/2015/01/25/us-india-joint-statement-shared-effort-progress-all.
41  See www.thehindubusinessline.com/info-tech/new-export-control-law-could-threaten-indias-cyber-security-programme/article6130704.ece.
42  See www.thehindu.com/opinion/columns/wassenaars-web-a-threat-to-technology-transfer/article7499748.ece.


control. Another important factor was the recognition by Chinese officials that promot-
ing multilateral, regional and bilateral agreements of all kinds plays a key role.43
Nonetheless, many doubts remain regarding China’s current and long-term export control policies and whether they align with existing international instruments. In some cases, although the Chinese government has made changes domestically in order to closely mirror international instruments, it still remains an outsider. In 2002, China created export control regulations that are said to parallel the structures of the Missile Technology Control Regime (MTCR); its membership application has, however, been under review since then. Similarly, in 1998, China expanded its export controls on dual-use chemicals, yet it is not a member of the AG. Additionally, although China has reiterated its intention to limit the export of dual-use goods and technologies, it is not yet a member of the Wassenaar Arrangement.44 A considerable obstacle is the international community’s limited understanding of the Chinese export control regime: with little material available in English and a complex, not very transparent decision-making process, it is difficult to accurately compare the regime with existing multilateral agreements such as the MTCR, the AG and the Wassenaar Arrangement.
The export control of dual-use items and technologies in China has been developing in recent years. In 1998, the State Council adopted its first set of export control regulations, which covered 183 dual-use goods and technologies. In 2002, it amended the regulations to widen their scope and include the majority of the Wassenaar Arrangement’s list of dual-use goods and technologies. In 2010, the Index of Management of Import and Export Permits of Dual-Use Items and Technologies came into effect, covering nuclear, biological, chemical and missile-related dual-use goods and technologies. A further development was announced by the new Bureau of Industry Security, Import, and Export Control, which indicated that a new control list for conventional arms and dual-use goods and technologies is being worked on and should be released shortly. The list is periodically reviewed and updated: in January 2015, China’s dual-use control list had 816 items, whereas the Wassenaar Arrangement’s had a little over 1,000.45
In order to improve its enforcement mechanism for dual-use items, the Chinese government has taken several important steps. In 2002, it created a ‘watch list’ of companies that had made illegal transactions involving sensitive items. In 2004, a computer-based control system for the export of sensitive items was created and, as part of its approval step, the Ministry of Commerce established a national expert support system, creating a network that allows scholars and experts from different sectors to exchange views and thus enhancing decision-making on licence applications that require in-depth knowledge. The Ministry of Commerce has also reached out to industry in order to raise awareness of export control
43  For more information, see www.gov.uk/government/publications/analysis-of-chinas-export-controls-against-international-standards/bridging-the-gap-analysis-of-chinas-export-controls-against-international-standards.
44  See www.gov.uk/government/publications/analysis-of-chinas-export-controls-against-international-standards/bridging-the-gap-analysis-of-chinas-export-controls-against-international-standards.
45  See www.saferworld.org.uk/resources/view-resource/873-expanding-and-sustaining-dialogue-between-china-and-the-wassenaar-arrangement.


regulations and policies, and has held seminars giving better insight into the political aspects of export control, as well as training on more technical aspects.46
A workshop held in September 2015 brought together the Technical Expert Working Group (TEWG), made up of 12 policy experts and practitioners from Russia, the United Kingdom, the United States, Korea and China, who emphasized the importance of having an up-to-date control list of dual-use items and technologies. The participants agreed that China has already adopted advanced procedures for dual-use export control, and that international practices such as appointing a legal authority in charge of controlling the export of dual-use items and technologies, licensing processes, coordination between different agencies, end-user certification and catch-all mechanisms are already applied in China. It was nonetheless remarked that China’s dual-use export control is still in a development phase and that several challenges remain before China reaches the next phase of export control policy; the technical experts therefore agreed to continue sharing information on best practices in export licensing mechanisms. For example, the experts agreed that it was key for China to maintain an up-to-date control list that corresponds to existing international regimes, principally the Wassenaar Arrangement. It was also noted that communication and information sharing between China and key international regimes should be improved.47 A further two-day workshop, on strengthening the implementation of dual-use and arms trade controls, was held in January 2016 and was attended by Chinese practitioners, officials and academics, as well as some Wassenaar Arrangement Participating States.48 Its goal was to increase dialogue and cooperation between the two export control regimes, the Chinese and the Wassenaar Arrangement.
China’s export control regulations on conventional arms are largely consistent with the control policies of the Wassenaar Arrangement; its list of dual-use goods and technologies, however, does not yet cover the Arrangement’s comprehensive list. A more complete list is needed, yet since China’s list of conventional arms already covers most of the items of the Arrangement, some Chinese officials fail to see the point in joining the Wassenaar Arrangement; indeed, the failed attempt at becoming a member of the MTCR seems to have left many doubting whether China should apply to be part of the Wassenaar regime.
At the same time, China’s export control regime has considerable weaknesses, the main one being the lack of enforcement: monitoring, investigation of violations and penalization are all insufficient. Often the authorities merely react to intelligence received from the United States, the United Kingdom or the European Union. Furthermore, Chinese officials do not actively pursue investigations concerning large state-owned businesses, and a frequent lack of transparency makes it difficult to accurately assess the enforcement of export control policies. Chinese

46  See www.saferworld.org.uk/resources/view-resource/873-expanding-and-sustaining-dialogue-between-china-and-the-wassenaar-arrangement.
47  See www.saferworld.org.uk/resources/view-resource/1026-sharing-perspectives-on-achievements-and-challenges-relating-to-export-controls-of-dual-use-items-and-technologies.
48  For more information, see www.saferworld.org.uk/resources/view-resource/1059-strengthening-technical-and-operational-capacity-to-implement-and-enforce-dual-use-and-arms-trade-controls.


officials have declared their wish to increase their enforcement efforts; in one recent example, two companies were penalized for having had initial contact with the Libyan government in 2011.49
To encourage China’s participation in export control regimes, dialogue is key. Members of the international community must continue their outreach and assist China in further developing, implementing and enforcing its control list to reflect international standards; furthermore, capacity-building activities involving experts, practitioners, officials and participants from China should be prioritized in order to encourage knowledge sharing and to promote efficient enforcement of export control policies.50

3.3  India and China: Conclusion

The implementation of export control regimes is obviously difficult outside binding frameworks such as the EU. This is particularly the case when countries like China or India are not even part of the regimes which define the norms, i.e. they are signing up to technology arrangements whose norm-setting they do not themselves control. On the face of it, this is highly perplexing behaviour in the international environment. It points to one of the great misunderstandings of export control regimes such as the Wassenaar Arrangement, which are assumed to be completely ineffectual, or, as Innokenty Pyetranker put it, an ‘umbrella in a hurricane’.51 On the basis of a purely rational economic or institutional analysis, there seems to be no good reason for either China or India to implement export controls such as those proposed in Wassenaar. The question that should be asked in this context is why they adhere to the Wassenaar Arrangement at all. After all, why should states agree to be bound by regularly updated norms that they do not themselves control?
The answer to this question perhaps lies in a dual wish to be a ‘good actor’ within the international community, combined with a genuine self-interest in preventing the unrestricted proliferation of all such technologies. Assuming that at least some genuine self-interest of states exists, the Wassenaar Arrangement becomes a useful norm-setter that reduces the cost of developing a state’s own set of norms in a way that is broadly agreed internationally. To be sure, this does not mean that economic interests do not also exist, but rather that these interests may sometimes be limited. Even in countries that implement export controls well, the significant majority of requests to export goods or services are accepted rather than rejected, a percentage that is likely to be even higher in countries with low-quality implementation of export controls. The purpose of export controls is not to stop trade completely, but rather to make the misuse of certain goods more difficult. Thus, sweeping assumptions of perfect compliance and complete

49  See www.gov.uk/government/publications/analysis-of-chinas-export-controls-against-international-standards/bridging-the-gap-analysis-of-chinas-export-controls-against-international-standards.
50  For policy recommendations, see www.saferworld.org.uk/resources/view-resource/873-expanding-and-sustaining-dialogue-between-china-and-the-wassenaar-arrangement.
51  Innokenty Pyetranker, ‘An Umbrella in a Hurricane: Cyber Technology and the December 2013 Amendment to the Wassenaar Arrangement’ (2015) 13 Nw. J. Tech. and Intell. Prop. 153.


adherence are part of the problem, a point acknowledged by Pyetranker, who went on to admit that ‘[h]aving an umbrella in a hurricane is better than having nothing at all’.52 In fact, given the strong economic incentives that oppose it and its weak institutional framework, the Wassenaar Arrangement has been remarkably successful in spreading the norms it sets.
In the case of China and India, the reasons behind adherence to the Wassenaar regime differ, but both lead to significant levels of compliance, clearly greater than zero. Both countries have been put under pressure, by other countries around the world and by civil society, to improve their export controls, and this pressure seems to be bearing fruit. However, pressure alone is clearly not enough, and some degree of self-interest in preventing mass instability in international peace and security can also be assumed. In India’s case, this eventually led to the country becoming a Wassenaar participating state, joining the Arrangement in December 2017. China has not gone this far, but is clearly making additional efforts to improve the state of export controls in the country. While neither country seems willing to call its fundamental economic or national security interests into question in order to do so, there does seem to be a basic willingness to limit technology exports and to participate actively in a multilateral control regime, even though until very recently neither country was formally part of it.

4.  ONGOING CHALLENGES IN EXPORT CONTROL REGIMES OF SURVEILLANCE TECHNOLOGIES

Beyond questions of general implementation, there are numerous other challenges related
to the regulation of the trade in surveillance technologies through export controls. These
challenges can also be seen as a roadmap for relevant next steps in the debate. If export
controls are to ensure their international legitimacy and effectiveness, some additional
considerations may be helpful.

4.1  Human Security versus Human Rights

One of the most notable shifts within the debate about export controls is from human
security to human rights. In order to understand where this shift comes from, however,
it is first important to understand the development of the human security concept.53 A
narrative focused on security alone can have adverse consequences on human rights when
it leads to states altering laws and upgrading capabilities that allow them to interfere with
the freedom of expression and the right to privacy. A different narrative that has gained
attention among academics and practitioners alike is that of human security. According
to Simon Dalby, the concept of human security has four key characteristics. First, much
like human rights, it is a universal concept without geographical restrictions. Second, it
understands various aspects of security as mutually interdependent. Third, human secu-
rity is closely related to the notion of risk, and operates through the logic of prevention.

52
  Ibid. 180.
53
  Parts of this section were developed with support from Thomas Behrndt.



Digital technologies, human rights and global trade?  313

Fourth, human security introduces a narrative shift of the referent object of security from
the state to the individual level, making it about people.54
However, human security has also been heavily criticized as a vehicle for a securitization
of human rights narratives.55 These concerns are shared by many in the civil society sector,
who continue to feel uncomfortable with the term. ‘Despite such conceptual difficulties,
many NGOs have actively referenced human security language and principles in a wide
range of campaigns.’56
Notably, the human security justification was at the core of the European Commission's
review of export control policy in 2014, in which the Commission proposed to shape its
entire export control policy around the concept of human security, suggesting that:
'The Commission will consider evolving towards a "human security" approach recognising
that security and human rights are inextricably interlinked'.57 However, in the final
Commission Proposal for an export control regulation in 2016, human security is nowhere
to be found.58 This is in no small part due to pressure from civil society to emphasize
human rights rather than human security, and to pressure from EU Member States wary
of a vague and unspecific concept. It seems that both civil society and the EU have
learnt from the previous extensive usage of the concept of human security and, in light
of the widespread criticism,59 decided to focus on human rights instead.
Interestingly, this criticism in many ways mirrors similar concerns that exist about
the term ‘cyber’.60 Like human security, which attempts to shift the referent object of
security from the state to human beings, cybersecurity attempts to expand the concept of
security beyond security of devices to also encompass human beings, states and societies.
Also like cybersecurity, human security carries a serious danger of securitizing all
objects it touches and is only able to function ‘within the logic of militarism from above’.61
Finally, there is the not entirely unreasonable claim that the concepts of cybersecurity
and human security alike have been co-opted and institutionalized to a considerable degree, as
a result of which both should be treated with considerable caution with regard to potential
effects and side-effects.62 Both have become so broad and vague that they are increasingly
indeterminate, as it is possible to justify a vast number of potential actions within their
scope. Happily, many of the most recent export control regimes in the EU and under

54
  Simon Dalby, Geopolitical Change and Contemporary Security Studies: Contextualizing the
Human Security Agenda (Institute of International Relations, University of British Columbia, 2000).
55
  S. Watson, ‘The “Human” as Referent Object?: Humanitarianism as Securitization’ (2011)
42(1) Security Dialogue (March) 3–20, available at https://doi.org/10.1177/0967010610393549.
56
  Bromley, Cooper and Holtom, ‘The UN Arms Trade Treaty’, n. 1 above.
57
  See http://trade.ec.europa.eu/doclib/docs/2014/april/tradoc_152446.pdf.
58
  See http://eur-lex.europa.eu/resource.html?uri=cellar:1b8f930e-8648-11e6-b076-01aa75ed71a1.0013.02/DOC_1&format=PDF.
59
  See e.g., David Chandler and Nik Hynek (eds), Critical Perspectives on Human Security:
Rethinking Emancipation and Power in International Relations (Routledge, 2010).
60
  Ben Wagner and Kilian Vieth, ‘Was Macht Cyber? Epistemologie Und Funktionslogik von
Cyber’ (2016) 9(2) Zeitschrift Für Aussen- Und Sicherheitspolitik 213–22.
61
  Mandy Turner, Neil Cooper and Michael Pugh, ‘Institutionalised and Co-Opted: Why
Human Security has Lost Its Way’ in David Chandler and Nik Hynek (eds), Critical Perspectives
on Human Security: Rethinking Emancipation and Power in International Relations (Routledge,
2010) 87.
62
  Ibid.; Wagner and Vieth, ‘Was Macht Cyber?’, n. 60 above.




the Wassenaar Arrangement recognize this, as a result of which the concept of human
security is increasingly being side-lined in favour of human rights.

4.2  Whither ‘Crypto’ in Export Controls?

The 'historic sin' of technology regulation through export controls is the set of measures
taken to regulate cryptography in the mid-1990s. As part of what were termed the
'crypto-wars', numerous steps were taken to limit the spread of encryption technologies,
with export controls on cryptography and publicly enforced key escrow mechanisms
being two of the key instruments of restriction.63
measures were not only highly harmful to the development and spread of cryptography
at the time, they had negative effects even decades later. This is because some of the
cryptographic technologies developed in a manner to allow them to be exported in the
1990s were still in use several decades later, making it very easy for these ‘export grade’
implementations to be attacked technically.64
The resulting attacks may not have surprised some companies, which freely admit
that they wouldn't 'touch EXPORT-grade cryptography with a 20ft stick'.65 They do,
however, exemplify how regulations on cryptography served to make communications on
the Internet insecure. This also had a negative effect on human rights, as cryptography is
a key technology to safeguard many important human rights.66 This is because key rights
such as privacy, free expression or even human dignity require a technical foundation. One
of the key components of this technical foundation is cryptography, as it allows human
beings to have access to a protected space in which their thoughts, ideas and interactions
are genuinely private. As more and more forms of human interaction and communication
transition online, they are increasingly subject to a wide variety of surveillance. Without
technical safeguards such as cryptography against this surveillance, it is essentially
impossible to have meaningful access to privacy and other key human rights in a digital age.
Notably, it was the US Department of Commerce that began implementing regula-
tions of cryptography unilaterally in 1996. By contrast, the ‘OECD Guidelines on
Cryptography Policy and the European Commission expressed strong support for the
unrestricted development of encryption products and services’.67 Similar national state-
ments of support for cryptography without regulation were made by Canada, Ireland,
Finland and France,68 suggesting strong international opposition. Despite considerable

63
  Whitfield Diffie and Susan Landau, The Export of Cryptography in the 20th Century and the
21st (Sun Microsystems, 2001); EPIC, Cryptography and Liberty 1999: An International Survey of
Encryption Policy (1st edn, Washington, DC: EPIC, 1999).
64
  David Adrian et al., ‘Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice’ in
Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security
(ACM, 2015) 5–17.
65
  Filippo Valsorda, ‘Logjam: The Latest TLS Vulnerability Explained’, Cloudflare Blog, 20
May 2015, https://blog.cloudflare.com/logjam-the-latest-tls-vulnerability-explained/.
66
 EPIC, Cryptography and Liberty 1999, n. 63 above; Frank La Rue, Report of the Special
Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, A/
HRC/23/40 (Geneva: United Nations, 2013).
67
 EPIC, Cryptography and Liberty 1999, n. 63 above.
68
  Ibid.




international opposition, the Wassenaar Arrangement passed restrictions on 'encryption
software and hardware having keys longer than 56-bits'.69 To a considerable degree,
these encryption controls are still in force to this date, with the Wassenaar Arrangement
continuing to regulate ‘cryptography for data confidentiality’ having ‘in excess of 56 bits
of symmetric key length’.70
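To see why a 56-bit ceiling on symmetric key length is regarded as insecure, a back-of-the-envelope calculation helps. The sketch below is illustrative only: the assumed search rate of 10^12 keys per second is a hypothetical figure chosen for the example, not a measurement of any particular attacker's capability.

```python
# Back-of-the-envelope estimate of exhaustive key search times.
# The keys-per-second rate below is a hypothetical assumption for
# illustration, not a measurement of any real attacker's capability.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365


def expected_search_years(key_bits: int, keys_per_second: float = 1e12) -> float:
    """Expected time (in years) to find a key by trying half the keyspace."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / keys_per_second / SECONDS_PER_YEAR


for bits in (40, 56, 128):
    years = expected_search_years(bits)
    print(f"{bits}-bit symmetric key: ~{years:.3g} years expected search time")
```

On these assumptions a 56-bit key falls within hours, while a 128-bit key would take vastly longer than any practical timescale, which is one reason the 'export grade' ceilings described above left deployed systems exposed decades later.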
Having been the main proponent of regulating encryption at an international level
and of restricting internationally traded cryptography to 56-bits in 1998, the United States
then decided to liberalize its own national export controls in January
2000.71 Barely 13 months after it had convinced large parts of the rest of the world
to regulate cryptography in the Wassenaar Arrangement, it decided that it
would no longer apply the rules it had fought for. While this was 'seen by industry as a
victory’,72 as well as by civil society which had been fighting hard to lift these restrictions,
the victory was primarily limited to the United States. As the Wassenaar Arrangement
had not changed, it was used by numerous states in Europe and around the world as an
excuse to regulate cryptography.
This state of affairs was not challenged until the more recent European debates about
export controls on surveillance technologies, in which civil society, industry and
the European Parliament actively fought for the relaxation of restrictions on cryptography
within the 2016 revised export control regulation. Specifically, it was proposed to create
a European Union General Export Authorization (EU GEA) for cryptographic products,
thereby ensuring that export controls would not apply to them. The proposal also explicitly
aims 'to delete cryptography items from the relevant international control lists as soon as
possible’.73 While such controls are still part of the Wassenaar Arrangement, this move
represents a considerable step towards actually ending the regulation of cryptography
through export controls, not just in the United States but everywhere else in the world as
well.

4.3  Transparency, Participation, Governance

The post-Cold War expansion of export controls brought with it a considerable increase
in transparency in the way export controls are managed. In particular, the Wassenaar
Arrangement and the European Union made a strong effort to be as transparent as
possible in the way they managed their export control lists. This was a considerable step
for the post-COCOM regime Wassenaar, which had been accustomed to keeping the goods it
controlled secret. This sea-change meant that it was now acceptable to make
publicly available the dual-use list of controlled items, including key elements of
numerous controlled arms and parts of chemical and nuclear weapons whose export
was restricted.

69
  See http://gilc.org/crypto/crypto-survey-99.html.
70
  See www.wassenaar.org/wp-content/uploads/2017/12/2017_List_of_DU_Goods_and_Technologies_and_Munitions_List.pdf.
71
  Diffie and Landau, The Export of Cryptography in the 20th Century and the 21st, n. 63 above.
72
  Ibid.
73
  See https://marietjeschaake.eu/en/european-parliament-adopts-position-on-export-control-reform.




However, what may have seemed to be wide-ranging transparency in the early 1990s
for a post-Cold War regime is far from the rigorous standards of transparency expected
of public sector organizations in 2018. As time progresses, this increasingly makes the
Wassenaar Arrangement look like a Cold War dinosaur, with closed door meetings that
are impossible to attend and where neither minutes nor agendas nor lists of participants
are publicly available. In comparison even to other international organizations also focused
on security, like the Organization for Security and Co-operation in Europe (OSCE), the
Wassenaar Arrangement is far from a model of transparency. This is in part because the
organization has not yet embraced its new role of also dealing with non-military questions
of dual-use. As questions of dual-use technologies become more and more societally
relevant, they cannot be made behind closed doors by an organization that only publishes
the results of its deliberations. The same goes for the participation of non-state actors,
something that has become par for the course in many international organizations but is
still viewed as unimaginable within the context of the Wassenaar Arrangement.
While far from perfect models of participation and transparency, the OSCE, the United
Nations and even the International Telecommunication Union (ITU) demonstrate that it
is perfectly possible for international organizations to negotiate international agreements
transparently and integrate non-state actors into their deliberations. This is particularly
evident within the European Union, which is at pains to consult all relevant stakeholders,
and involve them in the process of assessing and developing new regulations in the area
of dual-use. For Wassenaar to function in the future it will need to become similarly
transparent and ensure it takes the participation of a broad set of stakeholders seriously.
This also extends to greater transparency in the export control licensing decisions of
Wassenaar member states. If the United Kingdom and Finland are capable of publish-
ing their licensing decisions on the Internet, there is no reason why Germany and India
cannot do the same. Greater transparency in actual export control licensing decisions
both increases the legitimacy of the export control decision-making process and shows
other Wassenaar member states that the state which is making the decisions transparent
takes its international commitments seriously.
Finally, providing public access to export control licensing decisions makes a more
accurate assessment of the overall quality of export control implementation possible.
Initial provisions for better information exchange between states
are present in the 2016 revision to the EU dual-use export control regulation, but far more
needs to be done to ensure actual transparency both in the EU and among Wassenaar
member states.

4.4  Expanding and Developing the Wassenaar Arrangement

While the EU and other regional organizations can doubtless be improved, the centrality
of the Wassenaar Arrangement in global export controls means that Wassenaar, more
than any other international agreement, defines the quality of the international export
control regime. This also means that considerable changes can and should be made within
the existing arrangement to improve the overall global export control regime.
The first and perhaps most important step is inviting new members to join the Wassenaar
Arrangement. India is an instructive example: in 2015 it was rumoured to be
interested in joining the Arrangement 'at some point', and by the time this chapter had



been completed in 2018 it had already joined the Wassenaar Arrangement. This swift
process is by no means typical, and Wassenaar member states, in the spirit of their block-
oriented Cold War COCOM past, often still see the Wassenaar Arrangement as their own
private club. This will need to change dramatically to ensure that other countries which are
leading exporters of dual-use goods, such as Israel or China, can join the club. In order to
do this, Wassenaar should both attempt more actively to include new member states and
improve the accession process, making it easier for countries whose regulations are
already completely or closely aligned, as is the case in Israel and China, to move
to full membership of the Arrangement. Membership of the Wassenaar Arrangement
should not be misused as a political bargaining chip – it is also an essential instrument
to safeguard international peace and stability and increasingly to protect human rights.

4.5  Expanding the Arms Trade Treaty to Include Dual-Use Items

In the long term, however, there is another mechanism that could well – in decades to
come – take over from the Wassenaar Arrangement as the central instrument of dual-use
export controls. The Arms Trade Treaty (ATT) has several key advantages over the
Wassenaar Arrangement which in the long term could make it a more feasible mechanism
for negotiating international export controls:

(1) the ATT has a far broader base of member states than Wassenaar, with 92 states
having ratified the treaty;
(2) the ATT is a legally binding international treaty rather than a non-binding arrange-
ment like Wassenaar;
(3) the ATT treaty organization is a UN framework organization, meaning that it exists
within a stable international framework and follows UN rules and procedures. This
inherently provides for greater transparency and access by non-state actors than is
the case in the Wassenaar Arrangement.

However, at present the ATT does not regulate dual-use goods but only conventional
arms. Extending the ATT to dual-use goods would require an additional protocol,
the development and ratification of which is likely to
take decades rather than years. At the same time, the fundamental value of such a long-
term project means that it seems perfectly plausible and achievable, if some Wassenaar
and ATT member states are willing to take a long-term perspective to ensure an effective
international export control regime. While this is, of course, a challenging long-term issue,
it also illustrates the time that it takes to develop institutions at an international level.

5.  CONCLUSION: NOT A PANACEA

While the future development of export controls is hard to predict, this chapter has
attempted to provide an overview of export control regimes over the last
two decades, with a focus on the role of human rights and digital technologies.
While many of the steps suggested to improve existing export control regimes are
likely to take considerable time to implement, they are anything but impossible and are




reflective of what remain relatively 'young' regimes (by international standards) which
only began to flourish and meaningfully consider human rights several decades ago. Only
by taking a very long-term perspective is it possible to effectively ensure the development
of international institutions in this area, as such changes take years or even decades to
agree on and implement.
Export controls are not a panacea for all of the human rights challenges associated with
modern technologies; there are many other measures states can and should take to
contribute to international peace and security and promote human rights.74 While they
may not be the obvious mechanism of choice, they have become a powerful way of insert-
ing human rights into international trade. Thus, export controls, when implemented cor-
rectly, can meaningfully contribute to limiting the spread of surveillance technologies.75
In an era in which states like to claim there is little they can do to safeguard human
rights online, it is possible to remind them that this is one of the key mechanisms they
do have control over. In reminding states of their human rights obligations, international
civil society should continue to be wary of the human security concept and its ability
to safeguard human rights. This does not mean, however, that it is not worth working
together with established institutional structures. If anything, the experience with export
controls suggests that support for human rights norms can come from the most unlikely
places and through mechanisms that are not immediately obvious.
If the urgency of the challenges associated with the Internet and human rights are taken
seriously, however, the mainstreaming of human rights across all domains becomes even
more important. No regulation, law or arrangement can claim to be separate from or
outside of the scope of these challenges, whether they are related to security, global trade
or foreign policy. This does not mean that institutional failure or conflicting interests do
not exist, but rather that the meaningful and systematic consideration of human rights
can and should happen at all levels. Only by acknowledging this need and systematically
addressing it is it possible to respond to the challenges of the Internet and human rights
in a sustainable and effective manner.

74
  For an overview, see Wagner, After the Arab Spring, n. 13 above.
75
  See Lasse Skou Andersen and Laura Dombernowsky, Danmark åbner for eksport af netovervågning til Kina [Denmark Opens Up for Exports of Internet Surveillance to China] (Information, 2016), available at www.information.dk/indland/2016/07/danmark-aabner-eksport-netovervaagning-kina.



16.  Policing ‘online radicalization’: the framing of
Europol’s Internet Referral Unit
Kilian Vieth

1.  INTRODUCTION

Europol’s Internet Referral Unit (EU IRU) makes recommendations to Internet industry
players for take-down of online content. This body monitors online material, assesses
it, and asks hosting platforms to remove content that Europol regards as inappropriate.
Basically, the unit tries to draw attention to activities that may breach the commercial
terms of service of social media platforms, not laws. The EU Internet Referral Unit was
created in the context of European counter-terrorism policy and began operations in
2015. The EU IRU is the case study of this chapter that will illustrate the greater context
of transnational law enforcement operations and the privatized governance of online
media. This chapter analyzes the discursive practices underlying the EU IRU, with the
aim of understanding the (in)security dynamics of online (counter-)radicalization and
assessing the related human rights issues.
This chapter engages with the framing of the EU IRU and examines why and how
filtering Internet content has become ‘normal’ practice in European counter-terrorism
policy. The core question is: How does Europol frame radicalization and Internet content?
To answer this question, I locate the EU IRU within the wider discourse on counter-radicalization
and contend that we need to better understand the meaning of radicalization as it is used
by professionals working in the field. This case study focuses on referral and take-down
practices and only parenthetically touches upon other forms of law enforcement opera-
tions on the Internet, such as open intelligence gathering and the investigation of criminal
networks on social media.
This chapter begins with a brief theoretical discussion of a practice-oriented approach to
security and counter-terrorism, which forms the starting point of my discussion of
countering (online) radicalization. The central concept – radicalization – is explained and criticized.
This discussion is linked to a consideration of the role of the Internet in countering violent
political extremism online. The subsequent policy analysis first scrutinizes the activities of
the EU IRU in great detail and then delineates how Europol frames its Internet referral
operations by focusing on radicalization and the role of technology. The main conclusion
is that the EU IRU is rooted in a flawed and ineffective understanding of radicalization.
Over-blocking of content, failing to provide paths to recourse, and a complete absence
of due process are just a few of the problems that arise when social media companies
act as arbiters of human rights.1 The former UN Special Rapporteur on Freedom
of Expression has also stressed the importance of official judicial intervention and

1
  See https://onlinecensorship.org/news-and-analysis (last accessed August 13, 2017). See
further, Rikke Frank Jørgensen, Chapter 17.

319
Kilian Vieth - 9781785367724
Downloaded from Elgar Online at 12/18/2020 12:51:07AM
via New York University

WAGNER_9781785367717_t.indd 319 13/12/2018 15:25



transparent procedures in the regulation of Internet content and emphasized that 'there
must be effective remedies for affected users, including the possibility of appeal’.2
Studies have also demonstrated the power of social media to influence human behavior
in various ways.3 Countering radicalization on the Internet raises vast ethical concerns due
to the enormous power leveraged by technology. A better understanding of the reasoning
behind the EU IRU is needed in order to address human rights concerns.
The EU IRU represents one specific case in the broader discourse on (counter‑)
radicalization in security politics. This unit is concerned with the emerging constellation
of public and private actors cooperating to regulate expression online in the name of
counter-terrorism. As such, the unit’s activities can be contextualized within the growing
debate on countering violent extremism online,4 focusing on one approach out of a broad
range of counter-radicalization measures.

2.  LOGIC OF COUNTERING RADICALIZATION

Over the past few decades, new conceptualizations of what security is and how it comes
about have emerged.5 For instance, the blurring of the divide between the external (for-
eign) and internal (domestic) dimensions of security has fundamentally reshaped the way
in which security is understood. Threat constructions converge towards global insecurities
and security professionals develop global interconnected networks – two developments
that significantly alter the notions of defense, war and policing.6 The European Police
Office (Europol) is one of the transnational security actors heavily involved in the chang-
ing field of security, both as a driver and a symbol of a changing security landscape. This
agency represents the transformation of statehood and the increasing interconnectedness
of European security production.7 Europol’s operations in ‘cyberspace’ transcend not

2
  Frank La Rue, Report of the Special Rapporteur on the Promotion and Protection of the Right to
Freedom of Opinion and Expression, A/HRC/17/27 (United Nations Human Rights Council, 2011)
21, available at www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.
3
  Robert M. Bond and others, ‘A 61-Million-Person Experiment in Social Influence and
Political Mobilization’ (2012) 489 Nature 295; Adam D.I. Kramer, Jamie E. Guillory and Jeffrey
T. Hancock, ‘Experimental Evidence of Massive-Scale Emotional Contagion Through Social
Networks’ (2014) 111 Proceedings of the National Academy of Sciences 8788.
4
  See www.voxpol.eu/.
5
  Didier Bigo, ‘When Two Become One’ in Morton Kelstrup and Michael Charles Williams
(eds), International Relations Theory and the Politics of European Integration: Power, Security and
Community (Routledge, 2000); Didier Bigo, Polices en réseaux: l'expérience européenne (Presses
de la Fondation Nationale des Sciences Politiques, 1996); Christopher Daase, Der Erweiterte
Sicherheitsbegriff. Sicherheitskultur Im Wandel, Working Paper 1/2010 (2010); Barry Buzan, Ole
Wæver and Jaap de Wilde, Security: A New Framework for Analysis (Lynne Rienner Publishers,
1998); CASE Collective, ‘Critical Approaches to Security in Europe: A Networked Manifesto’
(2006) 37 Security Dialogue 443.
6
  Didier Bigo, ‘Globalized (In)Security: The Field and the Ban-Opticon’ in Didier Bigo and
Anastassia Tsoukala (eds), Terror, Insecurity and Liberty: Illiberal Practices of Liberal Regimes
After 9/11 (1st edn, Routledge Chapman & Hall, 2008).
7
  Didier Bigo and others, ‘Mapping the Field of the EU Internal Security Agencies’ (2007) The
Field of the EU Internal Security Agencies 5.



Policing ‘online radicalization’  321

only the borders between EU Member States, but also the border between the EU and
the rest of the world.8
Ensuring the security of a certain group always implies the need for security from some
outside threat. In the face of transforming security architectures, the promise of security
necessarily entails the production of insecurity for the out-group – a dilemma inherent to
the concept of security. Threats are socially constructed through ‘boundary-creating pro-
cesses’9 that use symbols and social enemies to label deviant out-groups. Many eloquent
metaphors have been used to describe the interplay between security and discrimination,
ranging from David Lyon’s famous ‘social sorting’, to the separation of the ‘risky’ from
the ‘at risk’.10 These metaphors all try to capture the visible and invisible political violence
that is inscribed in the many practices of (in)securitization.
I will therefore use ‘(in)security’ as an inherently ambivalent term in order to avoid
ontological security frames which employ ‘objective’ understandings of security. My
inquiry is based on a relational conception of (in)security, following Didier Bigo, whereby
security and insecurity are produced simultaneously through discursive practices. This in
turn assumes that in order to understand (in)security, we do not need new definitions of
security, but rather a better understanding of (in)securitization processes.

2.1  Analyzing Radicalization: Concept and Critique

Radicalization has become the primary narrative used to make sense of terrorism and
design counter-terrorism policy. The EU strategy on countering terrorism contains a
subprogram on combating radicalization, violent extremism, and recruitment to ter-
rorism.11 Arun Kundnani argues that, immediately after the 9/11 attacks, terrorism was
mainly identified as the result of bare hatred and fanaticism. Numerous commentators
regarded requests to investigate the root causes of the attacks as attempts to justify
the mass murder of innocent human beings. The terrorists were framed as purely evil,
incorrigible individuals whose actions had no political context. This led to the conclusion
that ‘only overwhelming force would be successful against this new enemy’, which led to
the war in Afghanistan and Iraq and the establishment of Guantánamo.12 This counter-
terrorism policy founded on violent retaliation turned out to be deficient. The subsequent

8  For an in-depth discussion of the 'cyberspace' metaphor, see Mark Graham, Geography/Internet: Ethereal Alternate Dimensions of Cyberspace or Grounded Augmented Realities? (Social Science Research Network, 2012); and Lene Hansen and Helen Nissenbaum, 'Digital Disaster, Cyber Security, and the Copenhagen School' (2009) 53 International Studies Quarterly 1155.
9  Anastassia Tsoukala, 'Boundary-Creating Processes and the Social Construction of Threat' (2008) 33 Alternatives: Global, Local, Political 137.
10  Jenny Edkins and Véronique Pin-Fat, 'Introduction: Life, Power, Resistance' in Jenny Edkins, Véronique Pin-Fat and Michael J. Shapiro (eds), Sovereign Lives: Power in Global Politics (Routledge, 2004).
11  Council of the EU, Revised EU Strategy for Combating Radicalisation and Recruitment to Terrorism, Strategy Paper 5643/5/14 (General Secretariat of the Council, 2014), available at http://data.consilium.europa.eu/doc/document/ST-9956-2014-INIT/en/pdf.
12  Arun Kundnani, 'Radicalisation: The Journey of a Concept' in Christopher Baker-Beall, Charlotte Heath-Kelly and Lee Jarvis (eds), Counter-Radicalisation: Critical Perspectives (1st edn, Routledge, 2014) 14.

attacks in Madrid and London and the failing military campaign in Iraq helped establish radicalization as a new way to guide counter-terrorism policy. Radicalization became the vehicle that made it possible to talk about the causes of terrorism, broadening counter-terrorism efforts beyond the 'killing and capturing' approach and shifting priorities towards 'winning hearts and minds'.13 However, radicalization emphasizes the individual and de-emphasizes the social and political circumstances, which often makes the radical look like a rebel without a cause.14
The upswing of radicalization as a central concept in global (in)security debates was
mirrored by an increase in academic research on radicalization.15 Although radicalization
is by no means a new concept, the increase in research on radicalization since around
2004 is remarkable. Over the last ten years, most of the literature on radicalization has
been published in the fields of security and counter-terrorism studies.16 There is an older
strand of social movement research which is associated with scholars such as Crenshaw17
and della Porta,18 who highlighted how terrorist groups can develop out of wider social
movements by exploiting organizational dynamics. These researchers studied the relation-
ships between terrorism, political opportunity structures, and contextual factors such as
poverty, regime types and disenfranchisement.
How is radicalization understood today? Terrorism studies literature acknowledges that
there is no universally accepted definition of the term; nevertheless, there seems to be a
general consensus that radicalization poses a threat.19 Generally speaking, the concept of
radicalization assumes that individuals can be radicalized and that this process may lead
to violence:

[T]he terms ‘radicalisation’, ‘radicalise’ and ‘radical’ are employed in a way that suggests they
are self-evident concepts. Even worse, the terms are often used in a circular fashion: a radical is
someone who has radical ideas or who has been radicalized.20

This in turn gives rise to criticism that radicalization is an ‘essentially contested concept’21
because it is inherently political and incorporates ideological elements that make it

13  Ibid. 15.
14  Mark Sedgwick, 'The Concept of Radicalization as a Source of Confusion' (2010) 22 Terrorism and Political Violence 479, at 480ff.
15  Kundnani, 'Radicalisation', n. 12 above, 18.
16  Charlotte Heath-Kelly, Christopher Baker-Beall and Lee Jarvis, 'Introduction' in Christopher Baker-Beall, Charlotte Heath-Kelly and Lee Jarvis (eds), Counter-Radicalisation: Critical Perspectives (1st edn, Routledge, 2014) 1.
17  Martha Crenshaw, 'The Causes of Terrorism' (1981) 13 Comparative Politics 379.
18  Donatella Della Porta, Social Movements, Political Violence, and the State: A Comparative Analysis of Italy and Germany (1st edn, Cambridge University Press, 1992).
19  Charlotte Heath-Kelly, 'Counter-Terrorism and the Counterfactual: Producing the "Radicalisation" Discourse and the UK PREVENT Strategy' (2013) 15 British Journal of Politics and International Relations 394, at 398; Anthony Richards, 'The Problem with "Radicalization": The Remit of "Prevent" and the Need to Refocus on Terrorism in the UK' (2011) 87 International Affairs 143, at 143.
20  Minerva Nasser-Eddine and others, Countering Violent Extremism (CVE) Literature Review (DTIC, 2011) 13, available at http://oai.dtic.mil/oai/oai?verb=getRecord&metadataPrefix=html&identifier=ADA543686.
21  Heath-Kelly, Baker-Beall and Jarvis, 'Introduction', n. 16 above, 6.

impossible to define based on empirical evidence – no 'neutral' definition of radicalism is possible.
In the counter-terrorism discourse, the initial conception of radicalism has been
replaced by the idea that radicalization involves a shift from moderation to extremism.22
Addressing radicalization is generally presented as the wiser, more nuanced and thought-
ful approach to counter-terrorism, and is readily embraced by policy-makers and law
enforcement officials. Governments are eager to find out what turns a person into a terror-
ist, and radicalization – stemming from the Latin word radix for root – appears to provide
exactly the right framework for obtaining this information. Neumann distinguishes
between two main types of radicalization, with different consequences for policy frames.
He speaks of cognitive radicalization, which focuses on radical thinking, holding radical
beliefs and following radical ideologies. He also identifies behavioral radicalization, which
considers engaging in violent action to be the relevant constitutive end-point that defines
radicalism. As such, behavioral radicalization stipulates that actions, not thoughts, define
the state of being radical.23
Monitoring and censoring online content can only be justified as a means of tackling cognitive radicalization. It requires a frame of expected consequences24 that draws a direct connection between radical speech and radical action. After all, without assuming causality between radical thinking and radical action, filtering and removing material from the Internet makes no sense.
The biggest issue with tackling cognitive radicalization is the inherent chilling effect this
approach has on free speech.25 If radicalization starts with holding radical beliefs, this may
place strong limits on the fundamental right to freedom of expression. Policing cognitive
radicalism – that is, radical ideas – falls outside of the traditional realm of liberal legal
frameworks that (mostly) criminalize actions, not ideas. Being radical is not a crime in
liberal democracies – or, at least, it should not be, because it is vital to democracy. Radical
thinking has often led to positive change, or, as Rik Coolsaet puts it: ‘Most democratic
states would not exist but for some radicals who took it upon themselves to organise
the revolt against a foreign yoke or an autocratic regime’.26 Radicalism is essentially a
question of standpoint; many ideas are radical at some point in history, but become
mainstream over time (and vice versa). Women’s right to vote used to be a radical claim
a century ago, but is uncontested today in Western society. The civil rights movement in
the United States also fought for ‘radical’ demands, but is now celebrated for the positive

22  Jonathan Githens-Mazer, 'The Rhetoric and Reality: Radicalization and Political Discourse' (2012) 33 International Political Science Review/Revue internationale de science politique 556, at 556.
23  Peter Neumann, 'The Trouble with Radicalization' (2013) 89 International Affairs 873, at 874ff.
24  Clara Eroukhmanoff, 'The Remote Securitisation of Islam in the US Post-9/11: Euphemisation, Metaphors and the "Logic of Expected Consequences" in Counter-Radicalisation Discourse' (2015) 8 Critical Studies on Terrorism 246.
25  Clara Eroukhmanoff, Counter-Radicalisation and Its Impact on Freedom of Expression (2016) 2, available at www.free-expression.group.cam.ac.uk/pdfs/workshop-paper-clara-eroukhmanoff-draft.pdf.
26  Rik Coolsaet, 'Epilogue: Terrorism and Radicalisation: What Do We Now Know?' in Rik Coolsaet (ed.), Jihadi Terrorism and the Radicalisation Challenge: European and American Experiences (Ashgate Publishing Ltd, 2011) 260.

social change it brought about. In many ways, descriptions of what is ‘extreme’ or ‘radical’
tell us more about the dominant mainstream of the society in which these descriptions
are assigned than about the groups deemed radical. Being radical or extreme is always a
relational term that entails narratives of identity. Following Said’s theory of orientalism,
constructing an identity depends on the depiction of a negative ‘other’ that serves as a
mirror and antipode of the self.27 Radicalism is one of the frames that shapes counter-
terrorism policy by, often implicitly, defining a certain social identity. Creating otherness
traces the problem to an obscure socialization process that must have happened outside
of ‘mainstream’ society, thereby conveniently absolving the hegemonic part of society of
responsibility and taking Western nations out of the equation.

2.2  Countering Extremism Online

A policy based on the vague and flawed concept of radicalization will at best be ineffective and, at worst, harmful. Developing appropriate instruments to effectively tackle online radicalization seems unachievable if we do not know what 'online radicalization' is.
There is no doubt that terrorists use the Internet, just like virtually everybody else in
today’s society. Terrorists are highly integrated with the Internet; they depend on it to
the same extent that non-terrorists do. Conceptualizing the Internet as a separate, somehow distinct space is as much a flaw in counter-terrorism policy as it would be in any other policy field. The Internet shapes
human behavior and is shaped by human practices.28 Objects – and certainly technologies
such as the Internet – are not objective or mere facts, they are much more heterogeneous
and complex than that, and cannot be reduced to simple materiality.29
New technologies play an important role in pushing counter-terrorism policy from investigation to anticipation. The imperative of technological progress demands the relentless adoption of digital means for (in)security purposes. Keeping up with technological development is framed as indispensable, pushing many legitimate concerns aside.30 It is often overlooked, for example, that the drive for greater efficiency in (in)security is driven by human actions and is in no way inevitable.
Research on the role of the Internet in terrorism has yielded long lists of ways in which
the Internet may be used to support or enable terrorist activity.31 Spreading propaganda

27  Edward W. Said, Orientalism: Western Conceptions of the Orient (25th Anniversary edn with 1995 Afterword, Penguin Classics, 2012).
28  Wiebe E. Bijker and others (eds), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology (Anniversary edn, MIT Press, 2012); Philip Brey, 'Artifacts as Social Agents' in H. Harbers (ed.), Inside the Politics of Technology: Agency and Normativity in the Co-production of Technology and Society (Amsterdam University Press, 2005).
29  See Bruno Latour, Von der Realpolitik zur Dingpolitik oder Wie man Dinge öffentlich macht [From Realpolitik to Dingpolitik, or How to Make Things Public] (Merve, 2005) 26ff.
30  Jef Huysmans, Security Unbound: Enacting Democratic Limits (1st edn, Routledge, 2014) 149ff.
31  Gabriel Weimann, Www.Terror.Net: How Modern Terrorism Uses the Internet (United States Institute of Peace (USIP), 2004); Fred Cohen, 'Terrorism and Cyberspace' (2002) Network Security 17; Maura Conway, 'Terrorist "Use" of the Internet and Fighting Back' (2006) 19 Information and Security 34.

is the most prevalent concern in countering radicalization online, because the Internet
gives terrorists unprecedented control over the means of delivering their messages: ‘It
considerably extends their ability to shape how different target audiences perceive them
and to manipulate not only their own image, but also the image of their enemies’.32
Unsurprisingly, all kinds of online media channels and media formats are used, from
videos and online magazines, to popular websites and social media platforms. This is
also done for recruitment and training purposes, which entails tactics such as mobilizing specific individuals online and disseminating instructional materials.
What is the specific relationship between radicalization and the Internet? Related
research typically concludes that digital technology is an incubator that accelerates
the development of, but does not solely initiate, radicalism. Would-be radicals seek
out extremist Internet content, not vice versa, and consuming propaganda on the
Internet does not necessarily turn anyone into a terrorist.33 Most scholars agree that
while the Internet may mediate and facilitate terrorism and extremism, it does not
cause these.34
Online radicalization is used as an incredibly broad concept in large swathes of research
published on the topic. Gill and his colleagues point out that ‘a wide range of virtual
behaviours is subsumed into the category of online radicalisation’.35 In their meta-study
of studies on online radicalization, they criticize the lack of empirical foundation of
most of the literature on this issue. They do, however, cite studies that stand out for their empirical thoroughness, such as those conducted by von Behr et al.36 and Gill and Corner.37 These studies found that the Internet functions as a key source of information and means of communication (as it presumably does for most people). Furthermore, opportunities to confirm existing beliefs are greater on the Internet than in offline interaction. However, these studies also conclude that the Internet does not necessarily speed up radicalization and did not replace offline gatherings in the cases examined. Digital communication supplements in-person meetings; it does not substitute for them. Since the Internet

32  Conway, 'Terrorist "Use" of the Internet and Fighting Back', n. 31 above, 5.
33  Ghaffar Hussain and Erin Marie Saltman, Jihad Trending: A Comprehensive Analysis of Online Extremism and How to Counter It (Quilliam, 2014), available at www.quilliamfoundation.org/wp/wp-content/uploads/publications/free/jihad-trending-quilliam-report.pdf; Jamie Bartlett and Ali Fisher, How to Beat the Media Mujahideen (5 February 2015), available at http://quarterly.demos.co.uk/article/issue-5/how-to-beat-the-media-mujahideen/; Philip Howard, The Myth of Violent Online Extremism (6 February 2015), available at http://yalebooksblog.co.uk/2015/02/06/myth-violent-online-extremism/.
34  Ian Brown and Josh Cowls, Check the Web: Assessing the Ethics and Politics of Policing the Internet for Extremist Material (Oxford Internet Institute/VOX-Pol Network of Excellence, 2015) 60, available at http://voxpol.eu/wp-content/uploads/2015/11/VOX-Pol_Ethics_Politics_PUBLISHED.pdf.
35  Paul Gill and others, What are the Roles of the Internet in Terrorism? Measuring Online Behaviours of Convicted UK Terrorists (2015) 5f, available at http://doras.dcu.ie/20897/.
36  Ines von Behr and others, Radicalisation in the Digital Era: The Use of the Internet in 15 Cases of Terrorism and Extremism (RAND Corporation, 2013), available at www.rand.org/content/dam/rand/pubs/research_reports/RR400/RR453/RAND_RR453.pdf.
37  Paul Gill and Emily Corner, 'Lone-Actor Terrorist Use of the Internet and Behavioural Correlates' in Lee Jarvis, Stuart MacDonald and Thomas M. Chen (eds), Terrorism Online: Politics, Law and Technology (Routledge, 2015).

pervades modern life, it makes no sense to frame it as a separate space in which radicaliza-
tion occurs.
Broadly speaking, there are two political strategies to engage with terrorists’ use of the
Internet, which can be labeled censorship and counter-speech. The former tries to filter
and remove unwanted Internet content, while the latter attempts to fight extremist speech
with specifically designed counter-messages and opposing communications campaigns.
Saltman and Russell frame the blocking and filtering of online content as ‘negative
measures’ and techniques such as developing contrasting narratives as ‘positive measures’
of online counter-terrorism.38
In structural terms, ‘negative’ censorship measures have two essential flaws. First, they
are ‘beset by issues of control due to the open, fluid and constantly mutating nature of
contemporary online media’.39 The technological design of the Internet undercuts the
effectiveness of censorship approaches, since content can be copied and reproduced at
practically no cost and the decentralized design of the infrastructure frustrates centralized
control. The nature of the Internet makes total message control impossible and counter-
productive, as users are driven to anonymous networks and the ‘dark web’.40 This holds
less true, though, for privately controlled platforms such as Facebook or Twitter, which
run on a more centralized server infrastructure.
Secondly, ‘filter and remove’ strategies may unintentionally send a reaffirming message
to alleged extremists, implying that said extremists are in the right. If arguments are not
challenged with counter-arguments, but simply taken down, this potentially stabilizes the
extremist’s worldview which is often based on conspiracy ideologies. Censoring speech
from a position of power may reinforce the ideas held by extremists, who see themselves
as ‘radical’ outsiders trapped by a conspiracy.41

3.  CASE STUDY: THE EU INTERNET REFERRAL UNIT

The following section examines what Europol's Internet Referral Unit does to counter terrorism and violent extremism on the Internet. The EU IRU is a working
unit at Europol that officially began operations on July 1, 2015. The creation of the
EU IRU was requested by the Justice and Home Affairs (JHA) Council on March 12,
2015, roughly two months after the attacks in Paris in January 2015, which targeted the
staff of the satirical magazine Charlie Hebdo and customers of the Jewish supermarket
Hypercacher.

38  Erin Marie Saltman and Jonathan Russell, The Role of Prevent in Countering Online Extremism, White Paper (Quilliam, 2014) 11, available at www.quilliamfoundation.org/wp/wp-content/uploads/publications/free/white-paper-the-role-of-prevent-in-countering-online-extremism.pdf.
39  Nasser-Eddine and others, Countering Violent Extremism (CVE) Literature Review, n. 20 above, 50.
40  See Michael Seemann, Das neue Spiel: Strategien für die Welt nach dem digitalen Kontrollverlust [The New Game: Strategies for the World After the Digital Loss of Control] (1st edn, iRights Media, 2014).
41  Lella Nouri and Andrew Whiting, 'Prevent and the Internet' in Christopher Baker-Beall, Charlotte Heath-Kelly and Lee Jarvis (eds), Counter-Radicalisation: Critical Perspectives (1st edn, Routledge, 2014) 184.

After its establishment, the new unit operated in a pilot phase for six months, and has
been running at full operational capability since July 2016.42 In November 2015, the EU
IRU’s staff consisted of nine employees; in April 2016, the total number of staff had
increased to 17, and was expected to rise to 21 by the end of 2016.43
The EU IRU builds upon a prior Europol project called Check the Web (CTW), which
was established in 2007 following a Member State initiative led by Germany. The goal
of the CTW project was to collect and analyze terrorist propaganda material on the
Internet. The project team was composed of counter-terrorism experts and linguists
who searched for relevant material and stored it in a database accessible to all Member
States. Europol says that this database contains about 10,000 electronic documents and
individuals.44 The Check the Web project merely focused on monitoring and analysis;
referral and take-down initiatives were not included. A second project already in place on
the European level, CleanIT, consisted of a dialog process held between the public and
the private sector that drafted ‘general principles’ and ‘best practices’ for how to fight
terrorism online.45 These initiatives were criticized for their overly broad understanding
of what constitutes terrorism and undesirable online content, exceeding the definitions
stipulated by law.46
The United Kingdom was the first EU Member State to implement law enforcement
strategies targeting radicalization on the web, most prominently the ‘PREVENT’ strand of
its counter-terrorism approach, which was put forward in 2006. Documents suggest that
the main model for the EU IRU was the so-called ‘Counter-Terrorism Internet Referral
Unit’ (CTIRU) in the United Kingdom, which has been in operation since February
2010.47 This special unit is in charge of flagging and, if necessary, removing material from
the Internet. The UK’s CTIRU works together with Europol and actively supports the
development of the EU IRU. The second Member State leading the European effort on
policing Internet content to combat terrorism is the Netherlands. The Dutch government
led the CleanIT project and started to develop informal social media policies and engage
in a dialog with the Internet industry relatively early on.48

42  Europol, Enhancing Counter Terrorism Capabilities at EU Level: European Counter Terrorism Centre (ECTC) at Europol and Counter Terrorism Related Information Sharing (European Police Office, 2015), available at www.statewatch.org/news/2015/nov/eu-council-europol-ECTC-14244-15.pdf.
43  Europol, EU Internet Referral Unit: Year One Report Highlights (European Police Office, 2016) 4, available at www.europol.europa.eu/content/eu-internet-referral-unit-year-one-report-highlights.
44  Europol, EU Internet Referral Unit at Europol: Concept Note (European Police Office, 2015) 3, available at www.statewatch.org/news/2015/may/eu-council-internet-referral-unit-7266-15.pdf.
45  See www.cleanitproject.eu/.
46  Didier Bigo and others, Preventing and Countering Youth Radicalisation in the EU, Study for the LIBE Committee (European Parliament, 2014) 21, available at www.europarl.europa.eu/RegData/etudes/etudes/join/2014/509977/IPOL-LIBE_ET%282014%29509977_EN.pdf; EDRi, Clean IT: Leak Shows Plans for Large-Scale, Undemocratic Surveillance of All Communications (21 September 2012), available at https://edri.org/cleanit/.
47  EU Counter-Terrorism Coordinator, EU CTC Input for the Preparation of the Informal Meeting of Justice and Home Affairs Ministers in Riga on 29 January 2015, DS 1035/15 (General Secretariat of the Council, 2015) 3, available at www.statewatch.org/news/2015/jan/eu-council-ct-ds-1035-15.pdf.
48  Ibid. 2.

The push for Internet referral capabilities at Europol began shortly after the attacks
in Paris in January 2015. The Riga Joint Statement of the Council on January 30, 2015
mentioned that:

[t]he Internet plays a significant role in radicalization . . . we must strengthen our efforts to
cooperate closely with the industry and to encourage them to remove terrorist and extremist
content from their platforms.49

After the subsequent attacks in Paris in November 2015, the European Counter-Terrorism
Centre (ECTC) was established within Europol, also following a decision made by the
JHA Council of Ministers. The creation of the ECTC was an ad hoc political move to
demonstrate action after the attacks. The EU IRU became part of the ECTC, but remains
a relatively independent working unit within Europol. This is also due to the double man-
date of the EU IRU, which covers counter-terrorism as well as ‘fighting traffickers’.50 The
dual strategic focus of the EU IRU encompasses working on ‘illegal immigration’, which
became a paramount topic in 2015 and resulted in the creation of a European Migrant Smuggling Centre at Europol: 'The expertise of the EU Internet Referral Unit will also
be used to identify and refer online content relating to the provision of illegal migration
services’.51 The following analysis of the EU IRU’s working mandate will illuminate this
dual function of countering both ‘terrorism’ and ‘illegal immigration’.

3.1  Casing the EU Internet Referral Unit

Identifying and taking down online material is the core mission of the EU IRU: the unit
aims to reduce the amount of ‘terrorist material’ on the Internet. According to official
documents, the unit’s mandate comprises the following tasks:52

● to coordinate and share the identification tasks (flagging) of terrorist and violent
extremist online content with relevant partners;
● to carry out and support referrals quickly, efficiently and effectively, in close
cooperation with the industry;
● to support competent authorities, by providing strategic analysis and operational
analysis;
● to act as a European Centre of Excellence for the above tasks.

This means that the staff of the EU IRU searches the Internet for what it deems to be ‘ter-
rorist and violent extremist online content’. Identifying relevant content essentially means
carrying out web surveillance and storing the identified content in a central IRU database.

49  European Council, Special Meeting of the European Council: Statement, EUCO 18/15 (General Secretariat of the Council, 2015) 5, available at http://ec.europa.eu/dorie/fileDownload.do?docId=2098892&cardId=2098891.
50  Ibid. 2.
51  Europol, Europol Strategy 2016–2020, Strategy Paper 796794v19B (European Police Office, 2015) 9, available at www.europol.europa.eu/content/europol-strategy-2016-2020.
52  See Europol, EU Internet Referral Unit at Europol: Concept Note, n. 44 above.

The online content is then analyzed by counter-terrorism experts and, if it is assessed to be relevant, sent to the private company where it is hosted. Given its limited resources
and relatively small team, the EU IRU mostly targets large platforms and key points of
dissemination of online content. Most requests for taking down content originate from
the EU IRU’s own surveillance activity.53 Unlike the UK CTIRU, the EU IRU does not
receive information about relevant content from the public, as its legal basis does not allow
for this.54 Another way to identify content is through so-called ‘support referrals’. These
involve Member States sharing a piece of content that they have detected with Europol;
the EU IRU then organizes the referral to the hosting company. This coordination and
support role is provided as a service to Member States who do not have national Internet
referral units.
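The two referral routes described above can be summarized in a short sketch. This is purely illustrative: the function names, field names, and category labels below are my own invention, not Europol's, and the actual assessment criteria are not public.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContentItem:
    """A piece of online content under review (hypothetical model)."""
    url: str
    source: str                       # "own_monitoring" or "member_state" (support referral)
    assessment: Optional[str] = None  # e.g. "terrorist", "violent_extremist", or None

def process(item: ContentItem, database: List[ContentItem]) -> str:
    # Identified content is first stored in the central IRU database.
    database.append(item)
    # Analysts then assess it; content that is not flagged goes no further.
    if item.assessment is None:
        return "no_action"
    # Flagged content is referred to the hosting company, which decides
    # voluntarily, under its own terms of service, whether to remove it.
    return "referred_to_host"

db: List[ContentItem] = []
item = ContentItem("https://example.org/post", "member_state", "violent_extremist")
print(process(item, db))  # referred_to_host
```

The sketch makes one structural point visible: at no step does the unit itself remove anything; the final decision sits with the private platform.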
The online content targeted by the EU IRU could be anything, such as text, image,
or video material. It also comprises entire social media accounts or profiles. In order
to work most effectively, the EU IRU tries to focus on social media accounts, especially
Twitter accounts, that have a wide reach, provide translations into EU languages, serve as dissemination hubs, and function as high-profile influencers.55 Which platforms
are mainly targeted? Europol states that the ‘EU IRU refers Internet content across the
following social media platforms, in ascending order of volume: Facebook, SendVid,
Vimeo, Google Drive, YouTube, Archive.org and Twitter’.56
Details regarding the kind of content that is monitored, assessed and referred are
scarce and rudimentary. What is publicly known is the double mandate of the EU IRU:
addressing ‘terrorist and violent extremist’ content as well as content related to ‘illegal
immigration’. Although the decision to establish the EU IRU was triggered in the context
of counter-terrorism, shortly after the Charlie Hebdo attacks, the scope of its mandate
was extended in April 2015 by the European Council. The accompanying statement
mentioned that Europol should also target ‘Internet content used by traffickers to attract
migrants and refugees, in accordance with national constitutions’.57 This expansion of
the mandate occurred before the EU IRU officially began operating on July 1, 2015. In
the words of the EU Counter-Terrorism Coordinator, this means that the EU IRU ‘is
also tackling the facilitation of illegal immigration, with a continuous analysis of social
media-related information on a 7/7 basis’.58
Figures on how many pieces of online content are assessed, referred and deleted
vary. In April 2016, the European Commission published a report saying the EU IRU

53  Europol, Enhancing Counter Terrorism Capabilities at EU Level, n. 42 above, 11; Europol, Europol's European Counter Terrorism Centre Strengthens the EU's Response to Terror, Press Release (Europol, 2016), available at www.europol.europa.eu/content/ectc.
54  Chris Jones, Policing the Internet: From Terrorism and Extremism to 'Content Used by Traffickers to Attract Migrants and Refugees' (Statewatch, 2016) 2, available at www.statewatch.org/analyses/no-290-policing-internet.pdf.
55  Europol, Enhancing Counter Terrorism Capabilities at EU Level, n. 42 above, 11.
56  Ibid. 12.
57  European Council, Special Meeting of the European Council: Statement, n. 49 above, 3.
58  Follow-up to the Statement of the Members of the European Council of 12 February 2015 on Counter-Terrorism: State of Play on Implementation of Measures, Note 12139/15 (General Secretariat of the Council, 2015) 5, available at www.statewatch.org/news/2015/sep/eu-council-ct-implementation-plan-12139.-15.pdf.

Kilian Vieth - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:07AM
via New York University

WAGNER_9781785367717_t.indd 329 13/12/2018 15:25


330  Research handbook on human rights and digital technology

has assessed over 4,700 pieces of material across 45 platforms and made over 3,200
referrals for Internet companies to remove content, with an effective removal rate of
91 percent.59
The numbers cited in another document, published in May 2016, are significantly
higher: 7,364 pieces of online content were assessed by the IRU, triggering 6,399 referral
requests (with a success rate of 95 percent concerning subsequent removal). In 2016, 629
new terrorist media files were uploaded to the Check-the-Web (CTW) portal.60
More recent numbers, released in December 2016, refer to 15,421 pieces of content61 referred across at least 31 online platforms; 88.9 percent of those referrals were removed by the respective hosting platform.
As there is no way to compare or fact-check these figures, one thing becomes quite clear: the most interesting piece of information is the small percentage of referrals that
were not removed by the platforms. The key question is not necessarily how many pieces of
content are assessed and deleted, but the criteria by which they are assessed and deleted.
How does this process work? Online content is assessed at two different points by two
different organizations, respectively. The first assessment is conducted by the EU IRU,
which classifies some content as ‘terrorist’ or ‘violent extremist’ propaganda. After the
referral has been made, a second assessment is conducted by the private company hosting
the content in question. The nature of this second assessment is left fully to the company’s
discretion and is based on its corporate terms of service. At this stage of the process,
Europol denies having any formal or informal influence on the company’s categorization
of the referred content.62 Europol has to leave the decision to the private hosting provider
because of its legal basis: since Europol is not vested with executive powers, it can only flag content on the platform, but is unable to enforce the take-down.
Both assessment processes are opaque and largely inscrutable. The take-down quota of about 90 percent indicates that the IRU and the private companies must, at least to some extent, apply different criteria when assessing content. Since there is no judicial or independent oversight, no right to formal objection or disclosure is foreseen for this process. Europol’s referral activities are non-binding requests, legally equivalent to the flagging of posts done by ordinary users; ‘thus the decision and related

59  European Commission, Communication on Delivering on the European Agenda on Security to Fight Against Terrorism and Pave the Way Towards an Effective and Genuine Security Union, COM(2016)230 final (European Commission, 2016) 7, available at http://ec.europa.eu/dgs/home-affairs/what-we-do/policies/european-agenda-security/legislative-documents/docs/20160420/communication_eas_progress_since_april_2015_en.pdf.
60  Europol, Enhancing Counter Terrorism Capabilities at EU Level: European Counter Terrorism Centre (ECTC) at Europol (European Police Office, 2016); Matthias Monroy, ‘Terroristisches Material’ Im Internet: Noch Mehr Löschanträge von Europol Erfolgreich (23 May 2016), available at https://netzpolitik.org/2016/terroristisches-material-im-internet-noch-mehr-loeschantraege-von-europol-erfolgreich/.
61  See https://db.eurocrim.org/db/de/doc/2688.pdf at 22; Europol, Europol’s Internet Referral Unit One Year On, Press Release (Europol, 2016), available at www.europol.europa.eu/newsletter/europol%E2%80%99s-internet-referral-unit-one-year.
62  See e.g. EuroparlTV, Panorama of the Combat Against Terrorism (2016) min. 3:30; also confirmed in several of my interviews, see http://europarltv.europa.eu/en/player.aspx?pid=3c4az0d5d-a3b2-4d85-9ac9-a62700f22bec.



Policing ‘online radicalization’  331

implementation of the referral is taken under full responsibility and accountability of the
concerned service provider’.63
It is important to highlight that neither of the two distinct assessment processes is based on existing legal rules, but rather on the ‘community guidelines’ or ‘terms and conditions’ of the platform in question. The relevant criterion for referring content to Internet companies is whether ‘Europol assesses it to have breached their terms and conditions. This is a voluntary arrangement in order to alert the companies when their sites are being abused by terrorist organisations’.64 This arrangement is not accidental, but part of the reasoning that led to the introduction of the EU IRU. This is evident in statements made by EU Counter-Terrorism Coordinator Gilles de Kerchove, who advocated the creation of the EU IRU because it would allow private actors to delete more online content than could legally be removed under European and national legislation:

Consideration should be given to a role for Europol in either flagging or facilitating the flagging
of content which breaches the platforms’ own terms and conditions. These often go further
than national legislation and can therefore help to reduce the amount of radicalising material
available online.65

Clear examples of materials that are not protected by the right to free expression can only
be constructed within a given set of rules. Conditions for what kind of content is permis-
sible differ significantly from country to country, and even more so from one social media
platform to another. Defining the limits of free speech in a democratic society is usually
not a technocratic practice, but rather a deeply political choice. For example, certain
symbols or statements denying or trivializing the Holocaust constitute criminal offenses
in Germany and many other European countries, and are liable to prosecution. In other
countries, the same symbols or speech might be legally permissible. The interpretation
of the boundaries of free speech certainly cannot be generalized and detached from the
respective social and political context in which it is made. How does Europol make this
complex decision? The answer is: we do not know.

3.2  Political Context

Many scholars agree that European counter-terrorism policy has been developing mostly
ad hoc and does not necessarily follow a coherent strategic approach. Although the
European approach to counter-terrorism has become more comprehensive over time, it
has evolved in ‘a primarily incremental and external shocks-driven process’.66 As security
policy is traditionally regarded as a core part of national sovereignty, EU Member States

63  Council of the European Union, The Functioning of the Internet Referral Unit (EU IRU) on the Basis of the Future Europol Regulation, Meeting Doc. DS 1497/15 (General Secretariat of the Council, 2015) 3, available at https://netzpolitik.org//wp-upload/council1497-15.pdf.
64  European Commission, European Agenda on Security: State of Play, Fact Sheet (European Commission, 2015), available at http://europa.eu/rapid/press-release_MEMO-15-6115_de.htm.
65  EU Counter-Terrorism Coordinator, EU CTC Input for the Preparation of the Informal Meeting of Justice and Home Affairs Ministers, n. 47 above, 3.
66  Ursula C. Schröder, The Organization of European Security Governance: Internal and External Security in Transition (Routledge, 2013) 85.


have been reluctant and slow to cooperate on (in)security matters, let alone to confer security-related competences on the EU level. The broader narrative of EU counter-terrorism
policy follows pivotal events – first and foremost the attacks on September 11, 2001, on
March 11, 2004 in Madrid, and on July 7, 2005 in London – as the critical junctures that
opened windows of opportunity for counter-terrorism policy-making. At the same time, such events push policy-makers to act in an ad hoc manner, without conducting thorough
analysis of what ought to be done. Pressure to demonstrate decisiveness and authority
has led to the rather hasty adoption of pre-existing policy proposals, which are usually not
tailor-made and sometimes merely symbolic solutions to specific situations.67 As explored
above, the development of the EU IRU also follows this pattern of EU counter-terrorism
policy-making: an established infrastructure (the CleanIT and Check the Web initiatives)
was extended based on existing models in some Member States (the United Kingdom and
the Netherlands).
The time-frame investigated in this chapter (2015–2017) coincides with the negotiation
phase for the new Europol regulation, which was adopted by the European Parliament
in May 2016. It provides a new legal basis for Europol and entered into force on May 1, 2017. The regulation is relevant to the EU IRU because some of its provisions were drafted to allow the exchange of information with private parties.68 This happened late in the legislative process, after public criticism that the EU IRU still lacked an appropriate legal basis.69 A Council document acknowledges that the new Europol regulation was subsequently amended during the trilogue negotiations in order to insert a sound legal basis for the EU IRU:

In order to provide explicit legal basis in the draft Europol Regulation for the functioning of
IRU while ensuring the necessary guarantees for the protection of personal data, the Presidency
has developed . . . proposals to amend the draft Europol Regulation.70

The initial draft of the Europol regulation prohibited the transfer of personal data to
private parties. It therefore seems that a high-level political decision to create a new
Internet referral police unit at Europol was only later authorized by the adoption of the
amended regulation.
Before the Europol regulation was established, Europol’s legal foundation consisted of
the Council Framework Decisions 2002/475/JHA and 2008/919/JHA. The two Framework Decisions authorized Europol to retrieve and process data only from publicly available

67  Raphael Bossong, ‘The Action Plan on Combating Terrorism: A Flawed Instrument of EU Security Governance’ (2008) 46 JCMS: Journal of Common Market Studies 27; Ben Hayes and Chris Jones, ‘Taking Stock: The Evolution, Adoption, Implementation and Evaluation of EU Counter-Terrorism Policy’ in Fiona De Londras and Josephine Doody (eds), The Impact, Legitimacy and Effectiveness of EU Counter-Terrorism (Routledge, 2015).
68  Thomas Rudl, EU-Parlament Beschließt Erweiterte Europol-Befugnisse Und Meldestelle Für Internetinhalte (12 May 2016), available at https://netzpolitik.org/2016/eu-parlament-beschliesst-erweiterte-europol-befugnisse-und-meldestelle-fuer-internetinhalte/.
69  Matthias Monroy, ‘Anti-Terror-Zentrum’: Europols Neuen Kompetenzen Fehlt Bislang Die Rechtliche Grundlage (25 November 2015), available at https://netzpolitik.org/2015/anti-terror-zentrum-europols-neuen-kompetenzen-fehlt-bislang-die-rechtliche-grundlage/.
70  Council of the European Union, The Functioning of the Internet Referral Unit, n. 63 above, 5.


sources, but not to share it with private parties. One document shows that the Council was well aware that the Framework Decision ‘has no express provisions regulating the transfer
of personal data (for example, the URL link)’.71 Accordingly, the development of the EU
IRU can be conceptualized as a process of mission creep,72 whereby competences and
mandates are introduced and expanded informally, and later incorporated into the formal
legal structures of the organization.
Parallel to the establishment of the EU IRU, two closely related initiatives were
advanced on the European level. Counter-speech initiatives were created, aiming to
provide alternative perspectives and facts to counter propaganda efforts. Notably, one
counter-speech project, the EU East StratCom Task Force, operates in the context of the
conflict in Ukraine and is run by the European External Action Service. It collects
‘pro-Kremlin disinformation’ and regularly publishes a ‘Disinformation Digest’.73
The second counter-speech initiative is the Syria Strategic Communications Advisory Team (SSCAT), a specialized advisory unit that provides expertise on countering ‘terrorism communication campaigns’ and ‘works as a network where best practices on countering violent extremism and counter terrorism communications between the Member States is shared’.74 It consists of a team of experts seconded by Member States who work under the Belgian Ministry of Home Affairs. SSCAT advises national governments across the EU on how to improve their counter-speech operations.
Cooperation with the Internet industry, most prominently companies such as Google, Facebook, Microsoft and Twitter, has also been promoted: the ‘EU Internet Forum’ initiative was launched in December 2015 to establish partnerships with relevant stakeholders ‘to restrict the accessibility of terrorist material online and increase the volume of effective counter-narratives’.75 Leading social media companies published a code of conduct that was prepared together with the European Commission. The agreement requires the platforms to ‘review the majority of valid notifications for removal of hate speech in less than 24 hours and remove or disable access to such content’ and moreover calls on them to ‘educate and raise awareness’ about the companies’ guidelines.76 The agreement was rejected by NGOs such as European Digital Rights and Access Now, which dropped out of the EU Internet Forum talks and criticized the public/private cooperation as a ‘privatization of censorship’.77 Other observers say the
agreement is designed to produce chilling effects on users and encourage over-blocking of

71  Ibid. 4.
72  See Hayes and Jones, ‘Taking Stock’, n. 67 above, 17.
73  See http://eeas.europa.eu/euvsdisinfo/.
74  Council of the European Union, Fight Against Terrorism: Follow-up to the Council (Justice and Home Affairs) of 12–13 March 2015, Implementation of Counter-Terrorism Measures, Meeting Doc. 9418/15 (Presidency, 2015) 9, available at www.statewatch.org/news/2015/jul/eu-cosi-ct-9418-15.pdf.
75  European Commission, Communication on Delivering on the European Agenda on Security to Fight Against Terrorism, n. 59 above, 7.
76  European Commission, European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech, Press Release (European Commission, 2016), available at http://europa.eu/rapid/press-release_IP-16-1937_en.htm.
77  Ingo Dachwitz, Hatespeech-Verabredung Zwischen EU-Kommission Und Internetfirmen: NGOs Kritisieren Willkür (1 June 2016), available at https://netzpolitik.org/2016/hatespeech-verabredung-zwischen-eu-kommission-und-internetfirmen-ngos-kritisieren-willkuer/; EDRi, EDRi and Access Now Withdraw from the EU Commission IT Forum Discussions (31 May 2016), available at https://edri.org/edri-access-now-withdraw-eu-commission-forum-discussions/.


legal content. The EU IRU participates in the EU Internet Forum and acts as a so-called ‘trusted flagger’.78 Although it has been severely criticized for enabling human rights violations, cooperation between Internet corporations and governments on regulating online speech is on the rise, with similar initiatives evolving in several EU Member States and many countries around the world.79

4.  EUROPOL’S FRAMING OF THE EU IRU

We have seen why the EU IRU is a highly suitable case for studying the policy framing of governance mechanisms for regulating online content that are emerging at the interface of private actors and law enforcement. The following section analyzes the
frames employed to describe, structure and make sense of Europol’s Internet Referral
Unit. Policy documents produced by Europol will be examined, building on the concep-
tion of frames as conditions for understanding and of framing as an implicit act of
sense-making.

4.1  Radicalization and (Violent) Extremism

How is radicalization framed by Europol? Notably, I was unable to find any description of
the term in any Europol publication over the course of my investigation. Europol does not
explicitly define radicalization in its policy discourse. There are only indirect references to
it in several documents and statements. Two different conceptions of radicalization can be
derived from the way in which Europol uses the term. One is radicalization as a process,
which draws upon formulations such as ‘efforts to radicalize and recruit’ or ‘on the path
of radicalization’. Radicalization is described as a conscious action that some humans
perform on others, whereby the active party is the recruiter or demagogue who radicalizes
an individual who adopts a rather passive role. The second conception is radicalism as
a condition. This understanding of the term envisions radicalism to be an end-product,
a state of mind of persons who ‘have been radicalized’ or are described as ‘radicalized
attackers’. This suggests that radicalization stops at the point at which an individual
has become ‘radicalized’ or ‘a radical’. Radicalization is generally referred to as a very
complex phenomenon, but no (public) effort is made to unpack this concept. In particular, ‘online radicalization’ is often used interchangeably with the terms (online) propaganda
and (online) recruitment.

78  European Commission, Tackling Illegal Content Online: Towards an Enhanced Responsibility of Online Platforms, COM(2017)555 final (Brussels, 28 September 2017), available at https://ec.europa.eu/digital-single-market/en/news/communication-tackling-illegal-content-online-towards-enhanced-responsibility-online-platforms.
79  See e.g. Ellen Nakashima, ‘Obama’s Top National Security Officials to Meet with Silicon Valley CEOs’, Washington Post, 7 January 2016, available at www.washingtonpost.com/world/national-security/obamas-top-national-security-officials-to-meet-with-silicon-valley-ceos/2016/01/07/178d95ca-b586-11e5-a842-0feb51d1d124_story.html; or the Netzwerkdurchsetzungsgesetz in Germany, see www.bmjv.de/SharedDocs/Gesetzgebungsverfahren/DE/NetzDG.html.


Europol occasionally uses metaphors to discuss radicalization – this occurs far more
frequently in European Commission texts. Metaphors may serve an illustrative function,
but they also structure the way in which we think about problems in policy-making. The
phrase ‘playing whack-a-mole’ is associated with a very particular mental image. What
does a ‘fertile environment for radicalization’ look like? How about a ‘breeding ground for
radicalization?’ What is ‘homegrown’ terrorism? The occasional use of such metaphors for
radicalization makes it easy to talk about radicalization as a process that, quite naturally,
resembles a tiny seed sprouting into a large tree.
One predication that comes up in both political and more technical policy texts is
‘violent extremism’. Attaching a certain property to a noun, in this case ‘violent’, endows
the term with a particular capacity.80 No doubt is left about the quality of the extremism in question; the attribution tells us everything about the extremism that we need to know. All European institutions use variations of this expression, such as ‘violent extremist activities’ or ‘violent extremist views’. Violence becomes inscribed in the term ‘extremism’, rendering the adjective in ‘violent extremism’ virtually redundant. Similar alliances between noun and adjective include ‘extremist propaganda’ and ‘extremist ideology’, which still carry the predicated violence with them.
What is the EU IRU actually tackling, then, when it speaks of ‘terrorist and
violent extremist content’? Judging by my analysis of Europol’s policy documents, it
­predominantly targets online propaganda. The concept note sees the EU IRU as part of:

a coherent and coordinated European prevention strategy to counter terrorist propaganda and
ensure that the Internet remains a public good, free of terrorist and violent extremist propaganda
while respecting fundamental principles such as the freedom of speech.81

This suggests that radicalization is also seen as communication, or at least as the dis-
semination of terrorist propaganda. The EU IRU is, in this regard, an attempt to (re)
gain control over the flow of information on the Internet and keep a check on the usage
of social media platforms ‘for radicalisation purposes, through terrorist and violent
extremist propaganda’.82
In an interview, the Director of Europol, Rob Wainwright, said that ‘groups like IS
are very good in their use of social media – IS alone is active on more than 20 platforms,
including Twitter where it has, we are told, 5,000 accounts’.83 Terrorists becoming social
media savvy seems to be a considerable concern for law enforcement. Wainwright frames
the scale of terrorists’ social media operations as a threat and simultaneously worries
about the propaganda being aimed at Western audiences:

80  See Linda Åhäll and Stefan Borg, ‘Predication, Presupposition and Subject-Positioning’ in Laura J. Shepherd (ed.), Critical Approaches to Security: An Introduction to Theories and Methods (Routledge, 2013).
81  Europol, EU Internet Referral Unit at Europol: Concept Note, n. 44 above, 2.
82  Europol, Enhancing Counter Terrorism Capabilities at EU Level, n. 60 above, 5.
83  Rob Wainwright, quoted in Martin Banks, ‘Online Terrorist Propaganda Being Done on a Scale Not Seen Before, Says EU Police Chief’, available at www.theparliamentmagazine.eu/articles/news/online-terrorist-propaganda-being-done-scale-not-seen-says-eu-police-chief.


With the rise of social networking, hate propagators have evolved their techniques . . . they
present themselves in a friendly and appealing way to attract young followers. Humour and satire
are used to disguise hate speech.84

Statements such as this one assume a direct link between radicalization and propaganda
that promotes and accelerates radicalization as a process. All it takes, according to Europol,
is a certain susceptibility among members of the public for propaganda to resonate:

It can be a matter of weeks, in some cases, for vulnerable young men or young women to have
their minds turned by this ugly propaganda and to turn themselves into people capable of going
to Syria or Iraq and engaging in conflict there, and in some cases carrying out terrorism.85

The ‘mind-turning’ effect of propaganda is a recurring frame in the context of the EU IRU. Although humans are exposed to media content almost everywhere all the time,
terrorist propaganda is credited with having a particularly strong resonance. What kind
of content is considered relevant online propaganda? Europol gave some clues as to this
in a press release, naming ‘e.g. terrorist propaganda videos, pictures of beheadings, bomb-
making instructions and speeches calling for racial or religious violence’.86
Another frame that accompanies the notion of radicalization is that of vulnerability.
In order for radicalization to resonate, targeted individuals need to be vulnerable.87 A
‘certain vulnerability’ is framed as an essential feature of potential terrorists, which is
‘picked up by recruiters to be used as such’.88 This vulnerability is especially attributed to
young people, which usually means ‘individuals aged below 25’.89 The vulnerability frame
undermines the agency of these individuals and questions their judgment. Analysts are
certain that age plays a role here because younger people are held to be more susceptible and can be radicalized more quickly.
Whenever statements only make sense to readers if certain background knowledge is
taken for granted, a presupposition applies. The idea that ‘patterns of radicalization have
evolved and broadened’ is a presupposition underpinning the policy debate of the EU
IRU. It is also presupposed that ‘terrorism finds its inspiration in ideologies’ and that ‘vul-
nerability’ is a prerequisite for becoming radical. Often, these constructions take a certain
context as given and make assumptions about causality and preferences. For example, the
statement that ‘security is naturally one of citizens’ greatest concerns’90 takes a certain

84  Europol, Terrorism Situation and Trend Report 2015 (European Police Office, 2015) 13, available at www.europol.europa.eu/content/european-union-terrorism-situation-and-trend-report-2015.
85  Rob Wainwright, ‘Europe’s Top Cop Warns More Attempted Attacks “Almost Certain”’, Time (2016), available at http://time.com/4336919/europol-terrorist-paris-brussels-rob-wainwright/.
86  Europol, Europol Joins UK Appeal to Report Extremist and Terrorist Material Online Using Red ‘STOP’ Button, Press Release (Europol, 2016), available at www.europol.europa.eu/newsletter/europol-joins-uk-appeal-report-extremist-and-terrorist-material-online-using-red-stop-but.
87  i.e. Europol, EU Internet Referral Unit at Europol: Concept Note, n. 44 above, 4.
88  Europol, Changes in Modus Operandi of Islamic State Terrorist Attacks (European Police Office, 2015) Review, 6, available at www.europol.europa.eu/content/changes-modus-operandi-islamic-state-terrorist-attacks.
89  See Europol, Terrorism Situation and Trend Report 2015, n. 84 above, 41f.
90  European Commission, Communication on Delivering on the European Agenda on Security to Fight Against Terrorism, n. 59 above, 17.


understanding of security for granted, one that is closely linked to European nation states’
representation of reality. The prevalence of radicalization and the primacy of (in)security are taken to be true in the formulation of European counter-terrorism policy.

4.2  Framing of Technology as the Problem

What role does technology play in the framing of the problem? Europol’s publications on
the EU IRU alternate between referring to terrorists’ use of the Internet and their abuse of it.
In this specific context, Internet and social media are sometimes used interchangeably.
Ambiguous references to the Internet therein are a sign of practitioners’ inner conflict
when it comes to technology: although it provides new opportunities and methods for
law enforcement, it equally does so for criminals. Technology is constructed as a problem
as soon as enemies are seen to be gaining an advantage from technical knowledge and
means. This argumentation is also deployed in debates on encryption and ‘cybercrime’.
In the context of ‘online radicalization’ and ‘propaganda’, these advances are more
recent:

Terrorists’ use of the Internet and social media has increased significantly in the recent years . . .
They have launched well-organised, concerted social media campaigns to recruit followers and
to promote or glorify acts of terrorism or violent extremism.91

Dealing with propaganda and recruitment campaigns is not necessarily the task of
police officers. The above statement exemplifies why terrorists’ and extremists’ progress in
digital innovation is seen as a problem for law enforcement, although this field tradition-
ally lies outside the remit of policing. To accommodate this, Europol’s sense-making has
to strike a balance between describing social media activities as ‘terrorism’ (read: high
priority) and labeling them as ‘criminal acts’ (read: policing duty). They are therefore
often named together; for example, when concerns are voiced that ‘the Internet has
become a principal instrument for facilitating organized crime and terrorism’.92
Threat construction is promoted by referring to the scale and speed associated with
technology. Policy documents explain that ‘[t]he nature of terrorist communication on
the Internet is constantly changing as a result of new technologies’ and emphasize how
terrorist groups ‘adapt their approaches to communication, exploiting new methods for
interaction and networking on the Internet’.93 In one public statement, Europol Director
Rob Wainwright said that ‘technology is not always a friend of society and is, in fact, part
of the security challenge we face today’.94 This framing of social networks, and digital
technology in general, as a threat to security occurs frequently and mostly focuses on the
role of social networks in disseminating propaganda and facilitating recruitment.
Similar claims about technology as a security threat are made in the European

91  Europol, Europol’s Internet Referral Unit to Combat Terrorist and Violent Extremist Propaganda, Press Release (Europol, 2015), available at www.europol.europa.eu/content/europol%E2%80%99s-internet-referral-unit-combat-terrorist-and-violent-extremist-propaganda.
92  Rob Wainwright, quoted in Paul Adamson, ‘Security, Terrorism, Technology . . . and Brexit’, E!Sharp.eu (2016), http://esharp.eu/conversations.
93  Europol, Terrorism Situation and Trend Report 2015, n. 84 above, 12.
94  Wainwright in Banks, ‘Online Terrorist Propaganda’, n. 83 above.


Commission’s communications, which stress the scope, velocity and effectiveness of social
media propaganda:

Terrorist groups and extremists are capitalising on advances in technology to find new ways of
engaging with disaffected youth, taking advantage of social networking sites, online video chan-
nels and radical chat rooms. They are spreading their propaganda more widely, more rapidly,
and more effectively.95

European policy-makers associate technology with a loss of control that they seek to tackle. They acknowledge that law enforcement struggles to regulate access to certain
material, which is attributed to the nature of digital technologies.96
Framing technology as facilitating terrorism rests upon a linear and simplistic under-
standing of radicalization; mere exposure to certain content is constructed as a problem.
Sometimes this direct causal relationship between Internet content and terrorism is even
explicitly stated, for example, in a report given by the EU Counter-Terrorism Coordinator,
which mentions that a person attempting an attack on a Thalys train ‘looked at preaching
on the Internet prior to the act’.97

4.3  Ever Closer Police Cooperation

Increasing cooperation among European police and intelligence agencies is Europol’s number one demand for improving security in Europe. The cornerstone of this claim is the
improvement of information sharing within Europe, with Europol acting as the central
hub. The ‘fight against terrorism in the EU’ is emphasized as the motivation behind this
ambition, but other policy areas are, as has been shown, likely to follow suit. The creation
of the European Counter-Terrorism Centre (ECTC) is one example that shows how the
aim to provide ‘coordinated reaction’ to terrorism became more institutionalized.
Europol claims that ‘information sharing in the area of counter terrorism advances
in quantity and quality, requiring continuous commitment by all stakeholders to keep
pace with the terrorist threat’ which calls for ‘unprecedented levels’ of information to
be shared.98 The new Europol regulation is also framed as enhancing Europol’s ‘role as
the central hub for information exchange’.99 The main obstacle to sharing information
is seen as a cultural issue instead of a legal or political one, since some Member States
have ‘different views, different cultural attitudes, towards sharing’. Europol Director Rob
Wainwright says:

95  European Commission, Communication on Preventing Radicalisation to Terrorism and Violent Extremism: Strengthening the EU’s Response, COM(2013)941 final (European Commission, 2014) 2, available at http://ec.europa.eu/dgs/home-affairs/e-library/documents/policies/crisis-and-terrorism/radicalisation/docs/communication_on_preventing_radicalisation_and_violence_promoting_extremism_201301_en.pdf.
96  Ibid. 9; Council of the European Union, Fight Against Terrorism, n. 74 above, 10.
97  EU Counter-Terrorism Coordinator, EU CTC Input for the Preparation of the Informal Meeting of Justice and Home Affairs Ministers, n. 47 above, 2.
98  Europol, Enhancing Counter Terrorism Capabilities at EU Level, n. 60 above, 9.
99  Europol, New Regulation Boosts the Roles of EDPS and Europol, Press Release (Europol, 2016), available at www.europol.europa.eu/newsletter/new-regulation-boosts-roles-edps-and-europol.



Policing ‘online radicalization’  339

there is still a cultural journey that the whole community is on, to open up a little bit more the
posture of sharing, given the traditional mindset of intelligence officers needing to hold on to
information, to protect sources.100

Terror attacks provided momentum to overcome said ‘cultural’ barriers and push for
an increased exchange of information. With national (in)security considered the essential
component of national sovereignty, many Member States have long been reluctant to
open up to sharing what is seen as sensitive information on a Europe-wide level. ‘Trust’
had to be built up to present Europol as a ‘reliable partner’. In early 2016, Wainwright
publicly declared that ‘I have been talking with people like me for 10 years about improv-
ing information sharing . . . but this is the first time where I can remember it becoming a
mainstream issue’.101
Apart from event-driven calls for closer cooperation, the ‘added value’ of European
information sharing is also frequently emphasized in Europol’s documents. Arguing
for greater ‘effectiveness’ frames Europol as a service provider for the Member States,
a self-understanding that was confirmed during my interview with a Europol official.
Member States’ counter-terrorism agencies are seen as the leading actors, ‘supported by
a pro-active EU central information hub at Europol’.102 The same efficiency argument is
also applied to other policing areas, such as organized crime or cybercrime.103 The EU
IRU capabilities are also based on close cooperation with Member States and described
as a community effort:

The full impact of the IRU, however, will be delivered by leveraging the combined resources of
social media partners and national expert contact points . . . working as a concerted community
through the EU IRU at Europol.104

The framing of cooperation as increasing surveillance and take-down efficiency is prevalent in the debate about the EU IRU. Another argument is made about open intelligence on social media. There are cases in which law enforcement or intelligence services
investigate certain websites or accounts to gain intelligence about suspect groups or sus-
pect individuals, and in which taking down content would undermine surveillance of a site
or account. For example, Twitter accounts may provide GPS coordinates or information
about local events, which can be relevant for intelligence services. Cooperation through
the EU IRU is supposed to ‘act as a European deconfliction structure’. This means that
information is also shared about contents that are meant to stay online for the purpose
of gathering intelligence.105

100  Wainwright, ‘Europe’s Top Cop Warns’, n. 85 above.
101  Rob Wainwright, in Giulia Paravicini, Paris Attacks Prompt EU to Share Secrets (20 January 2016), available at www.politico.eu/article/paris-attacks-prompt-share-secrets-eu-security-forces-eurodac-schengen-information-system-terrorism-isis-isil-islamic-state-bataclan/.
102  Europol, Enhancing Counter Terrorism Capabilities at EU Level, n. 42 above, 5.
103  i.e. Europol, The Internet Organised Crime Threat Assessment 2015, Threat Assessment 14 (Europol, 2015), available at www.europol.europa.eu/content/internet-organised-crime-threat-assessment-iocta-2015.
104  Europol, Europol’s Internet Referral Unit to Combat Terrorist and Violent Extremist Propaganda, n. 91 above.
105  Europol, EU Internet Referral Unit at Europol: Concept Note, n. 44 above, 5.




4.4  Partnership with Private Actors

With the support of the European Commission, social media companies are continuously approached to raise awareness of the political objectives of radicalization prevention and of the importance of acting swiftly on referred Internet content.106
In the context of law enforcement, the novelty of the IRU approach is its lack of
coercion in cooperating with private companies. There have been prior forms of voluntary
cooperation between European law enforcement and the industry, most importantly
in the field of cybersecurity.107 But the ‘increased partnership’ that aims to ‘promote
“self-regulation” activities’108 on a non-contractual basis can be considered a new form
of public/private cooperation at Europol. In a sense, it is a soft-power approach to policing the web.
Cooperation with the Internet industry is closely intertwined with the ‘EU Internet
Forum’ set up by the European Commission to deepen ‘engagement’ and ‘dialogue’ with
Internet companies. The official aim of the forum meetings is ‘to secure industry’s com-
mitments to a common public-private approach and to jointly define rules and procedures
for carrying out and supporting referrals’.109 Europol sees ‘greater benefit in establishing
and building working relationships in order to stimulate the voluntary and proactive
engagement of the private sector’.110 The quality of the public/private partnership between
Europol and the industry is often underlined; the EU Counter-Terrorism Coordinator has
even suggested that ‘joint trainings’ for members of law enforcement, industry, and civil
society would be a good way to promote cooperation and mutual understanding.111
Why is loose, non-contractual partnership presented as the ‘best way’ to reduce the
spread of terrorist material online? The ‘voluntary’ nature of the cooperation draws upon
the terms and conditions of the platforms as reference points for what is permissible and
what is not. The EU IRU refers content to the industry ‘to remove it on the basis that
it breaches individual companies’ user policies’.112 Europol frames the activities of the
IRU as something ‘any citizen could do’, and highlights the non-enforcing nature of the
cooperation.

4.5  ‘Playing Whack-a-Mole’: Technology and Preventing the Unpredictable

The political goal of reducing ‘terrorist and extremist material online’ is repeatedly broken
down and described by Europol in technocratic terms. Speaking of ‘identifying, assessing,
and referring’ propaganda and violent extremist content frames the process as an expert-

106  Europol, Enhancing Counter Terrorism Capabilities at EU Level, n. 42 above, 12.
107  i.e. Europol, The Internet Organised Crime Threat Assessment 2015, n. 103 above, 12.
108  Europol, Enhancing Counter Terrorism Capabilities at EU Level, n. 60 above, 7.
109  Europol, EU Internet Referral Unit at Europol: Concept Note, n. 44 above, 5.
110  Europol, The Internet Organised Crime Threat Assessment 2015, n. 103 above, 10.
111  EU Counter-Terrorism Coordinator, EU CTC Input for the Preparation of the Informal Meeting of Justice and Home Affairs Ministers, n. 47 above, 3.
112  Europol, EU Internet Referral Unit at Europol: Concept Note, n. 44 above, 2, emphasis added.




driven, high profile operation. Emphasizing that the related measures are fully based on
the opaquely formulated terms of service of private companies, and, for instance, also
include content posted by ‘migrant smugglers’ would turn these operations into a much
more mundane practice.
In one document, the practice of the EU IRU is described as akin to playing ‘whack-
a-mole’ with terrorists,113 referring to the popular redemption game, in which hitting one
mole on the head with a mallet just makes another one pop up somewhere else. Similar to
a cat-and-mouse game, the ‘mole-fighting’ efforts ultimately remain reactive and limited
in scope. Correspondingly, resilience to take-down measures seems to evolve and adapt
quickly.
Industry representatives with long-standing experience in taking measures against
spam bots on their platforms privately confirm that playing whack-a-mole is an accurate
description of the effectiveness of take-down practices on the Internet. Colloquially
referred to as the ‘Streisand effect’,114 the infamous counter-effectiveness of removing
online material is broadly known to both law enforcement and industry.
The fact that Europol is calling for more resources to beef up its IRU operations is understandable from an institutionalist perspective, but it neglects the structural set-up of the Internet. For an agency attempting to gain full control of the Internet, larger capacities certainly do not hurt. But in principle it seems likely that IRU operations will still merely continue
to play whack-a-mole, even with faster computers and more staff. To address the political
imperative to ‘prevent every terrorist attack’,115 technological solutionism offers a trajec-
tory for law enforcement to demonstrate action to at least ‘mitigate the consequences of
technology being turned against us’.116 The European Commission acknowledges that the
‘filter and remove’ approach is an uphill battle:

Whilst it is perhaps impossible to rid the Internet of all terrorist material, we must do more to
reduce the immense volume of material that is online and so easily accessible to our citizens.117

One attempt to make the filtering and removing of online content more effective is the
introduction of so-called upload filters, which help prevent blocked material from being
republished. This works usually based on digital fingerprints of content, such as a picture
or video, that allow platforms to recognize material even if it has been adjusted or altered.
If someone tries to upload a file listed in the filter database, the upload is blocked.
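The database-lookup logic described above can be sketched in a few lines. The following is a hypothetical illustration, not any platform’s actual system: production filters rely on perceptual fingerprints (PhotoDNA-style hashes) designed to match re-encoded or lightly edited copies, whereas the plain SHA-256 used here for simplicity only matches byte-identical files.

```python
import hashlib

# Hypothetical sketch of an upload filter backed by a fingerprint database.
# A real deployment would use a perceptual hash that survives re-encoding
# or minor edits; SHA-256 is used here only to keep the example simple.

blocked_fingerprints = set()

def fingerprint(data: bytes) -> str:
    """Compute the fingerprint stored in the filter database."""
    return hashlib.sha256(data).hexdigest()

def register_blocked(data: bytes) -> None:
    """Add a known item of blocked material to the database."""
    blocked_fingerprints.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Reject any upload whose fingerprint is already listed."""
    return fingerprint(data) not in blocked_fingerprints

# A previously referred video is registered once ...
register_blocked(b"<bytes of referred video>")
# ... and subsequent identical uploads are blocked.
print(allow_upload(b"<bytes of referred video>"))   # False
print(allow_upload(b"<bytes of some other file>"))  # True
```

The design choice that matters for the policy debate is the hash function: an exact hash like the one above is trivially evaded by changing a single byte, which is precisely why platforms moved to alteration-tolerant fingerprints.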
Regardless of the technical means available, the policy framing of the IRU is clearly
loaded with a striking imbalance between terrorist activities being unpredictable and
an ever-stronger urge to prevent all attacks from occurring. Wanting to know when
and where terrorist attacks will happen in advance may be an understandable desire on
the part of law enforcement, but it also seems completely hopeless. Europol acknowl-
edges this, stating that ‘the threat is persistent and terrorist attacks are unpredictable.
Terrorism can strike anywhere at any time. It is therefore crucial that we work together

113  Ibid. 4.
114  See https://en.wikipedia.org/wiki/Streisand_effect.
115  Wainwright, ‘Europe’s Top Cop Warns’, n. 85 above.
116  Wainwright in Adamson, ‘Security, Terrorism, Technology . . . and Brexit’, n. 92 above.
117  European Commission, European Agenda on Security: State of Play, n. 64 above.




to do what we can to pre-empt this threat’.118 Lacking the ability to effectively exercise
coercion makes the focus of law enforcement shift from prevention to pre-emption.
In the context of radicalization on the Internet, the IRU approach is framed as a key
instrument in that it ‘combines both preventive – flagging and suspension – and pro-
active measures; in particular dynamic intelligence gathering to inform the flagging
process’.119
Europol’s self-perception as a leading center of expertise on this issue guides its
approach to establish pre-emptive capabilities at the EU IRU:

Ultimately, the EU IRU should be in a position to anticipate and pre-empt terrorist abuse of
social media and play a pro-active advisory role vis-à-vis Member States and the private sector.

Though the imbalance between the measures taken and their conceptual foundation is striking, more research and analysis are needed. The central presupposition
for the EU IRU remains that preventing radicalization serves to pre-empt terrorism in
Europe, although this causal relationship is not scientifically proven, and may be impos-
sible to prove. However, it continues to inform counter-terrorism policy-making on the
European level because it is a presupposed frame. Breaking down the logic of pre-empting
radicalization might be the only way to develop more effective approaches to tackling
violent assaults, both on- and offline.

5.  CONCLUSION: THE MAIN FRAMES

Europol frames radicalization as an insightful concept, one that is taken for granted
despite the unclear and shaky scientific evidence behind it. This is reflected in the
ambivalent use of the term to describe both a process and a condition. Radicalization
is frequently equated with online communication and the dissemination of propaganda.
Europol frames technology as a security problem that is associated with a loss of
control and a new dimension of threats – a gateway for terrorism and radicalization.
The Internet is not seen as the cause or source of terrorism, but framed as a facilitating
means that therefore poses a risk. Following the radicalization frame, the mere exposure
to certain content is constructed as a problem for law enforcement.
Responses to the problem are both relatively banal and surprising. Promoting closer
police cooperation in Europe and increasing information sharing appear to be straight-
forward steps towards enhancing efficiency and effectiveness in European (in)security
production. Terrorists are exploiting weak European security cooperation and Europol
is framed as the right agency to tackle this issue. Pushes for closer partnership with
private actors implicitly concede that law enforcement indeed lacks control over digital
communication and is struggling to resolve this on its own. Law enforcement’s ceding
of any direct claim to be an enforcing party is reflected in the lack of coercion or legal
compulsion in its cooperation with private companies. It will be crucial to closely observe

118  Europol, EU Internet Referral Unit at Europol: Concept Note, n. 44 above, 4, emphasis added.
119  Ibid. 5.




the developments in cooperation between police and private platforms in the regulation
of speech on the national level.
Europol’s approach is both balanced and unbalanced: if one understands radicalization
in the way that Europol understands it, the presented solutions make sense. However,
Europol’s concessions to the fact that the IRU approach is strikingly ineffective make the
agency’s approach appear inconsistent. That Europol admits the IRU is playing whack-a-mole on the Internet can be interpreted in two ways: the measure is either a symbolic
reaction to the previous attacks that created political opportunity structures, or it is to
be understood as a preparatory measure that will increase in effectiveness as technology
advances in the future. How and to what extent law enforcement will be able to monitor
and control Internet content will be a critical question for human rights protection.

5.1  Root Causes and Corporatocracy

Governments have always tried to regulate speech.120 Bearing this in mind, the creation of
Internet referral units should not be treated as a totally unexpected, unique phenomenon.
On the contrary, I perceive the EU IRU and similar bodies as just one relatively new
element in the long history of the regulation of media and the Internet.
Policy and research engaging radicalization often remain limited to addressing radicalization’s symptoms. Asking if and how certain material resonates, instead of obstinately blocking and filtering content regardless of the individual situation, could move the debate forward. Failing to take context into account and focusing on mere exposure to certain material undermines empirical radicalization research and the development of counter-radicalization policy. The case of the EU IRU strongly advises us not to overestimate the role of Internet content and ideology in radicalization.
Although academia, civil society organizations and law enforcement alike acknowledge that take-down measures only treat symptoms, the presented frames appear to be stable and durable. Other, more creative ways of perceiving
and conceptualizing online radicalization and online extremism are necessary – and they
must be grounded in relational concepts. Technology alone does not cause and will not
solve radicalization. At the time of writing, the capacity of the EU IRU is still limited, but the erroneous reasoning behind it continues to encourage clampdowns on freedom of speech. In other parts of the world, such as war zones or areas under authoritarian rule,
IRU-type approaches are far more advanced and harmful to society.121 More effective
approaches to violent and hateful Internet content are needed, but they have to uphold
human rights and transcend the delusive logic of online radicalization.
It must also be ensured that there are no double standards when it comes to freedom of
expression. Every piece of content must be assessed by clear and transparent criteria. The
IRU’s targeting of ‘illegal migration’ shows that reasons for surveillance and taking down
speech could potentially extend far beyond the traditional security realm. This could

120  Ben Wagner, ‘Governing Internet Expression: How Public and Private Regulation Shape Expression Governance’ (2013) 10 Journal of Information Technology and Politics 389.
121  See e.g. Michael Pizzi, ‘The Syrian Opposition is Disappearing from Facebook’, The Atlantic (2014), available at www.theatlantic.com/international/archive/2014/02/the-syrian-opposition-is-disappearing-from-facebook/283562/.




easily be extrapolated to a familiar pattern of incrementally impinging on civil liberties, introducing measures for seemingly uncontroversial cases (like videos of beheadings), putting infrastructure in place, and slowly but surely expanding the scope of the policy.
The EU IRU is not the determining driver of the privatization of regulation of speech
on social media platforms. The fact that the ownership and control over most large
social media outlets and the Internet infrastructure lies in the hands of private actors
is a broader, global development. Nevertheless, the EU IRU is not a symbolic policy; it
does have substantial impact on the changing interface between the public and private
enforcement of law. The strategic use of cooperation with private firms is growing in other
fields as well, for example, in fighting ransomware.122 The black-box-like character of the
platforms’ removal processes is strategically used to secretly leverage filtering practices.
When private companies become enforcing actors, we move from questions of ‘is a piece
of content legal?’ to ‘is it politically and commercially eligible?’. Of course, all private
media platforms have curatorial freedom in designing their services and outlets. That being
said, if an instrument similar to the IRU were to be implemented in a non-digital medium,
such as a large daily newspaper, it would probably be called ‘state censorship’ or ‘political
exercise of influence’, and the public outcry would most likely be immense.
It should be the task of digital social work, not police repression, to combat hateful
material on the Internet. It might be worthwhile to consider capacity building in political
education and media literacy on a Europe-wide level. Holistic approaches need to be
developed, ones that embrace online and offline communication as one inseparable sphere
in order to overcome technological solutionism and political biases. In order to accom-
plish this, we need to employ security concepts that are effective, in line with fundamental
rights, and which adhere to democratic principles of accountability.

122  Europol, No More Ransom: Law Enforcement and IT Security Companies Join Forces to Fight Ransomware, Press Release (Europol, 2016), available at www.europol.europa.eu/newsletter/no-more-ransom-law-enforcement-and-it-security-companies-join-forces-fight-ransomware.



17.  When private actors govern human rights
Rikke Frank Jørgensen

The private sector’s role in the digital age appears pervasive and ever-growing, a driving force
behind the greatest expansion of access to information in history. Vast social media forums for
public expression are owned by private companies. Major platforms aggregating and indexing
global knowledge, and designing the algorithms that influence what information is seen online,
result from private endeavour.1

1. INTRODUCTION

It is commonly argued that individuals enjoy the same rights online as they do offline. In practice, however, the online realm poses a number of human rights concerns, not least because large chunks of public and private interaction unfold within platforms and services provided by companies. While international law gives states the principal responsibility for protecting human rights, the broad ecosystem of companies responsible for Internet infrastructure, products and services plays a crucial role in individuals’ effective ability to enjoy those rights.2 In short, companies have great power over users’ rights but no legal human rights obligations, beyond the obligation to support legitimate law enforcement activities.3
The challenges that arise at the intersection between human rights and technology companies have recently been highlighted at the UN Human Rights Council by both the UN Special Rapporteur on Privacy4 and the UN Special Rapporteur on Freedom of Expression.5 Both Rapporteurs point to the corporate use of personal information as a key contemporary challenge, in particular the online business model whereby social interaction, expression, collaboration, information search, etc. are part of a ‘data driven economy’ or ‘personal information economy’.6
Scholars have warned that the personal information economy represents a largely uncontested new expression of power, ‘the big other’,7 and have referred to it as surveillance capitalism. The implicit logic of the personal information economy is constituted

1  D. Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (Human Rights Council, 11 May 2016) para. 1.
2  Ibid.
3  I. Brown and D. Korff, Digital Freedoms in International Law: Practical Steps to Protect Human Rights Online (Global Network Initiative, 14 June 2012).
4  J.A. Cannataci, Report of the Special Rapporteur on Privacy (Human Rights Council, 8 March 2016).
5  Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, n. 1 above.
6  G. Elmer, Profiling Machines: Mapping the Personal Information Economy (MIT Press, 2004).
7  S. Zuboff, ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’ (2015) 30 Journal of Information Technology 75.



When private actors govern human rights  347

by mechanisms of extraction and commodification of personal information, while producing new markets of behavioural prediction. In short, users supply information as a precondition for using the services and platforms, and the companies harness the information provided by users, as well as the metadata about their online patterns and behaviour, as a means to produce revenue:

This increasingly detailed data-map of consumer behavior has resulted in personal data becom-
ing a commodity where access to such data or exploitation of such data in a variety of ways is
now one of the world’s largest industries generating revenues calculated in hundreds of billions
most usually in the form of targeted advertising.8

Presuming that online services and platforms hold great powers as both enablers and potential violators of human rights, such as privacy and freedom of expression, it becomes paramount to examine how the frameworks and narratives that govern their activities influence these fundamental rights and freedoms. In order to do so, the chapter will focus on two questions that occur in the grey zone between private companies and human rights protection in the online domain. First, what are the main human rights frameworks and narratives that govern major Internet companies such as Google and Facebook? Second, what are some of the dynamics that work against effective human rights protection online (the data-driven revenue model; state-like powers but full discretion over content regulation)? In conclusion, the chapter will point to some possible responses to these challenges.

2.  HUMAN RIGHTS AND INTERNET SCHOLARSHIP

Scholarship related to human rights and technology is distributed amongst different disci-
plines spanning from international law and Internet governance, to media and communica-
tion studies. Since the topic started to surface on the international Internet policy agenda
during the first World Summit on the Information Society (WSIS) in 2003, a large number
of books, reports and soft law standards, especially from the Council of Europe, UNESCO,
OSCE and the UN Special Rapporteur on Freedom of Expression and Information, have
been produced. The majority of these sources, however, are not anchored in a theoretical
framework but present empirically grounded studies of (1) opportunities and threats to
established human rights standards by use of communication technology, in particular
the right to privacy and the right to freedom of expression;9 or (2) cases that focus on the

8  Cannataci, Report of the Special Rapporteur on Privacy, n. 4 above, 4.
9  T. Mendel, A. Puddephatt, B. Wagner, D. Hawtin and N. Torres, Global Survey on Internet Privacy and Freedom of Expression (2012); R. Deibert, J. Palfrey, R. Rohozinski, J. Zittrain and I. Opennet, Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace (MIT Press, 2010); W. Benedek and M. Kettemann, Freedom of Expression and the Internet (Strasbourg: Council of Europe, 2014); I. Brown, Online Freedom of Expression, Assembly, Association and the Media in Europe (Council of Europe Conference of Ministers on Freedom of Expression and Democracy in the Digital Age, 2013); D. Korff, The Rule of Law on the Internet and in the Wider Digital World, Issue Paper for the Council of Europe (Commissioner for Human Rights, 2014); Y. Akdeniz, Media Freedom on the Internet: An OSCE Guidebook (Office of the OSCE Representative on Freedom of



use of technology for human rights and social change;10 or (3) standard-setting that seeks
to establish norms for human rights protection in the online domain.11 More recently,
scholars have begun to examine how these challenges relate to the political economy of ‘new media’ and the modalities it sets for participation in the digital era.12
As mentioned above, privacy and freedom of expression are some of the most debated
human rights in relation to the online domain. Since the WSIS, a more comprehensive
approach addressing all human rights has been called for; however, with a few exceptions,
the vast majority of the literature pertains to these two sets of rights. For freedom of expression,
specific challenges relate to new means of curtailing expression rights (spanning from
overly broad legislation, to blocking and filtering of content, to disruption of services) as
well as the dominant regulatory trend of self- and co-regulation, not least in Europe, where
intermediaries are enlisted to take action against unwanted content on their platforms and
services.13 In relation to privacy, critical issues include state and intelligence surveillance
carried out without the necessary oversight and proportionality assessment; extensive
capture and use of personal information by corporate actors; societal benefits of big data
vis-à-vis the individual’s right to privacy and data protection; and the increasing exchange
of personal information between public authorities, without the required safeguards.14
In relation to the technology companies, the public attention to the relationship between
online platforms and services and human rights exploded during the Arab Spring, the
Occupy movement, and the growing use of social media platforms in global activism.15

the Media, 2016); APC and HIVOS (eds), Global Information Society Watch 2014: Communication
Surveillance in the Digital Age (Association for Progresive Communication (APC) and Humanist
Institute for Corporation with Developing Countries (HIVOS), 2014); R. MacKinnon, Consent of
the Networked: The World-Wide Struggle for Internet Freedom (Basic Books, 2012).
10  A. Comninos, Freedom of Peaceful Assembly and Freedom of Association and the Internet, Issue Paper (APC, June 2011); J. Earl and K. Kimport, Digitally Enabled Social Change: Activism in the Internet Age (MIT Press, 2011); J. Lannon and E. Halpin, Human Rights and Information Communication Technologies: Trends and Consequences of Use (Information Science Reference, 2013).
11  Council of the European Union, EU Human Rights Guidelines on Freedom of Expression Online and Offline (12 May 2014); Council of Europe, Recommendation of the Committee of Ministers to Member States on a Guide on Human Rights for Internet Users (16 April 2014); United Nations General Assembly, Resolution on the Right to Privacy in the Digital Age (18 December 2013); UNHRC, Elaboration of an International Legally Binding Instrument on Transnational Corporations and Other Business Enterprises with Respect to Human Rights, UN Doc. A/HRC/RES/26/9 (14 July 2014).
12  S.M. Powers and M. Jablonski, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015); Zuboff, ‘Big Other’, n. 7 above, 75; M. Moore, Tech Giants and Civic Power (King’s College London, April 2016).
13  Korff, The Rule of Law on the Internet and in the Wider Digital World, n. 9 above; R.F. Jørgensen, A.M. Pedersen, W. Benedek and R. Nindler, Case Study on ICT and Human Rights (Policies of EU), Frame: Work Package No. 2, Deliverable No. 2.3 (European Commission, 2016).
14  R. Deibert, Black Code: Inside the Battle for Cyberspace (McClelland & Stewart, 2013); B. Schneier, Data and Goliath: The Hidden Battles to Capture Your Data and Control Your World (W. W. Norton & Company, 2015); L. Floridi, Protection of Information and the Right to Privacy: A New Equilibrium? (Springer, 2014).
15  B. Wagner, After the Arab Spring: New Paths for Human Rights and the Internet in European Foreign Policy (European Parliament, July 2012).

Rikke Frank Jørgensen - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:13AM
via New York University

WAGNER_9781785367717_t.indd 348 13/12/2018 15:25


When private actors govern human rights  349

Arguably, Internet companies have been caught between two competing narratives: on the
one hand, as heroic enablers of online activism that push back against government over-
reach; on the other hand, as private actors that hold an unprecedented power over users’
ability to exercise rights of expression and privacy in the online domain, yet have no legal
obligation under human rights law.16

3.  HARD AND SOFT LAW FRAMEWORK

International human rights law requires that states respect, protect and fulfil the human
rights of individuals within their power or effective control. This includes a duty to protect
against human rights abuse by companies.17 The state’s duty to protect is a standard of
conduct, implying that states are not per se responsible for human rights abuse by private
actors. However, states may breach their international human rights law obligations
where such abuse can be attributed to them, or where they fail to take appropriate steps
to prevent, investigate, punish and redress private actors’ abuse.18
The right to freedom of expression is defined in Article 19 of the International Covenant
on Civil and Political Rights (ICCPR) as the right to impart, seek and receive information
and ideas of all kinds, regardless of frontiers. It is also protected in a number of regional
conventions, including the European Convention on Human Rights (Article 10) and the
European Union Charter of Fundamental Rights (Article 11). Article 19(3) of the ICCPR
allows for restrictions on freedom of expression. For a restriction to be legitimate, it must
be provided by law in a form that is publicly accessible and precise enough in order to limit
the authorities’ discretion, serve one of the specified aims, and be necessary in a democratic
society.19 The necessity criterion requires an assessment of proportionality demonstrating
that restrictive measures are the least intrusive instrument and proportionate to the interest
to be protected.20 When restrictions fail to meet the standard of Article 19(3), individuals
enjoy the right to an effective remedy under Article 2(3) of the Covenant.
The right to privacy is defined in Article 17 of the ICCPR as the right to be free from
unlawful or arbitrary interference with one’s privacy, family, home or correspondence,
and from unlawful attacks on one’s reputation, and the right to enjoy the protection of the
law against such interference or attacks. In Europe, the right to privacy is protected by the
European Convention on Human Rights (Article 8), and the European Union Charter
of Fundamental Rights (Articles 7 and 8). The European Court of Human Rights has
developed extensive jurisprudence exploring the boundaries of the right to privacy, and a

16  E. Taylor, The Privatization of Human Rights (Centre for International Governance
Innovation, 2016); MacKinnon, Consent of the Networked, n. 9 above; E. Laidlaw, Regulating
Speech in Cyberspace (Cambridge University Press, 2015).
17  J. Ruggie, Report of the Special Representative, Guiding Principles on Business and Human
Rights: Implementing the United Nations ‘Protect, Respect and Remedy’ Framework (United Nations
Human Rights Council, 21 March 2011).
18  Ibid.
19  United Nations Human Rights Committee, General Comment No. 34 on Article 19: Freedoms
of Opinion and Expression (United Nations, 12 September 2011).
20  Ibid.


number of recent statements from international human rights mechanisms establish that,
although Article 17 of the ICCPR does not prescribe a test for permissible limitations, the
same tripartite test applies to privacy as it does to freedom of expression.21
While human rights law is binding on states only, several soft law standards provide
guidance to companies on their responsibility to respect human rights.22 The most impor-
tant benchmark is the Guiding Principles on Business and Human Rights from 2011.23
Anchored in the ‘Protect, Respect, and Remedy’ framework, the UN Guiding Principles
on Business and Human Rights (UNGP) establish a widely accepted vocabulary for
understanding the respective human rights roles and responsibilities of states and busi-
ness. Based on human rights law, the UNGP reaffirm that states must ensure that busi-
nesses under their jurisdiction respect human rights. Also, they outline the human rights
responsibility of private actors, independent of how states implement their human rights
obligations. The Guiding Principles assert a global responsibility for businesses to avoid
causing or contributing to adverse human rights impacts through their own activities,
and to address such impacts when they occur. Moreover, companies shall seek to prevent
or mitigate adverse human rights impacts directly linked to their operations, products or
services, even if they have not contributed to those impacts.
In the digital domain, human rights impacts may arise in a number of situations
and from internal decisions related to how companies respond to government requests
to restrict content or access customer information; how they adopt and enforce terms
of service; the design and engineering choices that implicate security and privacy; and
decisions to provide or terminate services in a particular market.24 The UNGP state that
businesses should be prepared to communicate how they address their human rights
impacts externally, particularly when concerns are raised by or on behalf of affected
stakeholders.25 The right to remedy when businesses are involved in human rights abuses is
an essential component of the Guiding Principles. Despite broad support for the Guiding
Principles by the technology sector, access to effective remedy is less established in this
sector compared to other industries that have faced serious human rights scrutiny, includ-
ing extractive industries and manufacturing.26 The human rights scrutiny in these sectors
grew out of considerable advocacy related to labour rights, particularly in manufacturing,

21  See the Concluding Observations of the Human Rights Committee in their 2014 review of
the United States’ compliance with its obligations under ICCPR, Art. 17. Here, the Committee
noted that interferences with the right to privacy must comply ‘with the principles of legality,
necessity and proportionality’ (CCPR/C/USA/CO/4, para. 22). Likewise, the UN Special Rapporteur
on Freedom of Expression in his 2013 report on privacy and communications surveillance stated
that ‘[t]he framework of article 17 of the ICCPR enables necessary, legitimate and proportionate
restrictions to the right to privacy by means of permissible limitations’ (A/HRC/23/40, para. 28).
22  See the OECD Guidelines for Multinational Enterprises, available at http://mneguidelines.
oecd.org/text/; and the ILO Declaration on Fundamental Principles and Rights at Work, available
at www.ilo.org/declaration/lang--en/index.htm.
23  Ruggie, Report of the Special Representative, n. 17 above.
24  Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to
Freedom of Opinion and Expression, n. 1 above, para. 11.
25  Ruggie, Report of the Special Representative, n. 17 above, ch. II.
26  D. Sullivan, Business and Digital Rights: Taking Stock of the UN Guiding Principles for
Business and Human Rights, Issue Paper (APC, June 2016) 20.


and human rights violations by companies in the extractive industry. While the ICT sector
also has cases related to labour rights and the manufacturing of devices, most of the
human rights concerns relate to potential harms to freedom of expression and the right
to privacy. These concerns are accentuated by the fact that a negative effect on individuals’
rights to freedom of expression or privacy will affect millions of users across the globe.
This particular feature of the sector may explain why access to remedies has not been
further developed. Providing access to remedies at such scale will require an extensive
system of response that companies may not eagerly buy into. Civil society organizations,
however, increasingly call for the implementation of the third pillar (access to remedies)
in the technology sector.27
The UNGP have resulted in several follow-up initiatives at both global and regional
level. At the global level, a UN Working Group on human rights and transnational
corporations and other business enterprises was established in June 2011 to promote
the effective and comprehensive dissemination and implementation of the UNGP.28 The
group has, among other things, produced ‘Guidance’ on the development of national
action plans (NAPs) on business and human rights. The Guidance states that NAPs
must reflect the state’s duty to protect against adverse corporate human rights impacts,
i.e. provide effective access to remedy and include corporate due diligence processes.
More recently, Ecuador and South Africa in 2014 put forward a Human Rights Council
resolution to establish an open-ended intergovernmental working group with the mandate
of elaborating an international legally binding instrument to regulate transnational
companies under human rights law.29 The report of the Second Working Group was
presented to the UN Human Rights Council in March 2017. However, as pointed out
by several commentators, an international legally binding instrument in this field poses
several challenges, such as concerns that states will use the treaty to excuse their own
refusal to protect human rights.30
At the European level, the European Commission has produced sector-specific Guides
on UNGP implementation in relation to three business sectors, including the information
and communication technology (ICT) sector.31 The Guide is not a legally binding docu-
ment, but translates the expectations of the UNGP to the specifics of the business sector,
at a rather generic level. In relation to the ICT sector, the Guide stresses that the right
to privacy and freedom of expression can be particularly impacted by companies in the
ICT sector.32 The Guide focuses on the state pressure that companies may be subjected
to when they operate in contexts where the national legal framework does not comply
with international human rights standards (i.e. a vertical conflict). In contrast, the nega-

27  P. Micek and J. Landale, Forgotten Pillar: The Telco Remedy Plan, Access Now (King’s
College London, May 2013).
28  The Human Rights Council Resolution establishing the working group is available at http://
daccess-dds-ny.un.org/doc/UNDOC/LTD/G11/141/87/PDF/G1114187.pdf?OpenElement.
29  UNHRC, Elaboration of an International Legally Binding Instrument, n. 11 above.
30  For an overview of the debate, see e.g. www.harvardilj.org/2016/07/the-proposed-business-
and-human-rights-treaty-four-challenges-and-an-opportunity/.
31  The Guide is available at http://ec.europa.eu/enterprise/policies/sustainable-business/files/
csr-sme/csr-ict-hr-business_en.pdf.
32  IHRB and SHIFT, EU ICT Sector Guide on Implementing the UN Guiding Principles on
Business and Human Rights (European Commission, 2012) s. 2.


tive human rights impact that may flow from the company’s governance of allowed/not
allowed content or its tracking of user behaviour is not addressed. Arguably, the exclusive
focus on the company–government axis is also reflected in the companies’ own narrative
and governance mechanisms vis-à-vis human rights, as further addressed below.
Up until now, the principal mechanism for articulating commitments and plans
of action in response to the UNGP has been NAPs.33 The United Kingdom and the
Netherlands were the first states to publish NAPs in 2013, but have since been followed
by six more countries, and another 27 are underway.34 Few of these, however, mention
technology issues. The United Kingdom (in its 2016 update) addresses the Wassenaar
Arrangement to combat the export of surveillance technology to governments engaged
in human rights abuses, whereas Sweden mentions the Internet freedom agenda, and
Finland suggests a roundtable on privacy and data protection.

4.  COMPANY COMMITMENTS AND NARRATIVES

Historically, the ability of states to compel action by technology companies has been
the source of some of the most widespread and severe human rights violations facilitated
by the technology sector.35 In response to this, and as illustrated by the Global Network
Initiative below, several of the major Internet companies have a ‘long’ history of human
rights commitment, and have cooperated since 2009 to develop standards and guidance
on how to respond to, and mitigate, overreach by state agents. As such, the human rights
discourse is not foreign to these companies; however, it has been firmly rooted in the con-
text in which governments make more or less legitimate requests and companies respond
and push back against such requests. With the increasing power that these companies hold
in the online ecosystem, this narrow take on the human rights responsibility of companies
is no longer sufficient. With great power comes responsibility, and there is a rising
expectation among scholars and civil society organizations that the giants of the online
domain safeguard their users not only from overreach by states, but also from the negative
human rights impacts of their own business practices, as illustrated by the Ranking Digital
Rights Corporate Accountability Index.36 As spelled out in the UNGP, companies have a
responsibility to assess all their business activities, procedures and practices for potential
human rights impacts. This task, however, is not straightforward and is countered by
industry narratives that maintain the sole focus on state action.37
While the Guiding Principles have established a common vocabulary for talking
about corporate human rights responsibilities, the practical implications of the ‘Protect,
Respect, and Remedy’ framework remain unclear. Moreover, the way that the companies

33  See above.
34  See the overview provided by the Business and Human Rights Resource Centre, available at
http://business-humanrights.org/en/un-guiding-principles/implementationtools-examples/implem​
entation-by-governments/by-typeof-initiative/national-action-plans.
35  Deibert et al., Access Controlled, n. 9 above; Sullivan, Business and Digital Rights, n. 26 above.
36  Available at https://rankingdigitalrights.org/.
37  R.F. Jørgensen, ‘Framing Human Rights: Exploring Storytelling Within Internet Companies’
(2018) 21 Information, Communication and Society 3.


manage their human rights responsibility is not subjected to scrutiny by the human rights
system, for example, via the Universal Periodic Review process that all states undergo as
part of their human rights commitment, or via company visits similar to the country visits
regularly conducted by the Special Rapporteurs. On a voluntary basis, however, Internet
companies work with the human rights framework, most specifically as part of the multi-
stakeholder Global Network Initiative but also in government-led initiatives such as the
Freedom Online Coalition.

4.1  Freedom Online Coalition

The Freedom Online Coalition (FOC) is a partnership of 30 governments, established in
2011 to advance Internet freedom. ‘Coalition members work closely together to coordinate
their diplomatic efforts and engage with civil society and the private sector to support
Internet freedom – free expression, association, assembly, and privacy online – worldwide’.38
At the 2014 Annual Conference in Tallinn, FOC members unanimously adopted a set of
‘Recommendations for Freedom Online’, also referred to as the ‘Tallinn Agenda’. The
Tallinn Agenda, together with the Founding Declaration and the Nairobi Terms of
Reference, forms the member state commitments. The Tallinn Agenda, for example, calls
upon governments worldwide to promote transparency and independent, effective domestic
oversight related to electronic surveillance, use of content take-down notices, limitations or
restrictions on online content or user access and other similar measures, while committing
FOC members to do the same.39 In terms of engagement, critics have suggested that the
activities of the coalition need greater government involvement.40 Multi-stakeholder work-
ing groups addressing cybersecurity, digital development and privacy and transparency
include diverse participation by NGOs, academics and the private sector, but only very
limited government participation. Also, a recent evaluation of the FOC carried out by
the Center for Global Communication Studies recommends that the coalition establish
mechanisms through which stakeholders can raise concerns about member governments,
and periodically review performance and commitments.41 The impact of these suggestions
on individual states and the future work of the coalition remains to be seen.

4.2  Global Network Initiative

The Global Network Initiative (GNI) was established in 2008 as a multi-stakeholder initia-
tive to guide companies when governments make requests vis-à-vis user data in ways that
may violate international human rights standards, for example, requests for user data that

38  See www.freedomonlinecoalition.com/.
39  Freedom Online, Coalition Recommendations for Freedom Online (28 April 2014), available
at www.freedomonlinecoalition.com/wp-content/uploads/2014/04/FOC-recommendations-consensus.pdf.
40  Sullivan, Business and Digital Rights, n. 26 above, 10.
41  Center for Global Communications Studies, Clarifying Goals, Revitalizing Means: An
Independent Evaluation of the Freedom Online Coalition (GCS, May 2016), available at www.global.
asc.upenn.edu/publications/clarifying-goals-revitalizing-means-an-independent-evaluation-of-the-
freedom-online-coalition/.


are not lawful, legitimate and proportionate. As of June 2016, corporate members include
Google, Yahoo, Facebook, Microsoft, and LinkedIn, whereas seven of the big
telecommunication companies are united in the parallel initiative, the Telecommunications
Industry Dialogue (ID). The ID has observer status with the GNI, a step that precedes membership.42 The
GNI approach has been to help the companies enact policies to anticipate and respond
to situations in which local law and practice differ from international standards for the
rights to privacy and free expression online; to raise public awareness and understanding
of these issues; and to foster adoption of public policies that address the gaps.43 Based on
international law, the GNI has developed a set of standards that the companies should
follow to mitigate privacy and freedom of expression violations caused by governments.
Since its inception, the GNI has been criticized for lack of participation (including by
smaller and non-US companies); for not being independent enough in the assessment
process;44 for lack of a remedy mechanism; for insufficient focus on privacy by design;
and for lack of accountability.45 The criticism speaks to the inherent challenge of having
an industry define its own standards and procedures for respecting users’ rights to privacy
and freedom of expression. Moreover, it has been argued that the protection of human
rights may run contrary to business interests,46 for example, in the case of data collection,
where the online business model would benefit from as much personal data as possible,
whereas the right to privacy would call for a restrictive approach to data collection.
As reflected in their baseline documents, the GNI is strongly anchored in the initial idea
of providing guidance to Internet companies in countries where local laws conflict with
international human rights standards, rather than the systematic human rights impact
assessment suggested by the UNGP:

The right to freedom of expression should not be restricted by governments, except in narrowly
defined circumstances based on internationally recognized laws or standards . . . Participating
companies will respect and protect the freedom of expression rights of their users when
confronted with government demands, laws and regulations to suppress freedom of expression,

42  See www.telecomindustrydialogue.org and http://globalnetworkinitiative.org/news/global-
network-initiative-and-telecommunications-industry-dialogue-join-forces-advance-freedom.
43  C. Maclay, An Improbable Coalition: How Businesses, Non-Governmental Organizations,
Investors and Academics Formed the Global Network Initiative to Promote Privacy and Free
Expression Online (PhD Northeastern University, 2014) 11.
44  In June 2014, the GNI Board consolidated the assessment process into a two-stage model:
first, self-reporting from the companies to GNI after one year of membership; second, assessment
of each company member held every two years. The assessment is carried out by a list of GNI
approved assessors and examines the policies, systems and procedures put in place by the company
to comply with GNI Principles.
45  C.M. Maclay, ‘Protecting Privacy and Expression Online: Can the Global Network Initiative
Embrace the Character of the Net?’ in R. Deibert, J. Palfrey, R. Rohozinski and J. Zittrain (eds),
Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace (MIT, 2010); MacKinnon,
Consent of the Networked, n. 9 above, 179–82. For news coverage on this, see e.g. www.forbes.com/
sites/larrydownes/2011/03/30/why-no-one-will-join-the-global-network-initiative/ and http://thehill.
com/policy/technology/327831-internet-freedom-group-splits-from-tech-companies-over-surveillance-
concerns.
46  D. Doane and U. Stanford, The Myth of CSR: The Problem with Assuming that Companies
Can Do Well While Also Doing Good is that Markets Don’t Really Work that Way (Stanford
Graduate School of Business, 2005).


remove content or otherwise limit access to information and ideas in a manner inconsistent with
internationally recognized laws and standards.47

Similarly, when looking at the Implementation Guidelines for Freedom of Expression,
company practices are solely discussed in relation to ‘Government Demands, Laws and
Regulations’.48
While the declared industry pushback against illegitimate government requests, as
expressed in the GNI standards, undoubtedly addresses an important human rights
problem, it is not sufficient to meet the corporate responsibilities set out in the UNGP,
namely to know the actual or potential human rights impact of business practices, to
prevent and mitigate abuses, and to address adverse impacts with which companies are
involved. In other words, companies are expected to carry out human rights due diligence
across all operations and products.
rights impact must include assessments of all internal procedures and systems, as well as
engagement with the users potentially affected by the company practices, for example,
the procedures that guide removal of content found to be in violation of the terms of
service, or the criteria and mechanisms (human or non-human) that guide which content
is presented to the users.
A similar challenge is found in relation to privacy, where the GNI Principles state that:

the right to privacy should not be restricted by governments, except in narrowly defined circum-
stances based on internationally recognized laws and standards . . . Participating companies will
respect and protect the privacy rights of users when confronted with government demands, laws
or regulations that compromise privacy in a manner inconsistent with internationally recognized
laws and standards.49

In line with this, the corresponding section in the Implementation Guidelines addresses
‘Government Demands, Laws and Regulations’, as well as ‘Data collection’. The latter
is concerned with risk analysis of the specific national context/jurisdiction where the
company operates (GNI Implementation Guidelines, Section on Privacy). In line
with its counterpart on freedom of expression, the GNI Principles and the attached
Implementation Guidelines focus solely on the negative human rights impact caused by
external pressure (governments/risky national contexts), whereas internal mechanisms
related to data processing and exchange remain unchallenged.
Most recently, in March 2017, 22 of the major Internet and telecommunications com-
panies, including all GNI members except LinkedIn, were ranked on their commitments
to respect users’ right to freedom of expression and privacy. The ranking was based on
publicly available material concerning top-level commitment to human rights, accessible
policies, grievance mechanisms, etc. and carried out by the Ranking Digital Rights
Corporate Accountability Index.50 The main conclusion from the 2017 Index is that
companies do not disclose enough information to their users about policies and practices
affecting freedom of expression and privacy. As a result, most of the world’s Internet

47  GNI Principles, Freedom of Expression.
48  GNI Implementation Guidelines, Freedom of Expression.
49  GNI Principles, Privacy.
50  See https://rankingdigitalrights.org/.


users lack the information they need to make informed choices. Similar to the 2015
results, the average score for all companies evaluated was just 33 per cent. While examples
of good practice were found across the Index, even the higher-ranking companies had
significant gaps in disclosure on policies and practices that affect freedom of expression
and privacy.51

4.3  Company Framing of Human Rights

In 2015 and 2016, the author carried out empirical research examining human rights
sense-making within Google and Facebook.52 The research found a disconnect between
the internal discourse on human rights, such as freedom of expression
and privacy, at Google and Facebook, and the external discourse and concern related
to these issues. Internally, both companies see themselves as strongly committed to, and
actively promoting, human rights. The framing, however, is limited to situations where
governments may censor legitimate content, shut down services, demand user data, etc.,
and is largely blind to areas where the companies’ business practices may have a negative
impact on their users’ rights and freedoms. In relation to freedom of expression, there is
a crucial distinction between requests from governments, and requests from the user-base
via flagging. Both companies have established due diligence processes in response to
government requests for content removal, thereby recognizing that illegitimate removals
amount to a freedom of expression issue. This implies that the requests are assessed for
lawfulness, legitimate aim, and proportionality. In contrast, the far larger volume
of content that reviewers remove each day because it is found to violate terms of
service is not considered a freedom of expression issue. These requests are only assessed
for compliance/non-compliance with the standards that apply for the particular service,
e.g. the Community Standards for Facebook, or the Community Guidelines for YouTube.
As for privacy, there is a corporate sense of paying great attention to privacy and of
pushing back against government requests for user data with similar due diligence standards
related to lawfulness, legitimate aim, and proportionality. Likewise, an extensive system
is in place to detect and react to privacy problems whenever a new product or revision
is introduced. The enormous data extraction and analysis that both companies excel in,
however, is not framed as a privacy issue. Privacy is repeatedly addressed as user control
in relation to the products, and not as limits to data collection per se. The idea of personal
data as the economic asset of the Internet economy is the taken-for-granted context,
and there is no recognition that the business model may, at a more fundamental level,
preclude individual control. Arguably, this also reflects essentially different approaches
to privacy and data protection between the United States (where these companies are
headquartered) and the EU, as covered extensively in the privacy literature.53
At the heart of both Google and Facebook lie narratives that speak to the liberating

51  See https://rankingdigitalrights.org/index2017/findings/keyfindings/.
52  Jørgensen, ‘Framing Human Rights’, n. 37 above; R.F. Jørgensen, ‘What Do Platforms
Mean When They Talk About Human Rights’ (2017) 9 Policy and Internet 3.
53  K.A. Bamberger and D.K. Mulligan, Privacy on the Ground: Driving Corporate Behavior
in the United States and Europe (MIT, 2015); F. Boehm, A Comparison Between US and EU Data
Protection Legislation for Law Enforcement (Study for the LIBE Committee, 2015).

Rikke Frank Jørgensen - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:13AM
via New York University

WAGNER_9781785367717_t.indd 356 13/12/2018 15:25


When private actors govern human rights  357

power of technology, and which finds no contradiction between this narrative and the
personal information economy. As such, there is no sense of contradiction between
providing a public space where users exercise fundamental rights and the harnessing of
these communications as part of the online value chain. As argued by Sullivan, there is
a strong libertarian impulse throughout large portions of the tech industry that views
government as a source of problems and very rarely as a solution:

Within this mindset, all that is required of socially responsible companies is to resist
government overreach through technological and legal measures. It is informed by the vision of ICTs
as liberation technology and a maximalist approach to free expression rooted in the US first
amendment.54

In sum, major Internet companies such as Google and Facebook frame their core
mission in terms of freedom of expression, and engage in the industry networks dedicated
to protecting human rights norms and standards in the online domain. Yet, the effective
protection of their users’ right to freedom of expression and privacy is challenged by
several factors. First, the companies’ human rights commitment is voluntary and largely
a commitment to push back against illegitimate government requests, whereas business
processes with a human rights impact remain outside the scope of the companies’ efforts.
In consequence, there is no systematic assessment of how the companies’ internal practices
around, for example, content removal and data collection may impact negatively on
users’ right to freedom of expression and privacy worldwide. Second, the market incentive
of the personal data economy encourages maximum rather than minimum collection of
personal information, as addressed in more detail below.

5. DYNAMICS THAT COUNTER HUMAN RIGHTS PROTECTION

5.1  Personal Information Economy

While Internet companies acknowledge their human rights responsibility, there are
essential market dynamics that counter strong privacy protection, most notably the
personal information economy (PIE). PIE is a notion increasingly used to describe the
predominant online business model that derives its economic value from users’ personal
data, preferences and behaviour. According to this model, which drives the viability of
the major Internet platforms, every instance of online participation involves ‘a transfer
of data which has been economized’.55 On a legal level, the commodification of personal
information essentially implies ‘the organized activity of exchange, supported by the legal
infrastructure of private-property-plus-free-contract’.56

54
 Sullivan, Business and Digital Rights, n. 26 above, 6.
55
  G. Goldberg, ‘Rethinking the Public/Virtual Sphere: The Problem with Participation’ (2011)
13(5) New Media and Society 739–54, at 748.
56
  M.J. Radin, ‘Incomplete Commodification in the Computerized World’ in N. Elkin-Koren
and N. Weinstock Netanel (eds), The Commodification of Information (Kluwer Law International,
2002) 4.

In 2015, Facebook commissioned a study on how to ‘sustainably maximise the contribution
that personal data makes to the
economy, to society, and to individuals’.57 The resulting report explains how mass production
(‘averaging’ required by standardization) is being replaced by mass customization,
enabled by specific information about specific things and people:

Today’s practices, whether they drive the production of a coupon or a digital advertisement,
employ data analysts and complex computational power to analyse data from a multitude of
devices and target ads with optimal efficiency, relevance and personalization.58

As mentioned in the report, this business model has caused a number of concerns, such
as users’ lack of control over data, lack of reasonable mechanisms of consent, lack of
awareness of the business model, fear of manipulation of algorithms, and unaccountable
concentrations of data power.59 Up until now, the industry response to these concerns
has concentrated on increased user information and expanding the radius of user action
(privacy settings) within the pre-set economic model. The fundamental privacy challenge
raised by the personal information economy, however, is receiving increasing scholarly
attention.60 Presuming privacy is a core element in sustaining subjectivity and critical
societal discourse, it becomes foundational to the practice of reflective citizenship, as well
as an indispensable feature of a free and open democracy. As articulated by Cohen:

Privacy shelters dynamic, emergent subjectivity from the efforts of commercial and government
actors to render individuals and communities fixed, transparent, and predictable. It protects the
situated practices of boundary management through which the capacity for self-determination
develops.61

As such, privacy is crucial to sustaining the borders for individual freedom, the ‘breathing
room’ on the basis of which we develop as individuals and engage in society. From
this perspective, the PIE model disrupts one of the very basic elements of personal
development and societal discourse, since it extracts value from analysing, predicting and
controlling all mediated experiences, with no appreciation of a space outside the reach of
this economic paradigm.
In short, it seems paradoxical to expect that the boundaries for data collection and
use will be most effectively guarded by companies whose business model is built around
harnessing personal data for revenue. Whereas the companies may
push back against illegitimate government requests for user data, they are less likely to
be a sufficiently critical judge of their own business practices, not least when these are
closely linked to their business model.

57
 Ctrl-Shift, The Data Driven Economy: Toward Sustainable Growth (Facebook and Ctrl-Shift,
2015) 3.
58
  Ibid. 9.
59
  Ibid. 15.
60
  J.E. Cohen, ‘What Privacy is For’ (2013) 126 Harvard Law Review 1904; T. Matzner, ‘Why
Privacy is Not Enough: Privacy in the Context of “Ubiquitous Computing” and “Big Data”’ (2014)
12 Journal of Information Communications and Ethics in Society 93; H.T. Tavani, ‘Informational
Privacy: Concepts, Theories, and Controversies’ in K.E. Himma and H.T. Tavani (eds), The
Handbook of Information and Computer Ethics (Wiley, 2009).
61
  Cohen, ‘What Privacy is For’, n. 60 above, 1905.

The EU data protection framework, for example, codifies long-established data protection
principles of user consent, purpose specification, data minimization, and so on, in order
to protect European users’ right to privacy and data protection, including when they use
private platforms. However, the verdict is still out on whether this recently reformed
framework will be sufficient to counter the excessive data collection that is part of the
online economy.62 Also, while individuals
may be aware of the information they have provided when using specific services, the
collection and use of the metadata they generate when interacting online are largely
invisible, and the extent of the privacy risks associated with the use of that data is not
easily understood.63

5.2  Public Resource, Privately Governed

Besides the data-driven business model, another human rights challenge relates to the
corporate governance of spaces and services that people have come to understand as
‘public resources’. Here, the potential conflict between human rights standards and
business practices relates to market incentives for restricting rights of expression beyond
the standards provided for in international law, as addressed below. Other dynamics that
work against online freedom of expression include unclear liability regimes and pressure
from governments that might incentivize companies to remove allegedly illegal content
without sufficient due process safeguards. These problems have been discussed extensively
elsewhere64 and will not be addressed in this chapter.
From a business perspective, there is a strong incentive to keep a global user-base
safe and content in order to maximize its use of the service. In response, the companies
have developed (and continue to work on) standards for allowed expressions that can
work from Denmark to Iran. These standards are at work across a number of national
norms to ensure that a global user-base can co-exist. As mentioned, the human right to
freedom of expression is not unlimited; however, restrictions have to be lawful, legitimate
and proportionate, as stipulated in ICCPR, Article 19(3). As part of providing a global
communication platform, the companies effectively decide on a universal standard for
allowed/not-allowed content that can work across very different national contexts and
cultures, with effect on millions of users. For users in many parts of the world, this
corporate boundary setting provides a narrower space for allowed expression compared
to national law.65 In short, while platforms such as Facebook and YouTube have accentuated
means of individual expression at an unprecedented level, the boundaries for those
expressions are not set by human rights law, but by corporate norms codified in terms of
service.66

62
  R.F. Jørgensen and T. Desai, ‘Privacy Meets Online Platforms: Exploring Privacy Complaints
Against Facebook and Google’ (2017) 35(2) Nordic Journal of Human Rights 106.
63
  S. Barocas and H. Nissenbaum, ‘Big Data’s End Run Around Procedural Privacy Protections’
(2014) 57 Communications of the ACM 31; A. Bechmann, ‘Non-informed Consent Cultures:
Privacy Policies and App Contracts on Facebook’ (2014) 11 Journal of Media Business Studies 21.
64
 Korff, The Rule of Law on the Internet and in the Wider Digital World, n. 9 above; Jørgensen
et al., Case Study on ICT and Human Rights, n. 13 above; R. Mackinnon, E. Hickock, A. Bar and
H. Lim, Fostering Freedom Online: The Roles, Challenges and Obstacles of Internet Intermediaries
(United Nations Educational, 2014); Kaye, Report of the Special Rapporteur on the Promotion and
Protection of the Right to Freedom of Opinion and Expression, n. 1 above.
65
  See e.g., the collection of cases provided by onlinecensorship.org.

Moreover, as illustrated by the Ranking Digital Rights Index (mentioned above),
the corporate procedures around enforcement of terms of service are not disclosed, and
are thus neither transparent nor subject to public scrutiny.
On Facebook, the legal basis for allowed/not-allowed content is the Community Standards,
which stipulate the rules that users must follow. Alleged violations are reported via
user flagging and enforced by a combination of external and internal reviewers according
to a set of guidelines that specify, for example, the types of nudity allowed or not allowed.
Since freedom of expression sets out to defend not least those expressions that raise
controversial or critical issues, there is an inherent conflict between the incentive to
keep a global community happy and the protection of those expressions that
may be unwanted, yet would be allowed under international human rights law. In short,
having companies define, and users police, unwanted expressions, much like a global
neighbourhood watch programme, runs counter to the minority protection inherent in freedom
of expression. Besides, since the companies do not frame enforcement of community
standards as a freedom of expression issue, it is not subjected to the same type of legal
review or transparency reporting as government requests. The companies’ enforcement of
self-defined boundaries for expression, ranging from the clearly illegal to the distasteful
and anything in between, is largely invisible and shielded from any kind of public scrutiny.
In short, there is an inherent conflict between the business model for providing a global
communication service (user flagging, terms of service enforcement) and the guarantees
entailed in international standards on freedom of expression. In contrast to data protection,
Internet platforms are rarely subject to regulation concerning the negative impact
they may have on freedom of expression. When content is taken down by Facebook or
YouTube because it is found to violate the community standards, there is limited guidance
in international human rights law, and rarely any national legislation that applies. In these
situations, the company is acting in a judicial capacity, deciding whether to allow content
to stay up or to remove it according to internally defined practices and standards, but
without the human rights requirements that would apply if it were a state body rather
than a private company.67
Arguably, platforms such as Facebook or YouTube are perceived as public forums in
the sense that they provide an open space where everyone may participate. Their corporate
character, however, implies that the companies have wide discretion in the way they
choose to regulate expressions within the platform. Whereas US government regulation
of speech in public forums is subject to strict judicial scrutiny, owners of private property
are relatively free in the restrictions they can place on speech that takes place on
their property.68 In practical terms, this means that there is no freedom of expression
protection for users of privately owned Internet services. Effectively, the platforms are free
to set their own rules and boundaries for allowed speech.

66
  Jørgensen, ‘Framing Human Rights’, n. 37 above.
67
 Laidlaw, Regulating Speech in Cyberspace, n. 16 above.
68
  In the United States, the public forum doctrine was developed to determine the scope and
reach of government authorities in restricting freedom of expression in public places. The doctrine
operates with three types of property: a public forum, such as streets and public parks; a designated
forum, created when a government affirmatively designates a nonpublic forum as open to speaker
access; and a nonpublic forum, such as a privately owned shop. See e.g., A. Maniscalco, Public Spaces,
Marketplaces, and the Constitution: Shopping Malls and the First Amendment (SUNY Press, 2015).

Regarding quasi-public spaces,
US case law has conceded that in instances where private property is functionally the
same as public property, free expression cannot be restricted by the owner of the private
property. Marsh v. Alabama69 was the first case which developed the doctrine concerning
a town wholly owned by a private corporation. Here, the court stated that ‘the more an
owner, for his advantage, opens up his property for use by the public in general, the more
do his rights become circumscribed by the statutory and constitutional rights of those
who use it’.70 Since Marsh, the public forum doctrine has been narrowly applied to those
times when private property ‘has taken on all the attributes of a town’.71 In Hudgens v.
NLRB, for example, the court held that shopping centres are not functionally equivalent
to the company town in Marsh and that there are reasonable alternatives for expressions
by those who use the centre.72
At the European level, the issue of private property vis-à-vis public forum has been
dealt with in Appleby v. United Kingdom (2003) before the European Court of Human
Rights, which concerned distribution of leaflets at a shopping mall.73 With reference
to US jurisprudence, the Court noted that shopping malls often either contain, or are
in close proximity to, public services and facilities. The Court found that in a private
sphere, a state’s positive obligation to protect freedom of expression only arises where
a bar on access to the property prevents any effective exercise of the freedom, or where
it destroys the essence of the right. This would be the case in a company town,
for instance:

Notwithstanding the acknowledged importance of freedom of expression, it does not bestow
any freedom of forum for the exercise of that right[.] Where however the bar on access to
property has the effect of preventing any effective exercise of freedom of expression or it can be
said that the essence of the right has been destroyed, the Court would not exclude that a positive
obligation could arise for the State to protect the enjoyment of Convention rights by regulating
property rights.74

Since the applicants had a number of alternative means of communicating their views, the
Court did not find that the government had failed in any positive obligation to protect the
applicants’ right to freedom of expression.
Considering the powers of Internet giants in the online domain, the US state action
doctrine is also relevant. Following this doctrine, private action is considered state
action when ‘there is a sufficiently close nexus between the State and the challenged action
of [the private entity] so that the action of the latter may be fairly treated as that of the
State itself’.75

69
  Marsh v. Alabama, 326 U.S. 501 (1946), available at https://supreme.justia.com/cases/federal/
us/326/501/case.html.
70
  Ibid. para. 506.
71
  Hudgens v. NLRB, 424 U.S. 507 (1976), paras 516–17, available at http://caselaw.findlaw.
com/us-supreme-court/424/507.html.
72
  Ibid.
73
  In the case, the applicants were stopped from setting up a stand and distributing leaflets by
the private company, which owned the shopping mall. The issue to be determined was whether the
government had failed in any positive obligation to protect the exercise of the applicants’ right to
freedom of expression from interference by the owner of the shopping mall.
74
  Appleby v. United Kingdom, Application no. 44306/98 [2003] ECHR 222, para. 47.

In Cyber Promotions v. America Online (1996),76 it was decided that since
AOL had not opened its property to the public by performing any municipal power or
essential public service, it could not be said to ‘stand in the shoes of the state’.77 More
recently, in 2011, the status of Facebook was challenged in the US courts, in the context of
a claim formed around state action.78 The applicant claimed that Facebook’s termination
of his account violated his right to express himself. The judge noted that the plaintiff’s
claim under the First Amendment was futile because he had alleged no facts plausibly
suggesting governmental action.79 In short, there is no support in the two cases for arguing
that the provision of major Internet services may qualify as governmental action and that
(US) users are therefore protected by their constitutional rights within these platforms.
The cases reiterate the right of private actors to freely conduct a business and illustrate that
in cases of online freedom of expression, there is a very high bar for invoking the positive
human rights obligation of the state vis-à-vis private actors.

6.  POSSIBLE RESPONSES

As mentioned above, the corporate responsibility to respect human rights is not simply a
‘negative’ obligation to avoid actively participating in human rights abuses, but rather an
operational framework of proactive responsibilities that ideally should help companies
assess, mitigate, and remediate a negative human rights impact. Human rights due
diligence is the idea that companies should design and implement effective policies and
procedures to identify any risk of causing human rights abuse, act to avoid that harm,
and position themselves to respond appropriately to abuses that occur in spite of those
safeguards. Initiatives such as the GNI are important steps towards human rights due
diligence; however, as pointed out by Human Rights Watch, they also have shortcomings
due to their voluntary nature:

Voluntary initiatives all face the same crucial limitations: they are only as strong as their corpo-
rate members choose to make them, and they don’t apply to companies that don’t want to join.80

As outlined in the previous section, there are business interests and market incentives
that run counter to the protection of privacy and freedom of expression; it thus seems naïve
to presume that a voluntary commitment is an effective mechanism for securing users’
rights. In addition, the vast amount of revenues derived from the commodification of
personal data makes it unlikely that the business model will change anytime soon due to
privacy concerns.

75
  Blum v. Yaretsky, 457 U.S. 991 (1982), para. 23, available at https://supreme.justia.com/cases/
federal/us/457/991/. In Mark v. Borough of Hatboro, 856 F.Supp.966 (1995), the US Supreme Court
detailed three tests that have been used to determine state action.
76
  US District Court for the Eastern District of Pennsylvania, C.A. No. 96-2486.
77
  Ibid. para. 16.
78
  Kamango v. Facebook, US District Court, N.D. New York, No. 11-435 (2011).
79
  Ibid. para. III.
80
  C. Albin-Lackey, Without Rules: A Failed Approach to Corporate Accountability (Human
Rights Watch, 2013), available at www.hrw.org/sites/default/files/related_material/business.pdf.

The many cases raised against, for example, Facebook and Google in
the United States, as well as Europe, illustrate the variety of privacy and data protection
issues that their practices may entail, ranging from lack of sufficient user consent to
sharing of user information with third parties, tracking of users, and retention of user data.81
On a final note, I will outline two possible responses to this challenge.
First, all actors in the human rights and technology ecosystem need to acknowledge
that Internet companies may have a negative human rights impact of their own accord,
irrespective of the regulation, pressure or cooptation they are subjected to by states. The
impact will vary depending on company size, user-base, business model, products, etc.;
however, the specific implications for users cannot be determined unless the companies
undergo a systematic and independent human rights impact assessment (HRIA) across
their operations. Such HRIAs across all business practices will help to identify conduct
with a negative human rights impact, irrespective of whether it is related to a state
intervention or not. The HRIA could be stipulated by states in NAPs as part of implementing
the UNGP’s call for human rights due diligence, with explicit reference to the Internet
companies whose practices affect millions of users. As part of this, it would also be
important to raise user awareness of online rights and remedies, drawing, for example, on the
work done by the Council of Europe in this field.82 Moreover, as a follow-up, companies
could be requested to undergo annual audits to review the procedures and initiatives they
have installed to mitigate any negative human rights impact.
Second, states need to formulate requirements for grievance mechanisms in line with
the right to remedy under the UNGP. Research suggests that the technology sector is
behind the curve when it comes to grievance mechanisms, and that ‘there is no reason
why companies with the resources and ingenuity of Internet and telecom leaders should
not have grievance mechanisms that meet or excel the effectiveness criteria of the Guiding
Principles’.83 However, implementing grievance mechanisms for platforms which serve
millions of users across the globe is no straightforward task, and there is no indication
that the companies will take on the task voluntarily or via industry networks such as the
GNI. This calls for states to formulate requirements for effective grievance mechanisms as
part of the NAPs, which may in turn initiate development and sharing of best practices
via networks such as the GNI.
In conclusion, in an era in which private companies hold unprecedented powers over
individuals’ ability to exercise human rights in the online domain, there is an urgent and
currently unresolved task for states in securing robust mechanisms of human rights due
diligence and oversight of this new type of power.

81
  I.S. Rubinstein and N. Good, ‘Privacy by Design: A Counterfactual Analysis of Google and
Facebook Privacy Incidents’ (2013) 28(2) Berkeley Technology Law Journal 1332; Jørgensen and
Desai, ‘Privacy Meets Online Platforms’, n. 62 above.
82
  See www.coe.int/en/web/internet-users-rights/guide.
83
 Sullivan, Business and Digital Rights, n. 26 above, 25.



18.  International organizations and digital human rights
Wolfgang Benedek

1. INTRODUCTION

International organizations play a major role in promoting and protecting human rights
on the Internet. In particular, since the World Summit on the Information Society of 2003
and 2005, international organizations have become actively engaged in protecting human
rights online and contributing to standard-setting in this respect. The development of an
information society or, according to UNESCO, of a ‘knowledge society’, requires common
rules, which need to respect existing human rights. However, those human rights need to
be adjusted to the online or digital environment. This can be done by interpreting
existing human rights or by proposing new human rights or regulatory principles to address
new protection needs. International organizations have made an important contribution to this
process, which is manifested in numerous declarations of Internet rights and principles.1
Some also use the concept of digital rights, a notion that needs some discussion. It could be
understood in a very narrow sense as access to copyrighted works under digital
rights management (DRM), which provides a systematic approach to copyright protection
for digital media. However, it is mostly used as a wider concept of rights in the Internet
environment, including the right to access to the Internet, net neutrality or user rights
against Internet companies. In this wider sense, it goes beyond the concept of human rights
in the information society, which, however, provides the main substance of digital rights.2
A wider ‘digital rights concept’ is reflected in the agenda of civil society organizations
such as European Digital Rights (EDRI),3 which in 2014 elaborated a digital
rights charter that spells out commitments rather than rights.4 There is also an Index
ranking major companies in the ICT sector according to their accountability in terms of
digital rights, mainly with respect to freedom of expression and privacy, the second edition
of which was launched in March 2017.5

1
  See Rolf H. Weber, Principles for Governing the Internet: A Comparative Analysis (UNESCO,
2015) and Matthias C. Kettemann, ‘The Power of Principles: Reassessing the Internet Governance
Principles Hype’, Jusletter IT, 29 February 2012, available at www.jusletter-it.eu.
2
  See the contributions of Matthias C. Kettemann and Ben Wagner, in ‘Menschenrechte digital’
(2016) 10 Zeitschrift für Menschenrechte 24 and 38; and Wolfgang Benedek and Catrin Pekari (eds),
Menschenrechte in der Informationsgesellschaft (Boorberg, 2007).
3
  See European Digital Rights (EDRI), at http://edri.org, which provides an excellent newsletter
on developments in the digital rights field and acts as a promoter of digital rights issues.
4
  EDRI, The Charter of Digital Rights, available at https://edri.org/wp-content/uploads/2014/06/
EDRi_DigitalRightsCharter_web.pdf, and www.wepromise.eu/en/page/charter.
5
  See the Ranking Digital Rights Corporate Accountability Index, available at
https://rankingdigitalrights.org.

In 2016, the proposal for a ‘Charter of Digital Fundamental Rights of the European
Union’ was presented in Berlin and Brussels.6
However, the text produced by a group of experts and launched in German newspapers
in December 2016 with the help of the Zeit foundation has not been adopted by any EU
body, nor is it based on a broader EU process. It is, however, of interest for the approach
taken, which goes beyond existing human rights. In this way it is intended to complement
the EU Charter on Fundamental Rights. It contains a right to encryption, which can be
derived from the freedom of expression and the right to anonymity. However, the right
not to be subject to computerized decisions which have significant consequences for one’s
life, and the provisions on artificial intelligence and net neutrality, are not yet covered by
human rights instruments.
The main focus of this chapter is on the role of international organizations with regard
to human rights on the Internet. International organizations are mainly concerned with
the relationship between existing human rights and the Internet and generally do not
use the concept of digital rights. Accordingly, this chapter will mainly address the issue
of human rights online, referred to here as digital human rights because they
operate in the digital environment. Digital human rights are part of Internet governance,7
which therefore is analyzed in this respect. The logical starting point is the World Summit
on the Information Society (WSIS) of 2003 and 2005, leading to an analysis of the role
of the United Nations system, with particular emphasis on UNESCO and the Human
Rights Council and the relevant regional organizations, such as the Council of Europe, the
European Union and the Organization for Security and Cooperation in Europe (OSCE).
They have all contributed to the development of human rights on the Internet, though in
different ways and with varying intensity. This was often stimulated by civil society
organizations, and even individuals, who took advantage of the prevailing multi-stakeholder
approach in Internet governance.8 Their role can hardly be overestimated. They have also
been instrumental in elaborating declarations of rights and principles for the Internet, and
in using the Internet as a tool for empowerment based on a human rights framing of the Net,
although this has often meant a struggle against public and private powers.9
This chapter will first deal with the role of the United Nations and its conferences and
processes, such as the World Summit on the Information Society (WSIS) and the Internet
Governance Forum (IGF), the contribution of UNESCO and of the UN Human Rights
Council (HRC), while the International Telecommunication Union (ITU) is not given
particular consideration because, as a technical organization, it has not made a significant
contribution to the topic of digital human rights. At the regional level, the role of the
Council of Europe (CoE) is highlighted as the most active organization in this field, to be

6 See https://digitalcharta.eu/wp-content/uploads/2016/12/Digital-Charta-EN.pdf.
7 See Wolfgang Benedek, Veronika Bauer and Matthias C. Kettemann (eds), Internet Governance and the Information Society: Global Perspectives and European Dimensions (Eleven International Publishing, 2008).
8 See Matthias C. Kettemann, The Future of Individuals in International Law (Eleven International Publishing, 2013) 111 and 139 et seq.
9 See Rikke Frank Jørgensen, Framing the Net: The Internet and Human Rights (Edward Elgar Publishing, 2013); and Marianne I. Franklin, Digital Dilemmas: Power, Resistance, and the Internet (Oxford University Press, 2013), as well as the contributions of both authors in this volume (Chapters 17 and 1, respectively).

Wolfgang Benedek - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:18AM
via New York University

WAGNER_9781785367717_t.indd 365 13/12/2018 15:25


366  Research handbook on human rights and digital technology

followed by the European Union (EU), where the economic mandate is in the foreground,
although the EU is also concerned with human rights online, in particular freedom of
expression and privacy/data protection. Also of relevance is the OSCE, in particular the
work of its Representative on Freedom of the Media (RFoM), which is presented before
some general conclusions are drawn.
Generally, this chapter aims at a systematic analysis, also providing some insights into
the history of human rights in Internet governance as well as the challenges, opportunities
and limitations in the contributions of the most relevant international organizations. All
the organizations and mechanisms studied fulfill several functions with regard to digital
human rights, in particular awareness raising and, to differing degrees, standard-setting,
while the implementation or enforcement function is usually less developed.
Differences also exist regarding the role of non-state actors and multi-stakeholder
processes, which are least relevant when it comes to enforcement. This is also due to the
enormous differences in economic power between Internet companies and civil society
organizations, and even states or international organizations, as well as to geopolitical
factors: the global backlash against human rights and the shrinking space for civil society
also affect the space available for digital human rights.

2.  ROLE OF THE UNITED NATIONS

2.1  WSIS and IGF

Human rights or digital rights were not in the foreground of the technology-led
early phase of the Internet. Nor did they play a role in the work of the International
Telecommunication Union (ITU), as a mainly technical organization. During the
UN-sponsored World Summit on the Information Society (WSIS), it was mainly thanks
to civil society that the human rights aspect was given some attention.10 Of the UN
organizations, UNESCO and the Office of the High Commissioner for Human Rights
played the main role. However, the final documents of Geneva and Tunis only contained
general references to the Universal Declaration of Human Rights (UDHR), with the
exception of the human right to freedom of expression, which was explicitly mentioned.
The Geneva Plan of Action of 2003 did not mention human rights at all, while the Tunis
Agenda for the Information Society of 2005 contains two paragraphs with references to
human rights.11
It was the NGO Association for Progressive Communications (APC) that in 2006
drafted a first ‘Internet Rights Charter’, which served as an inspiration for the ‘Charter

10 See Rikke Frank Jørgensen (ed.), Human Rights in the Global Information Society (MIT Press, 2006); and D. Hurley, Pole Star: Human Rights in the Information Society (ICHRDD, 2003). See also Statement on Human Rights, Human Dignity and the Information Society, adopted by an international Symposium in Geneva on 3–4 November 2003, available at www.pdhre.org/wsis/statements.doc. See further, Rikke Frank Jørgensen, Chapter 17.
11 WSIS, Geneva Declaration of Principles, Building the Information Society: A Global Challenge in the New Millennium (2003); Geneva Plan of Action (2003); Tunis Commitment (2005); and Tunis Agenda for the Information Society (2005), all available at www.itu.int/net/wsis.



International organizations and digital human rights  367

of Human Rights and Principles for the Internet’ elaborated by the Dynamic Coalition
on Internet Rights and Principles, the first draft of which was presented at the Internet
Governance Forum (IGF) in Vilnius in 2010, while the full version was first presented at
the European Dialogue on Internet Governance (EuroDIG) in Belgrade in 2011.12
The World Summit on the Information Society, as a United Nations conference, was to
lay the ground for the future regulation of the Internet by discussing all relevant aspects,
in particular the question of access to the new technology, which is unequally distributed
worldwide. The ‘digital divide’ was thus a major issue at the beginning, and access to the
Internet for all has remained a major concern in the work of international organizations,
as shown below. It has also continued to be a topic of the Internet Governance
Forum (IGF), the follow-up to the WSIS, since 2006. A human right of access to the
Internet has been called for in various declarations and reports of non-governmental and
intergovernmental actors.13
The Internet Governance Forum, which is based on a mandate contained in the Tunis
Agenda for the Information Society of 2005,14 first met in Athens in 2006 and has since
convened annually in a multi-stakeholder format; in its work, digital human rights have
become more important every year. This took the form of explicit topics proposed in
particular by UNESCO or the Council of Europe, as well as by civil society organizations
and academia, and of cross-cutting issues in many other topics, such as Big Data, in
which a human rights aspect is inherent. In the framework of the IGF, several dynamic
coalitions were established which also worked on digital human rights issues between the
annual sessions, such as the Dynamic Coalitions on Internet Rights and Principles; on
Freedom of Expression and Freedom of the Media on the Internet; on Accessibility and
Disability; on Child Online Safety; and on Gender and Internet Governance. The IGF is
a forum for debate; it does not take decisions or make recommendations, which makes it
difficult to trace its impact on the development of digital human rights. Nevertheless, this
influence can be considered significant, as ideas and proposals are often discussed at the
IGF or similar forums before being taken up in more formal procedures, as the example
of the Charter on Human Rights and Principles for the Internet shows.
The Tunis Agenda for the Information Society identified 20 areas of activity and
in an Annex spelled out 11 Action Lines (C1–C11), together with possible facilitators in
the UN family of organizations. Some organizations are given several responsibilities,
which might also overlap, because some action lines are relevant to the work of several
organizations.
From 2013 to 2015, the United Nations undertook a review of the WSIS process and the
work of the IGF. The WSIS +10 Review resulted in UN General Assembly Resolution
70/125 of 16 December 2015, which, inter alia, extended the Internet Governance Forum

12 See, in particular, Association for Progressive Communications (APC), Internet Rights Charter (2006), available at www.apc.org/node/5677; and Dynamic Coalition on Internet Rights and Principles, Charter on Human Rights and Principles for the Internet (2011), available at www.internetrightsandprinciples.org; see also the European Dialogue on Internet Governance (EuroDIG) in Belgrade, www.umic.pt/images/stories/publicacoes5/MsgsFromBelgrade_eurodig2011_FIN_EN_1.pdf.
13 See Matthias C. Kettemann, Chapter 7.
14 See Tunis Agenda for the Information Society (2005) para. 72.




for another ten years.15 In 2025, a further high-level meeting is to take place.16 UNGA
Resolution 70/125 foresees 12 UN entities as facilitators of the 11 WSIS Action Lines,
No. 7 of which is split into several sub-lines, such as e-learning, e-science and e-agriculture.
In view of the 2030 Agenda, adopted in the same year by UNGA Resolution 70/1, the
objective became to link the 17 Sustainable Development Goals (SDGs) with the
ICT aspects of the 11 Action Lines. For that purpose, a WSIS matrix on how ICT can
contribute to the implementation of the SDGs was produced to provide a better overview
of the linkages and responsibilities.17
UNGA Resolution 70/125 also contains a chapter on human rights, entitled Human
Rights in the Information Society. While recognizing the potential of ICT to strengthen
the exercise of human rights, it reconfirms the work done by the United Nations in this
field. It expresses concern about serious threats to freedom of expression and the plurality
of information, and emphasizes the rights to freedom of expression and to privacy.18

2.2 UNESCO

UNESCO, which has long dealt with media questions, played a major role in the WSIS
and beyond in emphasizing and strengthening human rights. It has also regularly
contributed to the IGFs with events on human rights defenders in the digital space,
so-called ‘digital defenders’, and on freedom of expression and privacy issues in general.
Its contributions, which fall under the responsibility of its Communication and
Information Sector, are often based on studies commissioned from knowledgeable
institutions and experts.19 It has organized pertinent conferences, such as the
‘CONNECTing the Dots’ Conference in Paris in March 2015.
In 2013, UNESCO hosted the WSIS+10 review event, which also involved the ITU, the
United Nations Development Programme (UNDP) and the United Nations Conference
on Trade and Development (UNCTAD); the event was hosted by the ITU in 2014 and
transformed into the WSIS Forum in 2015. Since then, based on UNGA Resolution
70/125, it has taken place annually in Geneva.20 It considers itself ‘the world largest
gathering of ICT for Development’21 and is, like the IGF, organized on a multi-stakeholder
basis. Unlike the IGF, however, which is organized by a small UN unit based in Geneva,
the WSIS Forum is organized by several specialized agencies of the UN, in particular
UNESCO.

15 See UNGA Resolution 70/125, para. 63.
16 Ibid. para. 71.
17 The matrix is available at www.itu.int/net4/wsis/sdg/Content/Documents/wsis-sdg_matrix_document.pdf.
18 UNGA Resolution 70/125, paras. 41–47 et seq.
19 See e.g., Toby Mendel (Centre for Law and Democracy), Andrew Puddephatt (Global Partners and Associates) et al., Global Survey on Internet Privacy and Freedom of Expression (Paris: UNESCO, 2012); Rebecca MacKinnon (Director of the Ranking Digital Rights Project), New America Foundation et al., Fostering Freedom Online: The Role of Internet Intermediaries (Paris: UNESCO, 2014). See also UNESCO, Keystones to Foster Inclusive Knowledge Societies: Access to Information and Knowledge, Freedom of Expression, Privacy and Ethics on a Global Internet, Final Study (Paris: UNESCO, 2015). The first study was prepared for the UNESCO Conference ‘CONNECTing the Dots’ in March 2015.
20 See UNGA Resolution 70/125, para. 69.
21 See https://sustainabledevelopment.un.org/content/documents/10186World%20Summit%20on%20Information%20Society%202016%20Outcomes%202016-May-16.pdf.
Of the 11 Action Lines, UNESCO has taken responsibility for six, among them C3 on
access to information and knowledge; C7 on e-learning and e-science, cultural diversity
and identities; C8 on linguistic diversity and local content; C9 on media; and C10 on the
ethical dimension, all of which are relevant for human and digital rights.

2.3  UN Human Rights Council

The main human rights institution of the United Nations kept a rather low profile on
the issue of human and digital rights until about 2011, when the UN Special Rapporteur
on Freedom of Opinion and Expression, Frank La Rue, presented a first major report on
Freedom of Expression and the Internet, in which he focused in particular on the issue of
access to the Internet and content restrictions.22 This had an important effect on the work
of the Human Rights Council (HRC), which in 2012 adopted its first resolution on the
matter, endorsing the key principle that ‘the same rights people have offline must also be
protected online’.23
Since then, the HRC has adopted a follow-up resolution every two years.24
In 2013, the Special Rapporteur, even before the revelations of Edward Snowden,
produced a report in which he drew attention to the various practices of large-scale
surveillance, which triggered a debate on better protection of privacy. A further resolution
was passed by the UN General Assembly on the Right to Privacy in the Digital Age,25
which requested a report by the High Commissioner on Human Rights on the matter,
which was presented in 2014.26 A large coalition of NGOs called for the creation of a new
position of Special Rapporteur on the Right to Privacy, which the Human Rights
Council established on 1 April 2015.27 Following this, Joe Cannataci was appointed the
first UN Special Rapporteur on the Right to Privacy. He has already issued important reports on the right
to privacy, based on five priorities.28 David Kaye, who succeeded Frank La Rue as
Special Rapporteur on Freedom of Opinion and Expression, has also produced reports
relevant to human rights on the Internet, such as the report on encryption, anonymity
and human rights, and the protection of whistle-blowers, in 2015, and on freedom of
expression, states and the private sector in the digital age in 2016.29

22 See Frank La Rue, Report of the Special Rapporteur on the Promotion and Protection of Freedom of Opinion and Expression, UN Doc. A/HRC/17/27 (16 May 2011).
23 See HRC Resolution 20/8 of 5 July 2012 on the Promotion, Protection and Enjoyment of Human Rights on the Internet.
24 See HRC Resolution 26/13 of 26 June 2014, HRC Resolution 32/13 of 30 June 2016 and HRC Resolution 38/7 of 5 July 2018 on the Promotion, Protection and Enjoyment of Human Rights on the Internet.
25 See UNGA Resolution 68/167 of 18 December 2013. See also UNGA Resolution 69/166 of 18 December 2014 on the Right to Privacy in the Digital Age.
26 See UN High Commissioner for Human Rights, Report on the Right to Privacy, UN Doc. A/HRC/27/37 (30 June 2014).
27 See HRC Resolution 28/16 of 1 April 2015 on the Right to Privacy in the Digital Age.
28 See Right to Privacy, Report to the General Assembly, UN Doc. A/71/368 (13 August 2016).
29 All reports available at www.ohchr.org/EN/Issues/FreedomOpinion/Pages/Annual.aspx.




In conclusion, while the main human rights institution of the UN was slow to face the
challenge of human and digital rights on the Internet, it has now started to contribute
actively to international debates and to produce guidance through its resolutions and
reports.

3.  ROLE OF THE COUNCIL OF EUROPE

The Council of Europe (CoE) is a regional institution with a wide mandate and a focus
on human rights. Nevertheless, it has been the organization most active in addressing the
challenges of protecting and promoting human rights in the information society. As the
Internet is a global phenomenon, the CoE’s work in this field also had a global vocation
and impact. For this purpose, the CoE used all its instruments, with a focus on
standard-setting and on developing case law on the Internet and human rights. In
addition, the Council of Europe plays an advocacy role.30
With regard to standard-setting, it mainly used instruments of soft regulation, such
as recommendations, resolutions, declarations and guidelines,31 while in the case of data
protection and cybercrime, as well as xenophobia and racism on the Internet, it produced
hard law conventions and protocols.32 The human rights focus is an essential part of the
Council of Europe’s work on Internet governance.
One major actor is the Committee of Ministers, which receives the results of the work
of different groups of experts through the Steering Committee on Media and Information
Society supported by the Division on Media and Information Society of the Secretariat.
The expert groups usually consist of governmental and non-governmental experts from
academia or civil society. The support of the highly competent Secretariat is crucial for
the outcome of this work. In particular, following the WSIS, the CoE established the
group of specialists on human rights and the information society, which between 2005
and 2011 laid the ground for the work of the Council of Europe in this field, resulting
in several declarations and recommendations, such as the Declaration on Human Rights
and the Rule of Law in the Information Society (2005); the Recommendation on promoting freedom of
expression and information in the new information and communication environment
(2007); and the Recommendation on measures to promote the public service value of the

30 See Wolfgang Benedek and Matthias C. Kettemann, ‘The Council of Europe and the Information Society’ in Renate Kicker (ed.), The Council of Europe, Pioneer and Guarantor for Human Rights and Democracy (Council of Europe Publishing, 2010) 109–15.
31 See Council of Europe, Recommendations and Declarations of the Committee of Ministers of the Council of Europe in the Field of Media and Information Society (2016), available at www.coe.int/en/web/freedom-expression/committee-of-ministers-adopted-texts; see also https://www.coe.int/en/web/human-rights-rule-of-law/information-society-and-action-against-crime-directorate and Wolfgang Benedek and Matthias C. Kettemann, Freedom of Expression and the Internet (Council of Europe Publishing, 2013) 55 et seq.
32 See the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, 1981 (ETS No. 108), and subsequent texts in Council of Europe, Data Protection, Compilation of Council of Europe Texts (Council of Europe, 2011); and Convention on Cybercrime, 2001 (ETS No. 185) and the Protocol on Xenophobia and Racism, 2003 (ETS No. 189).




Internet (2007).33 Other recommendations concerned a new notion of the media (2011);
the protection of human rights with regard to search engines (2012) and to social
network services (2012); risks to fundamental rights stemming from digital tracking and
other surveillance technologies (2013); the protection of whistle-blowers (2014); ICANN,
human rights and the rule of law (2015); the free, transboundary flow of information on
the Internet (2015); the protection of journalism and the safety of journalists and other
media actors (2016); the right to freedom of expression and the right to private life with
regard to network neutrality (2016); Internet freedom (2016); and the roles and
responsibilities of Internet intermediaries (2018).34
A broader approach was taken by the Recommendation on a guide to human rights
of Internet users, which suggested providing users with the main elements of the major
human rights on the Internet, including the most important remedies. As with other
recommendations, the Explanatory Report to the Recommendation, produced under the
responsibility of the Secretariat, provides valuable details on those rights. The Internet Users
Rights Guide can be accessed right by right on the website of the Council of Europe.35
With regard to soft standard-setting on human rights and the Internet, the Parliamentary
Assembly (PACE) should also be mentioned. PACE has adopted recommendations and
resolutions on a right of access to the Internet, whistle-blowers, mass surveillance, safety
of journalists, cyber-terrorism, cyber-discrimination and online hate speech.36
Alleged violations of the Council of Europe’s European Convention on Human Rights
can be brought before the European Court of Human Rights (ECtHR) in Strasbourg,
which has already developed some case law regarding the application of the Convention
in the Internet context.37 In that case law, the Court, while applying the principle that the
same rights people have offline must also be protected online, emphasizes the amplifying
character of the Internet, which requires, for example, additional care for the protection
of minors, but also with regard to defamation and, in particular, online hate speech.
Accordingly, the ‘duties and responsibilities’ referred to in Article 10(2) of the European
Convention on Human Rights (ECHR) need to be taken very seriously, for example by
journalists or Internet intermediaries.
In particular, in Editorial Board of Pravoye Delo and Shtekel v. Ukraine (2011), the
ECtHR stated that the risk of harm to human rights, in particular as regards respect for
private life, posed by content on the Internet is higher than that posed by the press.38 The
Court also found a positive obligation to create a clear legal framework to clarify the roles
and responsibilities of all stakeholders, including journalists.39 In a similar way, the Court

33 See Committee of Ministers (CoM) Recommendation CM/Rec(2007)11 of 26 September 2007 and CM/Rec(2007)16.
34 See Council of Europe, Recommendations and Declarations, n. 31 above.
35 See www.coe.int/en/web/internet-users-rights/guide.
36 See website-pace.net/en_GB/web/apce/documents.
37 European Court of Human Rights, Research Division, Internet: Case-Law of the European Court of Human Rights (Council of Europe, 2015); see also European Court of Human Rights, Research Division, Fact Sheet, Access to the Internet and freedom to receive and impart information and ideas (2018) and Benedek and Kettemann, Freedom of Expression and the Internet, n. 31 above, 23 et seq.
38 Editorial Board of Pravoye Delo and Shtekel v. Ukraine, Application no. 33014/05 (2011).
39 Ibid. para. 31.




required a clear legal framework in Yildirim (2012). It also held that restrictions must
be narrowly interpreted and the principle of proportionality applied.40 In two cases,
it dealt with the question of liability of intermediaries, such as news platforms allowing
online comments. In the controversial Delfi v. Estonia case (2015), the Grand Chamber
confirmed the finding that effective measures to limit dissemination of hate speech and
incitement to violence are needed, while in MTE v. Hungary (2016), the Court satisfied
itself with the existing disclaimer and notice-and-take-down procedure, as there had been
no incitement to violence.41 In its judgments, the Court often takes the recommendations
of the Committee of Ministers and other relevant soft law as guidance for the interpreta-
tion of the ECHR in the online context.
The Council of Europe has adopted an Internet Governance Strategy, which identifies
its areas and lines of action, in which human rights take central place.42
An advocacy role is played by the Council of Europe Commissioner for Human Rights,
who takes an active stance for freedom of expression on the Internet, in particular with
regard to journalists and human rights defenders, so-called ‘digital defenders’.43

4.  ROLE OF THE EUROPEAN UNION

In the European Union, the economic approach to the Internet has traditionally been in
the foreground. Its Digital Agenda, adopted in 2010 as part of the Europe 2020 strategy
under the responsibility of the Directorate-General (DG) CONNECT, does mention
fundamental rights, but only in the context of privacy and data protection.44 This partly
corresponds to the EU Charter of Fundamental Rights, which in Article 8 recognizes the
right to the protection of personal data. The main concern is to support the European
economy in benefiting from the digital single market to be created and in competing
better with US Internet companies. In this context, the EU is also concerned with user
rights, and in 2012 produced a Code of EU Online Rights, which, however, has not been
widely disseminated; it compiles pertinent EU law protecting citizens using online
networks and services.45 Human rights first appeared in the European Commission’s
Communication on EU Internet governance policies, which mentioned the need for
respect for human rights, freedom of expression, privacy and data protection.46

40 Yildirim v. Turkey, Application no. 3111/10 (2012).
41 Delfi v. Estonia, Application no. 64569/09 (2013, 2015) and MTE and Index.HU Zrt v. Hungary, Application no. 22947/13 (2016).
42 Council of Europe, Internet Governance Strategy 2016–2019, which followed the Internet Governance Strategy 2012–2015, available at www.coe.int/en/web/freedom-expression/igstrategy.
43 See Nils Muižnieks, The Rule of Law on the Internet and in the Wider Digital World (2014), as well as his speeches, available at www.coe.int/en/web/commissioner/the-commissioner.
44 See Communication from the Commission, A Digital Agenda for Europe, COM(2010)245 final/2 (26 August 2010).
45 European Commission, Code of EU Online Rights (2012), available at https://ec.europa.eu/digital-single-market/sites/digital-agenda/files/Code%20EU%20online%20rights%20EN%20final%202.pdf.
46 See Communication from the Commission, Internet Governance: The Next Steps, COM(2009)277 final (18 June 2009) para. 7.




However, it was the Arab Spring of 2011, in which the use of digital communication
had played a major role, that alerted EU policy-makers to the relevance of the Internet
for human rights and democracy.47 Consequently, the European Parliament and the
European Commission undertook several initiatives and also provided support to
strengthen digital human rights, in particular outside the European Union as part of its
external human rights policy. For example, the European Parliament in 2012 adopted a
‘Digital Freedom Strategy’, elaborated through a crowd-sourcing methodology.48 Since
then, it has on various occasions taken the lead in addressing digital human rights, as in
the case of surveillance.49
In May 2016, the Sakharov Prize network met to discuss human rights defenders in
the digital era, after the European Parliament had in 2015 given its Sakharov Prize to
Raif Badawi, a Saudi-Arabian blogger. Generally, the European Union offers important
support to digital defenders, inter alia, by providing funds from the European Instrument
for Democracy and Human Rights (EIDHR).50 One project, the Human Rights Defender
Mechanism, supports digital defenders with training, circumvention technology,
relocation and other assistance.51
A major focus of the EU is on data protection in the digital environment. After long
negotiations, a data protection package consisting of a Regulation and a Directive was
adopted in 2016 and became applicable in 2018.52 It also contains the ‘right to be
forgotten’, allowing citizens to have certain personal data which are not of public interest
removed from search engine results. In this, it follows the judgment of the Court of
Justice of the European Union (CJEU) in the so-called ‘Google case’ of 2014, in which
the Court found that the right to privacy contains a right to be forgotten under certain
conditions.53
In Max Schrems v. Data Protection Commissioner (Ireland) in 2015, the CJEU found
that the protection provided by the so-called ‘safe harbour’ arrangement with the
United States, according to which US companies were assumed to provide equivalent
protection for European data processed in the United States, was insufficient, and
invalidated the underlying Commission decision.54 Already in 2014, the CJEU had found that the EU Data Retention

47 See Ben Wagner, After the Arab Spring: New Paths for Human Rights and the Internet in European Foreign Policy, Study for the Sub-Committee on Human Rights of the European Parliament, EXPO/B/DROIT/2011/28 (2011).
48 See European Parliament, Resolution of 11 December 2012 on a Digital Freedom Strategy in EU Foreign Policy, 2012/2094 (INI), and the report by M. Schaake, EP-Doc. P7_TA(2012)0470 (15 November 2012).
49 See European Parliament Resolution of 8 September 2015, Human Rights and Technology: The Impact of Intrusion and Surveillance Systems on Human Rights in Third Countries, 2014/2232 (INI), based on a report of M. Schaake.
50 See Rikke Frank Jørgensen, Anja Møller Pedersen, Wolfgang Benedek and Reinmar Nindler, Case Study on ICT and Human Rights (Policies of EU) (2015) 63 et seq., available at www.fp7-frame.eu/wp-content/materiale/reports/19-Deliverable-2.3.pdf.
51 See Protect Defenders, implemented by 12 human rights NGOs, available at www.protectdefenders.eu/en/about.html.
52 See http://ec.europa.eu/justice/data-protection/reform/index_en.htm.
53 See C-131/12 Google Spain SL and Google Inc. v. AEPD and Mario Costeja González, CJEU, Judgment of 13 May 2014.
54 C-362/14 Max Schrems v. Data Protection Commissioner, CJEU, Judgment of 6 October 2015.




Directive violated the EU Fundamental Rights Charter.55 Accordingly, the CJEU plays
an important role in protecting human rights on the Internet.
As general guidance for its external policies, the Council of the EU in 2014 adopted the
‘Human Rights Guidelines on Freedom of Expression Online and Offline’, prepared
together with the European External Action Service (EEAS) and the European
Commission.56 These guidelines link EU policies with developments in the United
Nations, the Council of Europe and the OSCE, and are mainly addressed to EU action
in external relations.
Finally, the Fundamental Rights Agency of the EU (FRA) is active in online data
protection issues with regard to the processing of Passenger Name Record (PNR) data,
as well as regarding hate crime. Together with the Council of Europe and the ECtHR, it
published the Handbook on European Data Protection Law.57

5.  ROLE OF OSCE

Within the Organization for Security and Cooperation in Europe (OSCE), the OSCE
Representative on Freedom of the Media (RFoM), established in 1997 and based in
Vienna, has become increasingly active on human rights on the Internet as state
restrictions have become more and more widespread. The RFoM has provided guidance
through studies and a Guide Book,58 and has spoken out on many occasions against
restrictions of freedom of expression on the Internet.59 There is a particular focus on the
safety of journalists, including online journalists.

6. CONCLUSIONS

Digital human rights are of increasing relevance in the work of international organiza-
tions dealing with human rights in general. For example, the UN Human Rights Council
now regularly deals with the online dimension of human rights, in particular freedom of
expression and the right to privacy. At the same time, international organizations have an
important role to play in promoting and protecting human rights in the digital environment.
All the international organizations studied have increased their involvement in this respect.
This is necessary both because of the growing threats from states in the form of
censorship or surveillance, and because of the large impact of Internet companies on the

55  Joined Cases C-293/12 and C-594/12 Digital Rights Ireland and Seitlinger and Others, CJEU, Judgment of 8 April 2014.
56  Council of the EU, EU Human Rights Guidelines on Freedom of Expression Online and Offline (12 May 2014).
57  EU Fundamental Rights Agency and Council of Europe, Handbook on European Data Protection Law (2014); and Council of the EU, EU Human Rights Guidelines, n. 56 above.
58  Yaman Akdeniz, Media Freedom on the Internet: An OSCE Guide Book (OSCE Representative on the Freedom of the Media, 2016); Zenet Mujic, Deniz Yazici and Mike Stone, Freedom of Expression on the Internet (OSCE Representative on the Freedom of the Media, 2012).
59  For the Representative on the Freedom of the Media, see further www.osce.org/representative-on-freedom-of-media.



International organizations and digital human rights  375

human rights of their users, an impact that calls for regulation and remedies. Bloggers
and Internet platforms suffer from the general trend of shrinking space for civil society
and media freedom.
International organizations active in this field can be found at both the universal and
the regional level, where the Council of Europe stands out. In the digital context, multi-
stakeholderism plays a particular role, allowing companies pursuing private interests and
civil society actors pursuing public interests to be formally involved. Even so, after
multi-stakeholder processes such as the IGF, EuroDIG or working groups have run their
course, final decisions are still taken in the traditional intergovernmental way, whether by
the Human Rights Council or the Committee of Ministers of the Council of Europe. But
these decisions are usually informed by multi-stakeholder processes involving experts
from all stakeholder groups. The regulation of the digital environment is thus shaped by a
multitude of regular discussion processes, most visible in the IGF, in which digital human
rights have become an important field as well as a cross-cutting topic.
With regard to human rights, a major contribution comes from civil society and its
organizations, including academia. However, the language of 'digital rights' used by
parts of civil society has not been adopted by all, and in particular not by international
organizations, which prefer to speak of human rights online. The advantage of this
approach is that it employs the established concept of binding human rights, whereas
the concept of digital rights risks being relegated to the non-binding, aspirational realm
of soft law. For those rights which go beyond established human rights, this may make
sense, but bundling the two together carries the risk of devaluing established human
rights, which needs to be prevented.
International organizations can assist through regulation, although mostly in the form of
soft regulation: elaborating principles and guidelines, recommendations, resolutions
or declarations. They can also play an advocacy role, as do the OSCE Representative
on the Freedom of the Media, the Council of Europe Commissioner for Human Rights,
and the UN Special Rapporteurs on Freedom of Expression and Privacy. In doing so,
they cooperate closely with all stakeholders, following a multi-stakeholder approach.
They provide a space for civil society organizations, which are major contributors
to discussions on seeking solutions in the public interest. Cooperation between
international organizations is generally good, although there is occasional competition
in addressing topical issues, which may lead to parallel resolutions. The European Union
sometimes provides funding for pertinent projects of other organizations, in particular
the Council of Europe and the United Nations.
In conclusion, awareness among international organizations of the importance of digital
human rights has grown considerably. They contribute to the promotion and protection
of these rights in accordance with their respective mandates. The Council of Europe,
although a regional organization, has taken a lead with regard to (mainly) soft regulation.
As there is no general mechanism of coordination and no single lead agency, meetings
such as the Internet Governance Forum or the WSIS Forum can also play an indirect role
in this respect.



19.  Recognizing children’s rights in relation to digital
technologies: challenges of voice and evidence,
principle and practice
Amanda Third, Sonia Livingstone and Gerison Lansdown*

1.  CHILDREN’S RIGHTS AND DIGITAL MEDIA


1.1  Overview of an Emerging Problem

In an era of rapid technological change characterized by the growth of online digital
networks, the adoption of and increasing reliance on mobile and social media, and a host
of associated technological opportunities and risks, it is becoming clear that children's
rights are both realized and infringed in new ways.1 Crucially, digital media are no longer
luxuries, but are rapidly becoming essentials of modern existence – and this applies
increasingly in the Global South as well as the Global North. Children are at the forefront
of trends in digital uptake globally, with an estimated one in three Internet users worldwide
already aged under 18. Much future growth in Internet use will occur in the Global South,
where children constitute between one-third and a half of the population; thus the proportion
of users under the age of 18 is set to grow significantly.2 In tandem, challenges associated
with digital media are now becoming acute in the Global South in the wake of the rapid
uptake of digital media, particularly via mobile platforms.3
The United Nations Convention on the Rights of the Child (UNCRC), adopted by the
UN General Assembly in 1989 and since supplemented by a series of Optional
Protocols and General Comments, elaborates a comprehensive framework of children's
civil, political, protection, social, economic and cultural rights. It affirms children as active
agents in the exercise of their rights, delineates the particular rights of children to ensure
they develop to their full potential, and sets out special mechanisms to deliver these.
However, it was developed before the digital age.
In this chapter, we use the terms ‘digital’ and ‘digital media’ to refer to the Internet and
mobile technologies, digital networks and databases, digital contents and services, along

*  This chapter draws on a report produced for and funded by the Office of the Children's
Commissioner of England, available at www.childrenscommissioner.gov.uk/wp-content/uploads/2017/06/Case-for-general-comment-on-digital-media.pdf. Children's contributions to this chapter
were generated with funding from the Young and Well Cooperative Research Centre. The authors
thank the funders, the child rights experts, and the children who participated, for their support in
generating the evidence we present herein.
1  Children here and throughout are defined as everyone under the age of 18; see UNCRC (1989), Art. 1.
2  See Livingstone et al. (2015b), also ITU (2016).
3  See Byrne et al. (2016); Livingstone and Bulger (2014); Livingstone et al. (2015b).

376
Amanda Third, Sonia Livingstone and Gerison Lansdown - 9781785367724
Downloaded from Elgar Online at 12/18/2020 12:51:27AM
via New York University

WAGNER_9781785367717_t.indd 376 13/12/2018 15:25


Recognizing children’s rights in relation to digital technologies  377

with diverse other information and communication technologies (ICTs), including more
recent developments in artificial intelligence, robotics, algorithms, 'Big Data' and the
'Internet of Things'.4 Increasingly, almost every aspect of children's lives is influenced
by, and even reliant on, digital and networked media.5 Yet, amidst burgeoning debates
about human rights in relation to Internet governance in particular and the digital
environment more widely, children's rights are not given sufficient profile. Further, where
they are given consideration, children's voices remain marginalized, as does the research
and practice that directly engages them. This is despite the centrality to the implementation
of the UNCRC of Article 12, which codifies children's right to be heard in the
decision-making processes that will impact their lives.
Today, many policy, legislative and regulatory mechanisms do not adequately support
and protect children online.6 Many young Internet users around the world lack adult
guidance from parents, teachers and other caregivers on safe and appropriate online
engagement.7 The need for reliable, evidence-based mechanisms and guidance spans the
full range of children's rights, but this is too often unrecognized or little understood in
many countries.8 Such difficulties tend to generate anxiety, impeding the search for
proportionate, evidence-based, sustainable strategies and initiatives that support
children's agency and rights.
In this chapter, recognizing the global scope of the Handbook, we draw on geo-
graphically and culturally diverse examples of recent research to weigh the issues at stake,
showing how the relevant child rights issues relate to the practical contexts of children’s
experiences with digital technologies around the world. The emergent knowledge base
will be integrated with the insights of children generated through an innovative online
consultation platform (RErights.org).9 This allows us to pinpoint the pressing issues, con-
troversies and knowledge gaps relevant to children’s experiences with digital technologies,
as revealed by evidence gained from and by children, and thereby to inform vital efforts to
promote and fulfil their provision, protection and participation rights in the digital age.

4  See, e.g., Rose et al. (2015).
5  The integral role of media was already recognized at the 10th anniversary of the UNCRC by the Oslo Challenge, which emphasizes that the media and communication environment is integral to many, if not all, of children's rights. See Sacino (2012); UNICEF (no date).
6  Byrne et al. (2016); Livingstone et al. (2011b).
7  Livingstone and Byrne (2015).
8  See Livingstone et al. (2015b).
9  RErights.org invites children aged 10–18, targeting 14- to 16-year-olds, to identify the key topics they wish to discuss; participate in a series of interactive tasks designed to elicit their views via surveys, creative writing, photography, interviews with peers, etc.; generate child-centred definitions of key concepts; and contribute to the analysis of the growing dataset. Content received by the research team in languages other than English is translated and the research team works from English transcripts. Photo and audio-visual contributions are analysed using visual and discourse analysis methods, and the results are shared with the community of children, youth-serving organizations and policy-makers via infographics, blogs, social media and periodic industry reports. This process began in 2014 to inform the deliberations at the Day of General Discussion and since then has engaged over 710 children from over 60 countries in sharing their views on their rights in the digital age; see e.g., Third et al. (2014); OHCHR (2014).




1.2  Challenge of Fulfilling Children’s Rights in the Digital Age

Crucially, digital media rely on a complicated, transnational value chain involving
multiple companies with diverse interests and a complex web of legislative and other
regulatory efforts.10 Their contents enable creative or malicious re-editing, and leave easily
searchable and permanent records of activity. Digital media are no longer set apart from
the realities of children’s existence, being merely ‘virtual’ or somehow ‘unreal’, but rather
are thoroughly embedded in the infrastructures of all our lives, and this is set to increase
dramatically.11 So, while attention is often centred on the online context, the wider poten-
tial of digital media matters for all dimensions of children’s experiences.
Digital media now pose new and broad-ranging challenges for states in meeting their
responsibilities to secure children’s rights. These challenges are already salient in the
Global North and are becoming so in the Global South. They include privacy hacks; new
forms of sexual exploitation ‘at a distance’; scalable networked solutions for education
and participation; the disintermediation of both parents and the state; discriminatory
algorithmic calculations harnessing the power of ‘Big Data’, and much more. Many
countries are facing the problem that ‘fast-paced, widespread growth often occurs far
ahead of any understanding of what constitutes safe and positive use in digital contexts’,12
especially as the Internet is generally designed for adults. Small wonder, then, that
hopes and fears, anxieties and confusion about the Internet are so widespread, or that the
flurry of state, regulatory and industry responses has often been produced in haste and
under pressure. One result
is rising tensions between public and private sectors, between states, between families and
the institutions of school, law enforcement and governments, and even between children
and parents as societies struggle to manage technological change. Another is the rising
call from organizations that work with children for a coherent, principled, evidence-based
framework with which to recognize and address children’s rights and best interests.
Digital media pose particular challenges for children's rights. First, the Internet is
age-blind. In the digital environment, a platform or online service generally cannot
determine whether a user is a child. The consequence is that children are often treated
as adults online, and it is difficult to provide particular protections appropriate to
children's needs or best interests.13 Second, online operations
are ever more opaque. The complex interdependencies among companies providing
digital media and networked services are largely unaccountable. Businesses increasingly
embed value decisions into their operations through use of automated algorithms, which
infer user characteristics – and the consequences (in terms of bias, discrimination, inac-
curacy or even legality) are difficult to assess or adjust in relation to the public interest
in general, or child rights in particular. Third, the Internet is transnational. There is no
doubt that this poses difficulties for states, especially given the transnational nature of
key companies and, more subtly, the entire digital ‘value chain’, challenging jurisdiction,
impeding regulation, introducing unintended consequences of interventions, and risking

10  See e.g., the resources available at the Global Commission on Internet Governance at www.ourinternet.org/research and Internet Society at www.internetsociety.org/publications.
11  See World Bank (2016).
12  Livingstone et al. (2015a) 3.
13  Livingstone et al. (2015b).




cultural conflicts.14 Finally, the opportunities and risks associated with digital media are
profoundly impacted by wider social, economic and political factors. For children, the
possibilities of digital media for enacting their rights are highly dependent on their social
development, socio-demographic resources15 and cultural contexts. These circumstances
easily become a source of deepening inequality rather than the means of realizing rights
in the digital age.

1.3  Meeting the Challenge

Interest in rights-based approaches to children's Internet use crystallized in 2014, which
marked the 25th anniversary of the UNCRC, as well as the 25th anniversary of the World
Wide Web. In September 2014, the UN Committee on the Rights of the Child held a Day
Wide Web. In September 2014, the UN Committee on the Rights of the Child held a Day
of General Discussion (DGD) on ‘Digital Media and Children’s Rights’.16 The resulting
report recognized that ‘what happens offline today will also be manifest online and what
happens online has consequences offline’17 and that ‘ICT in itself is neither good nor
bad from a human rights perspective – its benefits or harms depend on how it is used’.18
While the report urged that ‘a balance between empowerment and protection of children
in the online world has to be found’,19 it is not clear that significant and constructive steps
are now being taken, or even that the importance of digital and networked media is suf-
ficiently high on the agenda of many states, given uncertainties and dilemmas about how
to ensure that digital and networked media promote and protect rather than undermine
children’s rights.20
Since 2014, some significant initiatives have been set in motion, adding to the rising
attention towards digital media among those concerned with child rights, as well as the
growing concern with child rights among those at the forefront of Internet governance.
For instance, in its recent mapping of the global Sustainable Development Goals (SDGs)
and the UNCRC, UNICEF asserted ‘that all of the Global Goals are relevant for
children, not only those which specifically refer to Children’,21 urging in particular the
importance of digital media for UNCRC, Article 13 (freedom of expression), Article 17
(access to information and media), and Article 28 (education), among other articles.22
Further initiatives include the UN Special Representative of the Secretary-General on
Violence Against Children’s global initiative against cyber-bullying;23 the WeProtect
Global Alliance ‘to end child sexual exploitation online’;24 the prominence of the digital
environment in the 2016–21 Council of Europe Strategy for the Rights of the Child;25

14  Global Commission on Internet Governance (2016).
15  Livingstone et al. (2014b); Swist et al. (2015).
16  See OHCHR (2014).
17  Ibid. 3–4.
18  Ibid. 4.
19  Ibid. 3.
20  See Gasser et al. (2010).
21  See Wernham (2016) 2.
22  See ITU (no date); Sachs et al. (2015).
23  See UNSRSG (2016).
24  See WeProtect (no date).
25  See Council of Europe (2016).




and the regional digital citizenship framework being developed by UNESCO Bangkok
(Asia-Pacific Regional Bureau for Education) with Google Asia-Pacific. UNICEF’s 2017
flagship report, The State of the World’s Children, which addresses children in a digital
world, both documents ongoing initiatives and draws attention to the challenges ahead.26
Recent years have also seen a growing body of research evidence examining children’s
experiences with digital media. Much of this is relevant to children’s rights, although not
all research is couched in those terms, and not all meets international standards of peer
review. Although observers are often concerned that digital media evolve so fast that
evidence quickly dates, social norms and practices change more slowly, and therefore
much evidence remains informative and much behaviour remains predictable, even when
particular incidences or percentages change over time. On the other hand, the evidence
remains unbalanced in important ways:27 most available evidence relates to children
and young people's digital media use in the Global North rather than the Global South;
most also concerns young people rather than children, and little disaggregates them by
gender, ethnicity, socio-economic status or other demographic and vulnerability factors.
Also problematically, research examines the incidence of online risks of harm far more
than it attends to online opportunities, and it rarely follows up to identify the later
consequences of risks or opportunities. Last, more research examines how digital media
use challenges children's rights than evaluates whether and how digital or other
initiatives could enhance the realization of rights.

2.  WEIGHING THE EVIDENCE AND LISTENING TO CHILDREN'S VIEWS ON THEIR RIGHTS IN THE DIGITAL AGE

In what follows, we review the available evidence, along with the views of children, accord-
ing to the categories of rights clustered by the reporting guidelines established for
states by the UN Committee on the Rights of the Child: general principles; civil rights,
freedoms and privacy; violence against children; family environment and alternative care;
disability, basic health and welfare; and education, leisure and cultural activities.28
We do not here address or advocate for the creation of new, so-called ‘digital rights’.
Rather, we urge recognition of the fact that ‘the digital’ is increasingly embedded in the
infrastructure of society rather than something discrete and set apart; it is becoming a
taken-for-granted environment for work, family, relationships, commerce, crime, govern-
ment, and much more. Thus, children’s rights are increasingly at stake in new ways in
the digital age. What is needed is greater clarification, interpretation and guidance on
the measures needed, in the context of the digital environment, to guarantee that their
existing rights are effectively respected, protected and fulfilled.

26  UNICEF (2017).
27  See, among others, Barbovschi et al. (2013); Gasser et al. (2010); Kleine et al. (2014); Livingstone and Bulger (2013; 2014); Livingstone and O'Neill (2014); Livingstone et al. (2017); UNICEF (2012).
28  See Committee on the Rights of the Child (2015).




2.1  Applying the General Principles of the Convention on the Rights of the Child to the Digital Environment

The general principles of the UNCRC – Articles 2 (non-discrimination), 3 (best interests),
6 (optimum development) and 12 (right to be heard) – relate to digital media in crucial
ways.

2.1.1  Non-discrimination, children’s best interests and optimum development


As digital media, especially forms of mobile Internet connectivity, spread throughout
high-, middle- and, increasingly, low-income countries, considerable inequalities occur
in who gains access to what, with what quality and cost of connection.29
inequalities in access to hardware and connectivity, there are inequalities in the provision
of content (especially in poorer countries, among small language communities, and for
ethnic or other minorities) and crucially, inequalities in the skills and competencies to use
and benefit from digital media.30
Irrespective of their country or region, the social, cultural and economic sources of
inequality that differentiate children’s life chances also shape their online opportunities
and risks. This has particular significance in relation to UNCRC Articles 22 (refugees),
30 (minority and indigenous groups), 34 (protection from sexual exploitation), 35
(protection from abduction, sale and trafficking), 36 (protection from other forms of
exploitation) and 38 (protection from armed conflict). Research consistently shows that,
for a variety of socio-structural reasons, some children (generally termed ‘vulnerable’
or ‘disadvantaged’31) are less likely to access or to benefit from online opportunities
and more likely to experience harm as a consequence of exposure to online risks. Such
groups include children living with chronic illness or disability; gender-diverse young
people; First Nations children; refugees; newly arrived migrants; children experiencing
homelessness; and children whose primary language is other than English. In short,
those who are more vulnerable offline tend to be more vulnerable online, and efforts need
to focus precisely on supporting them and fostering their abilities to take advantage of
opportunities online.32
Engaging online can help disadvantaged children to access information and build
communities of interest and broader support networks, thus improving their wellbeing
and capacity to enact their rights. Gender-diverse young people, children living with dis-
abilities and children living in rural locations, among other marginalized or disadvantaged
groups, all stand to benefit from the resources that online communities can provide,

29  ITU (2016); World Bank (2016); UN ECOSOC (2015); WEF (2015).
30  Kleine et al. (2014); Livingstone et al. (2012); World Bank (2016); UNICEF (2013).
31  In making this claim, we must recognize the fact that 'disadvantage', 'marginalization' or 'vulnerability' is not a straightforward predictor of vulnerability online. Indeed, there are some instances in which children who are classified as 'vulnerable' demonstrate exemplary levels of resilience in their use of digital applications, programs and services, and deploy digital media to benefit their wellbeing. The challenge is to better understand how such examples of resilience might be translated to larger numbers of children both within and beyond 'vulnerable' communities. Evidence and suggestions for policy and practice can be found in UNICEF (2017).
32  Barbovschi et al. (2013); Kleine et al. (2014); Livingstone and Bulger (2013); Livingstone and O'Neill (2014); Metcalf et al. (2013); Robinson et al. (2014); Third and Richardson (2010).




whether informal or enabled through targeted interventions.33 As such resources are rolled
out, this is a critical moment to ensure that disadvantage is not compounded by digital
exclusion.
However, benefits are also anticipated for the wider population of children. A burgeon-
ing literature has found evidence for the positive impacts of digital media use on children's
wellbeing.34 These benefits are expected to continue to extend to children in the Global
South, where young people are two to three times more likely to be online than the
population as a whole, although figures for younger children are scarce.35
the deployment of ICTs can support children’s best interests and optimum development,
both through the growth of general access to digital media and through the targeted use
of digital media in programme interventions and public policy initiatives, including, for
instance, in relation to health provision, environmental issues or disaster relief.36 But it
is also increasingly recognized that digital media pose distinct risks of harm to children,
through the contents and contacts they facilitate and the digital traces they create.37 It is
crucial that these hopes and fears, opportunities and risks, are addressed together, so that
interventions are neither naïve nor one-sided.
While children are often vocal on the subject of their rights in relation to digital media,
many lack knowledge of, or the capacity to enact, their rights in the digital environments
available to them.38 However, they are generally clear about the challenges they face
regarding poor infrastructure and low-quality connectivity:39
I lack access most of the time. (boy aged 14, Kenya)

There is not enough power so the computer is not working. (boy, Nigeria)

Yet, however limited their access or outdated their technologies, children often display a
high degree of inventiveness and creative workarounds, revealing their strong motivation
and sense of ‘a right’ to the Internet. Third et al.40 report video footage submitted by a boy
in Nigeria that shows him powering up a diesel generator in order to charge his computer
and mobile phone. Children also report use of wind-up mobile phone chargers and

33  Collin et al. (2011); Mason and Buchmann (2016); Robinson et al. (2014); Swist et al. (2015) 7; Third and Richardson (2010); Third et al. (2014); UNHCR (2016).
34  See, e.g., Swist et al. (2015); Collin et al. (2011); Third and Collin (2016), and the work of the Young and Well Cooperative Research Centre, which, from 2011 to 2016, investigated the impacts of technology on children's and young people's mental health and wellbeing: http://pandora.nla.gov.au/pan/141862/20160405-1343/www.yawcrc.org.au/index.html.
35  ITU (2016); ITU and UNESCO (2013); Livingstone et al. (2015b).
36  For example, children have contributed to positive social change in their communities by using digital technology to map hazards, such as excessive garbage, landslides, lack of drainage, and inadequate sanitation facilities, and to mobilize their communities to address them (see www.unicef.org/cbsc/index_65175.html). Real-time mapping via digital platforms such as ushahidi.com enables data to be gathered and visualized during crisis situations, enabling better coordinated responses.
37  Burton and Mutongwizo (2009); Gasser et al. (2010); Livingstone et al. (2017); UNICEF (2012).
38  Livingstone and Bulger (2014); Third et al. (2014).
39  See Kleine et al. (2014); Livingstone and O'Neill (2014); Third et al. (2014).
40  Third et al. (2014).




similar workarounds to provide even the most basic access. No wonder, as Bob Hofman
of the Global Teenager Project41 states:

[Many children] think that having access to the Internet is a basic right – food, water, health
care and connectivity . . . And whether it is students from Ghana or from Canada, they express
[this] very clearly.42

Yet, while income and geography are key determinants of people’s access to digital
media in general,43 gender (among other factors) is already a key source of discrimina-
tion, even within populations that do have access. The growth of digital resources now
threatens to compound and deepen gender discrimination. Girls have much to gain from
use of digital media, and are keen to optimize this,44 but most research and programme
evaluations show that their access and opportunities are far more restricted than those of
boys.45 Hence the value of targeted initiatives such as Regina Agyare’s Soronko Solutions
and Tech Needs Girls:

The girls are learning to code, and once they are done they will get paid internships at a software
company where they can start to economically empower themselves and be able to help pay for
their own education. We have also engaged with the community such that the parents see the
value in educating their girl child.46

Less research is available regarding other forms of inequality (such as ethnicity, religion, caste or language), although in the global North, children are shown to experience discriminatory behaviour or outright hostility based on their gender, ethnicity, sexuality or other factors.47 There is also evidence that online spaces can – under the right circumstances – provide support and opportunities to explore identity and gain needed resources, and that this can be of particular benefit to those who are vulnerable or discriminated against offline.48

2.1.2  Children's right to be heard

The challenge for policy-makers, and for the professionals and organizations supporting children, is to maximize the benefits without exacerbating existing vulnerabilities or exposing

41  The Global Teenager Project engages more than 20,000 students in over 42 countries in collaborative learning experiences. See www.ict-edu.nl/gtp/wat-is-gtp/.
42  Cited in Third et al. (2014) 65.
43  See, e.g., Banaji (2015); Walton and Pallitt (2012).
44  See de Pauw (2011); Raftree and Bachan (2013).
45  Girls are less likely to be given expensive devices; they have more domestic chores and so less disposable time; they are more vulnerable to sexual risks and gender-based violence; they are subject to gender discrimination and therefore have less access to education and employment; they have less freedom to seek information or opportunities for expression, and so forth (Livingstone et al. (2017)). See also Cortesi et al. (2015); de Pauw (2011); GSMA (2015); UN (2011); UNCTAD (2014); UNICEF (2013); WEF (2015).
46  Cited in Third et al. (2014) 53.
47  See Alper and Goggin (2017); Campos and Simões (2014); Dahya and Jenson (2015); Tynes (2015), among others.
48  See Banaji and Buckingham (2013); Coleman and Hagell (2007); ITU (2012); Robinson et al. (2014); UNICEF (2013); WEF (2015).



384  Research handbook on human rights and digital technology

children to harm. Children have a right to be heard as well as a right to protection.
UNCRC, Article 12 establishes the right of children to express their views and to have
them taken seriously in accordance with their age and maturity. This provision applies
both to children as individuals and as a constituency, and has been interpreted very
broadly by the Committee on the Rights of the Child, the international body charged
with monitoring states’ compliance with the Convention on the Rights of the Child.49
It has argued that most matters of public policy, from the local to the international, are
relevant for children’s lives and, accordingly, are legitimate issues on which their voices
should be heard.
Thus, the digital environment can, and indeed does, serve as a platform for individual children to speak out, inform public debate and influence policy, as well as for children collectively to collaborate, organize and share views. Several organizations have sought to harness the potential of digital media to amplify children's voices. Best known is UNICEF's U-Report mobile text messaging platform for children, launched first in Uganda and then in other parts of Africa, which enables children to contribute information and suggestions to decision-making processes (on, for instance, sanitation, HIV/AIDS, youth unemployment and disaster management) that affect their communities.50 Relatedly, on UNICEF's 'Voices of Youth' platform, a community of youth bloggers and commentators from all over the world offer their insights on a range of topics affecting them.51
Governments have a responsibility for promoting implementation of Article 12, including in the online environment, and in so doing to ensure an appropriate balance between protection and the right to be heard. Generally, however, the greater availability of digital media is not being used to include or amplify children's voices in the design of interventions and decision-making processes, and considerable digital and cultural barriers to children being heard and responded to remain. UNICEF frames child participation as a right in itself and as a crucial path to other rights.52 While a host of initiatives scattered around the world are experimenting with use of digital media to enable child participation, these tend to remain small-scale, unsustainable and too rarely evaluated for good practice to be shared.
Child participation, even in an age of digital connectivity, is still more promise than
reality, and both determination and guidance from states are sorely needed, especially
given the considerable attention to risk-focused and protectionist (sometimes overly
protectionist) approaches to digital media.53 Indeed, the evidence suggests that, as digital
media are adopted in more parts of the world, and as society increasingly relies on
digital media for many functions pertinent to child wellbeing, children’s rights are being

49  CRC, General Comment No. 12: The Right of the Child to be Heard (2009), CRC/C/GC/12, paras 22–25.
50  Kleine et al. (2014).
51  See www.voicesofyouth.org/en/page-1.
52  See www.unicef.org/crc/files/Right-to-Participation.pdf; for the 'ladder of participation' and a critique of token inclusion, see www.unicef-irc.org/publications/pdf/childrens_participation.pdf; and for a resource guide for practitioners, see www.unicef.org/adolescence/cypguide/. Finally, see the Council of Europe's framework and tool for assessing the effectiveness of child participation strategies, available at www.coe.int/en/web/children/child-participation-assessment-tool.
53  For instance, Internews Europe's study of media reporting of child rights issues in Kenya found that a patronizing attitude to children by journalists, news agencies and civil society organizations means that their voices are routinely excluded (Angle et al. (2014)).


infringed. The adverse and discriminatory implications, for the child's best interests and optimum development, of both gaining and lacking access to digital media will increase unless efforts specifically target children's rights.
Children themselves have high aspirations for a world facilitated by digital media,
believing the Internet enhances connection between individuals, communities and
cultures, across national and international borders, and positioning technology as key to
promoting a spirit of understanding, peace, tolerance, equality and friendship among all
peoples, supporting their rights to non-discrimination (Article 2):

[If everyone had equal access to digital media] this would help various people in various parts of
the world to learn about different cultures, about the people. This would help with the advance-
ment of people and society. (girl aged 16, Trinidad and Tobago)
For me, it unites the world. (boy aged 14, Argentina)

In short, children see accessing information as crucial to 'becoming responsible citizens who are able to form their own opinions and participate in their community and they explicitly connect the idea that digital media enable their right to information with their right to participation':54

I don’t know what I would do without it because I was born in the Internet era. I cannot imagine
a life without the Internet because I use it every day, for my studies, I use it for all my needs. And
. . . I need it very much. (boy aged 16, Malaysia)

2.2  Children’s Civil Rights and Freedoms in the Digital Environment

The digital environment significantly impacts children’s civil rights and freedoms. Access
to the Internet affords wide-ranging opportunities for the realization of the UNCRC
right to freedom of expression (Article 13), freedom of thought (Article 14), freedom of
association and assembly (Article 15), and the right to information (Article 17). On the
other hand, it also raises challenges in respect of ensuring the right to privacy (Article 16).

2.2.1  Information and participation

Research55 and policy56 have begun to document the benefits for children of participating
online, in ways that are particularly relevant to these civil and political rights. As the
former UN Special Rapporteur on Freedom of Expression, Frank La Rue, put it, the
Internet is:

[a]n important vehicle for children to exercise their right to freedom of expression and can serve
as a tool to help children claim their other rights, including the right to education, freedom of
association and full participation in social, cultural and political life. It is also essential for the
evolution of an open and democratic society, which requires the engagement of all citizens,
including children.57

54  Third et al. (2014) 38.
55  See Swist et al. (2015); Collin et al. (2011).
56  O'Neill et al. (2013).
57  La Rue (2014) 16.


Children, too, believe that digital media broaden their horizons and enable them to
know about and connect with other cultures and people, and they value this enormously:

The Internet gives the access to children to explore new things. (girl aged 17, Malaysia)

They report that ‘digital media enable them to be informed citizens of the world who are
better prepared to participate meaningfully in the lives of their communities’.58 Children
also note that digital media provide new ways for them to exercise their rights to freedom
of expression. They demonstrate an eagerness to join the conversation about matters that concern them, to participate as fully engaged citizens and to access information:59

Many blogs or sites ask for people’s stands and opinion on all sorts of matter and there are
ways to raise awareness about some things and create movements and target groups. (girl aged
16, Serbia)
Nowadays it is possible to express oneself on the Internet and social media . . . Our words can
reach much further, sometimes worldwide. (girl aged 14, France)

When asked to rank which of their rights is most positively impacted by technology, children judge the right to access information as most important.60 For example,
‘researching what’s happening in other parts of the world’ was one of the main benefits
cited by children in Ghana; they talked about how they had learned about the Ebola virus,
and conflicts in the Gaza Strip and Mali via online sources. Information is vital for many
reasons, and children have the right both to receive and to contribute it. Children also
believe that access to information underpins a wide range of other rights. For example,
by engaging with digital media, they ‘have access to politicians who can play a significant
role in the community’ (girl aged 16, Trinidad and Tobago), thus supporting their right
to contribute to discussions about matters that concern them, and to participate as fully
engaged citizens.61
However, children face challenges of ‘information overload’, ‘fake news’ and the need
for critical information literacy, resulting in growing calls for digital media education to
support children in their civil rights and freedoms, along with guidance on how public
and private sector organizations might best provide it. Such education is also important
insofar as access to current affairs via digital media has its downsides. Experts on

58  Third et al. (2014) 30.
59  RErights.org (2016).
60  When asked, in the RErights.org consultation, to tell researchers of the rights that are important overall in the digital age, children named (1) freedom of expression; (2) privacy; and (3) protection/safety from cyber-bullying, cybercrime and exploitation. Access to information was the right seen as most positively impacted by digital media, followed by freedom of expression, while privacy followed by protection from violence were the rights most negatively impacted by digital media.
61  For those who have no or less frequent access to digital media, the inability to access information and current affairs, whether for reasons of finance, connectivity or censorship, is seen as a major disadvantage. Indeed, some children expressed a sense of access to the Internet as key to 'information justice': 'If the internet disappeared, we would not be able to do research on the internet for school projects; we would have to go to the library and that is a problem because some people don't have a library in their village so it is a lot more difficult, especially since there are libraries that do not have a lot of books on specific topics or don't have the money to buy more' (girl aged 10, France).


children’s right to civic and political information consider that the risks of exposure to
distressing news, for example, can be managed and do not outweigh the value of such
access.62 Children often concur, although they recognize the difficulties of the conflict
between the right to information and protection:
You’re going to learn about more gruesome things and the harsh realities of the world younger
. . . I’ve had to learn about things I wouldn’t have wanted to know by going on BBC and CNN.
(boy aged 15, United States)

This highlights the importance of working to develop balanced and child-centred approaches to information provision and protection from potentially harmful media. Efforts to ensure that news corporations and other commercial entities prioritize children's rights would be greatly aided by child rights guidance for commercial entities that provide public and civic resources for 'the general population'. Such guidance could encourage provision both specifically for children and for the many children who are, and have the right to be, present in spaces for the general population.

2.2.2 Privacy
Civil rights and freedoms include the right to privacy. It is noteworthy that most children
identify privacy as one of the three most important rights in the digital age.63 Privacy can
be infringed by known others in the child’s social environment (parents, teachers, others
– whether well-meaning or potentially abusive); the state (via surveillance mechanisms
blind to age, via law enforcement or censors); and commercial players providing digital
services that exploit children’s data.
In the case of commercial data protection, most research suggests that children (and adults) are relatively unconcerned about commercial uses of their data, being increasingly aware that this is the only 'deal' on offer if they are to gain 'free' services. But this does not mean that
child rights and privacy experts concur – witness Europe’s present efforts to update its data
protection regime to protect the digital data of its citizens, with special protections for children (for instance, by regulating the 'profiling' and targeting of children by commerce and
marketing).64 Arguably privacy and data protection regimes are bedding down globally, and
we have a limited window of opportunity to centre children’s rights before systems, processes
and industry practices sediment. Here, crucially, it is timely and important to assert states’
obligations to ensure that businesses bear their responsibilities regarding children’s rights.
In the case of the state, there are growing concerns that schools, health providers and
other public bodies increasingly collect and use personal and transactional data from
children in ways that are little understood by the public or parents, and that do not always
observe robust standards of privacy, transparency, security or redress. The use by public
bodies of commercial systems for data collection and information management compounds
the problem of determining whether children’s privacy and identity rights are protected.
In facing these challenges, there is insufficient guidance about the legalities, complexities

62  Angle et al. (2014); Council of Europe (2016).
63  Children in the RErights.org consultation identified privacy, freedom of expression and protection/safety as the three most important rights in the digital age.
64  EU Regulation 2016/679; Macenaite (2017); Madden et al. (2013); Lievens (2017); WEF (2017).


and unintended consequences of uses of children’s digital data records.65 Teenagers are
increasingly aware that their privacy can be infringed by uses of digital technology:

[The] Internet collects private data that can expose people’s personal information that they want
to keep private. (girl aged 16, Serbia)
Some of the websites that [ask for] my name and identity card numbers don’t really make sure
that my info is secured. (girl aged 17, Malaysia)
You can post a photo on the Internet but then everybody can see it and it is difficult to remove
it. It can go anywhere in the world and this can be an issue for some people . . . There is the issue
of photos or documents that cannot be deleted. (girl aged 10, France)

Privacy from parents and other known adults is also a challenge with which many
children, families and schools are currently struggling. For example, children in Kenya
singled out ‘nosy parents’, ‘overprotective parents’, and ‘parents who spy’ as challenges to
their capacity to realize their rights in the digital age, signalling that they value the idea of
privacy, but often interpret it to mean having a space of their own that is beyond surveillance by adults.66 Parental surveillance particularly affects children's right to information (Article 17) that they wish or need to keep private from parents – consider children living in abusive families who need access to helplines, children exploring their sexuality or sexual identity in families or communities that endorse hostile religious or discriminatory views, or children's rights, as they grow older, to take responsibility for their own maturation and experimentation.67
We might add that it is also unclear at present whether those minors who engage in civil
or political protest – and often it is the young who engage most vigorously in the world’s
struggles – have their rights protected in subsequent legal proceedings. At present, the
specific rights of young activists or protesters are rarely heard in relation to controversies
over the rapid increase in digital surveillance or state demands for commercial digital
records of communication and assembly.68

2.3  Violence Against Children

The UNCRC addresses violence against children through Articles 17 (protection from
harmful media), 19 (protection from violence, abuse and neglect) and 34 (all forms of
sexual exploitation and sexual abuse, including child pornography). (See also Articles 35,
36, 37 and the UNCRC Optional Protocol on the Sale of Children, Child Prostitution
and Child Pornography.)

2.3.1  Opportunities also bring risks

The more children gain access to the Internet and mobile technologies, seeking opportunities to benefit, the more they also tend to encounter risk of harm of various kinds.

65  See Berson and Berson (2006); Lwin et al. (2008); Shapiro (2014); see also Goh et al. (2015); Singer (2014).
66  Third et al. (2014) 47.
67  Albury (2017); Aroldi and Vittadini (2017); Dinh et al. (2016); Ybarra and Mitchell (2004).
68  Banaji and Buckingham (2013); Khalil (2017).


This has been found in research from Europe, Chile and Bahrain, among many other
countries.69 This is primarily because more use increases online exposure to a range of
online experiences, although in some country contexts the effort to gain access can itself
put children at risk.70 As a 2012 UNICEF literature review concluded:
Children from low- and middle-income countries are less likely to use the Internet from home, and are more likely to go online from cybercafés, where they are at greater risk of encountering inappropriate images and online and offline solicitation. Lack of parental awareness and knowledge, difficult economic conditions and under-developed regulatory frameworks can further exacerbate potential risks and the likelihood of harm.71

Moreover, the more that children gain digital footprints72 via school, parental, medical or welfare databases of various kinds, the more their safety can be at risk even if they themselves lack access to digital media.73 The risks range widely, from new safety risks associated with the rise of the 'hackable' Internet of Toys, or forms of algorithmic bias, to long-established forms of bullying, harassment and sexual abuse now extending online; they also vary in severity, from upsetting but manageable hostilities to persistent victimization or life-threatening sexual abuse.74
Research in the global South is beginning to complement the already compelling evidence available in the global North.75 For example, in South Africa, Samuels et al. found that girls, and those who live 'in metropolitan and urban areas are significantly more likely to experience some form of online violence than those living in rural areas'.76 In addition, there was significant overlap between the risk factors associated with online and offline violence: exposure to violence, alcohol, drugs and weapons was strongly related to both the victims and perpetrators of online violence.77
From pilot data in South Africa, Argentina, the Philippines and Serbia, Byrne et al.78
found the following:

● Between one-fifth (of 9- to 17-year-olds in South Africa) and three-quarters (of 13-
to 17-year-olds in Argentina) reported feeling upset about something that happened
online.

69  See Berríos et al. (2015); Davidson and Martellozzo (2010; 2012); Livingstone et al. (2011b); Mascheroni and Ólafsson (2014); OECD (2011); UNICEF (2012).
70  For example, mobile phones are widely used in many countries to share and 'normalize' the experience of viewing often extreme or violent pornography, and also because children seek access in internet cafes where abusive adults may prey on children in unsupervised circumstances (Berríos et al. (2015); Cook et al. (2012); Livingstone et al. (2017); Samuels et al. (2013)).
71  UNICEF (2012) 95.
72  Digital footprint refers to the permanence, searchability, and traceability of one's information online (Third et al. (2014) 41).
73  For instance, ECPAT International (2015) has argued that 'many of the children who are at highest risk of being subjected to sexual exploitation online are not connected to the Internet'.
74  Bannink et al. (2014); Bhat et al. (2013); Holloway and Green (2016); Livingstone (2014); Lupton and Williamson (2017); Rallings (2015); see also BEUC (2017).
75  See Internet Safety Technical Task Force (2008); ITU (2010); Livingstone et al. (2015c); OECD (2011); Rallings (2015); UNSRSG (2016); Webster et al. (2012), among others.
76  Samuels et al. (2013) 32.
77  Ibid. 36.
78  Byrne et al. (2016).


● One-third of 9- to 17-year-olds in Serbia reported being treated in a hurtful way by their peers, online or offline, although in South Africa and the Philippines only one-fifth said this had happened to them.
● In qualitative research, children mentioned a wide range of problematic issues that
concern them in relation to digital media, including Internet scams, pop-up adverts
that were pornographic, hurtful behaviour, unpleasant or scary news or pictures,
discrimination, harassment (including sexual harassment by strangers) and people
sharing too much personal information online.
● About one-third of 9- to 17-year-old Internet users in the Philippines and up to
twice that number in Argentina and Serbia had seen online sexual content, while a
small minority reported some kind of online sexual solicitation – being asked for
sexual information, to talk about sex or to do something sexual.

As the quotations below demonstrate, children interviewed in those countries report a wide range of upsetting experiences online:79
Racism, xenophobia and killings. (South Africa, open-ended survey question)
Frequently having older strangers inviting me, seeing nude adverts. (South Africa, open-ended
survey question)
I once experienced a stranger asking for ‘my price’ – meaning, how much would it cost the
stranger for them to have a sexual activity. (boy aged 15–17, the Philippines)
I experienced being bashed by my classmates in Facebook and it hurt a lot! (girl aged 12–14,
the Philippines)
A stranger once tried to chat with me asking for my photos and sending his own nude photos to
me. (girl aged 12–14, the Philippines)
[My friend] typed free xxx porn dot com, entered into something. He told me, ‘Close your eyes,
turn around, it will be something, you’ll see a surprise’. When I turned around he started it and
women started screaming. (boy aged 11, Serbia)

Children do not always see threats in the same terms that adults do: European research
shows children to be particularly upset by online cruelty to children or animals, as well as
being worried by online kidnappers, viruses and a wide range of other concerns.80 This
highlights the need for child-centred definitions and for children's insights and experiences to more directly inform research, policy and practice efforts.
As has also been found elsewhere,81 online risks are correlated and can compound the
resulting harm:82
The relationship between sexting and cyber-bullying becomes most apparent when
the consequences of failing to comply with requests for photos are explored. Failing to
concede to such requests could result in other forms of bullying.83

79  See ibid.
80  See Smahel and Wright (2014); see also Livingstone et al. (2014a).
81  Livingstone et al. (2012).
82  Relatedly, a Turkish study by Erdur-Baker (2010) 121, of 14- to 18-year-olds found that 'regardless of gender differences, the relationships between being a cybervictim and cyberbully are significant and much stronger than the relationships between cyber and traditional bullying. This result suggests that the same adolescents who are victims are also bullies in cyber-environments'.
83  Samuels et al. (2013) 35.


As Livingstone et al.84 conclude in their recent review of research in the global South:

While the correlations across risks, and across victim and perpetrator positions, complicate the
interventions needed, they serve to remind of the complexities that can surround experiences of
risk in children’s lives; thus simplistic or decontextualised interventions must be avoided.

2.3.2  Responses to risks in the digital environment

Digital media are being used to intervene in, or to work to alleviate, children's exposure to
risk. For example, the Child Protection Partnership (CPP), a project of the International
Institute for Child Rights and Development (IICRD), advocates for a Circle of Rights
process within programme implementation; see also Moraba, an award-winning mobile
game designed for UN Women to educate young audiences in a South African township
about gender-based violence.85 Children are clear that more should be done to protect
them:

Kids these days have easy access and there’s a lot of inappropriate things out there that they
should not be seeing. (girl aged 16, Australia)
Radio stations or televisions [should] reduce their broadcasting of explicit videos with sexual
content and vulgar words. (boy aged 17, Malaysia)
We do not have protection from various forms of violence in the virtual Internet network,
especially when we talk about cyberbullying. (girl aged 14, Brazil)
Because bullying spreads outside the school yard through cyberbullying. (boy aged 16, France)

In the RErights consultation, children talked knowledgeably about the range of risks
they might potentially encounter online.86 The risk of seeing inappropriate content was
often expressed in relation to violent content or disturbing footage from real-life situations, such as scenes of war, schoolyard fighting, poverty and starvation. For example,
a 14-year-old boy from Thailand reported that, ‘a challenge is violent content’.87 Other
children also express concern at seeing adult content, and more specifically, violence and
pornography, and often call for adult support in strengthening their own coping strategies
rather than for outright bans or imposed restrictions.88
In sum, in many countries there is growing evidence of children’s risk of privacy-
related, violent and sexual harms on digital networks and platforms. No wonder that there
is a growing clamour for educational, regulatory and parental intervention to reduce the
risk of harm children face online.
It is presently unclear whether the evidence suggests that 'offline' risks are now occurring online or, instead, that there is a genuine increase in the overall incidence of harm to children. Many experts believe digital environments are primarily a new location for risk rather than a means of exacerbating it significantly.89 It also seems likely that, since
digital environments record and enable the rapid distribution of records of many human

84  Livingstone et al. (2017).
85  Broadband Commission for Digital Development (2015).
86  See also Smahel and Wright (2014).
87  Quoted in Third et al. (2014) 40.
88  Byrne et al. (2016); Livingstone et al. (2012); Third et al. (2014).
89  Finkelhor et al. (2015).


activities, the harms long experienced by children have become newly visible, thereby
demanding attention and redress. In this respect, the digital may have a key role to play in
regulating forms of abuse that have previously been difficult to identify, let alone address.
But there is no doubt that a host of professionals, including law enforcement, helplines,
medical services and digital media providers themselves, are grappling with online risk of
harm to children on a scale that they lack the resources to cope with.90
A coherent framework identifying the key roles to be played by different actors is
greatly needed and increasingly called for. But such frameworks often focus only on protection and safety, making it all the more vital that consideration is given to children’s rights in a holistic manner. Equally vital is that children’s own voices shape the framework developed.91

2.4  Family Environment and Alternative Care

2.4.1  Parental responsibilities


Most research on how digital media are used and managed by families has been conducted
in the global North where, albeit to varying degrees, the heterosexual, nuclear family is
the dominant family structure. There is an urgent need for guidance that can support users of digital media in upholding the rights of children living in a diverse array of family structures. This is relevant to UNCRC, Articles 5 (parental responsibilities and evolving
capacities of the child), 7 (parental care), 18 (state assistance to parents), 20 (alternative
care) and 40 (juvenile justice).
Evidence suggests that many families fear the risks that digital media pose to their chil-
dren. At the same time, parents hold out considerable hopes that digital media will deliver
opportunities they may otherwise struggle to provide, helping to overcome disadvantage
or generally preparing their children for a digital future. Parental ambivalence and anxi-
ety can result in inconsistent, privacy-invading or overly restrictive parenting practices,
especially given the widespread conviction (not necessarily supported by evidence) that
children are more digitally literate than adults, seemingly able to challenge, transgress or
evade parental controls.92 Children themselves are often quick to point to a generation
gap that impedes family communication about digital media:
The biggest challenge is that adults don’t trust us. (boy aged 16, Malaysia)
A generation gap prevents teenagers to communicate effectively with parents and grandparents.
(girl aged 16, Trinidad and Tobago)
It’s harder for parents to guide their children because they can do things on the Internet without
the awareness of the parents. (girl aged 17, Belgium)

There is, therefore, a need for evidence-based guidance about digital media for families,
and for professionals who support children and their families, especially guidance that
eschews a heavily protectionist approach in favour of an empowering one.93 Research is beginning to

90  Aoyama and Talbert (2010); Dinh et al. (2016); Finkelhor et al. (2015); Inhope.org (no date); UNSRSG (2016); Virtual Global Task Force (no date).
91  Third et al. (2014) 42.
92  Ito et al. (2008).
93  CRIN (no date); Green (2012); Livingstone and O’Neill (2014); OECD (2012a); Powell et al. (2010); CRIN (2014); Hashish et al. (2014).

Recognizing children’s rights in relation to digital technologies  393

identify optimal parental mediation strategies to maximize online opportunities and mini-
mize risk, but these are yet to inform the awareness of most parents.94 As a result, digital
media frequently become a site for the contestation of intra-familial power relations,
seen as a hindrance to, rather than a support for, strong family ties and wise parenting in
children’s best interests.
In the global North, there is evidence that, with increasing institutional and government
support for awareness-raising initiatives over time, parents and carers are increasing their
efforts to support their children online in ways that are beneficial.95 In response, as parents
shift from punitive to constructive responses to reports from their children of experiences
of online risk, relations of trust are improving.96 This, in turn, strengthens the ability
of states to rely on parents to foster their individual child’s best interests online in ways
appropriate to their evolving capacity, as long as states and industry provide parents with the needed tools, mechanisms and other resources, and regulate and monitor the digital environment accordingly.97
On the other hand, provision to support parents is often lacking, even in wealthy coun-
tries. Moreover, it is in such countries that the leading edge of technological innovation
may infringe children’s rights in ways that the public (parents, experts, welfare profession-
als, the state) are inevitably slow to anticipate, recognize or redress. In relatively wealthy
countries, too, we often see the leading edge of social innovation (very young Internet
users, highly immersed users, parents sharing images of children online), again, in ways
that society lacks resources to evaluate or intervene in.

2.4.2  Changing family relations


In consultation, children note that digital media can be crucial for maintaining their
relationships with family, both nuclear and extended. This is particularly the case for
children living in diasporic communities or, although evidence is sparse, among migrants
and refugees:98

Using Skype so I can contact my family overseas, in Malta, and be able to talk to them and keep
them updated with what’s happening in our country and what’s going on in theirs. (girl aged 15,
Australia)

Yet some consider that digital media may impede meaningful time spent with family:

When my family gets together for dinner there is no communication. We’re all on tablets, phones.
This is a problem . . . We don’t talk as much as we do before. (boy aged 17, Malaysia)

While parents and carers are struggling to manage digital media in the lives of their
children, the situation for children living outside the biological nuclear family can be

94  Hasebrink et al. (2009); Livingstone and Helsper (2008); Livingstone et al. (2011b); McDonald-Brown et al. (2017). Specific suggestions for policy and practice appropriate to diverse global contexts can be found in UNICEF (2017).
95  See Helsper et al. (2013).
96  Livingstone et al. (2017); Lwin et al. (2008).
97  See OECD (2012b).
98  See also www.enacso.eu/news/migrant-minors-and-the-internet-a-report-by-save-the-children-italy/.


particularly challenging. For example, for children living in care homes or institutions,
regulations often prevent children from accessing digital media for their own safety,
notwithstanding the cost to their social integration.
For children without parents or adequate alternative forms of care, digital media may
be yet more elusive, isolating them from their peers or sources of confidential help.99 For
children living in abusive or violent homes, digital media may become part of the problem
rather than the solution. For example, consider the impact of digital media on adopted
children and their families, where traditional efforts at protecting children’s privacy rights
from their sometimes-abusive or problematic birth parents have become highly confused
and almost impossible to implement. This is, in part, because children themselves may use
digital media to exercise their right to know and contact their birth family, and because
the courts and social workers that have long sought to oversee children’s best interests have
been disintermediated by digital networks.100

2.5  Disability, Basic Health and Welfare

With particular relevance for UNCRC, Articles 23 (children with a disability), 24 (right to
health) and 39 (recovery from trauma), in the global North policy-makers, practitioners
and researchers have long debated the potentially negative impacts of media on children’s
rights to a healthy life. These debates unfold in the context of broader concerns about the
adverse effects of sedentary lifestyles on growing rates of obesity and associated health
risks.

2.5.1  Balancing costs and benefits


On the ‘costs’ side of the scales, several problems are gaining attention: first, children’s exposure and susceptibility to the marketing of fast food and calorie-dense, low-nutrient food and beverages; and second, the concern that the more time children spend online, the less time they have to engage in activities that promote exercise and healthy eating and sleep patterns, undermining their capacity to establish lifestyle behaviours early in life that promote both their immediate and long-term right to a healthy life.101 In addition, a sizeable body of research, policy
and practice has addressed the potentially addictive qualities of digital media – framed
primarily as a mental health risk – centring in particular on children’s gaming and social
media practices.102 Also of long-standing concern is the effect of exposure to advertising
on diet and other consequences for children’s wellbeing,103 including the evidence (albeit
contested) of the influence of violent media content on children’s aggression and fear,104

99   Wilson (2016).
100  See Aroldi and Vittadini (2017).
101  See e.g., Brown and Bobkowski (2011); Chassiakos et al. (2016).
102  See resources from the Center on Media and Child Health at http://cmch.tv/. Indeed, in early 2018, the World Health Organization proposed the inclusion of excessive gaming in the revised version of The International Classification of Diseases. See www.theguardian.com/games/2018/feb/05/video-gaming-health-disorder-world-health-organisation-addiction.
103  Castro and Osório (2013); Polak (2007).
104  See Gentile et al. (2004); Strasburger et al. (2012); Ybarra et al. (2008).


and of sexual/pornographic content on children’s sexual development, self-esteem and the formation of sexual norms (e.g. regarding consent, respect or sexual practices).105
In parallel, on the ‘benefits’ side of the scales, emerging research demonstrates that digi-
tal media can powerfully support children’s health and wellbeing. An emerging evidence
base suggests that, under certain circumstances, digital media, and in particular, biometric
devices, can foster positive approaches to eating, exercise, sleep and a range of other
physical and mental health practices,106 as can online social support and forms of therapy
support for those with mental health difficulties.107 Digital media are also playing a role
in protecting children’s rights to a healthy life in the face of major health epidemics in the
global South. For example, UNICEF’s text messaging platform, U-Report, has played a
key role in enabling children to access much-needed sexual health information in settings
where cultural taboos prevent them from seeking such information from parents and
carers. Evidence shows that this platform is building awareness and promoting healthy
sexual practices in countries where HIV is an ongoing population health challenge.
More simply, as children gain access to digital media, they seek all kinds of information,
including health information, relishing the immediacy and confidentiality that the Internet
can provide. The Global Kids Online project found, for instance, that around one-fifth
of 12- to 14-year-olds and 43 per cent of 15- to 17-year-olds in South Africa looked for
health information online at least every week (rising to over two-thirds in Argentina and
some other countries). Much of the available research on online opportunities to gain health information concerns adolescents’ preferred means of learning, asking whether they want to receive health information through digital media. Less research evaluates whether
they actually learn from online sources, let alone whether what they learn is beneficial.108
Indeed, as Livingstone et al.’s review shows,109 many efforts to provide health information to children in poor countries struggle or fail because of insufficient attention to the information children actually seek or need, and because of an often state-led preference for providing basic medical information without child-centred interpretation or attention to the social contexts of young people’s lives. Despite such opportunities, then, the potential for digital media to support children’s right to a healthy life across a range of contexts and settings has been inadequately explored and acted on to date. As Burns et al. argue:

There is an urgent need to capitalise on technologies to promote access to online self-directed wellness management and the development of best-practice models that provide seamless and continuous support and care across online and offline services.110

2.5.2  Exploring the future potential for wellbeing


Researchers are currently evaluating a range of apps and biometric devices for their ben-
efits for both physical and mental health.111 Once the evidence is in, it will be important to

105  See Peter and Valkenburg (2006); Wolak et al. (2007).
106  See e.g., Cummings et al. (2013).
107  Burns et al. (2013).
108  Livingstone et al. (2017).
109  Livingstone et al. (2017).
110  Burns et al. (2013) 5.
111  Hides et al. (2014).


promote the health benefits and to engage children in developing initiatives that encourage children globally to exercise their right to a healthy life. At present, children tend to echo the negative perceptions they hear from adults and the mass media:
Health may deteriorate if too much time is spent in front of computers, tablets or smart-phones.
(girl aged 15, Malaysia)
[If digital media disappeared], I would be healthier because I would get outside more often. (girl
aged 16, Australia)
When we get addicted to our digital devices, we tend to stay up all night playing a game, watching
movies, chatting with friends or simply listening to music, and that is really bad for our health.
(girl aged 14, Malaysia)

While children do not explicitly connect digital media with benefits for their mental
health and wellbeing, they say that, ‘by engaging with digital media they learn new skills
and develop their talents; they become informed citizens of the world who can contribute
meaningfully to their communities; and they foster friendships, family ties, and a sense
of community and belonging’,112 all of which is critical to their resilience and wellbeing.
Digital media also provide opportunities for more isolated, marginalized or non-
dominant children to be included by engaging in peer relations and social life on their
own terms. The ‘Growing Up Queer’ project found that digital media provide a vital
source of information and support for LGBTQI young people who, due to entrenched
social stigma and practices of discrimination, are more likely to develop long-term
mental health difficulties and engage in alarming rates of suicidal ideation.113 The work
of the Young and Well Cooperative Research Centre114 demonstrates that digital media
can powerfully support a diverse range of children’s mental health and wellbeing.115 They
can be especially important in connecting children who live with a disability, serious ill-
ness or chronic disease with their peers, minimizing their social isolation, enabling them
to develop the necessary social and technical skills to engage with the social world,116 and
fostering their economic participation in ways that give substance to the fuller expression
of their rights.
Digital media can provide such children with continuity through periods of absence
from school or social activities, yielding benefits for their educational and other rights:
If you’re sick, you can get homework . . . So you don’t really miss a day at school, because of
technology you can just ask a friend or even a teacher. (girl aged 16, Trinidad and Tobago)

These ideas are supported by the stories of children like Kartik Sawhney,117 and the
practice-based knowledge of youth-facing organizations such as Soronko Solutions in
Ghana and Livewire.org.au in Australia, suggesting that organizations working in the
disability and chronic illness support sectors should be encouraged to work with such

112  Third et al. (2014) 9.
113  See Cole and Griffiths (2007); Robinson et al. (2014); Subrahmanyam and Greenfield (2008).
114  See www.youngandwellcrc.org.au.
115  Lala et al. (2014).
116  Third and Richardson (2010).
117  Quoted in Third et al. (2014) 69.


children to further explore how to implement digital media initiatives that enhance their
rights.
However, such claims about the possibilities for digital media to foster strength, resilience
and wellbeing in children must be weighed against a body of research that demonstrates
that some children encounter serious challenges to their wellbeing online. As noted earlier,
research shows that those children who are most vulnerable offline are often those who are
most vulnerable online.118 This calls for careful, proportionate and holistic assessment of
the need for protection and support, as well as for tech-savvy training and awareness on the
part of the specialist organizations that work with children with special needs.
Equally, it is vital that states target resources at specifically vulnerable groups rather than spreading them (too) thinly across entire populations or, worse, applying safety-led restrictions to the majority even though they are really intended to benefit a minority. As Samuels et al. conclude from their research on cyber-bullying and sexual
harassment in South Africa:

Interventions aimed at reducing levels of online violence should target at-risk youths in general
and not simply those who frequently make use of social and digital media.119

As with online opportunities, the consequences of online risks in terms of actual harms
are heavily dependent on the child’s maturity and resilience, on the one hand, and on
their circumstances and resources, on the other.120 In relation to digital media, too little
attention is paid to children’s best interests and evolving capacity, with both public and
private bodies tending to treat ‘children’ or, worse, Internet ‘users’, as if all were the same
in relation to their rights and needs in the digital environment.

2.6  Education, Leisure and Cultural Activities

2.6.1  Formal schooling


Children around the world see digital media first and foremost as a pleasurable and valued
form of leisure, and as a resource of huge potential for learning. Learning, here, includes
formal, informal and non-formal education, whether in or out of school, to supplement
school provision or to compensate for its limits or absence, to support a given curriculum
or to learn something interesting or valuable for the child that is entirely unrelated to school, in support of the child’s development to their full potential. Digital media thus have
consequences for, especially, UNCRC, Articles 28 (education), 29 (educational goals,
including in relation to rights), 31 (leisure, play and culture) and 42 (knowledge of rights).
But, as the prominent failures of such high-profile initiatives as the One Laptop per
Child amply illustrate, providing access to digital media alone is not enough.121 Not only
are digital media vital for many child rights, but their provision must also be accompanied by digital literacy education and training for children, teachers and parents, along with
a host of related forms of support and expertise. Several recent evidence reviews assert

118  See e.g., Livingstone et al. (2011a).
119  Samuels et al. (2013) 36.
120  Livingstone et al. (2011a; 2012).
121  James (2010); Kraemer et al. (2009).


the growing importance of digital media for children’s learning and education.122 These
generally support the recommendations that states should do the following:

● Incorporate digital media within schools constructively, wisely, and with appropri-
ate curriculum development, teacher training and technical support.123
● Embed critical digital media education across school subjects to create a ‘digital
thread’ throughout the process of learning.124
● Use digital media to overcome rather than reinforce barriers or misunderstandings
between home and school, and formal and informal learning sites.125
● Ensure that digital media in education are used fairly, including to transcend or
compensate for or work around traditional forms of discrimination, to alleviate
inequalities and exclusions.126
● Persuade established educational authorities to rethink how digital media can
support interest-driven learning to suit the motivation, needs and best interests of
each child.127
● Conceive of digital literacy (or digital citizenship or digital media) education
broadly, to include imaginative, critical, civic and creative skills and literacies that
include, but go far beyond, e-safety.128
● Conduct independent evaluations of digital media interventions so that best
practice can be shared and mistakes learned from rather than perpetuated.

It is clear from the evidence that children seek to use digital media to support their
education, but there remain many barriers. For example, the Global Kids Online study,
including research in Argentina, Serbia, South Africa and the Philippines, found that
children in countries where access to the Internet is limited for reasons of connectivity or
cost are less confident in their digital skills, especially younger children and those from
poorer countries. They also receive less support from parents and carers since these, too,
lack skills (e.g. parents and carers in South Africa are as skilled as children aged 12–14).129
Just what should be taught is often unclear. Leung and Lee found, even in their study of 9- to 19-year-olds in Korea, that:

In information literacy, they were generally very competent with publishing tools but were not
social-structurally literate, especially in understanding how information is socially situated and
produced.130

Some years ago, based on a literature review and case studies in China, India and
Vietnam, Lim and Nekmat concluded that:

122  Byrne et al. (2016); Frau-Meigs and Hibbard (2016); OECD (2012b).
123  See Frau-Meigs and Hibbard (2016); Third et al. (2014).
124  Hobbs (2011); NCCA (2007a); see also Davidson and Goldberg (2009); NCCA (2007b).
125  See Buckingham (2006).
126  See Greenhow and Lewin (2015); Henderson (2011); Mardis (2013); Sinclair and Bramley (2011).
127  See Vickery (2014).
128  Myers et al. (2013).
129  See Byrne et al. (2016).
130  Leung and Lee (2012) 130.


The acquisition and transmission of media literacy skills can have significant effects beyond
merely equipping people with the skills to consume and produce media content. Vested with
these skills, the youths trained in these programmes became considerably more empowered in
their ability to express themselves, raise societal awareness about issues that concerned them, and
also found themselves growing and developing as individuals . . . media literacy programmes that
focus on empowerment and democratic participation are arguably more sustainable than those
that focus only on skills. Such programmes will be more appealing to participants, and given the
focus on nurturing the complete individual, participants are also more likely to be committed
to the programme.131

A host of initiatives around the world now seek to rise to these challenges, some
community-based rather than top-down, some incorporating strategies to respond to
local needs as well as government imperatives, a few independently evaluated so as to
learn from past mistakes and share good practice.132 Ironically, the more states invest in technology to support education, the greater the exclusion or discrimination faced by those children who lack access to education, educational technology or digital literacy education. Those with access are clear about the benefits; those who lack access are clear
about the problem, looking to their government for redress:
The government should provide communication devices at our school. (boy, Egypt)
Digital media contributes to education . . . Imagine all that is there in front of you on the net, to
research, to learn. (girl, Brazil)

The RErights platform, along with other international projects, seeks not only to
promote children’s discussion of their rights, but also their awareness of their rights,
the ability to articulate these, and the competencies to enact them. In other words, an
important part of education is to learn about their rights (UNCRC, Article 42), and
digital media can also help here:
Because of the Internet children can now look up what their rights are. (girl aged 17, Belgium)
From the digital technology children can form an organization e.g. UNICEF to discuss our
rights as children. (girl aged 17, Malaysia)
[The] right to access information online to complete my homework is an important right in the
digital age. (girl aged 17, Malaysia)

However, children also note that many schools block websites, in particular, social
media sites, suggesting that educational institutions are far from making the most of
digital media, with efforts to ensure protection conflicting with and often undermining
efforts to promote provision and participation rights. In the United Kingdom, the 5Rights
initiative has engaged children in innovative deliberative discussions to debate their rights
online and to contribute to finding solutions to their infringement.133

2.6.2  Informal learning and leisure


Children’s pleasure in digital media is often regarded negatively by adult society, despite
children’s right to leisure and play as part of living a full life and developing their

131  Lim and Nekmat (2008) 273–74.
132  AkiraChix (no date); GSMA (2014); Nethope Solutions Center (2015); Rijsdijk et al. (2011).
133  Coleman et al. (2017).


full potential. Given that evidence shows that children’s digital leisure-time activities
enhance their skills base and expose them to a wider variety of opportunities,134 it is
critical that children’s rights are foregrounded within popular and policy debates to shift
adult thinking. In short, despite the various pitfalls and a history of struggling or failed
initiatives, digital media can support education and education can support digital media
engagement, but evidence-based guidance is greatly needed to ensure investments are
well founded.
During leisure-time, children use the Internet to expand their learning beyond the
school curriculum, in ways that research shows can open up new learning pathways,
support skills, engage the disaffected and support wider inclusion:135

I have learnt how to bake, various baking techniques. (girl aged 16, Trinidad and Tobago)
I learnt to make these clay dolls on YouTube. (boy aged 8, Colombia)
I like creating apps, what I like is that we can create new things. (boy aged 16, Malaysia)
There are numerous games and contents for kids to play and use in their spare time. (girl aged
16, Serbia)

In line with trends in user-generated content, some children reported engaging in creative content production in their leisure-time. By providing an avenue for children to create content and share it with others, digital media may be seen to be fostering their right to expression. Yet:

Although digital access and literacy is growing apace, the evidence shows that many of the
creative, informative, interactive and participatory features of the digital environment remain
substantially underused even by well-resourced children.136

This is partly a problem of digital media literacy.137 However, it is also problematic that
there are few incentives for services to host and support children’s content, to do so in the
public interest rather than for profit; and the wider adult society often does not value or
attend to children’s contributions in the digital environment.

3. CONCLUSION

In this chapter, we have discussed the relation between children’s rights and digital media
in three distinct but interlinked ways, focusing on:138

134  Ito et al. (2009; 2013); Livingstone and Helsper (2007).
135  See Cilesiz (2009); Ito et al. (2013); Third et al. (2014); Walton (2009), among others.
136  Livingstone et al. (2014b) 4.
137  We focus on digital media literacy to emphasize the importance of children’s critical understanding of a changing and complex digital environment. See also UNESCO (no date); Buckingham (2006); and Hobbs (2011).
138  See Third and Collin (2016).


● children’s uses of digital media: questions of child rights here tend to prioritize the
‘right’ to (and barriers in accessing) digital media devices, content and services;
● children’s rights in digital environments: the focus here is on enhancing ways in which
children can enact their rights in online spaces, and overcoming the ways in which
their rights are infringed or violated in a host of digital, networked and online
spaces;
● children’s rights in the digital age: here the most ambitious challenges arise, recogniz-
ing that insofar as digital media are reshaping many dimensions of society, this
raises new prospects for how child rights could be further enhanced or infringed in
society.139

Digital media are set to be of growing significance in the future, powerfully reshap-
ing the conditions of and possibilities for children’s rights. It is vital that the power of
the digital is harnessed to deliver the Sustainable Development Goals for the broadest
population possible, maximizing opportunities for children both in the here-and-now and
as future adults, while preventing infringement to their rights, again, both in the present
and in the future. It is equally vital that children’s voices are heard in the expert debates
that too often unfold 'above their heads'. How can this happen? The report of the Day of
General Discussion (DGD) held by the UN Committee on the Rights of the Child set out the
distinct roles and responsibilities of the stakeholders who must take responsibility for
children's rights in relation to digital media, demanding that 'States should also ensure
regular monitoring of implementation and assessment of legislation and policies'.140
Particularly, it urged that:

States should recognize the importance of access to, and use of, digital media and ICTs for
children and their potential to promote all children’s rights, in particular the rights to freedom
of expression, access to appropriate information, participation, education, as well as rest,
leisure, play, recreational activities, cultural life and the arts . . . In addition, States should
ensure that equal and safe access to digital media and ICTs, including the Internet, is integrated
in the post-2015 development agenda . . . [and] States should adopt and effectively implement
comprehensive human rights-based laws and policies which integrate children’s access to digital
media and ICTs and ensure the full protection under the Convention and its Optional Protocols
when using digital media and ICTs.141

However, the global community is still far from realizing the potential of digital media
to support children’s rights. Many states struggle to recognize children as agents and
rights-holders with a significant stake in the digital world, undermining their ability to
fulfil their fundamental duty of care to children in the digital environment. On the one
hand, too many children are being exposed to significant harm. On the other hand, a
protectionist mentality often inhibits states’ capacity to realize the expansive possibilities
for the digital to support children’s rights. This is compounded by a lack of rigorous
and actionable evidence to support effective policy and interventions, particularly in

139  See Lievrouw and Livingstone (2006); Livingstone and Bulger (2014); Livingstone and Third (2017).
140  OHCHR (2014) 19.
141  Ibid. 18–19.

402  Research handbook on human rights and digital technology

the global South. Crucially, states are not yet adequately equipped with the necessary
frameworks and guidance to enable them confidently to drive effective digital policy and
practice that balances children’s protection from harm with nurturing the opportunities
for children.
The difficulties for states include coordinating the multiple relevant stakeholders across
the public, private and third sectors, and the fact that digital media have consequences
across the full range of children’s rights. Both the physical and informational infrastruc-
tures that underpin digital environments are proprietary, owned significantly by powerful
multinational corporations whose interests are commercial and which, while not beyond
the law or, indeed, the UNCRC,142 are difficult for individual states to regulate. Thus, the
move towards digital media is substantially led by commercial developments rather than
those framed in terms of children’s best interests:143

The global corporate players through new gadgets, schemes, and advertisement, as well as the
government, through rhetoric and development schemes, are raising normative expectations to
be part of global markets that are impossible to meet in their rural location with infrastructural
limitations.

Even in relatively privileged countries in the global North, uncertainties, problems and
confusions are accumulating about how to ensure (or even recognize) the best interests of
the child as they apply in relation to digital media and the wider world now being shaped
by digital and networked media. The pressing questions confronting the global policy and
practice community include:

● How can the digital be mobilized to support (and not infringe) the full range
of children’s rights, for all children globally, including the most vulnerable or
disadvantaged?
● How can we foster children’s protection from harm online while simultaneously
empowering them to maximize the opportunities of growing connectivity?
● What is the role of states in ensuring children’s rights in the digital age, and how can
they work with other stakeholders in this task?

If society is to support children in better realizing their rights using digital media, a
concerted effort will be required. The time is right for the global policy and practice community
to address these questions, meeting both the challenges of children's rights in relation to
digital media and the demands of a future in which 'the digital' will play an ever
more integral role in the lives of children and adults around the world. There is a growing

142  It is helpful when considering children's rights to refer to UNCRC General Comment No. 16 on state obligations regarding the impact of the business sector on children's rights: 'The Committee recognizes that duties and responsibilities to respect the rights of children extend in practice beyond the State and State-controlled services and institutions and apply to private actors and business enterprises. Therefore, all businesses must meet their responsibilities regarding children's rights and States must ensure they do so. In addition, business enterprises should not undermine the States' ability to meet their obligations towards children under the Convention and the Optional Protocols thereto'. See Committee on the Rights of the Child (2013).
143  Pathak-Shelat and DeShano (2014) 998.

body of initiatives around the world which suggests ways in which policy and practice can
respond constructively to these challenges and thereby meet the specific needs of children
and communities in diverse contexts.144 The coming decade is likely to be crucial in the
global public and commercial shaping of digital environments. At stake is identifying,
anticipating and addressing the global relevance of the UNCRC in ‘the digital age’, by
and across geographic regions, and encompassing all dimensions of children’s lives. If
society can seize the opportunities, digital media will surely constitute a powerful tool for
delivering on the promise of the Convention. If society fails in this effort, digital media
threaten to undermine children's rights on a significant scale. We suggest that attention to both
the opportunities and the risks for children's rights is a matter of critical urgency.

REFERENCES

AkiraChix (no date), http://akirachix.com
Albury, K. (2017), ‘Just Because It’s Public Doesn’t Mean It’s Any of Your Business: Adults’ and
Children’s Sexual Rights in Digitally Mediated Spaces’ New Media and Society 1–13, available at doi:
10.1177/1461444816686322
Alper, M. and G. Goggin (2017), ‘Digital Technology and Rights in the Lives of Children with Disabilities’ New
Media and Society 1–15
Angle, S., T. Baerthlein, N. Daftari, B. Rambaud and N. Roshani (2014), Protecting the Rights of Children: The
Role of the Media – Lessons from Brazil, India and Kenya (Internews Europe), available at https://internews.
org/sites/default/files/resources/InternewsEurope_ChildRightsMedia_Report_2014.pdf
Aoyama, I. and T.L. Talbert (2010), ‘Cyberbullying Internationally Increasing: New Challenges in the
Technology Generation’ in R. Zheng, J. Burrow-Sanchez and C.J. Drew (eds), Adolescent Online Social
Communication and Behavior: Relationship Formation on the Internet (IGI Global)
Aroldi, P. and N. Vittadini (2017), ‘Children’s Rights and Social Media: Issues and Prospects for Adoptive
Families in Italy’ New Media and Society 1–9, available at doi: 10.1177/1461444816686324
Banaji, S. (2015), ‘Behind the High-tech Fetish: Children, Work and Media Use Across Classes in India’ 77(6)
International Communication Gazette 519–32, available at doi: 10.1177/1748048515597874
Banaji, S. and D. Buckingham (2013), The Civic Web: Young People, the Internet and Civic Participation (MIT
Press)
Bannink, R., S. Broeren, P.M. van de Looij-Jansen, F.G. de Waart and H. Raat (2014), ‘Cyber and Traditional
Bullying: Victimization as a Risk Factor for Mental Health Problems and Suicidal Ideation in Adolescents’
9(4) PLoS ONE 1–7, available at doi: 10.1371/journal.pone.0094026
Barbovschi, M., L. Green and S. Vandoninck (eds) (2013), Innovative Approaches for Investigating How Children
Understand Risk in New Media: Dealing with Methodological and Ethical Challenges (EU Kids Online,
London School of Economics and Political Science), available at http://eprints.lse.ac.uk/53060/
Berríos, L., M. Buxarrais and M. Garcés (2015), ‘ICT Use and Parental Mediation Perceived by Chilean
Children’ XXIII(45) Communicar 161–68
Berson, I.R. and M.J. Berson (2006), ‘Children and Their Digital Dossiers: Lessons in Privacy Rights in the
Digital Age’ 21(1) International Journal of School Education 135–47, available at http://files.eric.ed.gov/
fulltext/EJ782348.pdf
BEUC (The European Consumer Organization) (2017), German Regulator Bans Spying Doll Cayla, available at
www.beuc.eu/publications/beuc-web-2017-006_german_regulator_bans_cayla.pdf
Bhat, C.R., S. Chang and M.A. Ragan (2013), ‘Cyberbullying in Asia’ 18(2) Cyber Asia and the New Media
36–39
Broadband Commission for Digital Development (2015), The State of Broadband 2015: Broadband as a
Foundation for Sustainable Development (UNESCO and ITU), available at www.broadbandcommission.org/
documents/reports/bb-annualreport2015.pdf
Brown, J.D. and P.S. Bobkowski (2011), ‘Older and Newer Media: Patterns of Use and Effects on Adolescents’
Health and Well-Being’ 21(1) Journal of Research on Adolescence 95–113

144  See Cortesi et al. (2015); and UNICEF (2017).


Buckingham, D. (2006), ‘Defining Digital Literacy: What Do Young People Need to Know About Digital
Media?’ 4 Nordic Journal of Digital Literacy 263–76
Burns, J.M., T.A. Davenport, H. Christensen, G.M. Luscombe, J.A. Mendoza, A. Bresnan, M.E. Blanchard
and I.B. Hickie (2013), Game On: Exploring the Impact of Technologies on Young Men’s Mental Health
and Wellbeing: Findings from the First Young and Well National Survey (Young and Well Cooperative
Research Centre), available at https://cdn.movember.com/uploads/files/Our%20Work/game-on-movember-
foundation.pdf
Burton, P. and T. Mutongwizo (2009), Inescapable Violence: Cyberbullying and Electronic Violence Against
Young People in South Africa, Issue Paper No. 8 (Centre for Justice and Crime Prevention)
Byrne, J., D. Kardefelt-Winther, S. Livingstone and M. Stoilova (2016), Global Kids Online Research Synthesis,
2015–2016 (UNICEF Office of Research-Innocenti and London School of Economics and Political Science),
available at www.globalkidsonline.net/synthesis
Campos, R. and J.A. Simões (2014), ‘Digital Participation at the Margins: Online Circuits of Rap Music by
Portuguese Afro-descendant Youth’ 22(1) SAGE Journals YOUNG Editorial Group 87–106, available at doi:
10.1177/1103308813512931
Castro, T.S. and A.J. Osório (2013), ‘“I Love My Bones!” Self-harm and Dangerous Eating Youth Behaviours
in Portuguese Written Blogs’ 14(4) Young Consumers 321–30
Chassiakos, Y.L.R., J. Radesky, D. Christakis et al. (2016), ‘Children and Adolescents and Digital Media’ 138(5)
Pediatrics e20162593
Cilesiz, C. (2009), ‘Educational Computer Use in Leisure Contexts: A Phenomenological Study of Adolescents’
Experiences at Internet Cafés’ 46(1) American Education Research Journal 232–74
Cole, H. and M.D. Griffiths (2007), ‘Social Interactions in Massively Multiplayer Online Role-playing Gamers’
10(4) CyberPsychology and Behavior 575–83, available at doi: 10.1089/cpb.2007.9988
Coleman, J. and A. Hagell (eds) (2007), Adolescence, Risk and Resilience: Against the Odds (Wiley)
Coleman, S., K. Pothong, E. Perez Vallejos and A. Koene (2017), Internet on Our Own Terms: How Children
and Young People Deliberated About Their Digital Rights, available at https://casma.wp.horizon.ac.uk/
casma-projects/irights-youth-juries/the-internet-on-our-own-terms/
Collin, P., K. Rahilly, I. Richardson and A. Third (2011), The Benefits of Social Networking Services (Young
and Well Cooperative Research Centre), available at www.uws.edu.au/__data/assets/pdf_file/0003/476337/
The-Benefits-of-Social-Networking-Services.pdf
Committee on the Rights of the Child (CRC) (2009), General Comment No. 12: The Right of the Child to be
Heard, CRC/C/GC/12, paras 22–25
Committee on the Rights of the Child (CRC) (2013), General Comment No. 16 (2013) on State Obligations on
the Impact of the Business Sector on Children's Rights, CRC/C/GC/16 (United Nations Convention on the
Rights of the Child), available at http://tbinternet.ohchr.org/_layouts/treatybodyexternal/Download.aspx?sy
mbolno=CRC%2fC%2fGC%2f16&Lang=en
Committee on the Rights of the Child (CRC) (2015), Treaty-Specific Guidelines Regarding the Form and
Content of Periodic Reports to be Submitted by States Parties under Article 44, Paragraph 1(b), of the
Convention on the Rights of the Child, CRC/C/58/Rev.3 (United Nations Convention on the Rights of the
Child), available at http://docstore.ohchr.org/SelfServices/FilesHandler.ashx?enc=6QkG1d%2fPPRiCAq
hKb7yhsr1ZWeb%2bRuDNd9qD0ICL6ikRB2cfJhMR51%2f10eGSYFCtruq1Ql9a7QWVRO8Mi60ohm
vtNns63WFivVgw0QS1DEXzSOoUgSyF86P%2fVdRoD5Jx
Committee on the Rights of the Child (CRC) (2016), General Comment No. 20 (2016) on the Implementation
of the Rights of the Child During Adolescence, CRC/C/GC/20 (United Nations Convention on the Rights of
the Child), available at https://documents-dds-ny.un.org/doc/UNDOC/GEN/G16/404/44/PDF/G1640444.
pdf?OpenElement
Cook, P., C. Heykoop, A. Anuntavoraskul and J. Vibulphol (2012), ‘Action Research Exploring Information
Communication Technologies (ICT) and Child Protection in Thailand’ 22(4) Development in Practice 574–87,
available at doi: 10.1080/09614524.2012.672960
Cortesi, S., U. Gasser, G. Adzaho, B. Baikie, J. Baljeu, M. Battles et al. (2015), Digitally Connected: Global
Perspectives on Youth and Digital Media (Social Science Research Network (SSRN) Electronic Paper
Collection), available at http://ssrn.com/abstract=2585686
Council of Europe (2016), Council of Europe Strategy for the Rights of the Child: 2016–2021, available at https://
rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=090000168066cff8
CRIN (Child Rights International Network) (2014), Access Denied: Protect Rights – Unblock Children’s
Access to Information (CRIN), available at www.crin.org/en/library/publications/access-denied-protect-rights-unblock-childrens-access-information-0
CRIN (Child Rights International Network) (no date), ‘Towards a Charter for Children’s Rights in the Digital
Context’, available at www.ohchr.org/Documents/HRBodies/CRC/Discussions/2014/CRIN.pdf
Cummings, E., E. Borycki and E. Roehrer (2013), ‘Issues and Considerations for Healthcare Consumers Using
Mobile Applications’ 183 Studies in Health Technology and Informatics 227–31


Dahya, N. and J. Jenson (2015), 'Mis/representations in School-based Digital Media Production: An Ethnographic Exploration with Muslim Girls' 9 Diaspora, Indigenous, and Minority Education 108–23
Davidson, C.N. and D.T. Goldberg (2009), The Future of Learning Institutions in a Digital Age (MIT Press)
Davidson, J. and E. Martellozzo (2010), Kingdom of Bahrain State of the Nation Review of Internet Safety
(Telecommunications Regulatory Authority), available at www.tra.org.bh/media/document/State%20of%20
the%20nation%20review%20full2.pdf
Davidson, J. and E. Martellozzo (2012), ‘Exploring Young People’s Use of Social Networking Sites and Digital
Media in the Internet Safety Context’ 16(9) Information, Communication and Society 1456–76
de Pauw, L. (2011), Girls Speak Out: Girls’ Fast-talk on the Potential of Information and Communication
Technologies in Their Empowerment and Development (Plan International)
Dinh, T., L. Farrugia, B. O’Neill, S. Vandoninck and A. Velicu (2016), Insafe Helplines: Operations, Effectiveness
and Emerging Issues for Internet Safety Helplines (European Schoolnet, Insafe, EU Kids Online, Kaspersky
Lab), available at www.lse.ac.uk/media@lse/research/EUKidsOnline/EUKidsIV/PDF/Helpline-insafe-
report.pdf
ECPAT (End Child Prostitution, Child Pornography and Trafficking in Children for Sexual Purposes) International
(2015), Written Statement Submitted to the UN Human Rights Council (New York), available at http://ecpat.de/
fileadmin/user_upload/Arbeitsschwerpunkte/Sexuelle_Gewalt_in_online_Situationen/20150313_Statement_
ECPAT_International.pdf
Erdur-Baker, O. (2010), ‘Cyberbullying and Its Correlation to Traditional Bullying, Gender and Frequent and
Risky Usage of Internet-mediated Communication Tools’ 12(1) New Media and Society 109–25
Ferraretto, L., M. Kischinhevsky, D. Lopez, A. Júnior, L. Klöckner, M. Freire and N. Prata (2011), ‘The Use
of Radio by Brazilian Teenagers and the Process of Digital Convergence’ 18(2) Journal of Radio and Audio
Media 381–96
Finkelhor, D., K. Saito and L. Jones (2015), Updated Trends in Child Maltreatment (Crimes Against Children
Research Center), available at www.unh.edu/ccrc/pdf/_Updated%20trends%202013_dc-df-ks-df.pdf
Frau-Meigs, D. and L. Hibbard (2016), Education 3.0 and Internet Governance: A New Global Alliance for
Children and Young People’s Sustainable Digital Development, Global Commission on Internet Governance
Paper Series No. 27 (Global Commission on Internet Governance), available at www.ourinternet.org/sites/
default/files/publications/gcig_no27_web.pdf
Gasser, U., C. Maclay and J. Palfrey (2010), Working Towards a Deeper Understanding of Digital Safety
for Children and Young People in Developing Nations, Berkman Center Research Publication No. 2010-7
(Berkman Center for Internet and Society), available at https://cyber.harvard.edu/sites/cyber.law.harvard.edu/
files/Gasser_Maclay_Palfrey_Digital_Safety_Developing_Nations_Jun2010.pdf
Gentile, D.A., P.J. Lynch, J.R. Linder and D.A. Walsh (2004), ‘The Effects of Violent Video Game Habits on
Adolescent Hostility, Aggressive Behaviors, and School Performance’ 27 Journal of Adolescence 5–22, avail-
able at doi: 10.1016/j.adolescence.2003.10.002
Global Commission on Internet Governance (2016), One Internet (Centre for International Governance
Innovation and Chatham House), available at www.cigionline.org/sites/default/files/gcig_final_report_-_
with_cover.pdf
Goh, W.W.L., S. Bay and V.H. Chen (2015), ‘Young School Children’s Use of Digital Devices and Parental Rules’
32(4) Telematics and Informatics 787–95, available at doi: 10.1016/j.tele.2015.04.002
Green, L. (2012), 'Censoring, Censuring or Empowering? Young People and Digital Agency' in M. Strano, H. Hrachovec, F. Sudweeks and C. Ess (eds), Proceedings: Cultural Attitudes Towards Technology and Communication 2012 (Murdoch University)
Greenhow, C. and C. Lewin (2015), ‘Social Media and Education: Reconceptualizing the Boundaries of Formal and
Informal Learning’ 41(1) Learning, Media and Technology 6–30, available at doi: 10.1080/17439884.2015.1064954
GSMA (2014), Najja7ni: Mobile Learning Services for Improving Education, English Language Skills and Employment
Opportunities in Tunisia, available at www.gsma.com/connectedliving/wp-content/uploads/2014/02/2013-
2014-Q4-Tunisiana-Najja7ni-services.pdf
GSMA (2015), Bridging the Gender Gap: Mobile Access and Usage in Low- and Middle-income Countries, avail-
able at www.gsma.com/mobilefordevelopment/programmes/connected-women/bridging-gender-gap
Hasebrink, U., S. Livingstone, L. Haddon and K. Ólafsson (2009), Comparing Children’s Online Opportunities
and Risks Across Europe: Cross-national Comparisons for EU Kids Online (EU Kids Online, London
School of Economics and Political Science), available at http://eprints.lse.ac.uk/24368/1/D3.2_Report-
Cross_national_comparisons-2nd-edition.pdf
Hashish, Y., A. Bunt and J.E. Young (2014), Proceedings of the SIGCHI: Conference on Human Factors in
Computing Systems (ACM), available at doi: 10.1145/2556288.2557128
Helsper, E.J., V. Kalmus, U. Hasebrink, B. Sagvari and J. de Haan (2013), Country Classification: Opportunities,
Risks, Harm and Parental Mediation (EU Kids Online, London School of Economics and Political Science)
Henderson, R. (2011), ‘Classroom Pedagogies, Digital Literacies and the Home–School Digital Divide’ 6(2)
International Journal of Pedagogies and Learning 152–61


Hides, L., D.J. Kavanagh, S.R. Stoyanov, O. Zelenko, D. Tjondronegoro and M. Mani (2014), Mobile
Application Rating Scale (MARS): A New Tool for Assessing the Quality of Health Mobile Applications
(Young and Well Cooperative Research Centre)
Hobbs, R. (2011), Digital and Media Literacy: Connecting Culture and Classroom (Cowin)
Holloway, D. and L. Green (2016), ‘The Internet of Toys’ 2(4) Communication Research and Practice 506–19,
available at doi: 10.1080/22041451.2016.1266124
Inhope.org (no date), ‘International Association of Internet Hotlines’, available at www.inhope.org/gns/home.
aspx
Internet Safety Technical Task Force (2008), Enhancing Child Safety and Online Technologies: Final Report of the Internet Safety Technical Task Force to the Multi-State Working Group on Social Networking of State Attorneys General of the United States (Berkman Center for Internet and Society, Harvard University), available at
Generals of the United States (Berkman Center for Internet and Society, Harvard University), available at
https://cyber.harvard.edu/sites/cyber.law.harvard.edu/files/ISTTF_Final_Report.pdf
Ito, M., S. Baumer, M. Bittanti, R. Cody, B.H. Stephenson, H.A. Horst et al. (2009), Hanging Out, Messing
Around, and Geeking Out: Kids Living and Learning with New Media (MIT Press)
Ito, M., C. Davidson, H. Jenkins, C. Lee, M. Eisenberg and J. Weiss (2008), ‘Foreword’ in M.J. Metzger and A.J.
Flanagin (eds), Digital Media, Youth, and Credibility (MIT Press)
Ito, M., K. Gutiérrez, S. Livingstone, B. Penuel, J. Rhodes, K. Salen et al. (2013), Connected Learning: An
Agenda for Research and Design (Digital Media and Learning Research Hub)
ITU (International Telecommunications Union) (2010), Child Online Protection: Statistical Frameworks and
Indicators 2010, available at www.itu.int/dms_pub/itu-d/opb/ind/D-IND-COP.01-11-2010-PDF-E.pdf
ITU (International Telecommunications Union) (2012), A Bright Future in ICTs: Opportunities for a New
Generation of Women, available at www.itu.int/en/ITU-D/Digital-Inclusion/Women-and-Girls/Documents/
ReportsModules/ITUBrightFutureforWomeninICT-English.pdf
ITU (International Telecommunications Union) (2016), Measuring the Information Society Report, available at
www.itu.int/en/ITU-D/Statistics/Documents/publications/misr2016/MISR2016-w4.pdf
ITU (International Telecommunications Union) (no date), ‘2030 Agenda for Sustainable Development’, avail-
able at www.itu.int/en/ITU-D/Statistics/Pages/intlcoop/sdgs/default.aspx
ITU (International Telecommunications Union) and UNESCO (2013), Doubling Digital Opportunities, available
at www.broadbandcommission.org/Documents/publications/bb-doubling-digital-2013.pdf
James, J. (2010), ‘New Technology in Developing Countries: A Critique of the One Laptop Per Child Program’
28(3) Social Science Computer Review 381–90
Khalil, J.F. (2017), ‘Lebanon’s Waste Crisis: An Exercise of Participation Rights’ New Media and Society 1–12,
available at doi: 10.1177/1461444816686321
Kleine, D., D. Hollow and S. Poveda (2014), Children, ICT and Development: Capturing the Potential, Meeting
the Challenges (UNICEF Office of Research-Innocenti), available at www.unicef-irc.org/publications/pdf/
unicef_royalholloway_ict4dreport_final.pdf
Kraemer, K., J. Dedrick and P. Sharma (2009), ‘One Laptop Per Child’ 52(6) Communications of the ACM
66–73
La Rue, F. (2014), Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of
Opinion and Expression, A/69/335 (United Nations General Assembly), available at www2.ohchr.org/english/
bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf
Lala, G., C. McGarty, E. Thomas, M. Broderick and M. Mhando (2014), ‘Messages of Hope and Messages
of Support: The Role of Positive Stories of Survival in Assisting Recovery and Developing Wellbeing in
Rwandan Genocide Survivors and Their Supporters’ 2(1) Journal of Social and Political Psychology, available
at: doi:10.5964/jspp.v2i1.290
Leung, L. and P. Lee (2012), ‘The Influences of Information Literacy, Internet Addiction and Parenting Styles
on Internet Risks’ 14(1) New Media and Society 117–36
Lievens, E. (2017), Children and the GDPR: A Quest for Clarification and the Integration of Child
Rights  Considerations (Ghent University), available at https://biblio.ugent.be/publication/8505935/
file/8505937.pdf
Lievrouw, L.A. and S. Livingstone (2006), Handbook of New Media: Social Shaping and Social Consequences –
Fully Revised Student Edition (Sage Publications)
Lim, S. and E. Nekmat (2008), ‘Learning Through “Prosuming”: Insights from Media Literacy Programmes in
Asia’ 13(2) Science, Technology and Society 259–78
Livingstone, S. (2014), ‘Children’s Digital Rights: A Priority’ 42(4/5) Intermedia 20–24, available at http://eprints.
lse.ac.uk/60727/
Livingstone, S. and M. Bulger (2013), A Global Agenda for Children’s Rights in the Digital Age: Recommendations
for Developing UNICEF’s Research Strategy (UNICEF Office of Research), available at www.unicef-irc.org/
publications/pdf/lse%20olol%20final3.pdf
Livingstone, S. and M. Bulger (2014), ‘A Global Research Agenda for Children’s Rights in the Digital Age’ 8(4)
Journal of Children and Media 317–35, available at http://eprints.lse.ac.uk/62130/


Livingstone, S. and J. Byrne (2015), ‘Challenges of Parental Responsibility in a Global Perspective’ in U. Gasser
(ed.), Digitally Connected: Global Perspectives on Youth and Digital Media (Berkman Center for Internet
and Society, Harvard University), available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2585686
Livingstone, S. and E.J. Helsper (2007), ‘Gradations in Digital Inclusion: Children, Young People and the
Digital Divide’ 9(4) New Media & Society 671–96, available at doi: 10.1177/1461444807080335
Livingstone, S. and E.J. Helsper (2008), ‘Parental Mediation of Children’s Internet Use’ 52(4) Journal of
Broadcasting and Electronic Media 581–99, available at http://eprints.lse.ac.uk/25723/
Livingstone, S. and B. O’Neill (2014), ‘Children’s Rights Online: Challenges, Dilemmas and Emerging
Directions’ in S. van der Hof, B. van den Berg and B. Schermer (eds), Minding Minors Wandering the Web:
Regulating Online Child Safety (Springer), available at http://eprints.lse.ac.uk/62276/
Livingstone, S. and A. Third (2017), ‘Children and Young People’s Rights in the Digital Age: An Emerging
Agenda’ New Media and Society, available at http://eprints.lse.ac.uk/68759/
Livingstone, S., J. Byrne and M. Bulger (2015a), Researching Children’s Rights Globally in the Digital Age
(Media@LSE, UNICEF, EU Kids Online), available at http://eprints.lse.ac.uk/62248/
Livingstone, S., J. Carr and J. Byrne (2015b), One in Three: Internet Governance and Children’s Rights, Global
Commission on Internet Governance Paper Series No. 22 (Global Commission on Internet Governance),
available at www.cigionline.org/sites/default/files/no22_2.pdf
Livingstone, S., A. Görzig and K. Ólafsson (2011a), Disadvantaged Children and Online Risk (EU Kids Online,
London School of Economics and Political Science), available at http://eprints.lse.ac.uk/39385/
Livingstone, S., L. Haddon and A. Görzig (eds) (2012), Children, Risk and Safety on the Internet: Research and
Policy Challenges in Comparative Perspective (Policy Press), available at http://eprints.lse.ac.uk/44761/
Livingstone, S., G. Mascheroni and E. Staksrud (2015c), Developing a Framework for Researching Children’s
Online Risks and Opportunities in Europe (EU Kids Online, London School of Economics and Political
Science), available at http://eprints.lse.ac.uk/64470/
Livingstone, S., L. Haddon, A. Görzig and K. Ólafsson (2011b), Risks and Safety on the Internet: The
Perspective of European Children (EU Kids Online, London School of Economics and Political Science),
available at http://eprints.lse.ac.uk/33731/
Livingstone, S., L. Kirwil, C. Ponte and E. Staksrud (2014a), ‘In Their Own Words: What Bothers Children
Online?’ 29(3) European Journal of Communication 271–88, available at http://eprints.lse.ac.uk/62093/
Livingstone, S., A. Nandi, S. Banaji and M. Stoilova (2017), Young Adolescents and Digital Media
Uses, Risks and Opportunities in Low- and Middle-income Countries: A Rapid Evidence Review. DFID/
ODI: Gender and Adolescence Global Evidence, available at https://www.gage.odi.org/publications/
young-adolescents-and-digital-media-rer
Livingstone, S., L. Haddon, J. Vincent, G. Mascheroni and K. Ólafsson (2014b), Net Children Go Mobile
(London School of Economics and Political Science), available at http://eprints.lse.ac.uk/57598/
Livingstone, S., K. Ólafsson, E.J. Helsper, F. Lupiáñez-Villanueva, G.A. Veltri and F. Folkvord (2017),
‘Maximizing Opportunities and Minimizing Risks for Children Online: The Role of Digital Skills in Emerging
Strategies of Parental Mediation’ 67(1) Journal of Communication 82–105, available at doi: 10.1111/jcom.12277
Lupton, D. and B. Williamson (2017), ‘The Datafied Child: The Dataveillance of Children and Implications for
Their Rights’ New Media and Society, available at doi: 10.1177/1461444816686328
Lwin, M.O., A.J.S. Stanaland and A.D. Miyazaki (2008), ‘Protecting Children’s Privacy Online: How Parental
Mediation Strategies Affect Website Safeguard Effectiveness’ 84(2) Journal of Retailing 205–17, available at
doi: 10.1016/j.jretai.2008.04.004
Macenaite, M. (2017), ‘From Universal Towards Child-Specific Protection of the Right to Privacy Online:
Dilemmas in the EU General Data Protection Regulation' New Media and Society 1–15, available at doi:
10.1177/1461444816686327
Mackey, A. (2016), ‘Sticky E/motional Connections: Young People, Social Media, and the Re-orientation of
Affect’ 17(2) Safundi 156–73
Madden, M., A. Lenhart, S. Cortesi, U. Gasser, M. Duggan and A. Smith (2013), Teens, Social Media,
and Privacy (Pew Internet and American Life Project), available at www.pewinternet.org/2013/05/21/
teens-social-media-and-privacy/
Mardis, M.A. (2013), ‘What It Has or What It Does Not Have? Signposts from US Data for Rural Children’s
Digital Access to Informal Learning’ 38(4) Learning, Media and Technology 387–406, available at doi:
10.1080/17439884.2013.783595
Mascheroni, G. and K. Ólafsson (2014), Net Children Go Mobile: Risks and Opportunities (2nd edn, Educatt),
available at http://netchildrengomobile.eu/reports/
Mason, B. and D. Buchmann (2016), ICT4Refugees: A Report on the Emerging Landscape of Digital Responses
to the Refugee Crisis (Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH), available at
https://regasus.de/online/datastore?epk=74D5roYc&file=image_8_en
McDonald-Brown, C., K. Laxman and J. Hope (2017), ‘Sources of Support and Mediation Online for
9–12-year-old children’ E-Learning and Digital Media, available at doi: 10.1177/2042753017692430

Amanda Third, Sonia Livingstone and Gerison Lansdown - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:27AM
via New York University

WAGNER_9781785367717_t.indd 407 13/12/2018 15:25


408  Research handbook on human rights and digital technology

Metcalf, S., A. Kamarainen, T. Grotzer and C. Dede (2013), ‘Teacher Perceptions of the Practicality and
Effectiveness of Immersive Ecological Simulations as Classroom Curricula’ 4(3) International Journal of
Virtual and Personal Learning Environments 66–77
Myers, E.M., I. Erickson and R.V. Small (2013), ‘Digital Literacy and Informal Learning Environments: An
Introduction’ 38(4) Learning, Media and Technology 355–67, available at doi: 10.1080/17439884.2013.783597
NCCA (National Council for Curriculum and Assessment) (2007a), ICT Framework: A Structured Approach
to ICT in Curriculum and Assessment, available at www.ncca.ie/uploadedfiles/publications/ict%20revised%20
framework.pdf
NCCA (National Council for Curriculum and Assessment) (2007b), ICT Framework: A Structured Approach
to ICT in Curriculum and Assessment, Report on the School-based Developmental Initiative, available at www.
ncca.ie/uploadedfiles/publications/ict%20report%20new%2007.pdf
Nethope Solutions Center, Open Space Literacy Initiative: Using ICT to Enhance Education and Improve
Literacy (2015), available at http://solutionscenter.nethope.org/case_studies/view/open-space-literacy-initia​
tive-using-ict-to-enhance-education-improve-liter
OECD (Organisation for Economic Co-operation and Development) (2011), The Protection of Children Online:
Risks Faced by Children Online and Policies to Protect Them, OECD Digital Economy Papers No. 179, avail-
able at doi: 10.1787/5kgcjf71pl28-en
OECD (Organisation for Economic Co-operation and Development) (2012a), The Protection of Children Online:
Recommendation of the OECD Council, available at www.oecd.org/sti/ieconomy/childrenonline_with_cover.
pdf
OECD (Organisation for Economic Co-operation and Development) (2012b), Connected Minds: Technology
and Today’s Learners (OECD Publishing), available at doi: http://dx.doi.org/10.1787/9789264111011-en
OHCHR (Office of the United Nations High Commissioner for Human Rights) (no date), Committee on the
Rights of the Child, available at www.ohchr.org/EN/HRBodies/CRC/Pages/CRCIndex.aspx
OHCHR (Office of the United Nations High Commissioner for Human Rights) (2014), Committee on the Rights
of the Child, Report of the 2014 Day of General Discussion ‘Digital Media and Children’s Rights’, available at
www.ohchr.org/Documents/HRBodies/CRC/Discussions/2014/DGD_report.pdf
O’Neill, B., E. Staksrud and S. McLaughlin (eds) (2013), Towards a Better Internet for Children: Policy Pillars,
Players and Paradoxes (Nordicom)
Pathak-Shelat, M. and C. DeShano (2014), ‘Digital Youth Cultures in Small Town and Rural Gujarat, India’
16(6) New Media and Society 983–1001
Peter, J. and P.M. Valkenburg (2006), ‘Adolescents’ Exposure to Sexually Explicit Online Material and
Recreational Attitudes Toward Sex’ 56(4) Journal of Communication 639–60, available at doi:
10.1111/j.1460-2466.2006.00313.x
Polak, M. (2007), ‘“I Think We Must be Normal . . . There are Too Many of Us for This to be Abnormal!!!”:
Girls Creating Identity and Forming Community in Pro-Ana/Mia Websites’ in S. Weber and S. Dixon (eds),
Growing Up Online: Young People and Digital Technologies (Palgrave Macmillan)
Powell, A., M. Hills and V. Nash (2010), Child Protection and Freedom of Expression Online (Oxford Internet
Institute), available at www.oii.ox.ac.uk/archive/downloads/publications/FD17.pdf
Raftree, L. and K. Bachan (2013), Integrating Information and Communication Technologies into Communication
for Development Strategies to Support and Empower Marginalized Adolescent Girls (UNICEF)
Rallings, J. (2015), Youth and the Internet: A Guide for Policy Makers (Barnardo’s), available at www.barnardos.
org.uk/youth_and_the_internet_report.pdf
RErights.org (2016a), Thriving with Digital Technology, available at https://rerights.org/2016/11/thriving-wi​
th-digital-technology/
Rijsdijk, L.E., A.E.R. Bos, R.A.C. Ruiter, J.N. Leerlooijer, B. de Haas and H.P. Schaalma (2011), ‘The
World Starts with Me: A Multilevel Evaluation of a Comprehensive Sex Education Programme Targeting
Adolescents in Uganda’ 11(1) BMC Public Health 334
Robinson, K.H., P. Bansel, N. Denson, G. Ovenden and C. Davies, Growing Up Queer: Issues Facing Young
Australians Who are Gender Variant and Sexually Diverse (Young and Well Cooperative Research Centre, 2014)
Rose, K., S. Eldridge and L. Chapin (2015), The Internet of Things: An Overview Understanding the Issues and
Challenges of a More Connected World (Internet Society), available at www.internetsociety.org/sites/default/
files/ISOC-IoT-Overview-20151221-en.pdf
Sachs, J., V. Modi, H. Figueroa, M. Fantacchiotti, K. Sanyal, F. Khatun and A. Shah (2015), How Information
and Communication Technology Can Achieve the Sustainable Development Goals: ICT and SDGs (Earth
Institute, Colombia University and Ericsson), available at www.ericsson.com/res/docs/2015/ict-and-sdg-
interim-report.pdf
Sacino, S.W. (2012), Article 17: Access to a Diversity of Mass Media Sources (Martinus Nijhoff Publishers)
Samuels, C., Q. Brown, L. Leoschut, J. Jantjies and P. Burton (2013), Connected Dot Com: Young People’s
Navigation of Online Risks: Social Media, ICTs and Online Safety (Centre for Justice and Crime Prevention
and UNICEF), available at www.unicef.org/southafrica/SAF_resources_connecteddotcom.pdf

Amanda Third, Sonia Livingstone and Gerison Lansdown - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:27AM
via New York University

WAGNER_9781785367717_t.indd 408 13/12/2018 15:25


Recognizing children’s rights in relation to digital technologies  409

Shapiro, J. (2014), ‘Your Kid’s School May Have the Right to Sell Student Data’ Forbes, available at www.forbes.com/
sites/jordanshapiro/2014/01/24/your-kids-school-may-have-the-right-to-sell-student-data/#458433104990
Sinclair, S. and G. Bramley (2011), ‘Beyond Virtual Inclusion: Communications Inclusion and Digital Divisions’
10(1) Social Policy and Society 1–11, available at doi: 10.1017/S1474746410000345
Singer, N. (2014), ‘With Tech Taking Over in Schools, Worries Rise’ New York Times, available at http://sitcom​
puterscience.pbworks.com/w/file/fetch/86037319/With%20Tech%20Taking%20Over%20in%20Schools,%20
Worries%20Rise%20-%20NYTimes.com.pdf
Smahel, D. and M.F. Wright (eds) (2014), The Meaning of Online Problematic Situations for Children: Results
of Qualitative Cross-cultural Investigation in Nine European Countries (EU Kids Online, London School of
Economics and Political Science), available at http://eprints.lse.ac.uk/56972/
Strasburger, V.C., A.B. Jordan and E. Donnerstein (2012), ‘Children, Adolescents, and the Media: Health
Effects’ 59(3) Pediatric Clinics of North America 533–87, available at doi: 10.1016/j.pcl.2012.03.025
Subrahmanyam, K. and P. Greenfield (2008), ‘Online Communication and Adolescent Relationships’ 18(1) The
Future of Children 119–46, available at doi: 10.1353/foc.0.0006
Swist, T., P. Collin, J. McCormack and A. Third (2015), Social Media and the Wellbeing of Children and Young
People: A Literature Review (Commissioner for Children and Young People, Western Australia), available
at www.uws.edu.au/__data/assets/pdf_file/0019/930502/Social_media_and_children_and_young_people.pdf
Third, A. and P. Collin (2016), ‘Rethinking (Children’s and Young People’s) Citizenship Through Dialogues on
Digital Practice’ in A. McCosker, S. Vivienne and A. Johns (eds), Negotiating Digital Citizenship: Control,
Contest and Culture (Rowman & Littlefield)
Third, A. and I. Richardson (2010), Connecting, Supporting and Empowering Young People Living with
Chronic Illness and Disability: The Livewire Online Community (Report prepared for the Starlight Children’s
Foundation, Centre for Everyday Life)
Third, A., D. Bellerose, U. Dawkins, E. Keltie and K. Pihl (2014), Children’s Rights in the Digital Age: A
Download from Children Around the World (Young and Well Cooperative Research Centre)
Tynes, B. (2015), Online Racial Discrimination: A Growing Problem for Adolescents, Psychological Science
Agenda, Science Brief, available at www.apa.org/science/about/psa/2015/12/online-racial-discrimination.aspx
UN (United Nations) (2011), Girls Speak Out on the Potential of ICTs in Their Empowerment and Development,
available at www.c4d.undg.org/files/girls-fast-talk-report-final-plan-international
UNCTAD (United Nations Conference on Trade and Development) (2014), Measuring ICT and Gender: An
Assessment, available at www.uis.unesco.org/Communication/Documents/measuring-ict-and-gender.pdf
UN ECOSOC (United Nations Economic and Social Council) (2015), Digital Development: Report of the
Secretary-General, Eighteenth Session of the Commission on Science and Technology for Development, E/
CN.16/2015/2
UNESCO (no date), ‘Media and Information Literacy’, available at http://portal.unesco.org/ci/en/ev.php-
URL_ID=15886&URL_DO=DO_PRINTPAGE&URL_SECTION=201.html
UNHCR (United Nations High Commissioner for Refugees) (2016), Connecting Refugees: How Internet and
Mobile Connectivity Can Improve Refugee Well-being and Transform Humanitarian Action, available at www.
unhcr.org/5770d43c4.pdf
UNICEF (United Nations Children’s Fund) (2012), Child Safety Online: Global Challenges and Strategies
(UNICEF Office of Research-Innocenti), available at www.unicef-irc.org/publications/pdf/ict_techreport3_
eng.pdf
UNICEF (United Nations Children’s Fund) (2013), Integrating ICTs into Communication for Development
Strategies to Support and Empower Marginalized Adolescent Girls, available at www.unicef.org/cbsc/files/
ICTPaper_Web.pdf
UNICEF (United Nations Children’s Fund) (2017), Children in a Digital World: State of the World’s Children
2017, available at www.unicef.org/publications/files/SOWC_2017_ENG_WEB.pdf
UNICEF (United Nations Children’s Fund) (no date), The MAGIC (Media Activities and Good Ideas by, with
and for Children) Briefing: Oslo Challenge, available at www.unicef.org/magic/briefing/oslo.html
UNSRSG (United Nations Special Representative of the Secretary-General on Violence Against Children)
(2016), Ending the Torment: Tackling Bullying from the Schoolyard to Cyberspace, available at http://srsg.
violenceagainstchildren.org/sites/default/files/2016/End%20bullying/bullyingreport.pdf
Vickery, J.R. (2014), ‘The Role of After-school Digital Media Clubs in Closing Participation Gaps and Expanding
Social Networks’ 47(1) Equity and Excellence in Education 78–95, available at doi: 10.1080/10665684.2013.866870
Virtual Global Task Force (no date), Combating Online Child Sexual Abuse: Virtual Global Task Force, available
at http://virtualglobaltaskforce.com/
Walton, M. (2009), Mobile Literacies and South African Teens: Leisure Reading, Writing, and MXit Chatting
for Teens in Langa and Guguletu (m4Lit Project), available at https://m4lit.files.wordpress.com/2010/03/
m4lit_mobile_literacies_mwalton_20101.pdf
Walton, M. and N. Pallitt (2012), ‘“Grand Theft South Africa”: Games, Literacy and Inequality in Consumer
Childhoods’ 26(4) Language and Education 347–61, available at doi: 10.1080/09500782.2012.691516

Amanda Third, Sonia Livingstone and Gerison Lansdown - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:27AM
via New York University

WAGNER_9781785367717_t.indd 409 13/12/2018 15:25


410  Research handbook on human rights and digital technology

Webster, S., J. Davidson, A. Bifulco, P. Gottschalk, V. Caretti, T. Pham et al. (2012), European Online Grooming
Project, Final Report, available at www.crds.be/userfiles/files/European%20Online%20Grooming%20Project_
Final%20Version_140312.pdf
WEF (World Economic Forum) (2015), Global Information Technology Report, available at http://reports.
weforum.org/global-information-technology-report-2015/
WEF (World Economic Forum) (2017), Shaping the Future Implications for Digital Media for Society: Valuing
Personal Data and Rebuilding Trust: End-user Perspectives on Digital Media Survey, A Summary Report, available
at www3.weforum.org/docs/WEF_End_User_Perspective_on_Digital_Media_Survey_Summary_2017.pdf
WeProtect (no date), WeProtect Global Alliance: End Child Sexual Exploitation Online, available at
www.weprotect.org/
Wernham, M. (2016), Mapping the Global Goals for Sustainable Development and the Convention on the Rights
of the Child (UNICEF), available atwww.unicef.org/agenda2030/files/SDG-CRC_mapping_FINAL.pdf
Wilson, S. (2016), ‘Digital Technologies, Children and Young People’s Relationships and Self-care’ 14(3)
Children’s Geographies 282–94, available at doi: 10.1080/14733285.2015.1040726
Wolak, J., K. Mitchell and D. Finkelhor (2007), ‘Unwanted and Wanted Exposure to Online Pornography in a
National Sample of Youth Internet Users’ 119(2) American Academy of Pediatrics 247–57, available at doi:
10.1542/peds.2006-1891
World Bank (2016), World Development Report: Digital Dividends, available at http://documents.worldbank.org/
curated/en/896971468194972881/pdf/102725-PUB-Replacement-PUBLIC.pdf
Ybarra, M.L. and K.J. Mitchell (2004), ‘Youth Engaging in Online Harassment: Associations with Caregiver–
Child Relationships, Internet Use, and Personal Characteristics’ 27(3) Journal of Adolescence 319–36,
available at doi: 10.1016/j.adolescence.2004.03.007
Ybarra, M.L., M. Diener-West, D. Markow, P.J. Leaf, M. Hamburger and P. Boxer (2008), ‘Linkages Between
Internet and Other Media Violence with Seriously Violent Behavior by Youth’ 122(5) American Academy of
Pediatrics 929–37, available at doi: 10.1542/peds.2007-3377

Amanda Third, Sonia Livingstone and Gerison Lansdown - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:27AM
via New York University

WAGNER_9781785367717_t.indd 410 13/12/2018 15:25


20.  Digital rights of LGBTI communities: a roadmap
for a dual human rights framework
Monika Zalnieriute*

1. INTRODUCTION
While the relationship between digital technologies and human rights has attracted
considerable attention among social, legal and political science scholars as well as civil
society, these efforts have mostly focused on what has become known as the 'classical digital rights':
freedom of expression, privacy and data protection. This chapter aims to highlight emerg-
ing issues beyond this classical repertoire by focusing on the human rights implications of
digital technologies for historically marginalized groups, such as racial and ethnic minorities
or LGBTI communities. Various discriminated-against groups have long been overlooked
by both scholars and civil society in the context of Internet policy (although some
exceptions exist). To encourage stronger scholarly engagement with these often-overlooked
human rights implications in the Internet context, this chapter focuses in particular on the issues
at the intersection of digital technologies and LGBTI (acronym for lesbian, gay, bisexual,
trans, and intersex) communities.1 It discusses the emerging narratives on the Internet and
LGBTI rights, and suggests potential ways to move beyond those narratives by proposing
the construction and development of a combined LGBTI and digital rights analytical and
conceptual framework. Such a framework is necessary to address the growing complexity
of the human rights implications that in reality go well beyond the so-called classical digital
rights, such as freedom of expression and privacy. These core digital rights have captured
and dominated scholarly and activist attention, albeit for good reasons, often to the
detriment of other pressing issues which have been left behind. To engage with this more
complex reality, a 'struggle for new rights'2 once again needs to continue; this time,
however, it is a dual, intersectional struggle, as both digital rights and LGBTI rights are
still in the process of manifestation and materialization in both international politics
and law.

*  The author is grateful for the insightful comments that anonymous reviewers provided on
earlier drafts. The author is also grateful to Melbourne Law School, University of Melbourne,
for providing a Postdoctoral Fellowship, during which most of the research for this chapter was
conducted. The author is also deeply appreciative of the many thoughtful discussions that she has
had over the past few years with numerous researchers and civil society advocates working on
LGBTI or digital rights who stimulated her thinking behind many elements of this chapter.
1  Noting the problematics of umbrella terms and acronyms, LGBTI is used here to refer to
lesbian, gay, bisexual, trans, and intersex individuals. While LGBT is a somewhat older term widely
adopted by many human rights organizations, more recently 'I' (for intersex) is often added to
LGBT to create an LGBTI community; see e.g., the Joint UN Statement on Ending Violence and
Discrimination Against Lesbian, Gay, Bisexual, Transgender and Intersex People, 29 September 2015,
signed by 12 UN entities (ILO, OHCHR, UNAIDS Secretariat, UNDP, UNESCO, UNFPA, UNHCR,
UNICEF, UNODC, UN Women, WFP and WHO), available at www.ohchr.org/Documents/Issues/
Discrimination/Joint_LGBTI_Statement_ENG.PDF. Intersex individuals are born with sex
characteristics that, according to the UN High Commissioner for Human Rights, 'do not fit the
typical definitions for male or female bodies'. See United Nations Office of the High Commissioner
for Human Rights (OHCHR), Free and Equal Campaign: Intersex Fact Sheet (2015), available at
https://unfe.org/system/unfe-65-Intersex_Factsheet_ENGLISH.pdf (accessed 30 December 2016).
This chapter proceeds as follows: before moving on to the human rights issues at the
intersection of digital technologies and LGBTI communities, section 2 briefly introduces
the LGBTI movement and its main achievements at the international level. Section 3 then
discusses the existing narratives on digital technologies and LGBTI rights: the celebrated
dominant narrative portrays the Internet as a liberatory medium for LGBTI communities,
while the emerging counter-narrative sketches the Internet as a potential tool of repression
for LGBTI rights. Section 4 invites the reader to move beyond the state of the art by
identifying a research gap and outlining a rationale for a new research programme.
Finally, the roadmap sketching the construction of the ‘LGBTI and Digital Rights’
framework is discussed in section 5.

2.  SETTING THE CONTEXT: LGBTI RIGHTS IN INTERNATIONAL POLITICS AND LAW

The term ‘LGBTI rights’ emerged relatively recently in international discourse as a broad
unifying idea that all people, regardless of their sexual orientation (and later, gender
identity and sex characteristics), should be able to enjoy their human rights. LGBTI rights
only gained political prominence in the late 1970s when the earlier Gay Liberation move-
ment with anarchist visions was superseded by the strategy to portray LGBTI individuals
as a minority group using the language of civil rights adopted by influential organizations
in the United States at the time.3 Despite several decades of continuous struggle for
justice, which spread from North America to Europe and the rest of the world from the
1970s onwards, LGBTI communities around the world still encounter numerous legal restrictions
and social biases in their everyday lives.

2.1  LGBTI Communities in the Modern Human Rights Movement

The modern human rights movement, which originated with the formation of the United
Nations (UN) in 1945, has long remained silent on LGBTI issues and, with the exception
of several soft political pronouncements by the UN Human Rights Council during the
last several years,4 it remains largely silent today.

2  Clifford Bob (ed.), The International Struggle for New Human Rights (University of
Pennsylvania Press, 2009).
3  S. Epstein, 'Gay and Lesbian Movements in the United States: Dilemmas of Identity,
Diversity, and Political Strategy' in B.D. Adam, J. Duyvendak and A. Krouwel (eds), The Global
Emergence of Gay and Lesbian Politics (Philadelphia, PA: Temple University Press, 1999) 30–90.
4  Human Rights Council Resolution on Human Rights, Sexual Orientation and Gender Identity,
A/HRC/RES/17/19, adopted 17 June 2011; HRC Resolution on Human Rights, Sexual Orientation
and Gender Identity, A/HRC/RES/27/32, adopted 26 September 2014; HRC Resolution on
Protection Against Violence and Discrimination Based on Sexual Orientation and Gender Identity,

Despite the fact that many members of LGBTI communities were direct victims of the
Nazi atrocities and executions during World War 2,5 the most famous and symbolic
international human rights document, the Universal Declaration of Human Rights of
1948, did not include reference to
sexual orientation.6 Similarly, even though international human rights instruments
proliferated at enormous speed during the 1970s and 1980s, and various 'new' human
rights issues were acknowledged by the international community after the UN's
formation, initiatives to promote the rights of sexual and gender non-conforming
minorities remained very limited. Thus, in contrast to groups which are discriminated against
because of their disability, sex, race and ethnicity, LGBTI individuals are not explicitly
protected under any international convention obliging states to ensure their equal treat-
ment and non-discrimination (compare International Convention on the Elimination of
All Forms of Racial Discrimination (1966); International Convention on the Elimination
of All Forms of Discrimination Against Women (1979); Convention on the Rights of the
Child (1989); Convention on the Rights of Persons with Disabilities (2006)).

2.2  Varying Perspectives on LGBTI Rights

While the official positions of states evolve and change over time, including during the
last decade (e.g. the Bush and Obama Administrations adopted different stances on
LGBTI rights internationally), the disagreement within the international community over
the equal legal protection and status of LGBTI persons today runs broadly between
the liberal Western democracies and more conservative countries in Africa, Asia, and
Eastern Europe, be they Islamic, Christian or secular states. This is, of course, a large
oversimplification of the struggle for LGBTI rights at the international level, and it by
no means suggests a full de facto acceptance of LGBTI rights, or the absence of stigma,
within those pro-LGBTI-rights democracies. The large anti-gay-marriage rallies in Paris
in 2013 exemplify these internal domestic struggles for LGBTI communities well.7 In the
same way, the term 'LGBTI rights' is itself an oversimplification, reflecting Western
classifications that are not universally applicable in all societies and communities
globally.8

A/HRC/RES/32/2, adopted 30 June 2016. Sexual orientation and gender identity are also mentioned
(among other grounds, such as religious or ethnic background) in the UN General Assembly
Resolutions on Extrajudicial, Summary or Arbitrary Executions, see e.g., UN General Assembly
Resolutions A/RES/69/182 (18 December 2014) and A/RES/67/168 (20 December 2012). A full list
of United Nations Resolutions on sexual orientation and gender identity is available on the website
of the UN Human Rights Office of the High Commissioner, see www.ohchr.org/EN/Issues/
Discrimination/Pages/LGBTUNResolutions.aspx (accessed 30 December 2017).
5  See Heinz Heger, Men with the Pink Triangle: The True Life and Death Story of Homosexuals
in the Nazi Death Camps (Boston, MA: Alyson Publications, 1994); Pierre Seel, Liberation Was for
Others: Memoirs of a Gay Survivor of the Nazi Holocaust (New York: Perseus Book Group, 1997).
6  Pratima Narayan, 'Somewhere Over the Rainbow . . . International Human Rights Protection
for Sexual Minorities in the New Millennium' (2006) 24 Boston University International Law
Journal 313, at 313.
7  'Huge Anti-Gay Marriage Protest March in Paris', BBC News, 26 May 2013, available at
www.bbc.com/news/world-europe-22671572 (accessed 28 September 2016).
8  Javaid Rehman, 'Adjudicating on the Rights of Sexual Minorities in Muslim World' in James

The mainstream LGBTI movement has attracted considerable valid criticism from both

feminist and queer theorists, who claim that the dominance of the so-called Western ‘gay
international’ model9 disregards a non-essentialist view of gender and sexuality; reinforces
heteronormative gender roles and stereotypes;10 and has a sweeping potential to repress
varied cultural practices and understandings of sexuality that are not included in the
Western model of sexuality and gender identity.11
Despite these varying understandings and conceptions, and even the absence of explicit
reference to 'sexual orientation' and 'gender identity' in the Universal Declaration of
Human Rights and regional human rights instruments, international human rights law
today is generally interpreted to include (at least certain) protection of the rights of
LGBTI people. It is beyond the scope of this short chapter to discuss the evolution of
conceptions of LGBTI rights in human rights jurisprudence;12 however, it is worth
mentioning that since the 1980s, protections for LGBTI rights have often been developed
through the principles of equality and non-discrimination, the right to privacy and
private life,13 or the right to marry,14 by various regional human rights tribunals and
international bodies, such as the European Court of Human Rights and the UN Human
Rights Committee. In addition, the adoption of the Yogyakarta Principles on the
Application of International Human Rights Law in Relation to Sexual Orientation and
Gender Identity (2006)15 is considered a crucial milestone in LGBTI advocacy at the
international level.16 While the document is not legally binding, it contains 29 principles,
intended to serve as an interpretive aid to the human rights treaties, as well as
recommendations to governments, international organizations, civil society, and the
private sector. Arguments could be advanced that the protection of LGBTI rights is now
part of customary international law;17 however, such claims are likely to be disregarded
by those states that portray international advocacy for LGBTI rights as an attack on
their sovereignty by the liberal Western democracies.18

A. Green and Christopher P.M. Waters (eds), Adjudicating International Human Rights: Essays in
Honour of Sandy Ghandhi (Martinus Nijhoff Publishers, 2015).
9  Ratna Kapur, 'The (Im)possibility of Queering International Human Rights Law' in Dianne
Otto (ed.), Queering International Law: Possibilities, Alliances, Complicities and Risks (Routledge,
2017).
10  Dianne Otto, 'Decoding Crisis in International Law: A Queer Feminist Perspective' in
Barbara Stark (ed.), International Law and Its Discontents: Confronting Crises (Cambridge
University Press, 2015) 115; Dianne Otto (ed.), Queering International Law: Possibilities, Alliances,
Complicities and Risks (Routledge, 2017).
11  Paula Gerber and Joel Gory, 'The UN Human Rights Committee and LGBT Rights: What
is It Doing? What Could It be Doing?' (2014) 14(3) Human Rights Law Review 403–39.
12  Laurence R. Helfer and Erik Voeten, 'International Courts as Agents of Legal Change:
Evidence from LGBTI Rights in Europe' (2014) 68 International Organization 77–110; Rehman,
'Adjudicating on the Rights of Sexual Minorities in Muslim World', n. 8 above.
13  B v. France, ECtHR, Judgment of 25 March 1992, Ser. A 232-c (1992), (1992) 16 EHRR 1,
available at http://hudoc.echr.coe.int/sites/eng/pages/search.aspx?i=001-57770.
14  See in particular Goodwin v. United Kingdom and I v. United Kingdom, ECtHR, (2002)
35 EHRR 18, available at http://hudoc.echr.coe.int/sites/eng/pages/search.aspx?i=001-57974.
Following this decision, the UK government brought in the Gender Recognition Act 2004, which,
inter alia, accords transsexuals the right to marry. It also provides them the right to be treated as
having the sex that they have adopted as their acquired gender.
15  The Yogyakarta Principles on the Application of International Human Rights Law in Relation
to Sexual Orientation and Gender Identity were developed by a group of 29 international human
rights experts following an experts' meeting held in Yogyakarta, Indonesia from 6–9 November
2006; the Principles are available at http://data.unaids.org/pub/manual/2007/070517_yogyakarta_
principles_en.pdf (accessed 30 December 2017).
16  Michael O'Flaherty and John Fisher, 'Sexual Orientation, Gender Identity and International
Human Rights Law: Contextualising the Yogyakarta Principles' (2008) 8(2) Human Rights Law
Review 207–48.

2.3  Silence and ‘Symbolic-Historic’ Moments at the United Nations

The varying perspectives within the international community on how LGBTI rights
should be conceived and reflected in law have also led to a long-term silence in the
various UN bodies. In the 1980s and 1990s, neither the UN Economic and Social Council
nor the UN Sub-Commission on Prevention of Discrimination and Protection of
Minorities considered sexual orientation and gender identity on their agendas.19 The
symbolic breakthrough occurred with the case of Toonen v. Australia (1994), when the
Human Rights Committee found a breach of the right to private life, in conjunction with
the right to non-discrimination under the International Covenant on Civil and Political
Rights (ICCPR), because of the criminalization of private homosexual conduct in the
Tasmanian Criminal Code.20 Many critical pronouncements on discrimination and
educational programmes by the UN Human Rights Committee have followed since
the mid-1990s;21 however, these developments in jurisprudence did not reflect a broader
agreement among members of the international community. For at least half a century
after its founding in 1945, the UN General Assembly did not discuss LGBTI rights. It
was only on 17 June 2011 that the UN Human Rights Council (HRC) explicitly
reaffirmed LGBTI rights in international law in a Resolution on Human Rights, Sexual
Orientation and Gender Identity (adopted by 23 votes to 19, with three abstentions) and
requested the UN High Commissioner for Human Rights to present the first detailed
UN study on discrimination faced by individuals due to their sexual orientation and
gender identity.22 Some commentators question the real impact of this resolution beyond
confirming Western states' dominance in the HRC.23

17  Sonia B. Green, 'Currency of Love: Customary International Law and the Battle for Same-
Sex Marriage in the United States' (2011) 14 U. Pa. J.L. & Soc. Change 53.
18  See e.g., Letter from the Representative of the Permanent Mission of Pakistan, on behalf of
the OIC (Geneva, 26 February 2004).
19  Ignacio Saiz, 'Bracketing Sexuality: Human Rights and Sexual Orientation – A Decade
of Development and Denial at the United Nations' in Richard Parker and Peter Aggleton (eds),
Routledge Handbook of Sexuality, Health and Rights (London: Routledge, 2010) 459.
20  Toonen v. Australia, Communication No. 488/1992, 31 March 1994. As a result of this case,
Australia was forced to change its law criminalizing homosexual acts in the State of Tasmania.
21  See e.g., Concluding Observations of the Human Rights Committee regarding the United
States of America, CCPR/C/USA/CO/3/Rev.1 (18 December 2006) 25; Concluding Observations
of the Human Rights Committee regarding the Philippines, CCPR/CO/79/PHL (1 December
2003) 18; Concluding Observations of the Human Rights Committee regarding Namibia, CCPR/
CO/81/NAM (30 July 2004) 22; Concluding Observations of the Human Rights Committee
regarding Austria, CCPR/C/79/Add.103 (19 November 1998) 13.
22  Human Rights Council Resolution on Human Rights, Sexual Orientation and Gender
Identity, A/HRC/RES/17/19, adopted 17 June 2011.
23  Kapur, 'The (Im)possibility of Queering International Human Rights Law', n. 9 above.

Monika Zalnieriute - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:32AM
via New York University

WAGNER_9781785367717_t.indd 415 13/12/2018 15:25


416  Research handbook on human rights and digital technology

The High Commissioner's Report (2011)24 led to an HRC panel discussion in March
2012. The division among the Council's members became apparent again, with some
members leaving the Council chamber at the start of the meeting to 'voice their opposition on
cultural or religious grounds' and insisting that sexual orientation and gender identity were
'new concepts that lay outside the framework of international human rights law'.25
On 26 September 2014, the HRC passed a second LGBTI rights resolution, condemning
violence and discrimination on the basis of sexual orientation or gender identity
across the globe and calling for another report from the UN High Commissioner for
Human Rights. The 25–14 vote, compared with the 2011 Resolution's 23–19 vote,
indicated slow but growing support for protecting LGBTI rights under international
human rights law.
In August 2015, for the first time in its 70-year history, the UN Security Council held
its first organized session explicitly addressing LGBTI rights, although its focus on IS
persecution of LGBTI Syrians and Iraqis makes it questionable how much of 'LGBTI
rights' actually featured in the security discourse.26 Since then, the UN Security Council
has also released a statement condemning the Orlando killings 'targeting persons as a
result of their sexual orientation',27 its first statement to refer explicitly to sexual
orientation.
Most recently, the Human Rights Council established a mandate for an Independent
Expert on Sexual Orientation and Gender Identity in its June 2016 Resolution (23 in
favour, 18 against, six abstentions).28 At the international level, these recent resolutions
and the establishment of the mandate, along with the adoption of the Yogyakarta Principles,
represent the culmination of the LGBTI movement's activities and advocacy within the
international human rights framework.
Against this background of the LGBTI movement’s continuous struggle for basic
justice and recognition in international politics and the human rights framework, digital
technologies and the Internet have been playing an important role in awareness-raising,
networking, and the mobilization of LGBTI communities in their advocacy efforts.

24  See United Nations High Commissioner for Human Rights, Discrimination and Violence Against Individuals Based on their Sexual Orientation and Gender Identity, A/HRC/19/41 (17 November 2011).
25  Human Rights Council Panel on Ending Violence and Discrimination Against Individuals Based on their Sexual Orientation and Gender Identity, Geneva, 7 March 2012, Summary of Discussion, available at www.ohchr.org/Documents/Issues/Discrimination/LGBT/SummaryHRC19Panel.pdf (accessed 31 December 2017).
26  'UN Security Council Holds First Meeting on LGBTI Rights', Al Jazeera, 25 August 2015, available at www.aljazeera.com/news/2015/08/security-council-holds-meeting-LGBTI-rights-150824201712751.html (accessed 26 September 2016).
27  UN Security Council Press Release, Press Statement on Terrorist Attack in Orlando, Florida (13 June 2016), available at www.un.org/press/en/2016/sc12399.doc.htm (accessed 31 December 2017).
28  Human Rights Council Resolution on Protection Against Violence and Discrimination Based on Sexual Orientation and Gender Identity, A/HRC/RES/32/2, adopted 30 June 2016. See also OHCHR Press Release, Council Establishes Mandate on Protection Against Violence and Discrimination Based on Sexual Orientation and Gender Identity (28 June 2016), available at www.ohchr.org/en/NewsEvents/Pages/DisplayNews.aspx?NewsID=20220&LangID=E#sthash.BwvML7lR.dpuf (accessed 26 September 2016).



Digital rights of LGBTI communities  417

3.  EXISTING NARRATIVES ON THE INTERNET AND LGBTI COMMUNITIES

Information and communication technologies have had a revolutionary impact on our
society, and are changing the way individuals, communities and society communicate,
exchange information and participate in the democratic processes.29 While these changes
bring novel opportunities for various marginalized communities to effect social change,
they also create new modes of control and repression, which can be exercised via the
very communication media themselves.30 Two dominant and perhaps interdependent
narratives can be discerned regarding the implications of digital technologies for political
engagement and the advancement of the human rights of LGBTI communities. Both of these
accounts – the optimistic and the pessimistic – tend, however, to oversimplify the multifaceted
relationship between LGBTI communities and the Internet, which can both facilitate
and impede the exercise of digital rights.

3.1  Dominant Liberatory Narrative

A celebrated dominant narrative on the Internet and LGBTI rights is based on popular
conceptions of the Internet as an inherently democratizing and liberating medium, since it
facilitates the exchange of information. This view, most often building on, adapting and
extending Jürgen Habermas's influential theory of communicative action,31 generally
perceives the Internet as a key space facilitating the exercise of fundamental rights and
freedoms, especially the rights to access critical information, build knowledge, express
thoughts and beliefs, form networks and communities, and mobilize for change.32 Manuel
Castells' theory of the rising network society (1996)33 is one of the most prominent
examples of this celebratory and optimistic narrative.
Such conceptions are also incorporated in the discourses of political establishments and
form the central core of the (in)famous US ‘Internet Freedom Agenda’ (see e.g., the US
State Department website).34 They have also gained traction among mainstream
international political institutions, with the former UN Special Rapporteur Frank La Rue
pronouncing that '[t]he Internet facilitates the realization of a range of human rights'.35
Under such conceptions of the Internet as a liberatory and democratizing medium, the

29  Natasha Primo, Gender Issues in the Information Society (UNESCO Publications for the World Summit on the Information Society, 2003).
30  Laura DeNardis, 'Hidden Levers of Internet Control: An Infrastructure-based Theory of Internet Governance' (2012) 15 Information, Communication and Society 720.
31  Jürgen Habermas, The Theory of Communicative Action (Thomas McCarthy (trans), Beacon Press, 1984) vols 1, 2.
32  Jack S.M. Kee (ed.), ://EROTICS: Sex, Rights and the Internet (APC, 2011) 7.
33  Manuel Castells, The Rise of the Network Society: The Information Age: Economy, Society, and Culture (Wiley, 1996) vol. 1.
34  US Bureau of Democracy, Human Rights and Labor, 'Internet Freedom', www.humanrights.gov/issues/internet-freedom/ (accessed 31 December 2017). For an in-depth discussion on 'Internet Freedom' from a political economy perspective, see Shawn M. Powers and Michael Jablonski, The Real Cyber War: The Political Economy of Internet Freedom (University of Illinois Press, 2015).
35  Frank La Rue, Report of the Special Rapporteur on the Promotion and Protection of the


Internet is understood as a particularly important space for the negotiation and fulfilment
of the rights of various minorities and marginalized groups facing discrimination in their
everyday lives in both developed and developing countries, ranging from LGBTI people
in Eastern Europe and African countries, to disabled people and ethnic minorities in
North America and Western Europe. For these groups and communities, the Internet
is understood as a vital public platform because they face greater barriers to accessing
traditional media and political representation than the majority of their fellow citizens.36 Indeed, the
Internet is a particularly important space for such communities whose voices are often
marginalized, negated and discriminated against in everyday life.37
Not surprisingly, then, empirical data suggests that LGBTI communities are amongst
the earliest adopters of digital technology, relying on the Internet to contest and
defend their civil and sexual rights.38 The Internet is often perceived as providing a
'public platform' for LGBTI groups that plays a crucial role in transcending geographic
boundaries and reducing isolation, providing access to safe virtual communities, and
connecting members to education, identity formation, and civic engagement.39
For LGBTI communities, the Internet is also an especially vital space for democratic
deliberation. This is particularly so for the younger generations, who are much more ready
to engage in activism via social networks than their older peers.40 The political awareness
of younger members of LGBTI communities thus often comes from their engagement
with online discussion forums and social platforms.41
Empirical research also reveals that LGBTI communities rely on technology and
the Internet to a much greater extent than the general population to combat the social
isolation, marginalization and lack of access to health, economic and legal information,
especially in rural areas.42 In other words, the celebrated narrative holds that the Internet
has proved a critical medium in providing LGBTI communities with access to critical
social, economic, legal and medical information, as well as a virtual space for democratic
deliberation.

Right to Freedom of Opinion and Expression, UN GAOR, 17th Sess., Agenda Item 3, UN Doc. A/HRC/17/27 (16 May 2011) 7.
36  Kee, ://EROTICS: Sex, Rights and the Internet, n. 32 above, 1.
37  Australian Human Rights Commission, 'I Want Respect and Equality' – Racial Discrimination: National Consultations: Racism and Civil Society (2001), available at www.humanrights.gov.au/i-want-respect-and-equality-racial-discrimination-national-consultations-racism-and-civil-society#forward (accessed 28 September 2016).
38  Jessie Daniels and Mary L. Gray, A Vision for Inclusion: An LGBTI Broadband Future, Research Paper (LGBTI Technology Partnership, April 2014), available at http://academicworks.cuny.edu/cgi/viewcontent.cgi?article=1213&context=gc_pubs.
39  Natalie T.J. Tindall and Richard D. Waters, Coming Out of the Closet: Exploring LGBTI Issues in Strategic Communication with Theory and Research (New York: Peter Lang Publishing, 2013).
40  Jan Moolman and Jack S.M. Kee, 'Sexuality and Women's Rights' in Global Information Society Watch: Internet Rights and Democratization (APC, 2011), available at www.giswatch.org/sites/default/files/gisw_-_sexuality_and_womens_rights.pdf (accessed 22 September 2016).
41  Gay, Lesbian and Straight Education Network, Out Online: The Experiences of Lesbian, Gay, Bisexual and Transgender Youth on the Internet, Results of an Extensive 2013 Survey (2013).
42  Daniels and Gray, 'A Vision for Inclusion: An LGBTI Broadband Future', n. 38 above; Mary L. Gray, Out in the Country: Youth, Media, and Queer Visibility in Rural America (NYU Press, 2009).


3.2  Emerging Disquieting Counter-Narrative

While the Internet has indeed boosted freedom of communication and democracy as
professed by the dominant liberatory narrative, it has also been progressively subjected
to increased scrutiny, controls and content regulation by public and private sector actors
in both liberal democracies and authoritarian regimes. Various power structures in the
international scene influence ‘permissible’ online content.43 The centrality of the Internet
as a communication tool has also led to new approaches to surveillance, and Internet
infrastructure is increasingly used by various public and private actors to impose
restrictions on the exercise of fundamental rights in the digital sphere.44
LGBTI communities are thus often faced with the realities of inaccessible websites with
LGBTI content (e.g. Deti 404 in Russia);45 filtered-out content, including health-related
information (even the words 'breast cancer');46 and the danger of physical assault created
by the 'real name' policies of social platforms such as Facebook.47 These restrictions are
numerous: they range from the censorship of LGBTI content online by governments and
social media platforms, to discriminatory Internet domain name policies,48 to explicitly
discriminatory national legislation affecting LGBTI persons' right to free speech online,
such as the Russian anti-LGBTI censorship laws.49 Together, they represent a daily struggle
for LGBTI communities seeking to exercise their basic digital rights.
Furthermore, the rise of large-scale data collection and algorithm-driven analysis
targeting sensitive information poses many threats for LGBTI communities, who are
especially vulnerable to privacy intrusion due to their often hostile social, political and
even legal environments.50 Much publicly available data, such as Facebook friendship
information or individual music playlists on YouTube, is remarkably effective at revealing
individual sexual preferences.51 Predictions based on the online trail of information
we leave are argued to be more accurate than human friends'

43  Ben Wagner, Global Free Expression: Governing the Boundaries of Internet Content (Springer, 2016).
44  Laura DeNardis, The Global War for Internet Governance (Yale University Press, 2014).
45  'Russia Blacklists Deti-404, a Website for LGBT Teens', Washington Times (2016), available at www.washingtontimes.com/news/2016/oct/11/russia-blacklists-deti-404-a-website-for-lgbt-teen/ (accessed 31 December 2017).
46  Daniels and Gray, 'A Vision for Inclusion: An LGBTI Broadband Future', n. 38 above.
47  Andrew Griffin, 'Facebook to Tweak "Real Name" Policy After Backlash from LGBTI Groups and Native Americans', Independent, 2 November 2015, available at www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-to-tweak-real-name-policy-after-backlash-from-LGBTI-groups-and-native-americans-a6717061.html (accessed 7 October 2016).
48  M. Zalnieriute and T. Schneider, ICANN's Procedures and Policies in the Light of Human Rights, Fundamental Freedoms and Democratic Values (Strasbourg: Council of Europe, 2014) 15–35.
49  See Russian Federal Law for the Purpose of Protecting Children from Information Advocating for a Denial of Traditional Family Values 2013.
50  EU Fundamental Rights Agency, EU LGBTI Survey: Main Results (Luxembourg: Publications Office of the European Union, 2014) 14.
51  Michal Kosinski, David Stillwell and Thore Graepel, 'Private Traits and Attributes are Predictable from Digital Records of Human Behavior' (2013) 110(15) Proceedings of the National Academy of Sciences of the United States of America 5802–5, available at www.pnas.org/content/110/15/5802.full/ (accessed 28 September 2016).


assumptions about an individual's personality.52 If widely-traded advertising information
'correctly discriminates between homosexual and heterosexual men in 88% of cases',53 then
most Internet users should assume that all companies advertising to them can predict their
sexual orientation with a high degree of accuracy – and are likely to do so in order to sell
them products. The issues may go well beyond simple product advertising, and can potentially
include differential treatment in, for example, health and life insurance,54 employment,55 and
predictive policing (the use of data analysis to prevent crime before it happens).56
In stark contrast to the popular belief that such restrictions target only particular
individuals (e.g. LGBTI activists) or specific groups (LGBTI organizations, collectives)
in specific countries, these limitations of LGBTI rights online are often imposed at the
global level (though local restrictive measures also exist). The global scope of such
measures is possible because of the global nature of the Internet, and in particular its
technical structures and design.57
These invisible structures, often termed 'Internet architecture',58 are
mostly hidden from the eyes of Internet users; yet not only do they have enormous
political and economic implications, they often impede the exercise of fundamental
human rights in the digital environment without users being aware of such limitations.
Particular features of the Internet architecture, such as Internet protocols and domain
names, as well as algorithms and the standard contractual clauses of Internet intermediaries,
create favourable conditions for various public and private actors to restrict the fundamental
rights of LGBTI communities,59 to circumvent constitutional protections for free
speech, equality and non-discrimination or privacy, and to avoid the limitations of
international human rights law.60
Examples of various actors 'co-opting Internet infrastructure'61 to limit LGBTI rights
online are numerous. First, critical information for LGBTI communities is often
inaccessible due to Internet censorship policies in many countries, ranging from a blanket ban on
52  Wu Youyou, Michal Kosinski and David Stillwell, 'Computer-based Personality Judgments are More Accurate than Those Made by Humans' (2015) 112(4) Proceedings of the National Academy of Sciences.
53  Kosinski, Stillwell and Graepel, 'Private Traits and Attributes are Predictable from Digital Records of Human Behavior', n. 51 above.
54  Angela Daly, 'The Law and Ethics of "Self Quantified" Health Information: An Australian Perspective' (2015) 5 International Data Privacy Law 144.
55  Pauline T. Kim, 'Data-driven Discrimination at Work' (2016) 58 Wm. & Mary L. Rev. 857.
56  Zeynep Tufekci, 'Engineering the Public: Big Data, Surveillance and Computational Politics' (2014) 19 First Monday, available at http://firstmonday.org/ojs/index.php/fm/article/view/4901.
57  DeNardis, 'Hidden Levers of Internet Control', n. 30 above; DeNardis, The Global War for Internet Governance, n. 44 above.
58  See Monika Zalnieriute and Stefania Milan, 'Privatized Governance: Mediating Human Rights via Internet Architecture' Policy & Internet, forthcoming 2019.
59  Laura DeNardis and Andrea M. Hackl, 'Internet Control Points as LGBTI Rights Mediation' (2016) 19 Information, Communication and Society 753.
60  American Civil Liberties Union, ACLU 'Don't Filter Me' Initiative Finds Schools in Four More States Unconstitutionally Censoring LGBTI Websites (11 April 2011), available at www.aclu.org/news/aclu-dont-filter-me-initiative-finds-schools-four-more-states-unconstitutionally-censoring-LGBTI.
61  DeNardis, 'Hidden Levers of Internet Control', n. 30 above; DeNardis, The Global War for Internet Governance, n. 44 above.


LGBTI-related content in some countries, to censorship filters installed in public library
networks and public schools. Even in Western democracies with a strong reputation for
free speech, such as the United States, Internet censorship is hardly a minor issue. For example, the
American Civil Liberties Union is pursuing certain Internet anti-censorship initiatives, such
as the 2011 ‘Don’t Filter Me’ project aimed at the removal of web filters on school comput-
ers that are unconstitutionally blocking access to hundreds of LGBTI websites, including
sites that contain vital resources on subjects like bullying and student gay-straight alliances.
Empirical research suggests that the filters do not block access to comparable anti-LGBTI
websites that address the same topics with a negative connotation.62
Since a large part of the Internet architecture and infrastructure is coordinated and
owned by private actors, this ownership often allows them to set de facto global standards
on human rights online,63 including the rights of LGBTI communities. This phenomenon
in effect leads to privatized human rights governance, whereby private actors establish
boundaries on human rights online, such as freedom of expression and data protection
and privacy, in accordance with their business models.64 Indeed, it is private Internet
platforms such as Facebook, and not national governments or international treaties
or tribunals, that are setting the de facto global free speech standard on public nudity
(including banning pictures of female nipples) through their content moderation and
standard policies; determining the permissible levels of privacy and even physical security for LGBTI
individuals through Facebook's 'real name' policies;65 and deciding whether any LGBTI individual has a
'right to be forgotten'. Such decisions on what is permissible are made internally by the
sub-contractors of Internet platforms, such as Google and Facebook, and the guidelines
and criteria for such decisions are largely unknown to the public. As such, the basic tools
of accountability and governance – public and legal pressure – are very limited, with the
private actors holding most power66 over LGBTI rights online, resulting in a ‘privatization
of human rights’.67
In this privatized human rights governance, the opportunities for LGBTI communities
to communicate effectively, raise awareness, and access critical information and
knowledge about legal rights, health and community resources – opportunities praised
by the celebratory dominant narrative of the Internet – are in reality often intentionally
limited by various public and private actors. In addition to these intentional
limitations imposed by powerful actors, scholars and activists have also noted how the
Internet is transforming the ways that LGBTI individuals experience violence and social

62  ACLU, 'Schools in Four More States Unconstitutionally Censoring LGBTI Websites', www.aclu.org/news/aclu-dont-filter-me-initiative-finds-schools-four-more-states-unconstitutionally-censoring-LGBTI.
63  Wagner, Global Free Expression, n. 43 above.
64  Ibid.; Emily Taylor, The Privatization of Human Rights: Illusions of Consent, Automation and Neutrality, Paper No. 24 (Global Commission on Internet Governance, January 2016) 24, available at https://ourinternet-files.s3.amazonaws.com/publications/no24_web_2.pdf.
65  Griffin, 'Facebook to Tweak "Real Name" Policy', n. 47 above; Powers and Jablonski, The Real Cyber War, n. 34 above.
66  Catherine Buni and Soraya Chemaly, 'The Secret Rules of the Internet', The Verge, 13 April 2016, available at www.theverge.com/2016/4/13/11387934/internet-moderator-history-youtube-facebook-reddit-censorship-free-speech.
67  Taylor, The Privatization of Human Rights, n. 64 above.


exclusion.68 For example, online threats, stalking, bullying and sexual harassment are
just part of the online aggression LGBTI communities face, and it is a challenge to find
meaningful ways to respond to such threats.
The counter-narrative gradually emerging among Internet scholars69 thus suggests
that the architectural and technical features of the Internet can often be employed by both
public and private actors to intentionally limit LGBTI communities' rights to freedom of
expression and association, privacy, security and personal autonomy.

4.  MOVING BEYOND THE STATE OF THE ART


What becomes clear from these competing narratives is that digital technologies can
both facilitate and impede the exercise of fundamental rights by LGBTI communities. The
celebratory narrative of the overwhelmingly positive role of the Internet and technology
in advancing LGBTI rights should now be losing its unshakeable traction; it is time
to better understand the ways in which these rights can also be repressed using the very same digital
technologies and Internet infrastructure that replaced the traditional media gatekeepers.

4.1  Research Gap: Issues and Rationale

While concerns are growing that algorithmic monitoring and data classification techniques, Internet
filtering mechanisms and other restrictive measures may disproportionately affect
LGBTI persons' rights to privacy, free speech and other fundamental rights online,70
there is, so far, no systematic academic research on the subject.
The first review of existing research at the intersection of LGBTI issues and technology,
conducted by sociologists Daniels and Gray and released in June 2014, convincingly
demonstrated the general lack of research on LGBTI communities and the Internet.71
Nonetheless, there is a growing (albeit still very limited) amount of research among
sociologists, psychologists, and gender and media studies researchers on online LGBTI
identity formation,72 the use of online social media in the coming-out process,73
self-identification strategies,74 and LGBTI advocacy in the digital age.75

68  GLSEN, Out Online, n. 41 above.
69  DeNardis and Hackl, 'Internet Control Points as LGBTI Rights Mediation', n. 59 above.
70  See e.g., Carter Jernigan and Behram F.T. Mistree, 'Gaydar: Facebook Friendships Expose Sexual Orientation' (2009) 14(10) First Monday.
71  Daniels and Gray, 'A Vision for Inclusion: An LGBTI Broadband Future', n. 38 above, 25.
72  Kay Siebler, Learning Queer Identity in the Digital Age (Springer, 2016).
73  Y. Taylor, E. Falconer and R. Snowdon, 'Queer Youth, Facebook and Faith: Facebook Methodologies and Online Identities' (2014) 16(7) New Media and Society 1138–53.
74  Stefanie Duguay, 'Lesbian, Gay, Bisexual, Trans, and Queer Visibility Through Selfies: Comparing Platform Mediators Across Ruby Rose's Instagram and Vine Presence' (2016) Social Media and Society 1–12; Gray, Out in the Country, n. 42 above; Natalie T.J. Tindall and Richard D. Waters, Coming Out of the Closet: Exploring LGBTI Issues in Strategic Communication with Theory and Research (Peter Lang, 2013).
75  Eve Ng, 'Media and LGBT Advocacy: Visibility and Transnationalism in a Digital Age' in Routledge Companion to Media and Human Rights (2017) 309–317; Jean Burgess et al.,


In the civil society circles working in the field of Internet governance, the Association
for Progressive Communications (APC) has over the years actively sought to
raise questions and issues concerning the linkages between feminism, gender and the Internet, and
more recently has concentrated on emerging issues at the intersection of sexuality
rights and Internet rights. Back in 2008, the APC initiated a project called EROTICS
(Exploratory Research on Sexuality and the Internet), an on-the-ground
research project with a range of Internet users most affected by Internet
regulation measures, such as young women and individuals with diverse sexualities.
The project was conducted with local partners comprising feminist academics and
activists in Brazil, India, Lebanon, South Africa and the United States (in three phases:
2008–2012, 2012–2014 and 2017), and aimed at informing and guiding 'policy making for
a more accountable process of decision making' by providing analysis of 'the actual
lived practices, experiences and concerns of Internet users in the exercise of their
sexual rights'.76 This work, conducted by civil society activists, is in many ways
groundbreaking, and in certain respects focuses on LGBTI issues. Its anthropological focus
presents a good empirical starting point for a systematic political or legal academic
analysis of the subject.
Although scholars have devoted more in-depth attention to the study of data classification
techniques and individuals,77 or have focused on the enabling role of businesses,78
neither the potentially disproportionate impact of 'big data' and algorithmic monitoring,
nor their potential for reducing the marginalization of discriminated-against groups such as
LGBTI people, has received sufficient attention within academia. On the other hand, research
has started to emerge on the impact of Internet filtering of LGBTI-related content79 or
how LGBTI rights are mediated via Internet architecture and infrastructure.80 In particular,
some scholarship at the intersection of Internet governance and LGBTI rights has asked
'how various functional areas of Internet Governance . . . serve as control points over
LGBTI speech, identity and expression, and community formation'.81 Such directions are
promising; however, these issues have as yet been addressed only sporadically and do
not amount to systematic research on the subject. In particular, legal and political analysis
of LGBTI digital civil rights still seems to be a lacuna, as European Digital Rights
(EDRi), the European association of NGOs working on the promotion of civil

'Making Digital Cultures of Gender and Sexuality with Social Media' (2016) 2(4) Social Media + Society.
76  Kee, ://EROTICS: Sex, Rights and the Internet, n. 32 above, 7.
77  F. Brunton and H. Nissenbaum, 'Vernacular Resistance to Data Collection and Analysis: A Political Theory of Obfuscation' (2011) First Monday.
78  R. MacKinnon, Consent of the Networked: The Worldwide Struggle for Internet Freedom (New York: Basic Books, 2012).
79  Association for Progressive Communications, ://EROTICS: Sex, Rights and the Internet (2017), survey results available at www.apc.org/sites/default/files/Erotics_2_FIND-2.pdf (accessed 30 December 2017).
80  Monika Zalnieriute, 'The Anatomy of Neoliberal Internet Governance: A Queer Political Economy Perspective' in Dianne Otto (ed.), Queering International Law: Possibilities, Alliances, Complicities and Risks (Routledge, 2017); DeNardis and Hackl, 'Internet Control Points as LGBTI Rights Mediation', n. 59 above.
81  Ibid.
  Ibid.

Monika Zalnieriute - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:32AM
via New York University

WAGNER_9781785367717_t.indd 423 13/12/2018 15:25


424  Research handbook on human rights and digital technology

rights in the field of information and communication technology, ‘[is] not aware of any research in the field of LGBTI and digital civil rights’.82
A shortage of comprehensive political-legal analysis and reliable data, in turn, weakens
advocacy efforts and prevents evidence-based policy-making. Thus, today there are no public or private sector guidelines to protect discriminated-against groups, such as LGBTI people, from digital discrimination, let alone legal instruments addressing the
subject.83 The state of the art could be summarized as follows:

(1) on a substantive level, LGBTI communities are deprived of the meaningful exercise
of their digital rights and thus, digital discrimination contributes to their further
marginalization and invisibility;
(2) on a policy-making level, the voices of LGBTI communities are excluded from the discussions and decision-making procedures related to the development and implementation of Internet-related policies; thus, there are no means for them to influence and effect substantive change;
(3) on an analytical and research level, the understanding of issues at the intersection of LGBTI communities and technology is very limited; thus, there are no resources, capacity or data on which effective campaigning strategies could be formed and built.

This state of the art is rather unfortunate from the perspective of the LGBTI rights
movement, as well as the so-called digital rights community. While the former currently
has very limited knowledge and understanding of the implications of digital technologies,
the latter has often overlooked LGBTI issues and focused all its attention on the so-called
‘classical digital rights’ and issues of censorship, surveillance and cybersecurity. However,
LGBTI issues are closely interlinked with matters of surveillance, censorship and digital rights, although this linkage has for the most part remained implicit, if not disregarded. This does
not have to be the case.

4.2  What is Needed: Research and Advocacy

The current state of the art at the intersection of LGBTI rights and digital technologies
seems to necessitate a broad long-term scholarly and activist agenda of (1) changing
discriminatory policies so that LGBTI people can effectively exercise their basic rights
online; (2) increasing their participation in technology design debates and Internet
policy-making, and (3) understanding the problems at the intersection between LGBTI
communities and the Internet. This agenda, as for any social movement, requires the consistent development of strategies for breaking down prejudice, bias and the stigmatization of LGBTI communities, as well as the resources and capacity to participate in policy-making processes at both the national and international levels. These
issues will be discussed in turn.

82  E-mail communication with Brussels@EDRI.org of 19 August 2014.
83  See e.g., the World Wide Web Consortium, www.w3.org/2011/tracking-protection/drafts/tracking-dnt.html, for generalist guidelines.

Digital rights of LGBTI communities  425

First, as a short history of LGBTI rights in international politics and law reveals,
LGBTI communities have been rather successful in mobilizing resources and capacity in
fighting for their civil rights and struggling for justice. There is no reason why the LGBTI
movement should ignore the injustices and civil struggle in the digital sphere. Quite to
the contrary, if it wants to remain relevant and stand up for the experiences of LGBTI
individuals, it should start making its way into the digital technology domain, because,
increasingly, crucial public policy matters, including LGBTI rights, are mediated via such
technologies and digital infrastructures.
Second, in this context, it is important to underline that these digital technologies are
socio-technical systems rather than merely physical hardware and software and that these
systems are negotiated and re-negotiated by humans.84 Therefore, individual technical
solutions, such as cryptography, ‘virtual private networks’ or anti-filtering apps, while important as quick fixes, are not going to be sufficient on their own
to substantially improve LGBTI rights online. On the other hand, increasing LGBTI
communities’ participation in technology design debates and Internet policy-making
could lead to an increased awareness of the digital challenges by the LGBTI movement,
as well as facilitate its involvement in creating solutions for addressing those challenges.
The emergence of events such as the Global LGBTI Tech and Science Conference ‘Connecting Unicorns’,85 and of groups such as ‘Lesbians Who Tech’,86 suggests promising first steps in that direction. Even though these events and summits seem to be
oriented towards internal community-building rather than external advocacy or lob-
bying among other interests in technology debates and Internet policy-making, this is
nonetheless a start. Moreover, as evidenced by projects such as Take Back the Tech and EROTICS, in Internet governance circles the Association for Progressive Communications (APC) has been actively working on the inclusion of feminist perspectives and queer and LGBTI community voices in more mainstream Internet policy debates
and global events, such as the UN Internet Governance Forum. Thus, the opportunities
for collaboration with the representatives of the LGBTI movement and organizations
such as the International Lesbian, Gay, Bisexual, Trans and Intersex Association (ILGA) seem to
be gradually manifesting.
This brings us to the third point, that to increase this participation and form effective
alliances, we need to better understand the problems and issues at the intersection of
LGBTI communities and digital technologies. Considering that the dominant narrative
on the ‘democratization of information’ has long portrayed the Internet as a democratic and liberating medium for marginalized groups, the analysis of how LGBTI communities are prevented from participating in information exchange by, inter alia, Internet
censorship policies, discriminatory freedom of expression laws, and data monitoring
techniques emerges as a pressing scholarly project.
The following section will attempt to stimulate future research interest on this subject
by sketching out what such a research programme might entail, how the issues could be

84  G. Baxter and I. Sommerville, ‘Socio-technical Systems: From Design Methods to Systems Engineering’ (2011) 23(1) Interact. Comp. 4–17, at 4.
85  See http://connecting-unicorns.com/.
86  See http://lesbianswhotech.org/.


framed and what methodologies could be applied to illuminate the linkages and address
the ‘digital discrimination’ of LGBTI communities.

5.  A ROADMAP FOR THE LGBTI AND DIGITAL RIGHTS FRAMEWORK

The understanding of the issues at the intersection of digital technologies and LGBTI
rights has been rather limited to date. The subject is underexplored, and little shared language or disciplinary common ground exists within academia. There are many ways to
approach the issues at the intersection of LGBTI rights and digital technologies from
various disciplinary and interdisciplinary angles and ask questions from sociological,
psychological, philosophical and economic perspectives, among others. While these
perspectives are crucial in advancing knowledge and equality of LGBTI individuals, it is
well beyond the scope of this short chapter to meaningfully discuss, let alone engage with,
all these perspectives. However, one of the potential strategies to approach these issues
that might be particularly well-suited for human rights scholars and advocates could be
to combine the ‘LGBTI rights’ and ‘digital rights’ frameworks. Such a combination would allow us to reframe ‘digital rights’ as an LGBTI issue, broadening the spectrum of rights covered on the Internet so as to assure the equality of LGBTI communities both offline and online. The subsequent sections will attempt to briefly discuss the foundations of
such discourse, before discussing potential research questions and methodologies. The
following section briefly sketches the digital rights discourse.

5.1  Digital Rights Discourse

The rapid development of Internet access and services over the last two decades has led
to the growing centrality of what are often termed ‘Internet rights’ or ‘digital rights’. The term ‘digital rights’ is a relatively new concept of the last decade, referring to the human rights that allow individuals to access, use, and share digital content
online. Similar to LGBTI rights, digital rights are not in fact substantially new rights
per se, but rather refer to the application of classical existing rights, such as the right
to privacy, freedom of opinion, freedom of expression and assembly in the context of
digital technologies and the Internet. The importance of digital rights has recently been reaffirmed by the UN Human Rights Council, which has stated by consensus that ‘the
same rights that people have offline must also be protected online’ in resolutions adopted
in 2012, 2014 and 2016 (HRC Resolutions 20/8 of 5 July 2012;87 26/13 of 26 June 2014;88
and 32/ of 27 June 201689).

87  UN Human Rights Council, The Promotion, Protection and Enjoyment of Human Rights on the Internet, UN Doc. A/HRC/20/L.13 (5 July 2012), available at https://documents-dds-ny.un.org/doc/UNDOC/LTD/G12/147/10/PDF/G1214710.pdf?OpenElement (accessed 22 August 2016).
88  UN Human Rights Council, The Promotion, Protection and Enjoyment of Human Rights on the Internet, UN Doc. A/HRC/26/L.24 (26 June 2014), available at https://documents-dds-ny.un.org/doc/UNDOC/LTD/G14/059/67/PDF/G1405967.pdf?OpenElement (accessed 22 August 2016).
89  UN Human Rights Council, The Promotion, Protection and Enjoyment of Human Rights on


Of special importance in this context is the pronouncement of Frank La Rue, the Special
Rapporteur on the Promotion of the Right to Freedom of Opinion and Expression, who
said in his report to the UN Human Rights Council back in 2011 that:

the right to freedom of opinion and expression is as much a fundamental right on its own accord
as it is an ‘enabler’ of other rights, including economic, social and cultural rights, such as the
right to education and the right to take part in cultural life and to enjoy the benefits of scientific
progress and its applications, as well as civil and political rights, such as the rights to freedom
of association and assembly. Thus, by acting as a catalyst for individuals to exercise their right
to freedom of opinion and expression, the Internet also facilitates the realization of a range of
other human rights.90

Similarly, privacy is a core fundamental right, allowing individuals to live free from unjustified interference with their private lives by public and private authorities, and it thus contributes to maintaining the balance between the individual and the state. Back in 2009, Special Rapporteur Martin Scheinin also highlighted that
‘[t]he right to privacy is therefore not only a fundamental human right, but also a human
right that supports other human rights and forms the basis of any democratic society’.91
The importance and recognition of the role of privacy in the digital age is reflected in
numerous UN General Assembly Resolutions, as well as the newly established position of
a Special Rapporteur on Privacy.92
Thus it seems that indeed ‘[i]t has become customary to emphasize that individuals
enjoy the same rights online as they do offline’.93 While these principles appear to have
gained certain traction among the international community and seem to be widely supported by governments (as evidenced by the three UN Human Rights Council consensus resolutions), many countries have a long way to go to actually implement them.

5.2  A Dual Struggle: LGBTI Discourse on Digital Rights: Discrimination and Marginalization Online for Safeguarding Traditional Values

Drawing a parallel to a famous quote by the UN High Commissioner for Human Rights, the idea that digital rights are naturally applicable to LGBTI persons is neither revolutionary nor puzzling: it rests on the two fundamental principles of equality and non-discrimination (UNHCHR, 2012).94 While it is inevitable that these rights

the Internet, UN Doc. A/HRC/32/L.20 (27 June 2016), available at www.unwatch.org/wp-content/uploads/2016/06/L.20.pdf (accessed 22 August 2016).
90  La Rue, Report of the Special Rapporteur, n. 35 above.
91  Martin Scheinin, Report of the Special Rapporteur on the Promotion and Protection of Human Rights and Fundamental Freedoms While Countering Terrorism (28 December 2009) A/HRC/13/37.
92  See OHCHR, Report of the Special Rapporteur on the Right to Privacy (2015), available at www.ohchr.org/EN/Issues/Privacy/SR/Pages/SRPrivacyIndex.aspx (accessed 10 September 2016).
93  David Kaye, Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, A/HRC/32/38 (11 May 2016), available at https://documents-dds-ny.un.org/doc/UNDOC/GEN/G16/095/12/PDF/G1609512.pdf?OpenElement (accessed 22 August 2016).
94  United Nations High Commissioner for Human Rights, Born Free and Equal: Sexual Orientation and Gender Identity in International Human Rights Law, HR/PUB/12/06 (2012) 7.


are sometimes restricted by various public and private actors, such restrictions have a
particularly significant impact on marginalized communities, such as ethnic and racial
minorities, women and LGBTI communities, for whom exercising their rights in the
offline world is often a daily struggle. In search of new, more open and inclusive spaces,
LGBTI communities, along with other discriminated groups, have been more active
in relying on technology in connecting with peers and fighting for their civil rights.95
The liberatory role of the Internet for such communities has already been discussed
in section 2 above, and there is no need to repeat it here. However, it is important to
emphasize that in the context of digital rights, LGBTI communities face a dual struggle
for their rights online.
The first is a struggle, shared with all their fellow citizens, for the rights to freedom of expression, freedom of association, privacy and security online. But oftentimes LGBTI communities encounter an additional struggle, that of discriminatory treatment, as their right simply to express a non-heteronormative sexuality and
gender identity is treated differently from a comparable right to privacy and freedom
of expression of their fellow citizens with normative sexualities and gender identities
(e.g. the earlier mentioned Russian ‘Anti-Propaganda’ Law). For this reason, LGBTI
communities’ digital rights are more often restricted than the respective rights of their
fellow citizens. This suggests that a right to freely express one’s sexuality and/or identity
is not always regarded as part of a right to self-determination and free expression by
public and private actors (e.g. Apple banning gay characters from their online games),96
but often it is perceived as a political (and potentially destabilizing) act in itself. It is not
too difficult to see that LGBTI communities are thus often discriminated against online (in addition to the limitations on their digital rights) by virtue of their sexual
orientation or gender non-conformity.
This brings us to the last, but crucially important, aspect to be incorporated into the
framework of analysis of the digital rights of LGBTI communities: how are the restrictions on the exercise of digital rights by LGBTI communities often justified? As is well known to human rights lawyers, there may be certain justifications for interferences
with the exercise of human rights, including digital rights, which could be considered
necessary and proportionate in democratic societies, sometimes resulting in legally
acceptable limitations on the exercise of human rights. So, what are the justifications
often put forward by actors, both public and private, for limiting LGBTI communities’
digital rights?

5.3  Justifications for Restricting Digital Rights of LGBTI Communities

As suggested by Human Rights Watch’s latest global report, ‘traditional values are often
deployed as an excuse to undermine human rights’.97 Indeed, back in 2011, the United
Nations Human Rights Council stressed that ‘traditions shall not be invoked to justify

95  Daniels and Gray, ‘A Vision for Inclusion: An LGBTI Broadband Future’, n. 38 above.
96  Jack Flannagan, The Complete History of LGBTI Video Game Characters (2014), available at www.dailydot.com/parsec/gay-characters-video-games-history/ (accessed 15 September 2016).
97  See Graeme Reid, The Trouble with Tradition: When ‘Values’ Trample Over Rights (Human Rights Watch, 2013), available at www.hrw.org/pt/node/112847 (accessed 30 September 2016).


harmful practices violating universal human rights norms and standards’,98 thereby
acknowledging that traditions are often invoked to justify human rights violations. A
number of NGOs have expressed reasonable concerns and noted that ‘traditional values’
are frequently invoked to restrict access to human rights for those segments of society
who, from the conservative viewpoint or perspective of those in authority, challenge the
mainstream or fall outside the dominant norm.99
This rhetoric was recently deployed once again by opponents of the HRC’s Resolution on LGBTI Rights, adopted on 26 September 2014: led by Egypt and other members of the Organisation of Islamic Cooperation, they framed the Resolution as a form of cultural imperialism, an attack on Islam and a human rights violation in
itself.100 While post-colonialist and queer scholars do have a valid criticism about the
North-American/European-centric model of ‘LGBTI rights’ negating and marginalizing
the experiences from other cultural backgrounds and non-essentialist views on sexuality
and gender,101 such a queer critique is, however, scarcely related to a ‘cultural’ denial of
the mere existence of non-heterosexual individuals in those particular cultures.
Internet censorship policies and restrictions of LGBTI expression online are often defended by invoking the same cultural and religious ‘values’, which in effect regulate and preserve mainstream gender and sexual norms, as well as gender roles and
stereotypes. These censorship policies and practices are most often built around the mobi-
lization of anxieties and ‘dangers’ around sexual content and interaction. As documented
by free speech scholars and advocates, child pornography, protection of children from
sexual dangers, and increasingly the protection of children from any ‘wrongful informa-
tion’ are primary examples of the enormous opportunities for ever-expanding mission
creep. This creep often results in filtering out crucial health-related information in public
spaces, such as libraries and schools, and prevents people from accessing information on
‘breast cancer’ because the word ‘breast’ is deemed ‘indecent’ by certain Internet filtering
programs.102 It would be naïve to think that this is the case only in countries where LGBTI
rights are not respected or even criminalized, because numerous examples of censorship
of LGBTI-content online suggest that this is just as prevalent in the Western liberal
democracies103 as it is in the conservative Eastern European, African, Asian or Middle
Eastern countries.

98  Resolution adopted by the Human Rights Council on Promoting Human Rights and Fundamental Freedoms through a Better Understanding of Traditional Values of Humankind, GA Res. 16/3, UN GAOR, 16th Sess., Agenda Item 3, UN Doc. A/HRC/RES/16/3 (8 April 2011).
99  See Joint Written Statement Addressed to the Advisory Committee Stemming from Human Rights Council Resolutions: Promoting Human Rights and Fundamental Freedoms through a Better Understanding of Traditional Values of Humankind, UN GAOR, 7th Sess., Agenda Item 3(a)(viii), UN Doc. HRC/AC/7/NGO/1 (8–12 August 2011).
100  See Interview with the representative of Saudi Arabia on 26 September 2014, ‘LGBTI Rights Resolution Passes United Nations Human Rights Council in Lopsided Vote’, available at www.buzzfeed.com/lesterfeder/LGBTI-rights-resolution-passes-united-nations-human-rights-co#102cptl/ (accessed 30 September 2016).
101  Otto, Queering International Law, n. 10 above.
102  Daniels and Gray, ‘A Vision for Inclusion: An LGBTI Broadband Future’, n. 38 above.
103  For censorship by Internet companies in the Western liberal democracies, see Duguay, ‘Lesbian, Gay, Bisexual, Trans, and Queer Visibility Through Selfies’, n. 74 above; M. Olszanowski,


Not surprisingly, however, policy debates around the development of such censorship
mechanisms seldom include the perspectives of those who need that ‘special protection’
– children and youth attempting to access vital information online.104 In the end, LGBTI youth and children are the very same youth and children who are the intended beneficiaries of these policies and practices. What is needed, then, is a more proactive
approach from LGBTI communities and their leaders to make sure that their voices are
heard in such debates around technology design.
Although the Association for Progressive Communications has made attempts to amplify the voices of women and gender non-conforming people in Internet governance circles, more needs to be done to make sure that these debates around technology design are no longer dominated solely by rich white males from Silicon Valley (even
if some of them are openly gay, such as Tim Cook), or the rich white males from the so-
called global Internet governance institutions, such as Internet Corporation for Assigned
Names and Numbers (ICANN) or the Internet Engineering Task Force. Indeed, if issues around sexual orientation and gender are to occupy more than a rhetorical role in today’s discussion of the Internet and digital technologies,105 more research-informed advocacy and pressure are needed to alter the rhetorical status quo.
The final section of this chapter will attempt to sketch out some potential research
designs, questions and methods that could illuminate the research and advocacy pro-
gramme on digital rights of LGBTI communities.

5.4  Advancing a New Research Programme

Just as there are many disciplinary perspectives on digital technologies and LGBTI rights,
there are also numerous ways to approach the analysis of the digital rights of LGBTI
communities, with many different research designs, methods and research questions.
The parameters of this short chapter permit only a brief treatment of a few of these, but
nonetheless it is hoped that these suggestions will prove useful for researchers interested
in advancing the digital rights of LGBTI communities.

5.4.1  Potential research approaches


Quite evidently, any research design on the digital rights of LGBTI communities must
approach digital technologies as socio-technical systems,106 looking at the socio-political

‘Feminist Self-imaging and Instagram: Tactics of Circumventing Censorship’ (2014) 21 Visual Communication Quarterly 83–95, available at doi: 10.1080/15551393.2014.928154.
104  Divina Frau-Meigs and Lee Hibbard, Education 3.0 and Internet Governance: A New Global Alliance for Children and Young People’s Sustainable Digital Development (CIGI, 2016); Divina Frau-Meigs, ‘Taking the Digital Social Turn for Online Freedoms and Education 3.0’ (2017) The Routledge Companion to Media and Human Rights 114.
105  See e.g., workshop proposal by the APC for the Internet Governance Forum, ‘How Can Internet Policy-Making Support LGBT Rights?’, available at www.intgovforum.org/cms/wks2015/index.php/proposal/view_public/47 (accessed 30 December 2017). See also Avri Doria, Women’s Rights, Gender and Internet Governance, APC Issue Paper (2015), available at www.apc.org/en/pubs/women%E2%80%99s-rights-gender-and-internet-governance (accessed 30 December 2017).
106  Baxter and Sommerville, ‘Socio-technical Systems: From Design Methods to Systems Engineering’, n. 84 above, 4.


(and legal) implications of the technology for these communities. To understand those
implications, the research programme, depending on one’s political and theoretical posi-
tions, may incorporate:

(1) theoretical frameworks developed in the fields of Internet/technology studies (e.g. science and technology studies (STS));107 and
(2) frameworks in feminist and sexuality studies (e.g. radical or liberal feminist theory,
gender studies or queer theory) to better account for the particular struggles and
specific situation of LGBTI communities.

5.4.2  Potential methods and techniques


The research design regarding the digital rights of LGBTI communities may thus, depending on one’s focus and approach, combine empirical social scientific and/or humanities analysis with a normative legal assessment. The research programme could rely
on various methodologies, coming from social sciences (qualitative and quantitative
techniques);108 humanities (critical discourse analysis, digital methods);109 computer
science (computational methods);110 and the legal discipline (doctrinal and comparative legal
methods).111
Relevant examples of such research designs, combining critical approaches from STS and feminist theory with various methods and research techniques, can readily be found in feminist scholarship on technology.112 Striving for normative (legal) change and reform on the subject, however, requires supplementing these critical theories with legal analysis of human rights and discrimination in the digital age (e.g. Barocas and Selbst, 2016, on data-driven discrimination; or Henry and Powell, 2015, on sexual violence online).113
How could these different methods be useful for research on digital rights of LGBTI
communities? To give an example, qualitative interviews with LGBTI activists could test
community awareness of the risks connected to digital technologies and identify a list of
perceived threats from the point of view of LGBTI communities themselves. Such qualita-
tive data could, for instance, be subjected to critical content analysis and/or Foucauldian

107  Francesca Musiani, ‘Practice, Plurality, Performativity, and Plumbing: Internet Governance Research Meets Science and Technology Studies’ (2015) 40(2) Science, Technology, and Human Values 272–86.
108  Michael Keating and Donatella della Porta, Approaches and Methodologies in the Social Sciences: A Pluralist Perspective (2008); Lawrence W. Neuman, Social Research Methods: Qualitative and Quantitative Approaches (2002).
109  Terry Locke, Critical Discourse Analysis (Bloomsbury Publishing, 2004); Richard Rogers, The End of the Virtual: Digital Methods (Amsterdam University Press, 2009) vol. 339.
110  Claudio Cioffi-Revilla, Introduction to Computational Social Science (Springer, 2014).
111  Mike McConville (ed.), Research Methods for Law (Edinburgh University Press, 2007).
112  For an overview of these approaches, see Judy Wajcman, ‘Feminist Theories of Technology’ (2010) 34(1) Cambridge Journal of Economics 143–52; Wendy Faulkner, ‘The Technology Question in Feminism: A View from Feminist Technology Studies’ (2001) 24(1) Women’s Studies International Forum.
113  S. Barocas and A.D. Selbst, ‘Big Data’s Disparate Impact’ (2016) 104 California Law Review 671, available at www.ssrn.com/abstract=2477899; Nicola Henry and Anastasia Powell, ‘Embodied Harms: Gender, Shame, and Technology-facilitated Sexual Violence’ (2015) 21(6) Violence Against Women 758–79.


discourse analysis.114 As an alternative, such data could also be subjected to computational methods115 to uncover which of the community’s perceived threats are in fact real threats,
and whether any of them could be dealt with on an individual level by adopting socio-
technical strategies (e.g. through measures to pre-empt or limit profiling, such as installing
web browsers that obfuscate the computer’s IP address), and which threats need a more
systemic approach and thus need to be addressed through policy. Finally, relying on legal
techniques, a research programme could focus on producing recommendations on how
to strengthen the digital rights of LGBTI communities in national and transnational
contexts. To this end, legal analysis of the international legal framework ensuring LGBTI
rights, national legislation, relevant case law, policy documents and reports would be
necessary to critically assess both the current legal landscape, its shortcomings, as well as
viable policy and legal options for the future.

5.4.3  Societal relevance of the research programme


Robust research in the area might make it possible to put significant pressure on the corporate giants of the Internet to make their policies more LGBTI-friendly, as well as on those members of the international community who have been lagging in their recognition of LGBTI rights for many years. Of course, this pressure will not immediately lead to the acceptance and
recognition of LGBTI communities’ needs and rights in the digital sphere or entirely stop
the censorship of LGBTI-related content online, but it will make it more difficult for those denying LGBTI individuals their basic human rights online to sustain their policies and justify them on grounds such as the alleged harm of LGBTI-related content to children. Raised awareness of the social costs of LGBTI censorship technology, and of the privacy needs of LGBTI individuals, should mean that at least private actors
might be less prepared to engage in activities directly undermining the digital rights of
LGBTI individuals.
Thus, the innovative potential of such a research programme, as well as its practical
normative necessity and relevance, make the study of digital rights of LGBTI communi-
ties a meaningful choice for scholars working at the intersection of human rights and
digital technologies.

6. CONCLUSIONS

The success of the LGBTI movement, especially for those of us in Western Europe and
North America where initial signs of genuine equality appear to be manifesting, might
make us slightly sceptical about certain pressing issues, the stakes of which are not
always immediately intelligible to us. As emphasized by Renato Sabbadini, Executive
Director of the International Lesbian, Gay, Bisexual, Trans and Intersex Association,
LGBTI activists in such countries predominantly feel grateful towards the public institutions ‘which
sanctioned the social and cultural changes turning the most unspeakable and shameful of

114
  Michael Arribas-Ayllon and Valerie Walkerdine, ‘Foucauldian Discourse Analysis’ (2008) Sage Handbook of Qualitative Research in Psychology 91–108.
115
  Cioffi-Revilla, Introduction to Computational Social Science, n. 110 above.

Monika Zalnieriute - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:32AM
via New York University

WAGNER_9781785367717_t.indd 432 13/12/2018 15:25


Digital rights of LGBTI communities  433

human orientations into something almost universally accepted, even banal’, and the idea
that those very public institutions might be censoring LGBTI speech online or collecting
unlimited digital traces of LGBTI communities is ‘too disturbing to be entertained’.
As Sabbadini noted, ‘No one likes to spoil a party, where people are celebrating a
victory’.116 However, this chapter has attempted to demonstrate that the party needs to be
spoiled if LGBTI communities are to exercise their human rights both offline and online,
just like everyone else. To an increasing extent, LGBTI rights are mediated via digital
technologies and infrastructure, and there seems to be no reason why digital rights should
not be included on the agenda of the LGBTI movement. In the end, the idea that digital
rights are naturally applicable to LGBTI persons is neither revolutionary, nor puzzling: it
rests on the two fundamental principles of equality and non-discrimination.117

116
  Panel on ‘LGBT+ Communities and Digital Rights’ at the Computers, Privacy and Data Protection Conference, Brussels, Belgium, 2015, available at www.youtube.com/watch?v=TUMh4C_smJ4 (accessed 31 December 2017).
117
  UNHCRC, Born Free and Equal, n. 94 above, 7.



Index

Ablon, L. 115 manufacturers’ liability, self-modification


AccessNow 94, 152, 203, 333 294
activism 48–9, 51, 67–70, 86–7, 204–7, 217–18, multiagent systems and traceability of
348–9, 388 communications 294–5
Adams, P. 28 service providers 296
Adrian, D. 314 user-consumer, agent supervision 288
Agrafiotis, I. 130 user-consumer, fault-based 287–8
Åhäll, L. 335 user-consumer, strict liability 288–93
Akdeniz, Y. 347–8, 374 artificial agents, software with predetermined
Albin-Lackey, C. 362 behaviour 277–87
Albury, K. 388 consumer expectation test 282
Alces, P. 282 foreseeable risks 283
Aldred, J. 55 manufacturer liability 279–86
Aldrich, R. 40 manufacturers liability under German Civil
Allen, T. 296 Code, section 823(1) 285–6
Alper, M. 383 market and product monitoring 285–6
Amoretti, F. 175 product defect, manufacturing and/or design
Andersen, L. 318 281–4
Angle, S. 384, 387 product liability, burden of proof 284–5
animal keeper liability 289–91 risk/utility test 282
Aoyama, I. 392 service providers 286–7
argumentation maps see socio-technical software as a product 280–81
systems, liability and automation, user-operator, strict liability 278–9
argumentation maps, Legal Case user/operator, fault-based liability 277–8
methodology and liability risk analysis Ash, T. 64
Aroldi, P. 388, 394 Ashenden, D. 79
Arquilla, J. 78 Association for Progressive Communications
Arthur, C. 69 (APC) 14, 366–7, 423, 425
artificial agents 268–98 Aust, S. 230
human rights affected 269–70 Austin, G. 55, 56, 57
human rights legal framework 269–76 Austin, L. 214
legal personhood 296–8 Austin, P. 56
legal personhood, corporations 297 Australia, Toonen v. Australia 415
legal personhood, vicarious liability automation
principles 296–7 socio-technical systems see socio-technical
remedy right 270–75 systems, liability and automation
remedy right and treatment of private state cybersecurity interventions and harm
parties 272–5 potential 140–42
tort law and extra-contractual liability autonomous artificial agents see artificial
275–96 agents, autonomous artificial agents
artificial agents, autonomous artificial agents autonomous vehicles, cybercrime 102–3
287–96 Avri, D. 430
breach of post-sale duties 295 Axelrod, R. 79
employees and agents’ tortious acts 291–3
liability of an animal keeper 289–91 Bachan, K. 383
manufacturers’ liability 293–5 Badahdah, A. 26
manufacturers’ liability, design safety Bakir, V. 201
293–4 Ball, K. 202


Bamberger, K. 356 Boulanin, V. 78


Bamford, J. 41, 196 Boulet, G. 151
Banaji, S. 383, 388 Bovens, M. 251
Bannink, R. 389 Bowden, C. 21
Barbovschi, M. 380, 381 Bowen, S. 18
Barlow, J. 8 Bradley, P. 255
Barnard-Wills, D. 79 Braga, M. 220
Barnett, A. 19 Bramley, G. 398
Barocas, S. 359, 431 Brandeis, L. 82
Barrett, B. 85 Bratus, S. 304
Barrett, E. 78 Brend, Y. 220
Barron, A. 54, 55 Brenner, S. 100, 103, 105
Bartlett, J. 325 Bridy, A. 67, 68
Bauman, Z. 34, 50–51 Broadhurst, R. 103
Baxter, G. 425, 430 Brodowski, D. 98–112
Beard, T. 255 Broeders, D. 87
Bechmann, A. 359 Bromley, M. 300, 302, 306, 313
Belling, D. 291, 292 Bronskill, J. 220
Benedek, W. 6, 118, 347, 364–75 Brown, I. 129, 325, 346, 347
Benjamin, H. 7 Brown, J. 394
Bennett, C. 195, 217 Brunton, F. 423
Berger, C. 22 Buchmann, D. 382
Berka, W. 275 Buckingham, D. 383, 388, 398, 400
Bernal, V. 25 Bulger, M. 376, 380, 381, 382, 401
Bernd Nordemann, J. 99 Buni, C. 421
Berríos, L. 389 Burchard, C. 100, 110
Berson, I. and M. 388 Bureau, B. 220
Bertot, J. 174 Burgess, J. 422–3
Best, M. 172 Burke-White, W. 94
Betz, D. 79 Burkhart, P. 99
Bhat, C. 389 Burns, J. 395
Biermann, K. 229 Burrell, J. 29
Big Data mining see state cybersecurity Burton, P. 382
interventions and harm potential, bad Butler, B. 9
behaviour identification and prediction by Buzan, B. 37, 77, 79, 320
algorithms (Big Data mining) Byrne, J. 376, 377, 389, 391, 398
Bigo, D. 33–52, 320, 327
Bijker, W. 324 cable splitters 147–9, 152, 211–12
Binney, W. 19 Cahn, A. 281
Birnhack, M. 63 Calabresi, G. 279–80
Blau, J. 56 Calo, R. 269
Blumler, J. 175 Campos, R. 383
Bobkowski, P. 394 Canada 83, 204–7
Bocas, P. 12 repatriation of domestic Internet traffic see
Boehm, F. 356 surveillance reform, repatriation of
Bohlin, E. 165 Canadian domestic Internet traffic
Boldrin, M. 54 see also national security, Five Eyes Internet
Bolton, R. 9 agencies
Bond, R. 320 Cannataci, J. 15, 138–9, 346, 347, 369
boomerang routing 212–14, 216 Carbonell, J. and M. 181
Booth, R. 21 Carpenter, M. 57
Borg, S. 335 Carrapico, H. 68
Born, H. 224, 245 Carter, D. 26
Böse, M. 102 Casey, E. 105
Bossong, R. 332 Castells, M. 417


Castro, D. 8 UN Convention on the Rights of the Child


Cavoukian, A. 95 (UNCRC) 376, 377, 379, 381–5
Cawley, R. 165 UNICEF 380, 384, 395
Cendoya, A. 117 China, export controls of surveillance
censorship 7, 323, 326, 419–21, 424, 427–8, technologies 308–12
429–30 Chirico, F. 165
self-censorship 199, 201 Chopra, S. 268, 269, 276, 281, 282, 284, 287,
Chakrabarti, S. 17–18, 21 288, 289, 290, 294, 296, 297, 298
Chandler, D. 301, 306, 313 Cilesiz, C. 400
Chanliau, M. 95 Cioffi-Revilla, C. 431, 432
Chassiakos, Y. 394 civil rights and freedoms, children’s rights
Chawki, M. 98 385–8, 395
Chemaly, S. 421 civil society involvement 12–14, 15–16, 21, 92,
Childers, S. 255 199–200, 209, 219, 302, 316
children’s rights 376–410 Clapham, A. 6, 272, 273
access limitations 382–3 Clement, A. 197, 211, 212, 216
accountability issues 378 Clifford, B. 412
age-blindness of Internet 378 cognitive-systems engineering 250
biometric devices and health 395 Cohen, F. 324
civil rights and freedoms 385–8, 395 Cohen, J. 358
Council of Europe Strategy for the Rights Cole, H. 396
of the Child 379 Coleman, J. 383
cyber-bullying 379, 390 Coleman, S. 175, 399
data protection 387–8, 391–2 Collin, P. 382, 385, 400
diet and fast-food marketing 394 Comninos, A. 348
and digital media 376–80, 382 complacency concerns 16–19
disability, basic health care and benefits consumer expectation test, and software with
394–7 predetermined behaviour 282
education, leisure and cultural activities Contissa, G. 247–67
397–400 Conway, M. 324, 325
evidence and children’s views 380–400 Cook, P. 389
family environment and alternative care Coolsaet, R. 323
392–4 cooperation
future potential for wellbeing 395–7 cybercrime 108–10
gender discrimination 383 Europol, EU Internet Referral Unit and
Global Kids Online project 395, 398 online radicalization 333–4, 338–9
International Institute for Child Rights international intelligence cooperation rules
and Development (IICRD), Child 236, 238, 241
Protection Partnership (CPP) 391 national security 34–5, 37, 41
LGBTQI and ‘Growing Up Queer’ project coopetition and national security 37–8, 42
396 copyright see digital copyright
Moraba mobile game 391 Corner, E. 325
non-discrimination, children’s best interests Cornish, P. 77
and optimum development 381–3 corporations
One Laptop per Child initiative 397 corporate world harm, and mass state
online violence 388–92, 394–5 surveillance 202–3
parental responsibilities 388, 392–3 cybersecurity and private company
right to be heard 383–5 involvement 82, 87–8, 92
right to freedom of expression 386 human rights commitment 348–9, 352–7,
right to privacy 387–8, 391 359–61
risk responses 391–2, 397 legal personhood 297
sexual exploitation and WeProtect Global socio-technical systems, liability and
Alliance 379 automation 252–4
sexual/pornographic content effects 395 see also manufacturer liability
transnational and cultural concerns 378–9 Cortesi, S. 383, 403


Cosco, V. 195 criminal investigations in cyberspace 105–10


Cotino, L. 180 definition 98
Cotter, T. 64 encryption and backups 101, 106, 107
Couclelis, H. 26 fight against and political leverage 98–9
Council of Europe 14, 370–72, 374 future research 110–11
Cybercrime Convention 147, 150–51, 154 and human rights, protection and limitations
Data Protection Convention 149–50 99–101, 110–11
Resolution on the Right to Internet Access ICT regulation 103–4
171, 172 international cooperation and enforcement
Rule of Law on the Internet 90–91 jurisdiction 108–10
Strategy for the Rights of the Child 379 investigatory powers 100
Couture, S. 199 jurisdiction to prescribe to extra-territorial
Cowls, J. 325 situations 102–3
Crang, M. 29 programmability challenges of ICT 100–101
Crenshaw, M. 322 remote forensic software 107
crime see cybercrime; cybersecurity subscriber identification and
Critical Internet Resources (CIRs) telecommunication data retention 107–8
management, and cybersecurity 119–20 cybersecurity 73–97
Cruft, R. 171 civil society organizations 92
Crutcher, M. 30 company self-regulation 92
Cuba, Internet access 180–94 cross-border, law enforcement access to data
Constitutional and political rights 184, 186, 85–6
187–93 cyberwar concerns 78–9
education promotion 187, 188–9 definitions 75–80
freedoms of expression and of the press EU General Data Protection Regulation 94
180–81, 182, 184 EU Network and Information Systems
future direction 194 (NIS) Security Directive 93–4
as human right 180–83, 184, 187–93 freedom of opinion and expression and free
information technologies, commitment to flow of information 86–9
increasing use of 190–92, 193 government surveillance and intelligence
jurisdictional intervention restrictions 184–5 gathering practices 83–5
navigation rooms and connectivity costs hacking 81, 87, 131, 138, 147–9, 152, 269–70
185–6 human rights 80–92
participation in the public life of citizens 188 human rights, threat to 81–8
penetration rate ranking 183–4 information security and CIA Triad
political, legal and social issues 183–6, 187 (confidentiality, integrity and
professional involvement and unification 192 availability) 75–7
right of association limitations 192–3 Internet protection from state interference
state provision as public service 185–6 87
telecommunications sector investment 190 mandatory anti-encryption regimes 84–5
transportability of services improvement 188 manipulation of social media and traditional
Cullen, C. 220 propaganda tools 82
cultural issues 378–9, 413–14, 429 national security 73–4, 77–8
Cutié, D. 185 privacy rights 82–6
cyber-bullying, and children’s rights 379, 390 private company involvement 82, 87–8
cybercrime 98–112 securitization and militarization 78–80,
attribution issues 102–3 87–8
autonomous vehicles 102–3 state interventions see state cybersecurity
CIA offenses (confidentiality, integrity and interventions and harm potential
availability) 104 territorial borders extension and state
coercive measures 106–7 sovereignty 79–80
computer- and content-related offenses terrorism, anti-terror legislation 83, 87–8,
104–5 90, 95, 96
conflict of jurisdiction 102 US International Strategy for Cyberspace
covert telecommunication surveillance 106–7 94–5


US NSA and Snowden allegations 83–5, foreign-foreign communications data see


90 Germany, intelligence law reform,
see also Internet; national security; foreign-foreign communications data
surveillance surveillance
cybersecurity protection and international law LGBTI communities 419–21, 423–4
113–28 metadata collection, Germany, intelligence
aggression ban 123 law reform 232–3, 243
and common interest 117–19 subscriber identification, cybercrime 107–8
criminal misuses of computers and networks see also surveillance
114–15 data protection 58, 213–14, 243, 373, 387–8,
Critical Internet Resources (CIRs) 391–2
management 119–20 Datta, B. 18
customary international law 122–4 Davidson, C. 398
freedom of expression 118 Davidson, J. 389
good neighbourliness (no harm) principle Davies, R. 175
123–4 Davies, S. 219
human error 113 De Hert, P. 12, 151, 157–79
Internet freedom comparison 116 De Pauw, L. 383
Internet and international law 121–2 Deibert, R. 25, 44, 73, 78, 80, 81, 118, 347, 348,
multi-stakeholder approach 126 352
national cybersecurity strategies 116–17 Dekker, S. 252
non-intervention principle 123 Della Porta, D. 322, 431
peaceful settlement of disputes principle 123 Demchak, C. 79
precautionary principle (due diligence) DeNardis, L. 417, 419, 420, 422, 423
124–6 Desai, T. 359, 363
ransomware 116, 127, 344 DeShano, C. 402
sovereign equality principle 123 Devriendt, L. 26
and state sovereignty 120 Dewald, A. 105
cyberspace regulation 24–32 Diffie, W. 314, 315
abstract space versus network 29–30, 31–2 digital copyright 53–71
code/space and coded space distinction 29 citizen participation in law-making 66–70
digital divide issues 31 copyright as human right 54–7
grammatical rules associated with the copyright-protected content, state
Internet 28, 32 cybersecurity interventions and harm
ontic role 28, 30–31 potential 141–2
power relationships between different groups data protection and privacy 58
of people 31 EU Anti-Counterfeiting Trade Agreement
spatial metaphor 26–8, 30 68–70
unregulated activity challenges 30–31 EU Enforcement Directive 58–9
Czychowski, C. 99 EU Information Society Directive 58
European Convention on Human Rights
Daase, C. 320 (ECHR) 56–7
Dachwitz, I. 333–4 fair balance and European Court of Justice
Dahya, N. 383 decisions 58–61
Dalby, S. 312–13 freedom of expression 58
Daly, A. 64, 420 Internet’s private superpowers 64–5
Daniels, J. 25, 418, 419, 422, 428, 429 IP blocking regimes 61
Darroch, J. 139 Technological Prevention Measures (TPMs)
Darwin, C. 41 58, 60
Daskal, J. 85, 86 US Constitutional protection of human
data collection and retention 20 rights 56, 61–6
Big Data mining see state cybersecurity US Digital Millennium Copyright Act
interventions and harm potential, bad (DMCA) 64–6
behaviour identification and prediction US First Amendment and freedom of
by algorithms (Big Data mining) expression 61–6


US First Amendment and freedom of EROTICS research project, LGBTI


expression, fair use doctrine 63, 64, communities 423, 425
67–8 Eroukhmanoff, C. 323
WTO TRIPS Agreement 54–5, 58 Estonia, Internet access 161
digital media, and children’s rights 376–80, EU
382 Anti-Counterfeiting Trade Agreement
digital reason of state and transnational 68–70
collaboration 34–5, 37, 41 Charter of Fundamental Rights 132, 160,
digital rights 8–9, 14, 17, 66, 352, 426–7 349
European Digital Rights (EDRi) 60, 142, Copyright Directive 141
151, 364, 423–4 Council of Europe see Council of Europe
and international organizations see Data Protection Directive 373–4
international organizations and digital digital copyright, citizen participation in
human rights law-making 68–70
Ranking Digital Rights Corporate Dual-Use Export Control Regulation 302,
Accountability Index 352, 355, 360 305–6
Dinh, T. 388, 392 East StratCom Task Force 333
discrimination and censorship 7, 323, 326, Enforcement Directive 58–9
419–21, 424, 427–8, 429–30 External Cooperation Actions Addressing
Doane, D. 354 Terrorism, Organised Crime and
Dodge, M. 29, 30 Cybersecurity 90
Dombernowsky, L. 318 Fundamental Rights Agency, Passenger
Dombrowski, P. 79 Name Record (PNR) data 49, 374
Domínguez, J. 186 General Data Protection Regulation 94
Donahoe, E. 16 General Export Authorization (EU GEA)
drone use 48, 143–4 for cryptographic products 315
dual-use goods and technologies, export Information Society Directive 58
controls of surveillance technologies 307, intellectual property protection as
309–10, 316, 317 fundamental right 57
Dubois, E. 205 Network and Information Systems (NIS)
Duguay, S. 422, 429 Security Directive 93–4
Dunleavy, P. 157, 174 New Skills Agenda for Europe 165–6
Dunn Cavelty, M. 73–97, 115 Organization for Security and Co-operation
Dutton, W. 205 in Europe (OSCE) 316
‘right to be forgotten’ and data protection
Earl, J. 348 373
Eberl-Borges, C. 291 Universal Service Directive 159–60
Edelman, B. 215, 216 EU Internet access 157–79
Edkins, J. 321 added value as fundamental right 171–2
Edwards, B. 15 broadband and digital literacy, universal
eGovernment policies and EU Internet access availability promotion 164–7
164, 166, 174–5 constitutional traditions common to
Eisgruber, C. 64 Member States 168–9
Elgin, B. 303 economic benefits 157–8, 165
Elizalde, R. 190 eGovernment policies’ impact 164, 166,
Elizondo, M. 181 174–5
Elkin-Koren, N. 54, 64 Electronic Communications Code 163
Elmer, G. 346 as fundamental right 167–71
Elsayed-Ali, S. 16 future direction possibilities 178–9
encryption 39, 50, 84–5, 101, 106, 107, 134–9 Gigabit Society and right to be connected
Engelhardt, T. 268–98 161–3
English, L. 25 legal issues involving the Internet 169–70,
Epstein, S. 412 171, 176–7
Erdur-Baker, O. 390 limitations to rights of access 175–7
Eriksson, J. 5, 79 minimum speed requirements 160


Open Internet and Net Neutrality in Europe European Parliament Sakharov Prize network
resolution 168 373
potential scope and margin of appreciation Europol, EU Internet Referral Unit and online
173–4 radicalization 319–44
Public Internet Access Points 166 censorship strategy 323, 326
regulation 159–60, 161 Check the Web (CTW) project 327
telecommunications sector funding 165 cooperation with Internet industry 333–4
European Arrest Warrant (EAW) 108–9 countering radicalization 320–26, 333
European Convention on Human Rights EU East StratCom Task Force 333
(ECHR) 132, 149, 349–50, 371–2 European Counter-Terrorism Centre
European Court of Human Rights (ECtHR) (ECTC) 328, 338
Ahmet Yıldırım v. Turkey 169, 176 free speech effects 323–4, 331, 343–4
Amann v. Switzerland 106 future direction 343–4
Anheuser-Busch v. Portugal 57 and illegal immigration 329, 343–4
Appleby v. United Kingdom 361 Internet take-down practices and upload
B v. France 414 filters 341, 343
Bykov v. Russia 106 metaphors to discuss radicalization 335
Delfi v. Estonia 169, 275, 372 mission creep process 333
Editorial Board of Pravoye Delo and Shtekel monitoring and censoring online content,
v. Ukraine 371 justification for 323
Goodwin v. United Kingdom 414 online content searches and analysis 328–31,
I v. United Kingdom 414 332–3, 343–4
Jankovskis v. Lithuania 169–70, 176–7 online extremism, countering 324–6
Kalda v. Estonia 177 police cooperation and information sharing
Malone v. United Kingdom 106 338–9
MTE v. Hungary 169, 275, 372 policy framing 334–42
Schrems v. Data Protection Commissioner 9 political context 331–4
Szabo and Vissy v. Hungary 85 private actors’ partnership 340, 344
X and Y v. The Netherlands 99 propaganda spread 324–5, 335–6
Yildirim v. Turkey 118, 372 radicalization as contested concept 322–3
European Court of Justice (ECJ) radicalization as a process 334
Base Company and Mobistar v. Ministerraad social media focus 329
163 support referrals 329
Digital Rights Ireland 108, 137, 374 Syria Strategic Communications Advisory
GS Media BV v. Sanoma Media Netherlands Team (SSCAT) 333
60–61 technology and prevention of the
Mario Kosteja Gonzales and AEPD v. Google unpredictable 340–42
Spain 373 technology as security threat 337–8
Maximillian Schrems v. Data Protection UK Counter-Terrorism Internet Referral
Commissioner 9 Unit (CTIRU) 327
Omega 168 violent extremism 334–7
Oracle v. UsedSoft 281 vulnerability aspect 336
Promusicae v. Telefónica de España SAU see also terrorism
58–9, 60 Everitt, D. 11
SABAM v. Netlog 59–60 export controls see global trade, export
Scarlet v. SABAM 59–60, 169 controls of surveillance technologies
Schrems v. Safe Harbor 137, 373
Strauder 168–9 Facebook 16–17, 356–7, 359–61, 362
Tele2 v. Watson 108 fair balance and European Court of Justice
UPC Telekabel v. Constantin Film Verleih 61 decisions 58–61
European Digital Rights (EDRi) 60, 142, 151, fair use doctrine, US First Amendment and
364, 423–4 freedom of expression 63, 64, 67–8
European Initiative on Democracy and Farrand, B. 53–71
Human Rights (EIDHR) 373 Farrar, J. 303
European Investigation Order (EIO) 109 Fassbender, B. 120


Faulkner, W. 431 foreign-foreign communications data


Fernandez, I. 26 surveillance 227, 228–9, 231, 232–5,
Fernandez, J. 188 239
Ficsor, M. 55 G10 Commission, lack of reform for 242
Finkelhor, D. 391, 392 improvements 237–8
Finland, Internet access 161 information exchange 240–41
Fish, A. 68 international intelligence cooperation rules
Fisher, A. 325 236, 238, 241
Fisher, J. 414 judicial oversight, insufficient 239–40
Five Eyes Internet agencies see national metadata collection 232–3, 243
security, Five Eyes Internet agencies omissions 242–3
Flannagan, J. 428 operational deficits, acknowledgment of
Floridi, L. 348 231–2
Forrest, C. 17–18 privacy discrimination, contested 238–9
France see national security, Five Eyes Internet private communication protection 226–7
agencies ‘search term’ reference 241
Franklin, M. 5–23, 365 signals intelligence (SIGINT) law 224–5,
Frau-Meigs, D. 398, 430 227, 229, 236, 237–8, 239, 241
Fredman, S. 174 soft restrictions 240–41
Freedman, D. 197 technical prowess concerns 243–4
Freedom Online Coalition (FOC) 353 Giacomello, G. 5
Freiling, F. 100, 102, 105 Gibson, W. 27
Fuster, G. 168 Gigabit Society 161–3
futures, Internet and human rights see Internet Gill, P. 325
and human rights futures Gillespie, A. 176
Githens-Mazer, J. 323
Gaál, N. 165 Global Kids Online project 395, 398
García, J. 181 Global Network Initiative 92, 142, 352, 353–6,
García, P. 182 363
Gärditz, K. 100, 225, 228, 232 global trade, export controls of surveillance
Gartzke, E. 79 technologies 299–318
Gasser, U. 379, 380, 382 challenges, ongoing 312–17
Geiger, C. 55, 57, 69 China 308–12
Gellman, B. 217 civil society pressure 302, 316
Gentile, D. 394 cryptography measures 314–15
Georges, M. 144, 145 cyber technology 308, 315
Gerber, P. 414 cybersecurity and human rights 313–14
Gercke, M. 100 dual-use goods and technologies 307, 309–
Germany 10, 316, 317
Products Liability Act 282–3 ethical international policies and
software manufacturers liability 285–6 implementation 301–2, 306
see also national security, Five Eyes Internet EU regulation on export controls for dual-
agencies use technologies 302, 305–6
Germany, intelligence law reform 223–45 human rights 303–6
abstract notification requirements 241–2 human security versus human rights
authorization and oversight regime, separate 312–14
227–8, 235–7, 240 India 307–8, 311–12, 316–17
automated filter program (DAFIS) Organization for Security and Co-operation
243–4 in Europe (OSCE) 316
ban on economic espionage 235 transparency, participation, governance
bulk powers, lack of legislation for 242 315–16
codified and uncodified surveillance powers UN Arms Trade Treaty (ATT) 302–3, 317
224–6, 241 Wassenaar Arrangement 300–305, 307–12,
deficiencies 239–42 315, 316–17
driving factors 230–32 Goggin, G. 383


Goh, W. 388 Helfer, L. 55, 56, 57, 414


Goldberg, D. 398 Helsper, E. 393, 400
Goldberg, G. 357 Henderson, J. 282
Goldberg, J. 277, 278, 282, 283, 287 Henderson, R. 398
Golden, A. 218 Henman, P. 174
Goldsmith, J. 122 Henry, N. 431
Gombert, D. 78 Herman, B. 66
Good, N. 363 Hern, A. 17
Goodman, S. 102 Hibbard, L. 398, 430
Goodwin, M. 109 Hides, L. 395
Google 356–7, 359–61 Hidvégi, F. 85
Gorman, S. 25 Higgins, P. 20
Gory, J. 414 Hilgendorf, E. 103
Graf von Westphalen, F. 281 Hill, K. 203
Graham, M. 24–32, 321 Hillebrandt, M. 294
Graham, S. 25, 29 Hilty, R. 55
Graulich, K. 233, 242 Hiranandani, V. 83
Gray, M. 418, 419, 422, 428, 429 Hirschoff, J. 279
Gree, S. 415 Hobbs, R. 398, 400
Greece, Internet access 161 Holland, B. 216
Green, L. 389, 392 Hollnagel, E. 249, 250
Green, N. 73–4 Holloway, D. 389
Greenfield, P. 396 Holt, T. 105
Greenhow, C. 398 Hoogensen, G. 75, 93
Greenwald, G. 50–51 Horth, S. 299–318
Griffin, A. 419, 421 Howard, P. 325
Griffiths, J. 57, 60 Howells, G. 256
Griffiths, M. 396 Howorun, C. 220
Gruber, M.-C. 293, 296 Hugenholtz, P. 60
Grünwald, A. 107 Human Rights Watch 16, 199, 362, 428
Guilhot, N. 6 Hussain, G. 325
Gulati, G. 167, 178 Huysmans, J. 324
Gutwirth, S. 169 Hynek, N. 313

hacking 81, 87, 131, 138, 147–9, 152, 269–70 Iliev, R. 79


see also cybersecurity immigration, illegal 329, 343–4
Hackl, A. 420, 422, 423 India, export controls of surveillance
Hagell, A. 383 technologies 307–8, 311–12, 316–17
Hager, J. 286 information access and exchange
Halpin, E. 348 children’s rights 385–7, 395
Hampson, F. 195, 201, 203 Cuba, Internet access 190–92, 193
Hansen, L. 77, 79, 321 and cybercrime 100–101, 103–4
Harcourt, B. 51 cybersecurity 75–7, 86–9
Hariharan, G. 18 EU Internet Referral Unit and online
Harper, J. 13 radicalization 338–9
Hasebrink, U. 393 Germany, intelligence law reform 240–41
Hashish, Y. 392 intelligence law reform, Germany see Germany,
Hassemer, W. 98, 100 intelligence law reform
Hautala, L. 217 International Arbitral Awards
Hawtin, D. 74 Lake Lanoux Arbitration (France v. Spain)
Hayes, B. 332, 333 124
health, and children’s rights 394–7 Trail Smelter Case (United States v.
Heath-Kelly, C. 322 Canada) 124
Heinson, D. 105 International Council on Global Privacy and
Heintschel von Heinegg, W. 113 Security by Design 95–6


International Court of Justice (ICJ) Internet


Corfu Channel (United Kingdom v. Albania) access see Cuba, Internet access; EU
123–4 Internet access
Legality of the Threat or Use of Nuclear browsing histories 136–7
Weapons Advisory Opinion 124 exchange points (IXPs), Canada 215–16
Military and Paramilitary Activities in and grammatical rules associated with 28, 32
against Nicaragua (Nicaragua v. USA) private superpowers and digital copyright
133 64–5
International Covenant on Civil and Political see also cybersecurity
Rights (ICCPR) 10, 81–2, 86, 89, Internet Governance Forum (IGF) 367–8
122, 132, 149, 186, 270–71, 349–50, 359, Internet and human rights futures 5–23
415 accountability mechanisms 21
International Covenant on Economic, Social censorship techniques 7
and Cultural Rights (ICESCR) 10, 55–6, civil society involvement 12–14, 15–16, 21
271 community involvement 21
International Institute for Child Rights and complacency concerns 16–19
Development (IICRD) 391 data retention regulation 20
international law, cybersecurity see digital rights 8–9, 14, 17
cybersecurity protection and international education requirements 21
law freedom of speech 20
international organizations and digital human government regulation 12, 17, 20
rights 364–75 historical context and terms of reference
Association for Progressive Communications 9–16
(APC) 14, 366–7, 423, 425 Internet-based mobilization 11
Council of Europe role 370–72, 374 mass online surveillance 15, 19–20, 21, 22
digital rights concept 364–5 multi-stakeholder decision-making 10–11
EU Data Protection Directive 373–4 privacy protection 20
EU Fundamental Rights Agency, Passenger private service providers’ dependence 17
Name Record (PNR) data 374 public awareness requirements 21
EU ‘right to be forgotten’ and data public-private partnerships 20–21
protection 373 ‘real name systems’ 14, 419, 421
EU role 372–4 tracking and monitoring 6–7
European Digital Rights (EDRi) 60, 142, Westphalian international state system 10
151, 364 whistleblowers 5–6, 7, 11–12, 13, 19, 22
European Initiative on Democracy and Internet Referral Unit, EU see Europol,
Human Rights (EIDHR) 373 EU Internet Referral Unit and online
European Parliament Sakharov Prize radicalization
network 373 Irving, C. 25
Information Society Tunis Agenda 367 Itkowitz, C. 201
Internet Governance Forum (IGF) 367–8 Ivory, R. 98
Organization on Security and Cooperation
in Europe (OSCE) role 374 Jablonski, M. 348, 417
UN Human Rights Council 369–70 Jackson, N. 6
UN Sustainable Development Goals (SDGs) Jägers, N. 272–3
368 Jahn, M. 100
UN World Summit on the Information James, J. 397
Society (WSIS) 117, 118, 122, 126, 347, Jankowska, M. 296
348, 366–8, 369 Jardine, E. 195, 201, 203
UNESCO 368–9 Jasmontaite, L. 157–79
UNGA Right to Privacy in the Digital Age Jenson, J. 383
369 Jernigan, C. 422
Universal Declaration on Human Rights Jones, C. 329, 332, 333
(UDHR) 366 Jones, T. 248
International Political Sociology (IPS) Jørgensen, R. 6, 18, 118, 319, 346–63, 365, 366,
approach 33 373

Ben Wagner, Matthias C. Kettemann and Kilian Vieth - 9781785367724


Downloaded from Elgar Online at 12/18/2020 12:51:37AM
via New York University

WAGNER_9781785367717_t.indd 444 13/12/2018 15:25


Index  445

journalism and mass state surveillance 198–9, La Rue, F. 118, 170, 314, 320, 369, 385,
220 417–18, 427
jurisdiction Laidlaw, E. 349, 360
and cybercrime 102–3, 108–10 Landale, J. 203, 351
intervention restrictions, Cuba Landau, H. 100
184–5 Landau, S. 314, 315
see also individual courts Lannon, J. 348
Lansdown, G. 376–410
Kaminski, M. 65 Latour, B. 250, 324
Kapur, R. 414, 415 Laufer, W. 98
Karnow, C. 288, 289, 294, 295, 296, 298 Laurent, S. 40
Kaspar, L. 16, 93 lawyers and mass state surveillance 199–200
Kavanagh, C. 73–97, 115 Le Billon, P. 301
Kaye, D. 15, 140, 346, 350, 369, 427 Leander, K. 29
Keating, M. 431 Lee, M. 202
Kee, J. 417, 418, 423 Lee, P. 398
Kellerman, A. 26 Lee, T. 14
Kello, L. 79 legal personhood, artificial agents 296–8
Kelly, T. 166 Leigh, I. 224
Kern, S. 28 Lemley, M. 65
Kettemann, M. 1–3, 6, 113–28, 347, 364, 365, Lessig, L. 65, 66
367, 370, 371 Leung, L. 398
Khalil, J. 388 Leval, P. 64
Khullar, R. 195 Levine, D. 54
Kim, P. 420 Lewin, C. 398
Kimport, K. 348 LGBTI communities 411–33
King, M. 122 censorship and discrimination issues 419–21,
Kissel, R. 76 424, 427–8, 429–30
Kitchin, R. 29, 30 censorship and mission creep 429
Klein, M. 211 communities’ participation in technology
Kleine, D. 380, 381, 382, 384 design debates and Internet policy-
Kloza, D. 172, 173 making, call for 425
Kniep, R. 42 community awareness tests 431–2
Knox, J. 273 cultural and political differences 413–14, 429
Kochheim, D. 103 data collection and algorithm-driven analysis
Koivurova, T. 124 concerns 419–21, 423–4
Kollman, K. 66 digital rights discourse 426–7
Kongaut, C. 165 dominant liberatory narrative on Internet
Konieczny, P. 68 medium 417–18
Koops, B.-J. 109 emerging disquieting counter-narrative
Kopstein, J. 202 419–22
Korff, D. 129–55, 238, 346, 347, 348, 359 EROTICS research project 423, 425
Kosinski, M. 419, 420 future research 422–4, 430–32
Kötz, H. 278, 283 ‘Growing Up Queer’ project 396
Kovak, A. 74 international politics and law 412–16
Kraemer, K. 397 intersection of LGBTI communities and
Kramer, A. 320 digital technologies 425–6
Krause, R. 281 justifications for restricting digital rights of
Kroll, J. 96 LGBTI communities 428–30
Kshetri, N. 98, 103 private actors and global standards on
Kumar, M. 15 human rights online 421–2
Kumar, S. 92 research and advocacy requirements 424–6
Kundnani, A. 321, 322 right to freedom of expression 428
Kuner, C. 83 right to privacy 427, 428
Kuss, D. 11 socio-political implications 430–31, 432




UN Human Rights Council 415–16, 426–7, Matthews, D. 55
428–9 Matzner, T. 358
UN Security Council statements 416 Meier, K. 284
Yogyakarta Principles 414–15 Meijer, A. 175
Libicki, M. 78 Mendel, T. 6, 347, 368
Lievens, E. 387 Merges, R. 54
Lievrouw, L. 401 Metcalf, S. 381
Light, E. 195–222 Mexico, Internet access 182
Lim, S. 399 Micek, P. 203, 351
Litwak, R. 122 Milan, S. 86, 420
Livingstone, S. 18, 376–410 Mills, R. 68
Locke, T. 431 Mills, S. 11
Löffelmann, M. 229 Minnock, S. 67
Long, A. 65 mission creep 333, 429
Lucchi, N. 172, 173, 181 Mistree, B. 422
Lupton, D. 389 Mitchell, K. 388
Lusthaus, J. 103 Mitchell, W. 27
Lwin, M. 388, 393 Momsen, C. 105
Lyon, D. 50–51, 195, 321 monitoring process 6–7, 227–8, 235–7, 240,
285–6, 323
MacAskill, E. 84 see also surveillance
McDonagh, L. 57 Monroy, M. 332
McDonald-Brown, C. 393 Moody, G. 14
Macenaite, M. 387 Moolman, J. 418
McGinty, W. 63 Moore, R. 115
McGonagle, T. 140 Moraba mobile game 391
McKeown, M. 66 Mueller, M. 66, 79
McKim, K. 29 Muizniek, N. 372
MacKinnon, R. 11, 12, 66, 67, 118, 348, 349, Mulligan, D. 356
354, 359, 368, 423 multi-agent systems 251–2, 294–5
Macklem, P. 10 Murray, A. 16
Maclay, C. 354 Musella, F. 175
McQuillan, D. 196 Musiani, F. 431
Madden, M. 201, 387 Mutongwizo, T. 382
Maness, R. 79 Mutual Legal Assistance Treaties (MLATs)
Manners, I. 301 149–51, 152, 154
Mansell, R. 165 Myers, E. 398
manufacturer liability Mylly, T. 57
artificial agents 279–86, 293–5
socio-technical systems 254, 258 Nagesh, G. 12
see also corporations; product liability Nagin, D. 100
Marczak, B. 303 Nakashima, E. 334
Mardis, M. 398 Narayan, P. 413
Margetts, H. 174 Nasser-Eddine, M. 322, 326
Margolis, M. 157 national action plans (NAPs) 351–2, 363
Marly, J. 281 national security 33–52
Marquis-Boire, M. 303 actors, fracturing of positions between 48–9
Martellozzo, E. 389 agents’ a-legal actions 40
Martens, D. 200 Big Data analytics 50
Marthews, A. 201 Bourdieusian-inspired analysis 35–6, 40, 41
Mascheroni, G. 389 coalitions between Internet activists and
Masnick, M. 8 human rights lawyers 48–9, 51
Mason, B. 382 contemporary field of power 46–52
mass surveillance see Big Data mining; and coopetition 37–8, 42
surveillance cybersecurity 73–4, 77–8




definitions 36–7 non-intervention principle, cybersecurity
democratic and non-democratic regimes protection 123
38–9, 40 Nouri, L. 326
digital reason of state and transnational Novak, P. 17
collaboration 34–5, 37, 41 Nüβing, C. 107
encryption and decryption 39 Nyst, C. 92
enemy identification 37
guild of management of sensitive Obar, J. 195–222
information 35–6 Oechsler, J. 281, 282, 283
intelligence law differences 49 O’Flaherty, M. 414
intelligence services and data collection 35–6 Ólafsson, K. 389
International Political Sociology (IPS) Olsen, J. 247
approach 33 Olszanowski, M. 429–30
power-knowledge assumptions 39–40 Omtzigt, P. 138
predictive behaviour of terrorist suspects One Laptop per Child initiative 397
and issues of increased surveillance O’Neill, B. 380, 381, 382, 385, 392
50–51 online access see Internet
rule of law 47–8 online violence, children’s rights 388–92, 394–5
secrecy in democracy issues 51 O’Reilly, T. 174
and secret defence 47 Orlowski, A. 9
SIGINT (signals intelligence) 36, 38–9, 41, Otto, D. 414
44, 46, 48, 51
Snowden disclosure effects 46–7, 48–9 Pagallo, U. 269, 287–8, 289, 290, 292, 293
surveillance inevitability and effects on Pallitt, N. 383
privacy 49–52 Papier, H.-J. 224
war on terror 37, 38–9, 42, 47–8, 49 parental responsibilities, children’s rights 388,
see also cybersecurity 392–3
national security, Five Eyes Internet agencies Parsons, C. 201, 220
36, 38, 39, 40–46, 48 passenger name records (PNR) 144, 374
agent socialization 45–6 Paternoster, R. 100
Canadian Communications Security Pathak-Shelat, M. 402
Establishment (CSEC) 45 Patry, W. 55
France DGSE 44, 45 Paust, J. 273
Germany BND 44, 45 Pauwels, L. 100
mutual trust narrative 41–2 Payton, L. 204, 207
NATO and US deference 41–2 Péloquin, T. 202
NSA UPSTREAM platform 43 Peña Barrios, R. 180–94
personnel enrolled into intrusive SIGINT Penney, J. 86, 87, 89, 201
43–4 Pérez, L. 184, 187, 188, 192
position in Internet infrastructure 44–5 Perez, O. 188
UK GCHQ and Tempora project 44, 84 Pérez, Y. 181
US NSA power and personnel 43–5, 46, 47 Perkins, R. 301, 306
US NSA PRISM program 43, 46, 84 Perlroth, N. 85
NATO 41–2, 127, 133 Permanent Court of International Justice
natural language processing (NLP) tools 140 (PCIJ)
Nekmat, E. 399 Factory at Chorzów (Federal Republic of
Neuman, L. 431 Germany v. Poland) 270
Neumann, P. 323 S.S. ‘Lotus’ (France v. Turkey) 102
Neumayer, E. 301, 306 Perrow, C. 248, 252
Ng, E. 422 personal information economy (PIE) and data
Nimmer, M. 63, 64 transfer 357–9
Nimrodi, J. 9 Peter, J. 395
Nissenbaum, H. 77, 79, 321, 359, 423 Peterson, Z. 213
no harm principle, cybersecurity protection Pieth, M. 98
123–4 Pillay, N. 6, 15, 270




Pin-Fat, V. 321 Telecommunications Industry Dialogue (ID)
Pizzi, M. 343 354
Poitras, J. 217 UN Guiding Principles on Business and
Poitras, L. 217 Human Rights (UNGP) 273–4, 299,
Polak, M. 394 350–53, 355, 363
Pollicino, O. 168, 170 UN national action plans (NAPs) on
Porcedda, M. 201 business and human rights 351–2, 363
Posner, R. 54, 280, 289, 293 product liability see manufacturer liability
Powell, A. 392, 431 propaganda 133–4, 324–5, 335–6
Powers, S. 348, 417 Proulx, V.-J. 124
precautionary principle, cybersecurity Public Internet Access Points 166
protection 124–6 public resources, private actors and human
preventive, predictive policing, state rights governance 359–62
cybersecurity interventions 143–53 Puddephatt, A. 16, 93, 368
Prieto, M. 184, 187, 188, 192 Puig, Y. 193
Primo, N. 417 Pyetranker, I. 311
Prince, J. 286
privacy Raab, C. 195
children’s rights 387–8, 391 radicalization see Internet Referral Unit, EU
and cybersecurity 82–6 see Europol, EU Internet Referral Unit
digital copyright 58 and online radicalization
Germany, intelligence law reform 226–7, Radin, M. 357
238–9 Raftree, L. 383
Internet and human rights futures 20 Rainie, L. 201
LGBTI communities 427, 428 Rallings, J. 389
and national security 49–52 ransomware 116, 127, 344
surveillance reform 195–6 Ratner, S. 272
private actors and human rights governance Ravsberg, F. 189
346–63 Raymond, M. 10
company commitments 348–9, 352–7 ‘real name systems’, Internet 14, 419, 421
company framing of human rights (Google Reason, J. 248
and Facebook) 356–7, 359–61 Rehman, J. 413–14
EU Internet Referral Unit and online Reid, G. 428
radicalization 340, 344 remedy right, artificial agents 270–75
Facebook community standards 360–61, Rey, P. 27
362 Rhinesmith, C. 164, 165
Freedom Online Coalition (FOC), Tallinn Richards, A. 322
Agenda 353 Richardson, I. 381, 382, 396
future direction 363 Rid, T. 78
Global Network Initiative 352, 353–6, 363 right to be forgotten, EU 373
grievance mechanisms 363 right to be heard 383–5
hard and soft law framework 349–52 right to free speech 20, 323–4, 331, 343–4
human rights impact assessment (HRIA) right to freedom of expression 58, 86–9, 118,
363 180–81, 182, 184, 386, 428
Internet scholarship 347–9 right to privacy see privacy
LGBTI communities 421–2 Rijsdijk, L. 399
personal information economy (PIE) and risk assessment
data transfer 357–9 artificial agents, software with
privacy and freedom of expression 347, 348, predetermined behaviour 282, 283
349–51, 355–8, 360–61 children’s rights 391–2, 397
public resources 359–62 socio-technical systems see socio-technical
Ranking Digital Rights Corporate systems, liability and automation,
Accountability Index 352, 355, 360 argumentation maps, Legal Case
technology companies and online platforms methodology and liability risk analysis
348–9 Roach, K. 200




Robinson, K. 381, 382, 383, 396 Sell, S. 67
Rodriguez, K. 20 Seltzer, W. 65
Rogers, R. 431 Sepulveda, D. 12
Rohozinski, R. 25, 73 service providers, artificial agents 286–7, 296
Ronfeldt, D. 78 Shackelford, S. 62
Rose, K. 377 Shade, L. 205, 206–7, 221
Rosen, J. 9 Shapiro, J. 388
Ross, K. 296 Shaw, M. 113
Rossini, C. 73–4 Shelton, D. 272
Rossotto, M. 166 Shiffrin, S. 62
Rubinstein, I. 363 Sieber, U. 98, 100, 101, 103, 104, 105, 107, 109
Rudl, T. 332 Siebler, K. 422
Ruggie, J. 15, 203, 273, 349, 350 SIGINT (signals intelligence)
Russell, J. 326 Germany 224–5, 227, 229, 236, 237–8, 239,
Ryan, D. 255 241
national security 36, 38–9, 41, 44, 46, 48, 51
Sacino, S. 377 Silver, V. 303
Sadowski, J. 18 Simões, J. 383
Said, E. 324 Sinanaj, G. 203
Saiz, I. 415 Sinclair, S. 398
Sallot, J. 201 Singelnstein, T. 106
Saltman, E. 325, 326 Singer, N. 388
Salzberger, E. 54, 64 Singh, P. 20
Sampat, R. 14 Sinha, G. 198, 199
Samuels, C. 389, 390, 397 Skepys, B. 172
Santos, C. 208, 209 Sklerov, M. 125
Sartor, G. 247–67 Smahel, D. 390, 391
Savage, C. 202 Smith, J. 255
Schaake, M. 117, 304 Smith, P. 68
Schaub, M. 110 Smith, R. 301
Schebesta, H. 252, 257 Snowden, E. 5–7, 11–13, 22, 39–40, 44, 46–9,
Scheinin, M. 427 83–5, 90, 136, 147, 200, 211, 217, 228
Schiappa, E. 75 social media 82, 329
Schjolberg, S. 98, 105, 108 Facebook and Google 16–17, 356–7, 359–61,
Schmitt, M. 113, 121, 125 362
Schmitz, S. 67 socio-political implications, LGBTI
Schneider, T. 419 communities 430–31, 432
Schneier, B. 7, 95, 196, 202 socio-technical systems, liability and
Scholte, J. 5 automation 247–67
Scholtz, W. 119 actor-based analysis of liability 251–4
Schröder, U. 331 automation levels and liability allocation
Schulz, T. 288, 294, 295 257–8
Schulze, S.-H. 123 cognitive-systems engineering 250
Scola, N. 8 enterprise liability 252–4
security hybrid fusions between humans and
cybersecurity see cybersecurity machines 250–51
human security versus human rights 312–14 individual liability 251–2
national see national security innovations and complexity increase 249–50
technology as security threat 337–8 manufacturers’ liability 254, 258
Sedgwick, M. 322 multi-agent problem 251–2
Seemann, M. 326 organizational (systemic) fault 253
Segal, A. 116 product liability 254
Selbst, A. 431 software and liability 254–6
self-censorship 199, 201 software and liability, free or open-source
see also censorship software 255




software and liability, misuse (contributory NATO Tallinn Manual 133
negligence) 255 natural language processing (NLP) tools
software and liability, strict (no-fault) 140
liability 255 passenger name records (PNR) 144
strict liability for technical risks 253 political interference and propaganda 133–4
task-responsibilities 249–51 preventive, predictive policing 143–53
socio-technical systems, liability and rule of law 136–7
automation, argumentation maps, Legal spying in digital environment 134–6, 138–9
Case methodology state-on-state cyber attacks 132–4
and liability risk analysis 258–66 undesirable content, monitoring of
actor-based liability maps 264–6 communications to block 139–42
legal analysis maps 262–3 unlawful laws 137–8
legal design maps 264 see also cybersecurity
modelling 259–60 state cybersecurity interventions and harm
rationale argument 259–61 potential, bad behaviour identification
Sofaer, A. 102 and prediction
software by algorithms (Big Data mining) 50, 144–53,
and liability, socio-technical systems 254–6 154
with predetermined behaviour see artificial access to information society services by
agents, software with predetermined backdoor 147–9, 152
behaviour built-in biases 145
remote forensic software and cybercrime 107 cable splitters 147–9, 152
Wassenaar Arrangement, intrusion software Council of Europe Cybercrime Convention
clause 304–5 147, 150–51, 154
Soltani, A. 218 evidence-gathering by law enforcement
Solum, L. 297 agencies in cyberspace 145–53
Sommer, P. 129, 140 hacking 147–9, 152
Sommerville, I. 425, 430 Mutual Legal Assistance Treaties (MLATs)
Sorell, T. 87 149–51, 152, 154
Spiegel, S. 301 USCLOUD (Clarifying Lawful Overseas
Spindler, G. 284 Use of Data) Act 151–2, 154
Spinelo, R. 25 Stefik, M. 26
Sprau, H. 295 Steinmueller, W. 165
Stacy, H. 56 Stephens, B. 273
Staiger, D. 196 Sterling, B. 27
Stalla-Bourdillon, S. 82 Stevens, T. 79
Standage, T. 27 Stone, C. 297
Stanford, U. 354 Storgaard, L. 57
state cybersecurity interventions and harm Strahilevitz, L. 72
potential 129–55 Strasburger, V. 394
attribution issues 133 Stuvøy, K. 75, 93
automation or algorithmic filtering tools Subrahmanyam, K. 396
140–42 Sucharov, M. 301
bad behaviour identification and prediction Suler, J. 26
by sensors 143–4 Sullivan, D. 350, 353, 357, 363
copyright-protected content 141–2 surveillance
counter-measures, potential negative effects codified and uncodified surveillance powers,
132 Germany 224–6
counter-terrorism actions 135–7 and cybersecurity 83–5
definitions 130–31 export controls of surveillance technologies
drone use 143–4 see global trade, export controls of
Internet browsing histories 136–7 surveillance technologies
mass surveillance and undermining of Internet and human rights futures 15, 19–20,
encryption by intelligence agencies 21, 22
134–9 and national security 49–52




state cybersecurity interventions and harm terrorism 37–9, 42, 47–51, 83, 87–8, 90, 95, 96,
potential 134–9 135–7
see also cybersecurity; data collection and and radicalization see Europol, EU Internet
retention; monitoring process Referral Unit and online radicalization
surveillance reform 195–222 Teubner, G. 250
Canada, digital mobilization against Therrien, D. 202
regulatory efforts to expand state Third, A. 376–410
surveillance 204–7 Thomas, S. 200
civil society involvement 199–200, 209, 219 Thomson, I. 6
corporate world harm, and mass state Tiemann, K. 26
surveillance 202–3 Tikk-Ringas, E. 116, 126
financial activism 217–18 Timm, T. 84
international policy standard design 219 Tindall, N. 418, 422
journalism and mass state surveillance Toggenburg, G. 168
198–9, 220 Toke, M. 283
lawyers and mass state surveillance 199–200 Tomuschat, C. 272, 273
and media reform differences 197 Torremans, P. 55
public and mass state surveillance 201–2, Torres, R. 184, 188
204–5 tort law and extra-contractual liability,
right to privacy 195–6 artificial agents 275–96
surveillance harms 198–203 Tréguer, F. 43, 48
Uruguay and human right to water 207–11, Tropina, T. 107
221 Tsoukala, A. 321
surveillance reform, repatriation of Canadian Tucker, C. 201
domestic Internet traffic 211–16 Tudge, R. 198
boomerang routing 212–14, 216 Tufekci, Z. 420
congestion concerns 216 Tulkens, F. 99
data sovereignty concerns 213–14 Tully, S. 170, 172, 173, 176
Internet exchange points (IXPs), Turkle, S. 11
development of additional 215–16 Turner, M. 313
network sovereignty efforts 211–15 Tushnet, R. 65
splitter technology revelation 211–12 Twerski, A. 282
and US Constitution protection 214–15 Tyler, T. 100
US surveillance threat reduction 215–16, 221 Tynes, B. 383
Swist, T. 379, 382, 385
Swyngedouw, E. 208 UAE, Dubai drone use 143
Syria Strategic Communications Advisory Udis, B. 301
Team (SSCAT) 333 UK
Counter-Terrorism Internet Referral Unit
Taks, J. 208 (CTIRU) 327
Talbert, T. 392 GCHQ 44, 84, 136, 200, 217
Tavani, H. 358 Internet browsing histories retention 136–7
Taylor, E. 349, 421 Investigatory Powers Act 17, 49
Taylor, Y. 422 R v. Smith 161
Technological Prevention Measures (TPMs), ‘Snooper’s Charter’ 17
digital copyright 58 see also national security, Five Eyes Internet
technology agencies
EU Internet Referral Unit and online UN Arms Trade Treaty (ATT) 302–3, 317
radicalization 337–8, 340–42 UN Convention on the Rights of the Child
export controls of surveillance 307, 309–10, (UNCRC) 376, 377, 379, 381–5
316, 317 UN Group of Governmental Experts
telecommunications, and cybercrime 106–8 (UNGGE) 79–80, 90, 114, 115, 118–19
Telecommunications Industry Dialogue (ID) UN Guiding Principles on Business and
354 Human Rights (UNGP) 273–4, 299,
Tenenbaum, J. 170 350–53, 355, 363




UN Human Rights Council 7, 20, 89–90, 92, International Strategy for Cyberspace 94–5
170–71, 181–3, 189, 346, 351, 369–70, Irvine v. Rare Feline Breeding Ctr. 290
415–16, 426–9 Jividen v. Law 289
UN national action plans (NAPs) 351–2, 363 Kamango v. Facebook 362
UN Resolution on the Internet and Human Kiobel v. Royal Dutch Petroleum Co. 270
Rights 19–20 Lenz v. Universal Music Group 65–6
UN Resolution on the Right to Privacy in the Mark v. Borough of Hatboro 362
Digital Age 89–90 Marsh v. Alabama 361
UN Sustainable Development Goals 5, 18, Marshall v. Ranne 289
20–21, 118, 368, 379 Microsoft Corp. v. United States 110, 151–2
UN World Summit on the Information Society Roe v. Wade 62
(WSIS) 117, 118, 122, 126, 347, 348, 366–9 Rosemont Enterprises v. Random House 62–3
UNESCO 368–9 Saloomey v. Jeppesen & Co. 256
UNGA Right to Privacy in the Digital Age 369 Usborne, S. 134
UNICEF 380, 384, 395 user-consumer liability, autonomous artificial
universal availability, EU Internet access 164–7 agents 287–93
Universal Declaration on Human Rights user-operator liability, software with
(UDHR) 10, 13, 55, 80–82, 86, 90, 118, predetermined behaviour 277–9
195–6, 270, 366, 413, 414
Uruguay, human right to water 207–11, 221 Valeriano, B. 79
US Valkenburg, P. 395
Constitution protection and repatriation Valsorda, F. 314
of Canadian domestic Internet traffic Van Driel, S. 259
214–15 Van Lieshout, M. 83
Constitutional protection and digital Velasco, C. 110
copyright 56, 61–6 Vermaas, P. 247, 250
Digital Millennium Copyright Act (DMCA) Vermeulen, M. 117
64–6 Vervaele, J. 146
First Amendment and freedom of Vickery, J. 398
expression 61–6, 63, 64, 67–8 Vieth, K. 1–3, 313, 319–44
Implementation Plan for the Principles for Vihul, L. 121
Intelligence Transparency 241 Villareal, A. 208, 209
Patriot Act 83 Vincent, A. 6
PROTECT IP Act (PIPA) 66–8, 69 Vittadini, N. 388, 394
Stop Online Piracy Act (SOPA) 66–8, 69 Voeten, E. 414
US CLOUD (Clarifying Lawful Overseas
Use of Data) Act 151–2, 154
see also national security, Five Eyes Internet Von Behr, I. 325
agencies Von Hoene, S. 65
US, cases Von Notz, K. 229
Aetna Casualty & Surety Co. v. Jeppesen &
Co. 256 Wagner, B. 1–3, 80, 299–318, 343, 348, 373,
American States Ins. Co. v. Guillermin 290 419, 421
Barker v. Lull Eng’g Co. 282
Blum v. Yaretsky 362 Wainwright, R. 335–6, 337, 338–9, 341
Brocklesby v. United States 256 Wajcman, J. 431
Cyber Promotions v. America Online 362
Eldred v. Ashcroft 63–4, 65 Walden, I. 105
Fluor Corp. v. Jeppesen & Co. 256 Walker, R. 35
Golan v. Holder 64 Wall, D. 98, 100, 103, 114
Greenman v. Yuba Power Products 280 Walton, M. 383
Griswold v. Connecticut 62 Warren, S. 82
Harper & Row Publishers v. Nation Warusfel, B. 40
Enterprises 63–4 Wassenaar Arrangement 300–305, 307–12,
Hudgens v. NLRB 361 315, 316–17




Waters, R. 418, 422 Wilson, M. 29
Watson, S. 313 Wilson, S. 394
Watts, J. 17 Wolak, J. 395
Weber, R. 13, 196, 364 Wolfers, A. 37
Webster, S. 389 Wolverton, J. 8
Wehlau, A. 284 Woodcock, B. 215, 216
Weimann, G. 324 Woods, D. 249
Wein, L. 296 Wright, M. 390, 391
Weisman, J. 68 WTO TRIPS Agreement, digital copyright
Welzel, H. 103 54–5, 58
Wernham, M. 379 Wu, Y. 420
Wertheim, M. 28
Westerwelle, G. 304 Yanik, L. 302
Westin, A. 82–3 Yates, D. 167, 178
Weston, G. 196 Ybarra, M. 388, 394
Wettig, S. 296, 298 Yoder, C. 67
Wetzling, T. 223–45 Yogyakarta Principles, LGBTI communities
White, G. 217 414–15
White, L. 268, 269, 276, 281, 282, 284,
287, 288, 289, 290, 294, 296, 297, Zágoni, R. 85
298 Zajko, M. 202
White, R. 173 Zalnieriute, M. 411–33
Whitehead, T. 200 Zehendner, E. 296, 298
Whiting, A. 326 Ziolkowski, K. 113, 121, 126
Widdenson, R. 296 Zipursky, B. 277, 278, 282, 283, 287
Wikipedia 68 Zittrain, J. 80
Wildman, S. 196 Zollers, F. 254, 255
Williamson, B. 389 Zook, M. 26, 30
Wilson, B. 29 Zuboff, S. 346, 348



