
The continuous evolution of the internet and related social media technologies and platforms has opened up vast new means for communication, socialization, expression, and collaboration. These technologies have also provided new resources for researchers seeking to explore, observe, and measure human opinions, activities, and interactions. However, those using the internet and social media for research – and those tasked with facilitating and monitoring ethical research, such as ethical review boards – are confronted with a continuously expanding set of ethical dilemmas. Internet Research Ethics for the Social Age: New Challenges, Cases, and Contexts directly engages with these discussions and debates, and stimulates new ways to think about – and work towards resolving – the novel ethical dilemmas we face as internet and social media-based research continues to evolve. The chapters in this book – from an esteemed collection of global scholars and researchers – offer extensive reflection on current internet research ethics and suggest important reframings of well-known concepts such as justice, privacy, consent, and research validity, as well as concrete case studies and emerging research contexts to learn from.

MICHAEL ZIMMER (Ph.D., New York University) is Associate Professor in the School of Information Studies at the University of Wisconsin-Milwaukee, where he also serves as Director of the Center for Information Policy Research. As a privacy and internet ethics scholar, Zimmer has published and provided expert consultation on internet research ethics internationally.

KATHARINA KINDER-KURLANDA (Ph.D., Lancaster University) is Senior Researcher at the GESIS Leibniz Institute for the Social Sciences in Cologne, Germany. A cultural anthropologist with links to computer science, she researches big data epistemology, social media archiving, security, research ethics, and the internet of things.

www.peterlang.com


Internet Research Ethics for the Social Age


Steve Jones

General Editor

Vol. 108

The Digital Formations series is part of the Peter Lang Media and Communication list. Every volume is peer reviewed and meets the highest quality standards for content and production.


PETER LANG

New York Bern Frankfurt Berlin Brussels Vienna Oxford Warsaw

Internet Research Ethics for the Social Age

New Challenges, Cases, and Contexts

Edited by Michael Zimmer and Katharina Kinder-Kurlanda


PETER LANG

New York Bern Frankfurt Berlin Brussels Vienna Oxford Warsaw

Library of Congress Cataloging-in-Publication Data

Names: Zimmer, Michael, editor. | Kinder-Kurlanda, Katharina, editor.
Title: Internet research ethics for the social age: new challenges, cases, and contexts / edited by Michael Zimmer and Katharina Kinder-Kurlanda.
Description: New York: Peter Lang, 2017. | Includes bibliographical references.
Identifiers: LCCN 2017015655 | ISBN 978-1-4331-4266-6 (paperback: alk. paper) | ISBN 978-1-4331-4267-3 (hardcover: alk. paper) | ISBN 978-1-4331-4268-0 (ebook pdf) | ISBN 978-1-4331-4269-7 (epub) | ISBN 978-1-4331-4270-3 (mobi)
Subjects: LCSH: Internet research—Moral and ethical aspects.
Classification: LCC ZA4228 .I59 2017 | DDC 174/.90014—dc23
LC record available at https://lccn.loc.gov/2017015655
DOI 10.3726/b11077

Bibliographic information published by Die Deutsche Nationalbibliothek. Die Deutsche Nationalbibliothek lists this publication in the “Deutsche Nationalbibliografie”; detailed bibliographic data are available on the Internet at http://dnb.d-nb.de/.

© 2017 Peter Lang Publishing, Inc., New York 29 Broadway, 18th floor, New York, NY 10006 www.peterlang.com


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) United States License.

Table of Contents

Foreword: Grounding Internet Research Ethics 3.0: A View from (the) AoIR (Charles Ess)  ix

Introductory Material  xvii

Introduction (Michael Zimmer and Katharina Kinder-Kurlanda)  xix
Internet Research Ethics: Twenty Years Later (Elizabeth Buchanan)  xxix

Part One: Challenges

Conceptual Challenges  1
Chapter One: Recasting Justice for Internet and Online Industry Research Ethics (Anna Lauren Hoffmann and Anne Jonas)  3
  Reaction by Céline Ehrwein Nihan  19
Chapter Two: A Feminist Perspective on Ethical Digital Methods (Mary Elizabeth Luka, Mélanie Millette, and Jacqueline Wallace)  21
Chapter Three: Sorting Things Out Ethically: Privacy as a Research Issue beyond the Individual (Tobias Matzner and Carsten Ochs)  39
  Reaction by Céline Ehrwein Nihan  53
  Reaction by Christian Pentzold  54
  Reaction by D. E. Wittkower  55
Chapter Four: Chasing ISIS: Network Power, Distributed Ethics and Responsible Social Media Research (Jonathon Hutchinson, Fiona Martin, and Aim Sinpeng)  57
  Reaction by Katleen Gabriels  72
  Reaction by Christian Pentzold  73

Data Challenges
Chapter Five: Lost Umbrellas: Bias and the Right to Be Forgotten in Social Media Research (Rebekah Tromble and Daniela Stockmann)  75
  Reaction by Zoetanya Sujon  92
  Reaction by Arvind  93
Chapter Six: Bad Judgment, Bad Ethics? Validity in Computational Social Media Research (Cornelius Puschmann)  95
  Reaction by Nicholas Proferes  114
Chapter Seven: To Share or Not to Share? Ethical Challenges in Sharing Social Media-based Research Data (Katrin Weller and Katharina Kinder-Kurlanda)  115
  Reaction by Alex  130
  Reaction by Bonnie Tijerina  131

Applied Challenges
Chapter Eight: "We Tend to Err on the Side of Caution": Ethical Challenges Facing Canadian Research Ethics Boards When Overseeing Internet Research (Yukari Seko and Stephen P. Lewis)  133
  Reaction by Michelle C. Forelle and Sarah Myers West  148
  Reaction by Katleen Gabriels  149
Chapter Nine: Internet Research Ethics in a Non-Western Context (Soraj Hongladarom)  151

Part Two: Cases  165
Chapter Ten: Living Labs – An Ethical Challenge for Researchers and Platform Operators (Philipp Schaer)  167
Chapter Eleven: Ethics of Using Online Commercial Crowdsourcing Sites for Academic Research: The Case of Amazon's Mechanical Turk (Matthew Pittman and Kim Sheehan)  177
Chapter Twelve: Museum Ethnography in the Digital Age: Ethical Considerations (Natalia Grincheva)  187
Chapter Thirteen: Participant Anonymity and Participant Observations: Situating the Researcher within Digital Ethnography (James Robson)  195
Chapter Fourteen: The Social Age of "It's Not a Private Problem": Case Study of Ethical and Privacy Concerns in a Digital Ethnography of South Asian Blogs against Intimate Partner Violence (Ishani Mukherjee)  203
Chapter Fifteen: Studying Closed Communities On-line: Digital Methods and Ethical Considerations beyond Informed Consent and Anonymity (Ylva Hård af Segerstad, Dick Kasperowski, Christopher Kullenberg, and Christine Howes)  213
Chapter Sixteen: An Ethical Inquiry into Youth Suicide Prevention Using Social Media Mining (Amaia Eskisabel-Azpiazu, Rebeca Cerezo-Menéndez, and Daniel Gayo-Avello)  227
Chapter Seventeen: Death, Affect and the Ethical Challenges of Outing a Griefsquatter (Lisbeth Klastrup)  235
Chapter Eighteen: Locating Locational Data in Mobile and Social Media (Lee Humphreys)  245
Chapter Nineteen: How Does It Feel to Be Visualized?: Redistributing Ethics (David Moats and Jessamy Perriam)  255

Part Three: Contexts  267
Chapter Twenty: Negotiating Consent, Compensation, and Privacy in Internet Research: PatientsLikeMe.com as a Case Study (Robert Douglas Ferguson)  269
Chapter Twenty-One: The Ethics of Using Hacked Data: Patreon's Data Hack and Academic Data Standards (Nathaniel Poor)  277
Chapter Twenty-Two: The Ethics of Sensory Ethnography: Virtual Reality Fieldwork in Zones of Conflict (Jeff Shuter and Benjamin Burroughs)  281
Chapter Twenty-Three: Images of Faces Gleaned from Social Media in Social Psychological Research on Sexual Orientation (Patrick Sweeney)  287
Chapter Twenty-Four: Twitter Research in the Disaster Context – Ethical Concerns for Working with Historical Datasets (Martina Wengenmeir)  293

Epilogue: Internet Research Ethics for the Social Age (Katharina Kinder-Kurlanda and Michael Zimmer)  299
Contributor Biographies  307


Foreword

Grounding Internet Research Ethics 3.0: A View from (the) AoIR

Charles Ess

Michael Zimmer and Katharina Kinder-Kurlanda have brought together a carefully organized and critically important anthology – important for its own sake and, as I will try to show, as this volume both indexes and inaugurates a third wave of Internet Research Ethics (IRE). That is, we can think of the first era of IRE – IRE 1.0 – as emerging alongside the initial IRE guidelines developed and issued by the Association of Internet Researchers (AoIR) in 2002 (Ess & AoIR Ethics Working Committee). To be sure, as Elizabeth Buchanan makes clear here in her magisterial overview of the past 20 years of IRE, the first AoIR document rests on considerably older roots. At the same time, it served as at least a partial foundation for the second AoIR document, "Ethical Decision-Making and Internet Research: Recommendations from the AoIR Ethics Working Committee (Version 2.0)" (Markham & Buchanan, 2012). As Buchanan shows, this second document – what many of us take as central to IRE 2.0 – was catalyzed by the multiple shifts affiliated with "the era of social computing took hold, circa 2005." These shifts included first and foremost the rise of social networking sites (SNSs) such as MySpace, Friendster, and the now hegemonic Facebook, followed by an explosion of diverse venues and forms of social media that more or less define contemporary communication venues (e.g., Twitter, Snapchat, WhatsApp, and Instagram, just to name the primary usual suspects). These shifts further included dramatically expanded possibilities for "users" to actively engage in internet-facilitated communication venues, e.g., in the form of blogs, certainly, but also by creating and uploading content to sites such as YouTube. Many of the ethical questions and dilemmas taken up by IRE 2.0 were further occasioned by the rise of mobile internet access – i.e., the introduction and diffusion of smartphones, beginning roughly in 2008. Last but certainly not least, as Buchanan notes, the era of Big Data research, beginning ca. 2010, likewise issued in a new range of issues and possible resolutions addressed in the AoIR 2.0 document.

The current volume does nothing less than inaugurate and critically ground an emerging IRE 3.0. To be sure, the volume takes social media as its defining thread: but several of the individual contributions make equally clear that "social media" encompass a staggering range of venues, communicative possibilities, and thereby research interests, methodologies, and so on. To my eye, they thereby highlight and help initiate the still richer and more extensive ethical discussions that will constitute IRE 3.0.

To begin with, it is important to notice that just as IRE 2.0 was an extension, not a rejection, of IRE 1.0 – so IRE 3.0 continues to build upon these previous documents and now rather extensive affiliated literatures. Indeed, I have argued that in some cases of IRE 2.0, "what is old is new again" – i.e., IRE 1.0 considerations surrounding informed consent can prove useful in 2.0 Big Data research projects (Ess, 2016). In these directions, it is not surprising to see in the current volume that concerns with privacy remain paramount. That is, privacy intersects with the most basic obligations of Human Subjects Protections at the foundations of IRE – namely, obligations to preserve anonymity, confidentiality, and informed consent. So privacy concerns figure large in the contributions here from Ishani Mukherjee; Ylva Hård af Segerstad et al.; Amaia Eskisabel-Azpiazu, Rebeca Cerezo-Menéndez, and Daniel Gayo-Avello; Lisbeth Klastrup; Robert Douglas Ferguson; Patrick Sweeney; and Martina Wengenmeir.

But of course, these concerns are often dramatically transformed – not only by the rise of social media and big data, as IRE 2.0 already recognized. In addition, what I see as the most significant novelties here, i.e., what can be taken on board as signature themes and concerns for IRE 3.0, rest on a growing recognition of perhaps the most profound shift affecting IRE – namely, the shift from a primarily individual conception of selfhood and identity towards a more relational conception. This shift emerges early in the volume in the title of the chapter by Tobias Matzner and Carsten Ochs, "Sorting Things Out Ethically: Privacy as a Research Issue beyond the Individual." Matzner and Ochs rightly point out that strictly individualistic conceptions of privacy have shaped by far the greatest bulk of philosophical and ethical theorizing, at least until relatively recently. Taking on board more relational understandings of selfhood means, however, that our understandings of privacy likewise become more relational, i.e., as implicating both the individual and his/her close-tie relationships. Their account of "Post-Individualist Privacy in Internet Research" points correctly to Helen Nissenbaum's now central theory of privacy as "contextual integrity" (2010). At the same time, their discussion is in some ways importantly anticipated in the (Norwegian) National Committee for Research Ethics in the Social Sciences and the Humanities (NESH, 2006; cf. Ess, 2015; Ess & Fossheim, 2013).

Broadly speaking, responsibility in ethics depends upon agency: I am directly responsible (only) for what I can enact, and not responsible for what I cannot initiate or affect ("ought implies can"). Hence relational selfhood – especially as literally distributed across networked communications – entails a (comparatively) new sense of ethical responsibility, i.e., one that understands that responsibility is distributed across the relationships that shape and define us. So Jonathon Hutchinson, Fiona Martin, and Aim Sinpeng, in their "Chasing ISIS: Network Power, Distributed Ethics and Responsible Social Media Research," call for "new professional standards, such as the AoIR guidelines, and to advocate for social media research in context – based on an understanding of the limits of distributed responsibility and the different meanings of social visibility for diverse social media agents, human and non-human." By the same token, in their "How Does It Feel to Be Visualized?: Redistributing Ethics," David Moats and Jess Perriam further explore a distributed ethics as a way of resolving the challenges evoked by the technologies of networked interconnection, including algorithms, APIs, and related research tools.

Finally, this shift towards more relational understandings of selfhood in Western societies directly intersects the attention to Confucian and Buddhist concepts of relational selfhood taken up here by Soraj Hongladarom, as part of his considerations of how an IRE might function across both Western and Eastern cultural differences. Hongladarom's chapter thereby highlights an IRE 1.0 theme that will become all the more significant for IRE 3.0. That is: from the earliest days of developing the AoIR 2002 document, we were acutely aware of the ethical challenges further evoked by the internet as a communication technology more or less oblivious to national borders and thereby the diverse national/cultural backgrounds and traditions that deeply shape and inflect our ethical sensibilities. Our awareness and deliberations in these directions were fostered by many committee members – including the venerable Soraj Hongladarom. Here, Hongladarom highlights the importance of ethical pluralism as a primary way of conjoining shared ethical norms and principles with resolute insistence on the importance of sustaining the multiple and diverse approaches and traditions rooted in distinct cultures: as he notes, such ethical pluralism is already in play in IRE 1.0 (Ess & AoIR Ethics Working Committee, 2002, pp. 29–30). At the same time: one of the most striking dimensions of the AoIR 2016 ethics panels was that, for the first time in our history, both the majority of presentations and participants represented countries and culture domains outside the United States. Both presentations and subsequent discussion made clear that more and more countries and cultures are taking up new initiatives and/or expanding extant initiatives in developing an IRE within their distinctive national and cultural domains. Manifestly, an IRE 3.0 will become ever more informed by these rapidly expanding initiatives.

More immediately, Hutchinson et al. identify an increasingly significant topos that emerges from this relational-distributed focus – namely, that "social media research is often located between corporations and governments, which is often a zone within which an individual researcher has limited power." As they point out, a primary ethical problem resulting from these sorts of entanglements is how researchers are to maintain rigorous standards of scientific integrity, objectivity, accuracy, and so on, vis-à-vis corporate and government agendas that may run contrary to these standards. That these issues are of growing concern can be seen in their prominence in the 2016 AoIR ethics panels: so Franzke, 2016; Locatelli, 2016; and Schäfer & van Schie, 2016.

Expanding the Frameworks: Feminism, Virtue Ethics, and Software Engineers

In both IRE 1.0 and 2.0, feminist approaches to both methodology and ethics played a significant role (e.g., Ess & AoIR Ethics Working Committee, 2002, ftn. 5, p. 29; Hall, Frederick, & Johns, 2004; see Buchanan & Ess, 2008, 276f. for additional discussion and examples). These feminist orientations and approaches are both represented and importantly extended in this volume by Mary Elizabeth Luka, Mélanie Millette and Jacqueline Wallace. Their "A Feminist Perspective on Ethical Digital Methods" pursues the critical ambition to further "the integrated nature of an ethics of care with feminist values that include respecting diversity, understanding the intention of research as a participant and as a researcher, and paying attention to our responsibility as researchers to do no harm in the communities within which we work, even as we aim to generate potentially transformative engagements."

In foregrounding a feminist ethics of care, Luka et al. add to an increasing number of researchers and, more importantly, research communities that likewise – but only very recently – turn to an ethics of care in order to better come to grips with the ethical challenges evoked by networked technologies and the relationships they facilitate and afford. To be sure, since Sara Ruddick's inauguration of care ethics (1980), care ethics has been closely interwoven with both feminist ethics and virtue ethics as these emerged in the 1970s and 1980s. Broadly, virtue ethics centered Norbert Wiener's foundational text ([1950] 1954) in Information and Computing Ethics (ICE); virtue ethics has become increasingly central to ICE in general, and especially so in recent years (e.g., Vallor, 2016) – as it has also become more prominent in approaches to the design of ICTs, perhaps most notably in the work of Sarah Spiekermann (2016). Indeed, Spiekermann's implementations of virtue ethics in ICT design underlie nothing less than the critical new initiative of the IEEE, the "Global Initiative for Ethical Considerations in the Design of Autonomous Systems" (<http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html>).

As this IEEE initiative makes clear, what is most striking here is how virtue ethics broadly and care ethics in particular have been taken up and endorsed by our colleagues in software engineering, networked systems design, and related, centrally "technical" fields. As a further example: at the conclusion of a global, two-year project of seeking to discern the ethical frameworks best suited to the technically intensive field of networked systems research, Bendert Zevenbergen and his colleagues concluded nothing less than that "… virtue ethics should be applied to Internet research and engineering – where the technical persons must fulfil the character traits of the 'virtuous agent'" (Zevenbergen et al., 2016, p. 31: emphasis added, CME; cf. Zevenbergen, 2016). Similarly, Jackson, Aldrovandi, & Hayes (2015) have endorsed both virtue ethics and ethics of care as primary frameworks for the Slándáil Project (2015). The project exploits big data techniques to harvest and analyze social media data during a natural disaster, for the sake of improving the efficiencies of emergency responders. The project is also cross-cultural, involving institutes and agencies from four (Western) countries – Ireland, Italy, Germany, and the U.K. – and thereby invokes an explicit ethical pluralism. The turn to virtue ethics and care ethics begins with an account of the self as relational. In these ways, Jackson et al. directly draw from IRE 1.0 and 2.0 – and instantiate an application of virtue ethics and care ethics directly parallel to the approach articulated here by Luka et al.

Simply as an indication of the growing influence and importance of feminist ethics, these examples would be heartening enough. Even more importantly: these examples index a still more profound development, and thereby a defining characteristic, of IRE 3.0. That is, it is an understatement to say that efforts over the past five decades or so to overcome the profound disciplinary boundaries between applied ethics, on the one hand, and the more technical fields of engineering, computer science, software engineering, and so on, on the other, have been fundamentally challenging on multiple grounds. The recent two decades or so have shown some progress as our more technical colleagues have come to more enthusiastically embrace primary ethical frameworks such as utilitarianism and deontology. But for these colleagues to now go still further and endorse, primarily as a result of their own initiatives and insights, both virtue ethics and care ethics, as clearly feminist, is the disciplinary equivalent of the fall of the Berlin Wall.

Both within and beyond this volume, then, we could hardly ask for better starting points for IRE 3.0. To be sure, there are additional issues and topics that will require (re)new(ed) attention in this third wave. For example, the relational-distributed focus foregrounds the increasingly central issue of the need to protect researchers as much as (if not more than) our informants, as their research risks exposing them to the full array of hate speech, threats, and acts that are now routinely directed at them – especially if they are women researching predominantly male hate behaviors (e.g., Massanari, 2016). Another increasingly central issue, brought forward here by Katrin Weller and Katharina Kinder-Kurlanda ("To Share or Not to Share? Ethical Challenges in Sharing Social Media-based Research Data"), concerns the multiple ethical issues confronting researchers who increasingly depend on commercial sources for "big data" – and/or "grey data," i.e., data that has been leaked and made public by hackers: so Nathaniel Poor's "The Ethics of Using Hacked Data: Patreon's Data Hack and Academic Data Standards." For relational selves, "sharing is caring" – but such sharing is often ethically fraught in ways that remain to be fully explored and at least partially resolved.

Of course, still more issues – and, ideally, possible resolutions – will fill the agenda of our developing IRE 3.0. But as I hope these comments make clear: if anyone needs or wants to know what IRE 3.0 will look like, s/he can do no better than to begin with this volume.

References

Buchanan, E., & Ess, C. (2008). Internet research ethics: The field and its critical issues. In K. Himma & H. Tavani (Eds.), The handbook of information and computer ethics (pp. 273–292). Hoboken, NJ: John Wiley & Sons.
Ess, C. (2015). New selves, new research ethics? In H. Ingierd & H. Fossheim (Eds.), Internet research ethics (pp. 48–76). Oslo: Cappelen Damm. Retrieved from Open Access: http://press.nordico-
Ess, C. (2016, October 6). Introduction, Internet research ethics roundtable I: New problems, new relationships. Panel presentation, Association of Internet Researchers annual Internet Research conference, Berlin.
Ess, C., & AoIR Ethics Working Committee. (2002). Ethical decision-making and Internet research: Recommendations from the AoIR Ethics Working Committee. Retrieved from http://www.aoir.org/reports/ethics.pdf
Ess, C., & Fossheim, H. (2013). Personal data: Changing selves, changing privacy expectations. In M. Hildebrandt, K. O'Hara, & M. Waidner (Eds.), Digital enlightenment forum yearbook 2013: The value of personal data (pp. 40–55). Amsterdam: IOS Amsterdam.
Franzke, A. (2016). Big data ethicist: What will the role of the ethicist be in advising governments in the field of big data? (MA thesis in applied ethics). Utrecht University. Retrieved from http://tinyurl.
Hall, G. J., Frederick, D., & Johns, M. D. (2004). "NEED HELP ASAP!!!": A feminist communitarian approach to online research ethics. In M. Johns, S. L. Chen, & J. Hall (Eds.), Online social research: Methods, issues, and ethics (pp. 239–252). New York, NY: Peter Lang.
Jackson, D., Aldrovandi, C., & Hayes, P. (2015). Ethical framework for a disaster management decision support system which harvests social media data on a large scale. In N. Bellamine Ben Saoud et al. (Eds.), ISCRAM-med 2015 (pp. 167–180), LNBIP 233. doi:10.1007/978-3-319-24399-3_15
Locatelli, E. (2016, October 6). Social media and erasing the boundaries between academic/critical and corporate research. Panel presentation, Association of Internet Researchers annual Internet Research conference, Berlin.
Markham, A., & Buchanan, E. (2012). Ethical decision-making and Internet research, Version 2.0: Recommendations from the AoIR Ethics Working Committee. Retrieved from www.aoir.org/reports/
Massanari, A. (2016, October 6). The changing nature of research in the age of #Gamergate. Panel presentation, Association of Internet Researchers annual Internet Research conference, Berlin.
NESH (National Committee for Research Ethics in the Social Sciences and the Humanities). (2006). Forskningsetiske retningslinjer for samfunnsvitenskap, humaniora, juss og teologi [Research ethics guidelines for social sciences, the humanities, law and theology]. Retrieved from http://www.etikkom.no/Documents/Publikasjoner-som-PDF/Forskningsetiske%20retningslinjer%20for%20
Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford University Press.
Ruddick, S. (1980). Maternal thinking. Feminist Studies, 6(2), 342–367.
Schäfer, M. T., & van Schie, G. (2016, October 6). Big data and negotiating the blurring boundaries between academic, corporate, and public institution research. Panel presentation, Association of Internet Researchers annual Internet Research conference, Berlin.
Spiekermann, S. (2016). Ethical IT innovation: A value-based system design approach. New York, NY: Taylor & Francis.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford: Oxford University Press.
Wiener, N. ([1950] 1954). The human use of human beings: Cybernetics and society. Boston, MA: Houghton Mifflin; 2nd rev. ed., New York, NY: Doubleday Anchor.
Zevenbergen, B. (2016). Networked systems ethics. Ethics in networked systems research: Ethical, legal and policy reasoning for Internet engineering. Oxford Internet Institute, University of Oxford. Retrieved from http://networkedsystemsethics.net/
Zevenbergen, B., Mittelstadt, B., Véliz, C., Detweiler, C., Cath, C., Savulescu, J., & Whittaker, M. (2015). Philosophy meets Internet engineering: Ethics in networked systems research (GTC workshop outcomes paper). Oxford Internet Institute, University of Oxford.

Introductory Material



Introduction

Michael Zimmer and Katharina Kinder-Kurlanda

The internet and related social media technologies and platforms have opened up vast new means for communication, socialization, expression, and collaboration. They also have provided new resources for researchers seeking to explore, observe, and measure human opinions, activities, and interactions. Increasingly, social media tools are used to aid traditional research: subjects might be recruited through Facebook or Twitter, surveys are administered and shared online, and data is often stored and processed on social and collaborative web-based platforms and repositories. Social media has also emerged as a preferred domain for research itself: ethnographies take place within massively online social environments, entire collections of Facebook profile pages are scraped for data analysis, and the content of public Twitter streams is routinely mined for academic studies. And we have now entered the era of big data, where researchers can access petabytes of transactional data, clickstreams and cookie logs, media files, and digital archives, as well as pervasive data from social networks, mobile phones, and sensors.

In short, academic research has begun to fully embrace what Maria Azua, a Vice President of Technology and Innovation at IBM, describes in her book, The Social Factor: Innovate, Ignite, and Win through Mass Collaboration and Social Networking, as "the social age": the leveraging of the internet and pervasive connected devices to enhance communication, information exchange, collaboration, and social interactions (Azua, 2009, p. 1). As a result, researchers studying the internet and big data find themselves immersed in a domain where information flows freely but is also potentially bound by contextual norms and expectations, where platforms may oscillate between open and closed information flows, and where data may be user-generated, filtered, algorithmically processed, or proprietary.

Consequently, just as in its offline counterpart, internet and social media-based research raises critical ethical concerns about the role of the "subject" in such research endeavors. Various disciplines with a history of engaging in human subjects research (such as medicine, anthropology, psychology, and communication) have longstanding ethical codes and policies intended to guide researchers and those charged with ensuring that research on human subjects follows both legal requirements and ethical practices, and often ethical review boards are charged with approving, monitoring, and reviewing research involving humans to ensure the rights and welfare of the research subjects are protected. But in the so-called "social age" – where individuals increasingly share personal information on platforms with porous and shifting boundaries, the aggregation of data from disparate sources is increasingly the norm, and web-based services and their privacy policies and terms of service statements change too rapidly for an average user to keep up – the ethical frameworks and assumptions traditionally used by researchers and review boards alike are frequently challenged and, in some cases, inadequate.

Researchers using the internet as a tool or a space of research – and those tasked with facilitating and monitoring ethical research, such as ethical review boards – are confronted with a continuously expanding set of ethical dilemmas:

What ethical obligations do researchers have to protect the privacy of subjects engaging in activities in "public" internet spaces? Which national or international ethical standards apply when researching global networks, communities, or information flows? How is confidentiality or anonymity assured online? How is and should informed consent be obtained online? How should research on minors be conducted, and how do you prove a subject is not a minor? Is deception (pretending to be someone you are not, or withholding identifiable information) in online spaces a norm, or a harm? Is "harm" possible to someone existing online in digital spaces? What are researchers' obligations in spaces which are governed by platform providers? How should we contend with inequalities in data access and uncertainties about data provenance and quality?

In recent years, a growing number of scholars have started to address many of these open questions within this new domain of internet research ethics (see, for example, Buchanan, 2004; Buchanan & Zimmer, 2016; Heider & Massanari, 2012; Markham & Baym, 2008; McKee & Porter, 2009). As regulatory authorities responsible for the oversight of human subject research are starting to confront the myriad of ethical concerns internet-based research brings to light (see, for example, SACHRP, 2013), numerous scholarly associations have drafted ethical guidelines for internet research, including the American Psychological Association's Advisory Group on Conducting Research on the Internet, the Association of Internet Researchers (AoIR) Ethics Working Group Guidelines, and the Norwegian National Committee for Research Ethics in the Social Sciences and the Humanities.

Yet, even with increased attention and guidance surrounding internet research ethics, significant gaps in our understanding and practices persist. Across the research community, for example, there is disagreement over basic research ethics questions and policies, such as what constitutes "public" content and at what stage computational research becomes human subjects research (see, for example, Carpenter & Dittrich, 2011; Metcalf & Crawford, 2016; Zimmer, 2016). Further, media attention surrounding Facebook's "emotional contagion" experiment (Kramer, Guillory, & Hancock, 2014) sparked widespread concern over how internet companies conduct research on their own users, and big data research generally (see, for example, Fiesler et al., 2015; Goel, 2014; Schroeder, 2014).

Our goal with Internet Research Ethics for the Social Age: New Challenges, Cases, and Contexts is to directly engage with these discussions and debates, and to help stimulate new ways to think about – and work towards resolving – the novel ethical dilemmas we face as internet and social media-based research continues to evolve. The chapters within this volume – which present novel ethical challenges, case studies, and emerging research contexts from a collection of global scholars and researchers – accomplish this in three critical ways:

First, as internet tools and social platforms continue to evolve at a rapid pace, Internet Research Ethics for the Social Age highlights new research contexts and case studies that introduce readers to unique uses – and related ethical concerns – of current state-of-the-art technologies and platforms, including crowdsourcing on Amazon Mechanical Turk, the health sharing platform PatientsLikeMe, new forms of data visualization and facial recognition platforms, and automated tools for flagging potentially suicidal behavior online.

Second, Internet Research Ethics for the Social Age recognizes the broad disciplinary terrain impacted by internet-based research, and brings together discussions of ethical issues from the familiar domains of the social sciences (such as communication studies, sociology, and psychology) alongside perspectives from computer science, data science, gender studies, museum studies, and philosophy. The result is a more inclusive umbrella of domains that can learn from each other and collaborate to confront the challenges of internet research ethics.

Third, Internet Research Ethics for the Social Age provides a global approach to the challenges of internet research ethics, bringing together contributions from researchers in diverse regulatory environments, as well as those dealing with the complex ethical dimensions of researching platforms and users in geographically diverse locations. Global regions and cultures represented within the volume include Australia, Canada, Denmark, Germany, the Netherlands, New Zealand, South Asia, Spain, Sweden, Thailand, the United Kingdom, and the United States.


Organization of the Book

To set the stage for the current state of internet and social media research ethics, immediately following this Introduction is a short historical essay, "Internet Research Ethics: Twenty Years Later," where Elizabeth Buchanan, the leading expert in the field, outlines the trajectory internet research ethics has taken since its emergence in the mid-1990s. Buchanan reflects on the myriad of changes in technologies, users, research disciplines, and ethical considerations that have evolved over the last two decades.

In order to include the broadest range of voices, perspectives, and research domains, Internet Research Ethics for the Social Age: New Challenges, Cases, and Contexts has three main forms of contributions: Challenges, Cases, and Contexts. The opening Challenges section features nine chapters, each providing an in-depth discussion of new conceptual challenges faced by researchers engaging in internet and social media-based research, organized into three categories of Conceptual Challenges, Data Challenges, and Applied Challenges. In an attempt to highlight that many of the addressed issues are subject to ongoing, lively, and often controversial discussions within various research communities, and to ensure the broadest set of viewpoints, each of the nine Challenges is followed by a brief reaction. These reaction pieces were provided by leading thinkers in the field, who often looked at the issue from a complementary – or even contrasting – point of view. The second part of the book includes ten Cases, brief discussions of unique social media and internet-based research projects that generated novel ethical challenges for the investigators. The final section, Contexts, presents five short descriptions of new research contexts that describe emerging technologies and platforms that present new ethical dilemmas for researchers.

Challenges

The first of four Conceptual Challenges is "Recasting Justice for Internet and Online Industry Research Ethics," where Anna Lauren Hoffmann and Anne Jonas argue for a broadening of how we conceive of the principle of justice within research ethics, especially in the face of increased industry-sponsored research activities that impact vulnerable or disadvantaged user populations. The next chapter, "A Feminist Perspective on Ethical Digital Methods" by Mary Elizabeth Luka, Mélanie Millette, and Jacqueline Wallace, makes a similar demand for deriving more rigorous ethical protocols, working from feminist epistemological and methodological foundations. In the third Conceptual Challenge, "Sorting Things Out Ethically: Privacy as a Research Issue beyond the Individual," Tobias Matzner and Carsten Ochs argue that, given the explosive use of "social data" within digital research, the longstanding concern over privacy within research ethics must be re-conceptualized beyond a traditional individualist notion, and recognize that privacy is inherently social, relational, and sociotechnical in the context of internet-based research. Jonathon Hutchinson, Fiona Martin, and Aim Sinpeng continue this impetus for reconceptualizing traditional research ethics in the face of new social and digital research methods and environments. Building from a case study of the communicative activities of Islamic State (IS) agents, their contribution, "Chasing ISIS: Network Power, Distributed Ethics and Responsible Social Media Research," argues that ethical review boards must move beyond purely normative approaches and embrace more procedural approaches to ethical decision-making, while also recognizing the networked and distributive nature of internet-based research environments and actors.

Three important Data Challenges follow, led by Rebekah Tromble and Daniela Stockmann's chapter, "Lost Umbrellas: Bias and the Right to Be Forgotten in Social Media Research," which outlines how the so-called right to be forgotten creates new challenges for scholars conducting internet research, including research subjects' privacy rights if data originally made public on social media platforms is later deleted, or when consent might be withdrawn. The next chapter is "Bad Judgment, Bad Ethics? Validity in Computational Social Media Research," where Cornelius Puschmann argues that concerns over data quality in empirical social media research are more than just a methodological issue, but also have considerable ethical ramifications. The third Data Challenge comes from Katrin Weller and Katharina Kinder-Kurlanda, whose contribution, "To Share or Not to Share? Ethical Challenges in Sharing Social Media-based Research Data," reports on the ethical decision-making processes of social media researchers confronted with the question of whether – and under what conditions – to share datasets.

Finally, two Applied Challenges are presented. In "'We Tend to Err on the Side of Caution': Ethical Challenges Facing Canadian Research Ethics Boards When Overseeing Internet Research," Yukari Seko and Stephen Lewis report on the ethical challenges Canadian ethical review boards face when reviewing internet research protocols, illuminating various gaps in both the regulatory framework and the underlying ethical assumptions that need to be addressed to ensure proper guidance is taking place. The challenge of pragmatically applying research ethics is further explored in Soraj Hongladarom's contribution, "Internet Research Ethics in a Non-Western Context," which demands increased sensitivity to cultural norms and traditions when internet-based research extends across one's local environment, and suggests that traditional ethical frameworks and guidelines must be flexible and open to adaptation when applied outside of the Western context.


Cases

The second part of the book includes ten Cases, brief discussions of unique social media and internet-based research projects that have generated novel ethical challenges for the investigators. In "Living Labs – An Ethical Challenge for Researchers and Platform Operators," for example, Philipp Schaer introduces the "living labs" paradigm, where industry-based researchers conduct experiments in real-world environments or systems – such as the ill-fated Facebook emotional contagion experiment – and outlines numerous ethical concerns that researchers must address in such scenarios. Another helpful case study comes from Matthew Pittman and Kim Sheehan, who discuss the growing use of crowdsourcing in scholarly research in "Ethics of Using Online Commercial Crowdsourcing Sites for Academic Research: The Case of Amazon's Mechanical Turk."

The unique case of engaging in digital ethnography within online museum communities presents numerous ethical challenges related to privacy, publicness, and consent, according to Natalia Grincheva's contribution, "Museum Ethnography in the Digital Age: Ethical Considerations." This discussion of digital ethnographies is continued in "Participant Anonymity and Participant Observations: Situating the Researcher within Digital Ethnography," where James Robson engages with the ethical challenges of ensuring participant anonymity, as well as researcher integrity, in digital environments.

In "The Social Age of 'It's Not a Private Problem': Case Study of Ethical and Privacy Concerns in a Digital Ethnography of South Asian Blogs against Intimate Partner Violence," Ishani Mukherjee presents her study of blogs focusing on violence against immigrant women within the South Asian-American diaspora, revealing various ethical concerns that stem from publishing testimonies of participants within these digital spaces, which might act as identity markers and broadcast information that is private, sensitive, and controversial. Consideration of such ethical challenges in studying sensitive online communities is continued in "Studying Closed Communities On-line: Digital Methods and Ethical Considerations beyond Informed Consent and Anonymity," by Ylva Hård af Segerstad, Christine Howes, Dick Kasperowski, and Christopher Kullenberg. In their case study of closed Facebook groups for parents coping with the loss of a child, the authors point to ways of modifying our research methods when studying vulnerable communities online, including the use of various digital tools and techniques to help protect subject anonymity (a minimal sketch of one such technique follows at the end of this section).

The next case, "An Ethical Inquiry into Youth Suicide Prevention Using Social Media Mining" by Amaia Eskisabel-Azpiazu, Rebeca Cerezo-Menéndez, and Daniel Gayo-Avello, discusses the increasing interest in mining social media feeds for clues about the feelings and thoughts of potentially suicidal users, and highlights the related ethical concerns of privacy, anonymity, stalking, and the algorithmic challenges of trying to profile sensitive online activities. In "Death, Affect and the Ethical Challenges of Outing a Griefsquatter," Lisbeth Klastrup shares a unique case where, while researching Facebook memorial pages for recently deceased users, she came across an imposter creating public memorial pages for people he did not know, apparently in order to attract attention. Klastrup discusses the ethical dilemma of whether to go public with the name of this imposter, and the resulting aftermath.

Lee Humphreys, in her case on "Locating Locational Data in Mobile and Social Media," addresses how locations, as immensely meaningful information, are increasingly a byproduct of various interactions on the internet but are accompanied by inequalities in the ability to access and understand locational data. And in the final case, "How Does It Feel to Be Visualized?: Redistributing Ethics," David Moats and Jess Perriam delve into the world of visualizations of data on social media platforms and explain the need for a distributed ethics that also considers the growing role of tools, APIs, and algorithms in the research process.
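The chapter summaries above mention digital tools and techniques for protecting subject anonymity without spelling them out. Purely as an illustrative aside – not drawn from any chapter in the volume – the short Python sketch below shows one commonly used safeguard: replacing user identifiers with keyed pseudonyms before a social media dataset is analyzed or shared. The key, sample data, and function names here are hypothetical.

```python
# Illustrative sketch: pseudonymize user identifiers with a keyed hash (HMAC),
# so the same user always receives the same pseudonym, but the mapping cannot
# be reversed without the researcher-held key.
import hashlib
import hmac

SECRET_KEY = b"researcher-held-secret"  # hypothetical; keep private, never publish


def pseudonymize(user_id: str) -> str:
    """Return a stable, non-reversible pseudonym for a user identifier."""
    digest = hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]


posts = [
    {"user": "alice_1984", "text": "Feeling overwhelmed today..."},
    {"user": "bob.smith", "text": "Thanks everyone for the support."},
]

# Replace identifiers before analysis or sharing; the original IDs are not stored.
anonymized = [{"user": pseudonymize(p["user"]), "text": p["text"]} for p in posts]
print(anonymized)
```

Note that pseudonymization alone is rarely sufficient for the vulnerable communities discussed above, since post text itself can re-identify an author; it is only one tool among the several that these chapters weigh against informed consent and contextual norms.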

Contexts

The final section of the book presents five short Contexts, descriptions of new research domains, technologies, and platforms that present new ethical dilemmas for researchers. Robert Douglas Ferguson reflects on the use of online health data in his ethnographic study in the contribution "Negotiating Consent, Compensation, and Privacy in Internet Research: PatientsLikeMe.com as a Case Study," and discusses issues around informed consent, compensation, and privacy. In "The Ethics of Using Hacked Data: Patreon's Data Hack and Academic Data Standards," Nathaniel Poor shows the ethical dilemmas and complex decision-making processes required to deal with an opportunity to use hacked data for research. Jeff Shuter and Benjamin Burroughs, in their contribution "The Ethics of Sensory Ethnography: Virtual Reality Fieldwork in Zones of Conflict," explore the ethical issues of using virtual reality devices in ethnographic fieldwork and show how sensory ethnography may need to be redefined. In "Images of Faces Gleaned from Social Media in Social Psychological Research on Sexual Orientation," Patrick Sweeney raises concerns about using images of human faces downloaded from social media profiles for use in experiments that ask participants to assess sexual orientation. He concludes that social psychologists are struggling with the blurring of what counts as private or public content on the internet. And Martina Wengenmeir's example of "Twitter Research in the Disaster Context – Ethical Concerns for Working with Historical Datasets" shows how Twitter users may, for example during crises such as earthquakes, publicly post messages without any consideration of privacy implications – which requires researchers to reflect on the circumstances of origination of a dataset when making decisions about how to present or clean the data.

Acknowledgements

We would like to thank our authors for their thoughtful, innovative, and stimulating contributions, revealing how varied the perspectives are from which individual researchers approach the internet and social media as an object of study. The ethical challenges you faced were often idiosyncratic, unexpected, or tricky to deal with, and the work presented here confirms your commitment to engaging in ethically informed research. We particularly would like to thank Charles Ess and Elizabeth Buchanan for helping "set the scene" for the book, and our generous colleagues who provided short reaction pieces: your valuable insights have greatly improved the volume. We would also like to thank the editorial team at Peter Lang for their advice and excellent collaboration in making this book a reality. Last, but certainly not least, we thank Rebekka Kugler, our editorial assistant, who, with the help of Marcel Gehrke, did a fantastic job proofreading and formatting the manuscripts.

References

Azua, M. (2009). The social factor: Innovate, ignite, and win through mass collaboration and social networking. Boston, MA: Pearson Education.
Buchanan, E. (Ed.). (2004). Readings in virtual research ethics: Issues and controversies. Hershey, PA: Idea Group.
Buchanan, E., & Zimmer, M. (2016). Internet research ethics (revised). In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2016 ed.). Retrieved from http://plato.stanford.edu/entries/ethics-internet-research/
Carpenter, K., & Dittrich, D. (2011). Bridging the distance: Removing the technology buffer and seeking consistent ethical analysis in computer security research. 1st International Digital Ethics Symposium, October 28, Chicago, IL.
Fiesler, C., Young, A., Peyton, T., Bruckman, A. S., Gray, M., Hancock, J., & Lutters, W. (2015). Ethics for studying online sociotechnical systems in a big data world. In Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing (pp. 289–292). New York, NY.
Goel, V. (2014, August 12). As data overflows online, researchers grapple with ethics. The New York Times. Retrieved from http://www.nytimes.com/2014/08/13/technology/the-boon-of-online-data-puts-social-science-in-a-quandary.html
Heider, D., & Massanari, A. L. (Eds.). (2012). Digital ethics: Research and practice. New York, NY: Peter Lang.
Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790.
Markham, A., & Baym, N. (Eds.). (2008). Internet inquiry: Conversations about method. Los Angeles, CA: Sage Publications.
McKee, H. A., & Porter, J. E. (2009). The ethics of Internet research: A rhetorical, case-based process. New York, NY: Peter Lang.
Metcalf, J., & Crawford, K. (2016). Where are human subjects in Big Data research? The emerging ethics divide. Big Data & Society, 3(1), 1–14.
Schroeder, R. (2014). Big data and the brave new world of social media research. Big Data & Society, 1(2), 1–11.
Secretary's Advisory Committee on Human Research Protections (SACHRP). (2013). Considerations and recommendations concerning Internet research and human subjects research regulations, with revisions. SACHRP meeting March 12–13, 2013. Retrieved from http://www.hhs.gov/ohrp/sachrp/mtgings/2013%20March%20Mtg/internet_research.pdf
Zimmer, M. (2016, May 14). OkCupid study reveals the perils of big-data science. Wired.com. Retrieved May 29, 2016 from https://www.wired.com/2016/05/okcupid-study-reveals-perils-big-data-science/


Internet Research Ethics

Twenty Years Later

Elizabeth Buchanan

Introduction

It has been twenty years since the seminal issue of The Information Society which opened the scholarly discourse on internet research ethics in an expansive way. At that time, Rob Kling commented that "research ethics are normally a specialty topic with relatively little public interest" (1996, p. 103). Little did anyone suspect that in less than twenty years, a technological revolution would occur and such platforms and tools as America Online, Facebook, Google, Twitter, and countless others would make research ethics a topic of near daily conversation. Research ethics would grow beyond a narrow consideration for predominantly medical/biomedical researchers to directly include social-behavioral-educational and computational researchers; these researchers would define and engage with "research subjects" or "research participants" in novel ways. Indeed, the notion of a "research participant" itself has been redefined. And the fields or locales of research settings moved beyond traditional field sites, physical spaces, or labs, to online spaces, virtual worlds, chat rooms, profiles, and massive open courses. In twenty years, the internet has contributed to a new paradigm in the research enterprise, contesting long-standing tenets and principles of research methods and praxis; but has internet research changed the foundations from which considerations of traditional research ethics principles begin? Researchers have reflected on and debated these questions many times and in many different ways: Is there something unique about internet research, and are there new ethical standards or principles that apply?


Since that 1996 issue and its influential articles that demanded attention to such myriad issues as identification and representation of subjects and participants, informed consent, tensions and discrepancies between public and private spaces, and rights and responsibilities of researchers in online spaces, the internet research ethics landscape has grown; discrete disciplinary approaches have developed, and, so has the inter- and cross-disciplinary field of internet studies, complete with its own methods and ethics. Researchers from across the disciplines, and their research ethics boards, have debated what is “ethical” and how ethics are deter- mined in light of new methods and new spaces. Who, or what, determines those ethics, and for whom? Along these twenty years, the role of ethics boards has been disputed, as standing definitions of core principles themselves, for example, “human subjects,” or “private information” have faced multiple, and inconsistent, interpretations. In these twenty years, the research enterprise became dispersed, with more stakeholders with competing interests, and competing ethics. There is, at least, one constant sentiment, which began in the 1996 Informa- tion Society papers, and continues through the chapters herein, and across internet research as a whole: Internet research complexities more often than not end with the following conclusions: “We need more ethical understanding;” “we need more ethical training;” “we need more ethics in our research;” “we need more ethics in our computing practices;” “we need to explore the ethics of [social media, big data, algorithms …].” Are these conclusions sufficient, now that ethical controversy has become commonplace, routine, with a new case, a new dilemma, arising consistently and persistently? How many Facebook experiments, or reidentifications need to occur before researchers are able to answer with confidence “a” correct, ethical response and reaction? What has the past twenty years of internet research ethics taught the research enterprise, and how does it progress into the next decade? The past twenty years has produced countless volumes, papers, workshops, regulatory responses, and yet, researchers across disciplines continue to seek answers or reme- dies. Researchers borrow from different ethical frameworks to explain an issue, but still often conclude with “more exploration is needed.” To understand where internet research ethics is circa 2016, and to understand why it is so incredibly challenging to respond with certainty to the ethics of […], consider how experientially and epistemically different the internet is today. It is fundamentally different than it was in the mid-1990s when there was no Google, no Facebook, no Twitter, and there was much weaker computing and algorithmic processing, much less manipulation of data, and much less social data, and much fewer sources and consumers of “public” data. We were not in an era of ubiqui- tous and pervasive connectivity; there were fewer stakeholders in the fields and locales of internet research. That concept of the internet as a medium or locale was sufficient; individuals “got online,” and chose the who, what, where, and when
of participation. There was an active researcher role and, more often than not, a clearly delineated research frame (Buchanan, 2016). Then, the operational definition of internet research was established, and it suited the times:

Internet-based research, broadly defined, is research which utilizes the Internet to collect information through an online tool, such as an online survey; studies about how people use the Internet, e.g., through collecting data and/or examining activities in or on any online environments; and/or, uses of online datasets, databases, or repositories. (Buchanan & Zimmer, 2016)

By extension, internet research ethics was deemed “the analysis of ethical issues and application of research ethics principles as they pertain to research conducted on and in the Internet” (Buchanan & Zimmer, 2016). The ethical foundation chosen early on, from the inception of “internet research ethics,” was modeled after traditional principles of research ethics, namely justice, beneficence, and respect for persons (Frankel & Siang, 1999). This seemed logical and appropriate, as these concepts were considered somewhat universal, having their origins in the Declaration of Helsinki and the UN Declaration of Human Rights. But Frankel and Siang pointed to areas of contestation in the applicability of the principles to internet research, and there were legitimate reasons to question this applicability, as these principles were grounded in biomedical research and are applied to research in a very precise way in the biomedical model. Not only has their application in and to social-behavioral-educational-computational research in general been questioned, but internet research in particular pushed direct alignment and applicability even further. Obvious reasons, such as epistemological and ontological differences in the disciplines and their methods, account for some of the disagreements, but has the prevalence of these three principles clouded clarity around internet research ethics? We are, some twenty years into the field, still asking those fundamental questions about the ethical specificity of internet research. Twenty years later, fundamental issues and questions linger, albeit in different modalities. Internet research in the late 1990s and early 2000s was dominated by online surveys, participant observation, online interviews and focus groups, and great interest in newsgroups, support groups, and online communities. Researchers were actively engaging and observing participants and communities; large-scale data surveillance and scraping from those communities were still a few years away. This early phase of internet research saw ethical conundrums in relation to basic research considerations: Is this “human subjects research”? Is an avatar a human subject? Is consent needed? Is an internet message board a public space? Is an IP address identifiable private information? (AoIR, 2002). Researchers in this first phase of internet research would quickly come to see these questions as somewhat straightforward, once the era of social computing took hold, circa 2005. Social computing and social media became interchangeable with “the internet.”


Looking back, the transition to the era of social was swift. Ethical reflections grounded in the 1999 internet were not necessarily fitting, but newfound questions were already at play. The 1990s internet now seems immature, unsophisticated, in comparison to the intricacies and strengths of Facebook, Google, and Twitter, and the powerful analytics that drive them. The early processes for conducting and participating in research, those that required a conscious and intentional act, were changing. Analytical processes were quickly growing and expanding, becoming more dominant. Simultaneously, while individuals had greater means to create and share their data, there was also less awareness of the pervasiveness of data analytics and the omnipresent research that was underway. From cloud-based ubiquity to mobile and wearable devices and the internet of things, the second phase of internet research ethics, the social era, would dominate, and would eventually overlap with the era of big data research, circa 2010. As 2016 closes, the era of big data continues to push the research enterprise in pioneering ways; the rate at which knowledge production and scientific findings occur is unprecedented. Social knowledge, citizen science, crowd-based solutions, and a global knowledge enterprise characterize this time. Internet research has become so multi-faceted that a congruous definition eludes any particular discipline or methodological approach. And, as for ethics, the discourse of big data points to at least four major trends: the rise of large-scale data sets and correspondingly immense processing power available to some stakeholders; non-consensual or minimally-consensual access to and sharing of individual data; data misappropriation; and different values and ideologies between and among creators, processors, distributors, and consumers of data. Not surprisingly, the regulatory arm of the research enterprise has struggled over the past twenty years, as the concept of “human subjects” merges with “data subjects.” The research industry, too, looks and acts differently today than it did twenty years ago. In the United States, for example, research regulations have not kept up with methods and stakeholders such as those involved in the Facebook Contagion Study. There was no regulatory response despite the significant debates about the ethics of the methods and procedures used. But, to some researchers, there was nothing ethically dubious about those methods or procedures – they were simply a sign of the times, characteristic of big data research. While readying themselves for the next frame of internet research, researchers across the globe face significant regulatory changes, including changes to the ways in which ethics review and approval is and should be sought and obtained; fundamental definitions of privacy and identification have been, and are, under revision, as in the European Union’s “right to be forgotten” and the proposed changes to the US Common Rule. Twenty years later, research ethics have become commonplace; discussions about misaligned data releases or violations of personal privacy are hardly shocking


because of the degree to which we are all now data subjects, participants in or of some social experiment. Twenty years from now, we will look back at this time of social data and big data and think: how simple. If only respect, beneficence, and justice were so simple.

References

Buchanan, E. (2016). Ethics in digital research. In M. Nolden, G. Rebane, & M. Schreiter (Eds.), The handbook of social practices and digital everyday worlds (pp. forthcoming). Springer.

Buchanan, E. A., & Zimmer, M. (2016). Internet research ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2016 ed.). Retrieved from http://plato.stanford.edu/archives/

Ess, C., & AoIR. (2002). Ethical decision-making and Internet research: Recommendations from the AoIR ethics working committee. Retrieved from http://aoir.org/reports/ethics.pdf

Frankel, M. S., & Siang, S. (1999, June 10–11). Ethical and legal aspects of human subjects research in cyberspace. A report of a workshop. Washington, DC: American Association for the Advancement of Science.

Kling, R. (1996). Letter from Rob Kling. The Information Society, 12(2), 103–106.

Part One: Challenges

Conceptual Challenges, 3–73
Data Challenges, 75–131
Applied, 133–164

Chapter One

Recasting Justice for Internet and Online Industry Research Ethics

Anna Lauren Hoffmann and Anne Jonas

Introduction

The rise of behavioral research and experimentation by online companies and, in particular, social networking sites presents new challenges for internet research ethics (IRE) today. It has also reinvigorated conversations about respect, autonomy, and consent in online research, especially as they relate to users who may not know that their data is being used for research purposes. Compared to these values, however, the value of justice for online industry research has received relatively little attention. In this chapter, we revisit feminist and other discussions of conventional research ethics that argue for a conception of justice in research that goes beyond matters of subject selection and distribution of benefits and burdens. After doing so, we explore the implications of a more expansive notion of justice for industry researchers who have greater access to – and power to influence – the design, practices, and (often proprietary) systems of online platforms that are often hostile or violent environments for vulnerable or otherwise disadvantaged populations. As we discuss below, conditions of harassment and abuse on social networking sites systematically affect women and people of color. Consequently, these groups shoulder a greater share of the social, political, and emotional burden of online participation – the very thing that generates the sorts of data that support the efforts of industry researchers. In view of this, we argue that researchers have – specifically as a matter of research ethics – an obligation not only to avoid replicating or


compounding existing injustices in their research but also to foreground the needs and safety of vulnerable users and to attend to those conditions and practices that give rise to injustice in the first place.

Justice and Research Ethics

Justice occupies a precarious position in the history of research policy and ethics. In some instances, concerns of justice are explicitly detailed, as in ideals of fairness in subject recruitment and in the distribution of research’s burdens and benefits. In other ways, justice seems implicit in all values relevant to research ethics, like respect for persons, informed consent, and beneficence. For example, informed consent is – in part – an expression of classical liberalism’s prioritization of individual liberty and autonomy in considerations of justice. However, despite its explicit and implicit value, justice has received comparatively less explicit attention than other values, especially informed consent and beneficence. Arguably, the sub-domain of internet research ethics has inherited this relative blind spot; while justice is by no means absent in discussions of IRE, it has received considerably less attention than issues of consent, privacy, and data security.1 Below, we briefly map this tension – between implicit and explicit considerations of justice – in both IRE and research ethics generally. Along the way, we demonstrate some of the various ideas of justice employed in research ethics conversations, and we articulate some of the challenges to these ideas advanced by critics.

Justice in a research context refers largely to social justice, that is, the articulation of fair or equitable relations between individuals and between individuals and institutions in a given society.2 Though social justice has been articulated in different ways at different times, the term as we use it today owes its meaning to political and scholarly movements of the early and mid-20th century. In the United States context, efforts by labor unions in the 1930s brought attention to unfair and oppressive practices of employers and were influential in gaining protections and benefits for American workers (Goldfield, 1989). During the 1960s and 1970s, activist and student movements in the United States and in other countries brought a number of social issues to the forefront of public consciousness, in particular race, gender, and poverty. For example, organizing and activism around issues of racism sought greater rights and political liberties as well as the alleviation of oppression and the liberation of self-respect for marginalized racial groups (Boxill, 1992; Thomas, 1983). Parallel to these social and political developments, discussions of social justice gained increasing prominence in scholarly contexts, as exemplified by John Rawls’ A Theory of Justice (published in 1971). Rawls’ (1971) work offered “a standard whereby the distributive aspects of the basic structure of society are to be assessed” (p. 8) and has sparked decades of debate over the fair


distribution of the benefits and burdens of social cooperation (see, for example:

Nozick, 1974; Okin, 1989; Sen, 2009; Walzer, 1984; Young, 1990). In United States federal research policy, concerns over social justice are made explicit in discussions related to the selection of subjects and the distribution of the benefits and risks of research practice. The first concern – selection of subjects – represents a protective conception of justice, one that aims to shield or guard the vulnerable against the imposition of significant or undue risk (Powers, 1998, p. 151). Importantly, this conception recognizes that informed consent may not be sufficient for protecting all research subjects and, therefore, further ethical oversight is required, as through ethical review boards or other formalized processes. For these review boards, “the emphasis is on protecting those who are perceived as vulnerable to coercion or manipulation, exploitation, or deception, or persons otherwise thought to be less capable of relying on their own judgments to protect their own interests” (Powers, 1998, p. 151). The second concern – fair distribution of the benefits and burdens of research – goes beyond the protective conception of justice to also consider issues of fairness in access to research. This dimension relates to the protectionist conception through a consideration of subjects and potential beneficiaries, but goes beyond it by requiring researchers to consider future applications and benefits of the research in question. As the Belmont Report frames the connection, “research should not unduly involve persons from groups unlikely to be among the beneficiaries of subsequent applications of the research” (National Commission, 1978, p. 10).

Eventually, however, this second, distributive conception of justice in research ethics was expanded to include considerations not of protection but of inclusion in research. In the 1980s, in particular, certain vulnerable or marginalized groups began demanding greater access to participation in research. Chief among them were gay men and other people suffering from the AIDS epidemic and seeking access to possibly beneficial clinical trials for innovative drugs and therapies (McCarthy, 1998, pp. 26–27). In addition, advocates for women’s health – sparked by an increasingly visible and influential feminist movement – demanded that more women be included in the research process. Paternalistic rules preventing any woman capable of bearing children (regardless of whether or not she was pregnant) from participating in drug trials resulted in a situation where many therapies eventually approved by the Food and Drug Administration (FDA) were not adequately tested on women. Rather than protecting women’s interests, these rules meant that women simply became “guinea pigs” not during but after the research and trial stages, as they were exposed to undue risk once a therapy hit the market (Merton, 1993). Against the protectionist strain of early conceptions of justice in research ethics, exclusion from research came to be seen as itself a kind of discrimination and injustice (McCarthy, 1998, p. 27). In addition, considerations of the dissemination of research raise further questions of distributive justice. The availability (or unavailability) of research findings


can “have significant influence on establishing future research priorities, including the development of future research questions and the populations in which they will be explored” (Kahn, Mastroianni, & Sugarman, 1998, p. 170). This evolution points to broad and unresolved issues articulated by feminist and other scholars critical of the ways research ethics debates frame justice. In different ways, both AIDS and feminist activism demonstrated that research and researchers do not exist in a vacuum – rather, their work is embedded in (and has implications for) a larger social and political context of power and vulnerability, privilege and oppression. But, as Susan Wolf (1999) has argued in the context of bioethics, “even when the racism, ethnocentrism, and sexism haunting key events have been acknowledged, they have usually been treated as a layer of extra insult added into [a] more fundamental harm” – an approach that fails to fully appreciate the ethical significance of difference (pp. 65–66). We submit that this is true not only of bioethical analyses but of broader research ethics analyses generally. Following Sherwin (2005): “No sooner does the Belmont Report note that injustice affects social groups in many complicated ways than it reverts back to warning that it is easy accessibility that exposes groups (or classes) to high risk of exploitation” (p. 152). As a result, the Belmont Report reinforces the limited view that injustice in research is primarily a problem of accessibility (Sherwin, 2005, p. 152). Ultimately, this focus on issues of accessibility and the selection of research subjects fails to take into account a broader range of vulnerabilities and risks associated with membership in disadvantaged groups.

At times, informed consent and respect for autonomy are invoked as a way of compensating for the shortcomings of justice’s place in research ethics discussions. As Rogers and Lange (2013) note, the Belmont Report’s “treatments of minority vulnerability have focused on mandating informed consent and protecting against the exploitation and over-representation of minorities in research. But precisely identifying the nature, source, and best response to minority vulnerability has been more difficult” (p. 2146). But focusing on respect and consent can ultimately prevent researchers from grappling with the broader social and political implications of their research demanded by justice. In a health context, for example, “justice may require steps to eliminate the more fundamental sources of inequality that result in the inequalities in medical research” (Kahn et al., 1998, p. 158). Moreover, subsuming some justice concerns under the umbrella of informed consent prevents research ethics from reaching beyond the narrow scope of research design and processes. As King (2005) points out, accounting for justice issues does not mean that “all concerns about consent [can] be alleviated by modifying the consent process or the interaction between institution or researcher and subject. For example, it may be necessary to improve conditions in prisons before permitting prisoners to participate in research” (p. 136). Instead of prioritizing individual autonomy, alternative approaches to justice and research ethics could position


subjects’ vulnerabilities as a starting – rather than stopping – point for research. Doing so allows for the recognition that individuals face distinct risks as a result of distinct features of their identities; accordingly, “research should address specific sources of vulnerability in minority populations, rather than taking the vulnerability of minority groups to be homogenous” (Rogers & Lange, 2013, p. 2144). Regardless of the approach, however, it is clear that “significantly more must be said about the risks of exploitation that are associated with membership in an oppressed social group” (Sherwin, 2005, p. 152).

New Industry Challenges: Justice and Online Research

Outstanding questions of justice and research ethics take on new urgency as online companies and platform operators – spurred by a larger “big data” movement – conduct more data-intensive and (at times) social scientific kinds of research. This also presents new problems given the particular position and nature of this kind of work. Industry research has a long history and comes in different forms – from the sorts of basic research being conducted at large industrial research labs to narrowly applied research that aims to improve or increase engagement with an existing product (boyd, 2016). In either case, researchers’ work is constrained in at least some way by the companies that employ them. Even those conducting basic research with relative freedom “know that they will be rewarded for helping the company. As a result, researchers in these institutions often balance between their own scholarly interests and their desire for corporate recognition in deciding which projects to pursue” (boyd, 2016, p. 6). As with basic research for industrial companies, social scientific and behavioral research is integral to online companies. Social networking site Twitter, online dating site OKCupid, and the data-rich transportation network company Uber all actively advertise and share parts of their research in more or less accessible ways, through blogs, informational webpages, and academic publications (OKCupid, 2016; Twitter, 2016; Uber Design, 2015). OKCupid’s Christian Rudder has, in particular, been a vocal proponent of online industry research and what it can contribute to broader understandings of human sociality, through both his company’s OKTrends blog and elsewhere (see: Rudder, 2014a, 2014b). Academic research is also integral to Facebook’s institutional culture – the company’s data science division has extensive ties to academia (Grimmelmann, 2015, p. 222; see also: Facebook, 2016). As Eytan Bakshy, Dean Eckles, and Michael S. Bernstein (2014) – Facebook researchers themselves – point out, A/B tests and randomized field experiments “play a central role throughout the design and decision-making process” for many organizations (p. 1).


Online research conducted by internet companies like Facebook presents new challenges to our conceptions of research ethics and, in particular, internet research ethics. As Shilton and Sayles (2016) describe our current moment: “Never before have data about human interactions been so available to researchers. Public attention to research ethics in this space has grown as reports of the amount and intensity of research using data from digital and social media platforms have appeared” (p. 1909). Research being conducted outside of academia by online platforms and internet companies is no longer limited to simple A/B testing, but “now encompasses information about how humans live, eat, sleep, consume media, move about, and behave in the seclusion of their home” (Polonetsky, Tene, & Jerome, 2015, p. 337). In addition, the lack of clear oversight mechanisms and the use of proprietary systems and algorithms in corporate settings add another layer of concern beyond the usual concerns that have come to mark discussions of IRE in the last 25 years. Moreover, recent research has suggested that there is notable disagreement between how researchers in academia and researchers in industry perceive the ethical practices of the other; academics are more likely to think ethics in industry is lacking, whereas researchers in industry are more likely to consider their ethics to be comparable to academics’ (Vitak, Shilton, & Ashktorab, 2016).

The now infamous Facebook emotional contagion study is perhaps the paradigmatic example of these new concerns. Published in 2014, the study reported on efforts by Facebook researchers (along with researchers from Cornell University) to better understand the possible effects the site’s NewsFeed algorithm might have on users’ mood or mental health (Kramer, Guillory, & Hancock, 2014). The experiment – which ran for one week in 2012 – split a small set of users into two groups. One group’s NewsFeeds were tailored to show more positive posts from friends and family (that is, posts with language that was deemed to have an emotionally positive valence); the other group’s feeds were tailored to show more negative posts (that is, posts with language that was deemed to have an emotionally negative valence). The researchers then measured whether or not the modified NewsFeed had an impact on the emotional valence of the research subjects’ own posts. Both academics and the public forcefully questioned the ethics of an experiment designed to emotionally manipulate users without consent or post-experiment debrief (Flick, 2016; Gray, 2014; Puschmann & Bozdag, 2014). Others, especially those more sympathetic to industry research, defended the study as ultimately posing little risk to subjects or even as a responsible move on Facebook’s part (Meyer & Chabris, 2015; Watts, 2014). As Mary Gray (2014) detailed in the aftermath of the study’s publication, “social media platforms and the technology companies that produce our shared social playgrounds blur the boundaries between practice and research” (n.p.) in order to understand and improve the products and services they provide users. Moreover, this blurring is not incidental but often necessary, as it exposes the


limits of our extant understandings, methods, and ethical frameworks. “Indeed,” Gray (2014) rightly points out, “‘ethical dilemmas’ are often signs that our methodological techniques are stretched too thin and failing us” (n.p.). While the terrain of these new challenges is still unsettled, it is nonetheless uncontroversial to say that internet service providers are conducting research that may have ethical and political consequences for the lives and online experiences of users. And, since data frequently involves persons – either directly or indirectly – “consideration of principles related to research on human subjects may be necessary even if it is not immediately apparent how and where persons are involved in the research data” (Markham & Buchanan, 2012, p. 4). Moreover, the particularities and additional challenges of industry research and online experimentation may ultimately require a more or less radical rethinking of research ethics and oversight (Fiske & Hauser, 2014); but “although the options and the necessary procedures may differ from those seen in traditional experiments, experiments’ responsibility to protect users from harm remains as strong as ever” (Felten, 2015, p. 201).

Advocates of online industry research point to the fact that online platforms – and their attendant algorithms – are already impacting users’ lives on a daily basis and, consequently, it would be unethical not to research their effects. Commenting on the Facebook emotional contagion study, Duncan Watts (2014) argues that “we should insist that companies like Facebook – and governments for that matter – perform and publish research on the effects of the decisions they’re already making on our behalf. Now that it’s possible, it would be unethical not to” (n.p.). As boyd (2016) puts it, “Facebook algorithmically determines which content to offer to people every day. If we believe the results of this study, the ongoing psychological costs of negative content on Facebook every week prior to that one-week experiment must be more costly” (p. 9). Overly burdensome or ill-fitting ethical or legal regulation, on this view, would stunt this important work or, worse, drive it underground altogether (Meyer, 2014).

In addition, social networking sites are part of a broader online ecosystem of social data generation and dissemination enabled by the web – a system that some commentators and evangelists believe marks the “end of theory,” as knowledge production today simply involves “[throwing] numbers into the biggest computing clusters the world has ever seen and [letting] statistical algorithms find patterns where science cannot” (Anderson, 2008, n.p.). This particular historical moment seems to demand the work and insight of industry researchers – especially in the form of data scientists – who are in the best position to deploy the tools, methods, and expertise necessary to make sense of all this data. Though claims as to the “end of theory” are ultimately misguided or shortsighted (see: Boellstorff, 2013; Bowker, 2014), the era of Big Data nonetheless “ushers in new forms of expertise and promises to render various forms of human expertise increasingly unnecessary” (Bassett, 2015, p. 549). As Robert W. Gehl (2015) sardonically summarizes, “a common refrain … is that


we are in for a revolution, but only if we recognize the problem of too much data and accept the impartial findings of data science for the good of us all” (p. 420).

As Bakshy et al. (2014) describe, “the Internet industry has distinct advantages in how organizations can use experiments to make decisions: developers can introduce numerous variations on the service without substantial engineering or distribution costs, and observe how a large random sample of users (rather than a convenience sample) behave when randomly assigned to these variations” (p. 1). Moreover, these sorts of tests and experiments can have profound implications for the design and direction of a service, both now and into the future: “Some of the most effective experiments directly inform decisions to set the parameters they manipulate, but other well-designed experiments can be effective through broader, longer-term influences on beliefs of designers, developers, scientists, and managers” (Bakshy et al., 2014, p. 9). Even clear proponents recognize the unique position of power these companies are in with regard to research and the development of their platforms; writing on the Facebook emotional contagion study, Meyer (2015) notes that “the company alone was in a position to … rigorously determine the mental health effects of [the] News Feed” (p. 275).

As discussed in the preceding section, however, justice in research ethics has, especially in recent decades, focused on more and more inclusive research. Consequently, claiming that conventional research ethics have always been overly protectionist and would constrain industry research in undue ways fails to recognize the value of justice for opening up new conversations about inclusion and the moral imperative of research. Simply dismissing the concerns of critics as merely reactionary and as hindering responsible innovation is not a genuine representation of the landscape of research ethics discussions. Claims that academics, legislators, and the public view research and experimentation without consent as inherently dangerous and absolutely unethical are not fairly representative of broader research ethics discussions that prioritize the problem of inclusion and the social and political responsibility of researchers.

The relative underdevelopment of discussions of justice in this emerging research ethics domain is, in part, a consequence of the conceptions of justice that dominate both conventional research ethics and IRE. Though researchers at companies like Facebook need to think carefully about particular features of users’ identities – gender, race, sexuality, and beyond – in study design and analysis, subject recruitment is not necessarily at issue. After all, these sites have a built-in pool of subjects and their data – users have, in some ways, already been “recruited” simply by agreeing to use the site. As a consequence, one of the phases of research where justice provides guidance – the recruitment of subjects – is not applicable, at least not in a conventional sense.

This is not to say, however, that these built-in subject pools do not raise other questions regarding fairness. From the Menlo Report (Homeland Security


Department, 2012) to discussions of possible consumer subjects ethics oversight mechanisms (Calo, 2013; Polonetsky et al., 2015), researchers and practitioners have highlighted the importance of avoiding discrimination and ensuring that vulnerable populations are not unfairly targeted in research. For example, while many users are simply “opted in” as research subjects when they sign up to use a site, other users are systematically excluded from signing up for other reasons. Sites may be inaccessible or unusable for people with certain disabilities; “real name” policies that require people to use a legally or administratively recognized name may prevent other vulnerable populations from joining the site, like transgender users or domestic violence survivors. In addition, as the users of any given social network tend not to reflect global demographics, findings or insights from research on social media users are likely not generalizable (Hargittai, 2015).

The relevance of the second explicit concern of justice – the distribution of the burdens and benefits of research – is also tentative. If we accept the claims of the most ardent defenders of online industry research, A/B testing and online experiments are inherently good in this regard, as they help designers and developers better understand the impact of their services and make adjustments and improvements that benefit all users simultaneously. Meyer (2015) argues that this “tight fit between the population upon whom (no more than minimal) risks are imposed and the population that stands to benefit from the knowledge produced by a study” is integral to responsible innovation and part of what makes A/B tests like Facebook’s emotional contagion experiment ethically permissible (p. 278). But this “tight fit” relies on a narrow conception of burden and benefit – one that minimizes the commercial and commodified aspects of research by online companies. Saying that “it is not the practitioner who engages in A/B testing but the practitioner who simply implements A who is more likely to exploit her position of power over users or employees” (Meyer, 2015, p. 279) ignores the fact that online platforms exert power and may be exploitative even if they engage in A/B testing. Online experimentation might indeed be an important marker of responsible innovation or necessary to the development of trust between users and platforms, but this does not at the same time forestall or render exploitation impossible or even unlikely. After all, responsible innovation still raises questions of whom companies should be responsible to and for what reasons. For instance, it can be argued that the very mechanisms that make online experimentation possible – impenetrable terms of service agreements, active user bases, and proprietary algorithms – alienate people from their data and online activities in ways that might best be described as “data colonialism” (Thatcher, O’Sullivan, & Mahmoudi, 2016; see also: Ogden, Halford, Carr, & Earl, 2015).

But one need not accept this more forceful view of corporate data practices as wholly exploitative in order to accept that, at a minimum, the owners, developers, and researchers of online platforms are in a position of power


relative to users (Proferes, 2016). This lack of power afforded users by many platforms is not only a broader political problem – it should also be viewed specifically as a problem of research ethics for those companies that build and maintain social platforms. Inversely, the affordances and features of the internet and online platforms may magnify or exacerbate the harms and injustices that some internet users face. Digital information is easily transmittable and doggedly persistent; harmful or abusive photos, records, or online exchanges may be transmitted far beyond a user’s control and persist in far-flung servers or databases despite deletion elsewhere. Similarly, harmful or shaming content (like tweets or Facebook posts) may “go viral” and trigger an onslaught of harassment that is hard to escape and even harder to erase. Moreover, while these features of online communication can impact anyone, they are particularly damaging when mapped onto deeply rooted problems of racism and sexism – in this way, the internet and online platforms not only reproduce but magnify or intensify already existing injustices. Given the radically different experiences and vulnerabilities of different types of users – who exist at the intersections of a range of marginalized identities – online research conducted by internet services should, in order to be just, also attend to the conditions of harassment, abuse, and oppression that exist on many social networking sites. In this sense, we can learn a great deal from feminist and other critics of conventional research ethics who demand greater attention not just to the ways in which we should avoid reproducing existing injustices in our research but also to alleviating the conditions that give rise to oppression and injustice in the first place. Consequently, fully understanding the value and importance of social justice in the context of online research requires going beyond conventional concerns of subject recruitment and the distribution of research’s burdens and benefits.

Online Harassment and Vulnerable Populations

Since online industry research resists easy capture under conventional conceptions of justice and research, it may be necessary to revisit and reassert critiques of conventional research ethics that argue for a more expansive conception of justice. We can begin this process by pointing to well-documented vulnerabilities of certain groups in online spaces and digital platforms. Legal scholar Danielle Citron (2014) has extensively detailed the widespread sexism and harassment of women, an issue that has also received coverage in mainstream publications (for example: Buni & Chemaly, 2014). Designer and engineer Alyx Baldwin (2016) has reviewed in detail the ways in which automated systems based on machine learning are encoded with racial and gender biases. Both Google and Facebook, two of the largest internet corporations, have been criticized for enforcing the aforementioned “real name” policies that disproportionately target vulnerable populations, disabling the accounts of


individuals or forcing them to conform to standards that might either put them at risk for harassment and abuse or misrepresent their identities (boyd, 2011; Haimson & Hoffmann, 2016; Hassine & Galperin, 2015). The response to these concerns from the relevant platforms has generally been to attempt to tweak policies to minimize explicit harm, without engaging with the broader and more complex social and political dynamics that generate certain harms in the first place. Law student Kendra Albert (2016) argues that “defamation” is commonly invoked in the terms of service for online platforms as shorthand for amorphous forbidden bad speech about another person. However, truthfulness, the quality at the heart of defamation cases, is rarely what is most at stake when it comes to harmful speech acts online. Legal scholar and journalist Sarah Jeong (2015) suggests that platforms generally pay too much attention to harmful content in these cases, rather than considering patterns of behavior, which might require greater attention and interaction.

Social media platforms often exacerbate these problems by relying on users to report one another, thus shifting the onus for policing behavior off of the company and onto individual users without recognition of their differential positions of power and uneven experiences of harm (Crawford & Gillespie, 2014). For example, Gamergate and related campaigns of misogynist harassment exploit site features like algorithms and reporting buttons to target specific users for repeated and sustained anti-feminist harassment (Massanari, 2017). Tinder user Addison Rose Vincent (2016), for example, describes being alerted to an inordinate number of reports and “blocks” from other Tinder users – a problem Vincent attributes to their non-normative, transfeminine genderqueer identity. In this case, Tinder’s reliance on a relatively unsupervised user base to determine what counts as offensive tilts normative expectations towards the dominant group (even when that group is displaying egregious discrimination and structural violence). These examples can further be viewed as online analogs to other failures of justice where technological interventions ostensibly intended to support vulnerable groups ultimately serve to further victimize them, as when video footage from police body cameras is used as evidence against victims (most often people of color) of police brutality.

While some feminist activists have called for easier access to mechanisms of punishment via social media, others understandably remain concerned that these very “protections” will be weaponized against them. The fear that mechanisms intended to empower users will ultimately undermine or be used against vulnerable individuals takes on additional force in a broader social and political system that targets those who speak out against particular injustices and oppressions. For example, feminists of color who have critiqued racism among white feminists are often labeled in the popular press as “toxic” and “bullies” (Ross, 2014). Similarly, black faculty members have been reprimanded by their institutions for discussing


structural racism (McMillan Cottom, 2013). As Tressie McMillan Cottom (2014) has argued with regard to “trigger warnings” for material dealing with racism, sexism, and similar topics, flagging mechanisms do not support but rather prevent or actively block discussions of larger, systemic social problems. Ultimately, by shifting the responsibility for flagging and reporting onto individual users, these mechanisms simply “rationalize away the critical canon of race, sex, gender, sexuality, colonialism, and capitalism” (McMillan Cottom, 2014, n.p.). It is thus not enough to develop methods for flagging practices or content of concern – there must be a greater push towards shifting the underlying systems of oppression that produce inequality and systemic violence. While, as Joan Donovan (2016) argues, the coerciveness of online platforms may not be equivalent to the carceral abilities of the state, being excluded from social networks and prevented from effective online participation nonetheless has real consequences for individuals, including the potential for isolation and for one’s reputation to be tarnished without recourse.

Concluding Remarks

Unlike conventional academic researchers, who must carefully consider the selection and recruitment of subjects, researchers working for online platforms are able to harness the massive amounts of behavioral and digital trace data produced by users to create value for site owners and develop new insights into human sociality. Without these troves of data, the unique and increasingly lauded research performed by online companies is not possible. At the same time, users’ experiences on these platforms are not uniform or monolithic – for many users, these platforms are sites of (often racial or gendered) violence. These experiences are in part the result of platform norms or mechanisms that produce new opportunities for racial or gendered harm, enable new strategies or methods by which these harms may be enacted, and too often work to legitimate and reify the values and expectations of dominant or privileged groups over others. These problems of injustice and violence online must be accounted for within an internet research ethics framework if such a framework is to be politically relevant today and in the foreseeable future.

The anemic responses of online companies to the dangers faced by vulnerable subjects online have not only failed to stem problematic or violent behavior but ultimately reinforce and amplify existing hierarchies and biases. Consequently, individuals targeted or made vulnerable along certain lines (for example, racial or gendered) may remove themselves from the user pool of online platforms. These groups then bear the burden of being reified into the margins through research, without benefiting from continued access to the products. While reminiscent of conventional conversations about justice and research ethics that emphasize a fair distribution of


the burdens and benefits of research, this dynamic breaks from conventional discussions in its persistence. Whereas other forms of research might need to consider justice in the selection of subjects to test an already outlined research question, online industry researchers capture the ongoing activity of users prior to inquiry. The result is an ever-present pool of captive subjects – represented by their personal and social data – who, in the meantime, are exposed to ongoing problems of harassment and abuse. Ultimately, then, realizing justice in internet and industry research ethics must go beyond issues of representation in research or the selection of subjects; it must also incite broader social and political transformation.

Notes

1. Consider, for example, recent special issues from two journals on ethics and industry research after the Facebook emotional contagion experiment controversy: Colorado Technology Law Journal, vol. 13, no. 2 (see: Jeans, 2015), and Research Ethics, vol. 12, no. 1 (see: Hunter & Evans, 2016). Both issues cover a wide range of topics – from privacy to informed consent to autonomy – but none of them explicitly attends to matters of justice. At best, we can infer implications for justice from the few articles that discuss issues of power and control online.

2. Other forms of justice may attend to other relations (as with environmental justice, which focuses on fair or good relationships between humans and the rest of the natural world) or to a specific subset of social justice issues (like criminal justice or reproductive justice).

References

Albert, K. (2016, April). Put down the talismans: Abuse-handling systems, legal regimes and design. Presented at Theorizing the Web, New York, NY. Retrieved from http://theorizingtheweb.

Anderson, C. (2008, June 23). The end of theory: The data deluge makes the scientific method obsolete. Wired. Retrieved from http://www.wired.com/2008/06/pb-theory/
Bakshy, E., Eckles, D., & Bernstein, M. S. (2014). Designing and deploying online field experiments. In C. Chung, A. Broder, K. Shim, & T. Suel (Eds.), WWW’14: 23rd International World Wide Web Conference (pp. 1–10). Seoul, KR: ACM.
Baldwin, A. (2016, April 25). The hidden dangers of AI for queer and Trans people. Model View Culture. Retrieved from https://modelviewculture.com/pieces/the-hidden-dangers-of-ai-for-queer-and-trans-people
Bassett, C. (2015). Plenty as a response to austerity? Big data expertise, cultures and communities. European Journal of Cultural Studies, 18(4–5), 548–563.
Boellstorff, T. (2013). Making big data, in theory. First Monday, 18(10), n.p.
Bowker, G. (2014). The theory/data thing. International Journal of Communication, 8, 1795–1799.


Boxill, B. (1992). Two traditions in African American political philosophy. Philosophical Forum, 24(1–3), 119–135.
boyd, d. (2011, August 4). “Real names” policies are an abuse of power. Retrieved from http://www.zephoria.
boyd, d. (2016). Untangling research and practice: What Facebook’s “emotional contagion” study teaches us. Research Ethics, 12(1), 4–13.
Buni, C., & Chemaly, S. (2014, October 9). The unsafety net: How social media turned against women. The Atlantic. Retrieved from http://www.theatlantic.com/technology/archive/2014/10/
Calo, R. (2013). Consumer subject review boards: A thought experiment. Stanford Law Review, 66, 97–102.
Citron, D. K. (2014). Hate crimes in cyberspace. Cambridge, MA: Harvard University Press.
Crawford, K., & Gillespie, T. (2014). What is a flag for? Social media reporting tools and the vocabulary of complaint. New Media & Society, 18(3), 410–428. Retrieved from http://doi.
Donovan, J. (2016, April). The Ferguson effect. Presented at Theorizing the Web, New York, NY. Retrieved from http://theorizingtheweb.tumblr.com/2016/program
Facebook. (2016). Research at Facebook. Retrieved from https://research.facebook.com
Felten, E. (2015). Privacy and A/B experiments. Colorado Technology Law Journal, 13(2), 193–201.
Fiske, S. T., & Hauser, R. M. (2014). Protecting human research participants in the age of big data. PNAS, 111(38), 13675–13676.
Flick, C. (2016). Informed consent and the Facebook emotional manipulation study. Research Ethics, 12(1), 14–28.
Gehl, R. W. (2015). Sharing, knowledge management and big data: A partial genealogy of the data scientist. European Journal of Cultural Studies, 18(4–5), 413–428.
Goldfield, M. (1989). Worker insurgency, radical organization, and New Deal labor legislation. American Political Science Review, 83(4), 1257–1282.
Gray, M. L. (2014, July 9). When science, customer service, and human subjects research collide. Now what? Culture Digitally. Retrieved from http://culturedigitally.org/2014/07/when-science-customer-service-and-human-subjects-research-collide-now-what/
Grimmelmann, J. (2015). The law and ethics of experiments on social media users. Colorado Technology Law Journal, 13(2), 219–271.
Haimson, O. L., & Hoffmann, A. L. (2016). Constructing and enforcing “authentic” identity online: Facebook, real names, and non-normative identities. First Monday, 21(6), n.p.
Hargittai, E. (2015). Is bigger always better? Potential biases of Big Data derived from social media data. The Annals of the American Academy of Political and Social Science, 695, 63–76.
Hassine, W. B., & Galperin, E. (2015, December 18). Changes to Facebook’s “real names” policy still don’t fix the problem. Retrieved from https://www.eff.org/deeplinks/2015/12/changes-facebooks-real-names-policy-still-dont-fix-problem
Homeland Security Department. (2012). The Menlo report: Ethical principles guiding information and communication technology research. Washington, DC: Author.
Hunter, D., & Evans, N. (2016). Facebook emotional contagion experiment controversy. Research Ethics, 12(1), 2–3.
Jeans, E. D. (2015). From the editor. Colorado Technology Law Journal, 13(2), viii–ix.


Jeong, S. (2015). The Internet of garbage. Jersey City, NJ: Forbes Media.
Kahn, J. P., Mastroianni, A. C., & Sugarman, J. (1998). Implementing justice in a changing research environment. In J. P. Kahn, A. C. Mastroianni, & J. Sugarman (Eds.), Beyond consent: Seeking justice in research (pp. 166–173). Cary, NC: Oxford University Press.
King, P. A. (2005). Justice beyond Belmont. In J. F. Childress, E. M. Meslin, & H. T. Shapiro (Eds.), Belmont revisited: Ethical principles for research with human subjects (pp. 136–147). Washington, DC: Georgetown University Press.
Kramer, A., Guillory, J., & Hancock, J. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790.
Markham, A., & Buchanan, E. (2012). Ethical decision-making and Internet research: Recommendations from the AoIR Ethics Working Committee (Version 2.0). Association of Internet Researchers. Retrieved from http://aoir.org/reports/ethics2.pdf
Massanari, A. (2017). #Gamergate and the Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.
McCarthy, C. R. (1998). The evolving story of justice in federal research policy. In J. P. Kahn, A. C. Mastroianni, & J. Sugarman (Eds.), Beyond consent: Seeking justice in research (pp. 11–31). Cary, NC: Oxford University Press.
McMillan Cottom, T. (2013, December 3). The discomfort zone. Retrieved from http://www.slate. manded_for_talking_about_racism.html
McMillan Cottom, T. (2014, March 5). The trigger warned syllabus. Retrieved from https://tressiemc.
Merton, V. (1993). The exclusion of pregnant, pregnable, and once-pregnable people (a.k.a. women) from biomedical research. American Journal of Law and Medicine, 19, 369–451.
Meyer, M. N. (2014). Misjudgements will drive social trials underground. Nature, 511, 265.
Meyer, M. N. (2015). Two cheers for corporate experimentation: The A/B illusion and the virtues of data-driven innovation. Colorado Technology Law Journal, 13(2), 273–332.
Meyer, M. N., & Chabris, C. F. (2015, June 19). Please, corporations, experiment on us. The New York Times. Retrieved from http://www.nytimes.com/2015/06/21/opinion/sunday/please-corporations-experiment-on-us.html
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1978). Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Washington, DC: United States Government Printing Office.
Nozick, R. (1974). Anarchy, state, and utopia. New York, NY: Basic Books.
Ogden, J., Halford, S., Carr, L., & Earl, G. (2015). This is for everyone? Steps towards decolonizing the web. In ACM Web Science Conference, Oxford, UK, 2015. New York, NY: ACM.
OKCupid. (2016). OKTrends. Retrieved from http://oktrends.okcupid.com
Okin, S. M. (1989). Justice, gender, and the family. New York, NY: Basic Books.
Polonetsky, J., Tene, O., & Jerome, J. (2015). Beyond the common rule: Ethical structures for data research in non-academic settings. Colorado Technology Law Journal, 13(2), 333–367.
Powers, M. (1998). Theories of justice in the context of research. In J. P. Kahn, A. C. Mastroianni, & J. Sugarman (Eds.), Beyond consent: Seeking justice in research (pp. 147–165). Cary, NC: Oxford University Press.
Proferes, N. (2016). Web 2.0 user knowledge and the limits of individual and collective power. First Monday, 21(6), n.p.


Puschmann, C., & Bozdag, E. (2014, June 30). All the world’s a laboratory? On Facebook’s emotional contagion experiment and user rights. Retrieved from http://www.hiig.de/en/all-the-worlds-a-laboratory-on-facebooks-emotional-contagion-experiment-and-user-rights/
Rawls, J. (1971). A theory of justice (Revised ed.). Cambridge, MA: Belknap Press.
Rogers, W., & Lange, M. M. (2013). Rethinking the vulnerability of minority populations in research. American Journal of Public Health, 103(12), 2141–2146.
Ross, T. (2014, May 30). Mikki Kendall and her online beefs with white feminists. Vice. Retrieved from https://www.vice.com/read/their-eyes-were-watching-twitter-0000317-v21n5
Rudder, C. (2014a). Dataclysm: Who we are (when we think no one is looking). New York, NY: Crown Publishing Group.
Rudder, C. (2014b, July 28). We experiment on human beings! Retrieved from http://blog.okcupid.com/index.php/we-experiment-on-human-beings/
Sen, A. (2009). The idea of justice. Cambridge, MA: Belknap Press.
Sherwin, S. (2005). Belmont revisited through a feminist lens. In J. F. Childress, E. M. Meslin, & H. T. Shapiro (Eds.), Belmont revisited: Ethical principles for research with human subjects (pp. 148–164). Washington, DC: Georgetown University Press.
Shilton, K., & Sayles, S. (2016). “We aren’t all going to be on the same page about ethics”: Ethical practices and challenges in research on digital and social media. In T. X. Bui & R. H. Sprague (Eds.), 49th Hawaii International Conference on System Sciences (HICSS) (pp. 1909–1918). Hawaii, HI: IEEE.
Thatcher, J., O’Sullivan, D., & Mahmoudi, D. (2016). Data colonialism through accumulation by dispossession: New metaphors for daily data. Environment and Planning D: Society and Space, 0(0), 1–17.
Thomas, L. (1983). Self-respect: Theory and practice. In L. Harris (Ed.), Philosophy born of struggle: Anthology of Afro-American philosophy from 1917. Dubuque, IA: Kendall/Hunt.
Twitter. (2016). Research at Twitter. Retrieved from https://engineering.twitter.com/research
Uber Design. (2015, September 23). Field research at Uber: Observing what people would never think to tell us. Retrieved from https://medium.com/uber-design/field-research-at-uber-297a46892843#.
Vincent, A. R. (2016, March 25). Does Tinder have a transphobia problem? The Huffington Post. Retrieved from http://www.huffingtonpost.com/addison-rose-vincent/does-tinder-have-a-
Vitak, J., Shilton, K., & Ashktorab, Z. (2016). Beyond the Belmont principles: Ethical challenges, practices, and beliefs in the online data research community. In D. Gergle, M. Ringel Morris, P. Bjørn, & J. Konstan (Eds.), CSCW’16: Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing (pp. 941–953). San Francisco, CA: ACM.
Walzer, M. (1984). Spheres of justice: A defense of pluralism and equality. New York, NY: Basic Books.
Watts, D. J. (2014, July 7). Stop complaining about the Facebook study. It’s a golden age for research. The Guardian. Retrieved from http://www.theguardian.com/commentisfree/2014/jul/07/facebook-study-science-experiment-research
Wolf, S. (1999). Erasing difference: Race, ethnicity, and gender in bioethics. In A. Donchin & L. M. Purdy (Eds.), Embodying bioethics: Recent feminist advances (pp. 65–81). Lanham, MD: Rowman and Littlefield.
Young, I. M. (1990). Justice and the politics of difference. Princeton, NJ: Princeton University Press.

recasting justice for internet and online industry research ethics

|

19

Reaction by Céline Ehrwein Nihan

The discussion conducted by Hoffmann and Jonas makes it clear that the ethical framing of online research cannot be reduced to a strictly private matter. Neither users, who are put in the position of powerless captive subjects, nor online industries, which are caught up in defending their market shares, nor researchers, who are trapped in loyalty conflicts and financial dependencies toward their employers, seem able on their own to devise freely and impartially designed principles of conduct that would ensure greater justice, or to apply them. In this context, and in light of the challenges mentioned by the authors, it appears all the more necessary not only to advocate for the development of "an internet research ethics framework" that takes into account "the injustice and violence online" and to call for a "broader social and political transformation", but also to support the strengthening of the legal framework that is supposed to regulate research work, in order to achieve a better balance of powers between its different actors (users, vulnerable/dominant; researchers, academics/those working for online platforms; online industry/public institutions; etc.) and thus combat the inequalities and injustices arising therefrom. More specifically, this might lead us, for instance, to question the relevance and legitimacy of setting up the following:

• a multi-party system (managed by online industries, users, state institutions, etc.) designed to control the content of online platforms;

• an obligation for online industries to make their research data fully and freely available to all researchers;

• an authorization and control system designed to define what kinds of research may be conducted with online data.

Céline Ehrwein Nihan, PhD, is Professor of Ethics at the School of Management and Engineering Vaud (HEIG-VD), University of Applied Sciences and Arts Western Switzerland (HES-SO).

Chapter Two

A Feminist Perspective on Ethical Digital Methods

Mary Elizabeth Luka, Mélanie Millette, and Jacqueline Wallace

Introduction

From big data to thick data, social media to online activism, bloggers to digital humanities, evolving digital technologies have gained prominence over the last two decades in humanities and social science research. Scholars seek to refine tools, techniques, and tactics for examining the cultural, social, economic, political, and environmental entanglements between these forms and our everyday lives (e.g. Couldry & van Dijck, 2015; Hine, 2015; Kozinets, 2010). These approaches vary widely across the field. They include the emergence of ethical review boards and concomitant requirements to share vast data sets scraped from seemingly public environments (Mauthner, 2012). The development of highly technical quantitative methods for capturing the meaning of expansive data sets and social networks is sometimes subject to questionable ethical practices rather than substantive understandings of the underpinnings of the technological systems on display (Kim, 2014; Shepherd & Landry, 2013). Fraught debates over the ethics of collecting and analyzing digital qualitative data in online spaces, where questions of privacy, safety, and veracity linger (boyd & Crawford, 2012; Housley et al., 2014), find resonance in established feminist scholarly examinations of the historical binary between quantitative methods, often seen as more objective, rational, or masculine, and qualitative methods, framed as subjective and intuitive (Haraway, 1988; Hughes & Cohen, 2010). In this chapter, we seek to problematize assumptions and trends in the context of feminist digital research, to help expand the disciplinary terrain upon which the ethics of internet studies rests. Here, a commitment to an "ethics of care" approach can help to suggest a broader range of methodological and equity concerns (Miller, Birch, Mauthner, & Jessop, 2012). This leads to the identification of overlaps and blind spots from which to articulate how a set of coherent feminist digital methods and a corollary epistemology is being developed. Echoing feminist work on materiality (Asberg, Thiele, & van der Tuin, 2015), our argument is articulated through an analysis of digital labor, which aims to incorporate "practices of care" in research and a reassertion of the importance of "situated knowledge" from a feminist perspective (Haraway, 1988; Hughes & Lury, 2013) in relation to digital sociality and lively data.

In a broader sense, our aim is to lay out the potential for more rigorous ethical protocols for feminist digital research methods as a contribution to strengthening the relevance of scholarly research to contemporary networked life. We also intend for our investigative framework to be fluid and responsive, able to accommodate iterative discoveries and explorations. The commitment to equity, however, is firm. To operationalize this work on an ongoing basis, we recognize and rely on contributions from our own international research group in the field of feminist digital research methods, as well as similar assemblages in scholarly and scholar-activist traditions.1 The international Fembot Collective based in the United States (http://fembotcollective.org/) is one such example, as are more loosely affiliated clusters of interest such as those represented in Miller et al. (2012), in the collaborative work of Housley et al. (2014), or of Mountz et al. (2015) through practices of "slow scholarship". So is the work of professionalizing the field of digital research ethics, including that of the international Association of Internet Researchers (Markham & Buchanan, 2012).

Towards an Epistemology of Feminist Digital Research Ethics

Traditional analogue qualitative and quantitative methods are in the midst of being transferred and transformed into digital tools, including interviews, focus groups, oral histories, participant observation, and audience research (Luka, 2014; Mahrt & Scharkow, 2013). While digital methods articles, workshops, conferences, and books are becoming more popular (e.g. Kitchin, 2014; Kozinets, 2010), there is a need to continue to critically interrogate the underlying politics and ethics of these increasingly interdisciplinary and cross-sectoral projects. In this section, we take these challenges as an opportunity to demarcate some crucial ethical dimensions of a feminist digital epistemology, addressing today's politics of access concerning research on the internet through an ethics of care, practices of care, and "lively" data. In other words, we consider the implications of embedding ethics mobilized in practices associated with triangulations of "dissonant data" (Vikström, 2010, p. 212). We reflect on these elements through a feminist lens, stemming from related yet distinct perspectives, including critical race, feminist, queer, trans, decolonial, and disability studies. To undertake inclusive and reflective research, we argue that ethical concerns have to be taken into account at every stage.

Mobilizing Sandra Harding's (1983) work in the philosophy of science regarding gender disparities, Joey Sprague (2005) distinguishes between epistemology, methodology, and methods. Epistemology "is a theory about knowledge" and observing it reveals under "what circumstances knowledge can be developed" (p. 5). Methods are techniques we design and apply "to gathering and analyzing information" (idem). Methodology is the complex process between these two stages, constituting the "researcher's choices of how to use these methods" (idem, italics in original). While emphasizing that our biases come into play at every stage, Harding and Sprague simultaneously note that methodology is key because it is "the terrain where philosophy and action meet. [R]eflecting on methodology … opens up possibilities and exposes choices" (2005, p. 5). Consequently, we mobilize an ethics of care as an epistemological framework with commitments to equity, and explicate below how practices of care act as a methodology to bridge between methods and a feminist epistemology. Finally, we examine what is meant by lively data to reinforce how feminism can ethically inform big, small, and thick data in digital research.

Ethics of Care as a Commitment to Equity in Research

To move towards feminist digital ethics, then, we draw attention to the strengths offered by a well-established qualitative and reflexive feminist "ethics of care" (Miller et al., 2012; Mountz et al., 2015), emerging from the work of feminist psychology, sociology, and cultural studies from the 1980s onwards. Ethics of care is grounded in the ethnographic work of Carol Gilligan (1982) and Norman K. Denzin (1997), and in ongoing work by Angela McRobbie (2015), among others. Miller et al. (2012) recently republished a collection of essays on feminist ethics to incorporate maturing digital research. The updated essays continue to emphasize the integrated nature of an ethics of care with feminist values that include respecting diversity, understanding the intention of research as a participant and as a researcher, and paying attention to our responsibility as researchers to do no harm in the communities within which we work, even as we aim to generate potentially transformative engagements (Edwards & Mauthner, 2012). Consequently, an ethics of care is a core approach in qualitative feminist research, but it also finds strong application in other settings. More recently, these include professional standards articulated by AoIR respecting quantitative and qualitative datasets (Markham & Buchanan, 2012), and research justice methodologies that embrace social change, thus generatively "queering" (p. 47) the research endeavor (Zeffiro & Hogan, 2015).

According to Gillies and Aldred (2012), an ethics of care is built on three separate and related epistemologies: "representing women" (p. 49) (which includes finding ways to include dominant colonial and other perspectives), "initiating personal change through action research" (p. 51) (which relates to practices of care), and "deconstructing and undermining 'knowledge' structures" (p. 55). Today, all three elements find agreement in critical work on digital labor (e.g. McRobbie, 2015), digital sociality (e.g. Cardon, 2015), and the false assumption of "neutrality" (discussed below).

Practices of Care

Practices of care in research actually articulate what we intend to embody in an ethics of care (see Hughes & Lury, 2013; Mountz et al., 2015). What we do matters, and so does what we say. Talking about patterns, reflections, and imbrications "speaks" of the "situatedness" of the political and ethical positions of both the researcher and the researched (Hughes & Lury, 2013, p. 797). A useful example can be found in the increasingly iterative need to negotiate and respect informed consent processes. Moya Bailey (2015) and Tina Miller (2012) discuss the role of ethical review boards in North America over the last two decades, lauding the principle of ethical oversight, but cautioning researchers and ethics boards alike to pay attention, especially "in the digital landscape, [to] a more nuanced and fluid understanding of the way power flows between researcher and researched" (Bailey, 2015, para. 16). In Bailey's view, "social media users are not the traditionally infantilized research subjects that the IRB assumes" (idem, para. 17). Simultaneously, as Sanjay Sharma (2013) points out in his discussion of black Twitter:

Modalities of race wildly proliferate in social media sites such as Facebook, YouTube and Twitter: casual racial banter, race-hate comments, "griefing", images, videos and anti-racist sentiment bewilderingly intermingle, mash-up and virally circulate; and researchers struggle to comprehend the meanings and affects of a racialized info-overload. (p. 47)

Miller (2012) also describes the delicate, sometimes public dance that emerges from the "answering back" (p. 37) affordances of social media for participants and researchers alike, noting that a participant who blogs "is not subject to any of the […] professional codes of ethical practice [that academics are]" (2012, pp. 36–37). This raises the question: If an understanding of the evolving power dynamics in research is neither mandatory nor surfaced during scholarly ethics reviews, and if mutual responsibility (researcher/researched) is not a foundation of ethics practices, then we may find ourselves not just complaining about the bureaucracy of such processes, but also becoming willing participants – even instigators – in reinforcing posthumanist systems of surveillance on populations we wish to support or observe.

Taking Diversity into Account: New Materialism’s Implications for “Lively” Data

Moreover, Housley et al. (2014) make explicit some of the many social, political, and cultural assumptions embedded in data collection, particularly distributed data analysis. What they term the gathering of "lively data" (p. 3) – that is, the iterative collection and interpretation of "big and broad" (p. 1) digital data from social media sites such as Facebook over time – indicates the fluid and often non-representative nature of interpretation as it is practiced by some big data researchers. Analyses of hate speech, racism, and related topics suggest that the evidence gathered so far is mixed as to whether social media can radically reshape civic society to become more equitable (idem, p. 25), or, as Sharma (2013) and boyd and Crawford (2012), among others, make clear, includes some fairly negative indications that social media may be implicated in quite the opposite.

It is helpful, then, to consider the influence of new materialist approaches that have emerged in the last decade in tandem with digital research, and particularly those that have touched on ethical concerns emanating from feminist commitments. Asberg et al. (2015) suggest that new feminist materialism (that is, a focus on the posthumanist object as an object of study) "is not a move away from but rather a renewed move towards the Subject" (p. 164). They draw on legacies of "difference" (p. 154) in feminism (including earlier work by Elizabeth Grosz and Donna Haraway) as a set of practices as well as an object of study incorporating critiques, ontologies, and linked systems of knowledge. Asberg et al. (2015) reassert the necessity of "connectivity, power-imbued co-dependencies … and other similar concepts for the formative topologies of force and power that cause us to materialise" (p. 149). Such an approach opens the door to understanding why "blind" (seemingly value-neutral) big data claims are fallacious (Kitchin, 2014), as discussed further below. Furthermore, the ethical challenge of promulgating blind data is complicated enormously in a digital research environment where big data is supported by billions of research dollars annually, predominantly in the domains of science, technology, engineering, and math (STEM).

Investigating Digital Labor: Practices of Care in the Knowledge Economy

Digital labor is a key site for the critical study of networked technologies, given that networked creative and computational practices not only alter existing forms of work but also engender new modes of labor that are often invisible or undervalued as an element of technical infrastructures (McRobbie, 2010; Wallace, 2014). Attending to such invisible forms of labor entails understanding the persisting inequalities in gendered, raced, and classed divisions of labor that inform workplaces in a globalized knowledge economy (Bailey, 2015; Roberts, 2016). In today's convergent online media era, social (science) research is undergoing vast transformations in terms of how to study the pervasive digital labor that underpins a globalized knowledge economy. Networked technologies have engendered an "always on" work culture (Wallace, 2014) that is changing the very nature of work and its relationship to social and leisure time. Feminist scholar Tara McPherson (2009) explains how this merged work/lifestyle is deeply entangled with discourses of electronic culture, which "simultaneously embody prohibition and possibility" (p. 383). These discourses

[let us] feel we can control our own movements and create our own spaces, shifting our roles from consumers to producers of meaning. Such moments of creation and making can be quite heady and personally powerful, but we'd do well to remember that this promise of transformation is also something the web (and electronic culture more generally) concertedly packages and sells to us. From my "personal" account on numerous web pages to my DVR to countless makeover shows on cable, electronic culture lures us in with its promises of change and personalized control.… Although transformation and mutability may be inherent in digital forms, an aspect of their ontology, transformation is also a compelling ideology that can easily stitch us back into the workings of consumer capital. (p. 383; emphasis in original)

Moreover, the ability to negotiate technological infrastructures and networks requires certain capacities and literacies that arguably replicate hegemonic structures and privileges. In turn, these structures need epistemological and methodological frameworks that pursue critical study of these new modes of labor. In the digital environment, digital research methods must also acknowledge and make space for a relationship between material and immaterial labor. Following the ethics of care model, we argue for a feminist perspective on digital labor that engages "practices of care" to shed light on inequality and reveal the invisible labor behind contemporary networked technologies (McRobbie, 2010; Roberts, 2016). Building practices of care into research design, from conceptualization through analysis to the writing up of findings, is vital to ensuring knowledge production. Such an approach goes beyond the standard scientific research model that positions the researcher as impartial authority and is conventionally predicated on a masculinist rationality and objectivity. Instead, we argue for feminist approaches that acknowledge individual standpoints, stress the importance of specificity of context, make space for negotiation and dialogue, and advocate for partial stances and situated knowledges (Edwards & Mauthner, 2012; Haraway, 1988).

Feminist practices of care ensure methods that open up critical pathways for an ethical unraveling of the power dynamics of digital labor, including making visible a bias toward youth, masculinity, and technical savvy that punctuates the celebratory "work hard, play hard" rhetoric of digital culture. The "new" economy is often celebrated for the emancipatory potential of the creative work that is typically associated with digital labor. In this idealized conception of work, the independence of flexible work (e.g. freely setting one's own hours and work schedule) and the "cool-factor" of producing for such cultural sectors as gaming, design, digital journalism, social media, or tech start-ups are said to chart a more exciting, creative, and fulfilling path than traditional industries. Dubbed "no collar" and "meccas for the creative class" (Florida, 2002; Ross, 2004), these work environments claim to value freedom, fun, and equality among employees and management, often depicted by the flattening of organizational structures. They also consider workers' creative capacities a prized resource to be cultivated. However, by employing care-based practices that critically interrogate the so-called emancipatory potential of creative digital labor, we can probe the persistent inequalities that mark its underbelly. Significantly, an ethics of care enables a foregrounding of questions of gender, race, ability, sexuality, and class that are so often overlooked in methodological design but essential to understanding the power dynamics and politics of digital labor.

These ethical considerations help to shed light on the considerable amount of unaccounted-for affective and immaterial labor of the digital age, including, for instance, such repetitive tasks as data entry, tagging, or keywording, or the constant work of updating web profiles, social media, and ecommerce storefronts that meld together traditionally feminine service-sector work with the continual stream of "always on" digital labor. McRobbie (1998, 2010) refers to this political-economic phenomenon as defining a precarious feminized sector. Weigel and Ahern (2013) go so far as to argue that:

Today the economy is feminizing everyone. That is, it puts more and more people of both genders in the traditionally female position of undertaking work that traditionally patriarchal institutions have pretended is a kind of personal service outside capital so that they do not have to pay for it. When affective relationships become part of work, we overinvest our economic life with erotic value. Hence, "passion for marketing"; hence, "Like" after "Like" button letting you volunteer your time to help Facebook sell your information to advertisers with ever greater precision.2

Digital Labor, Digital Dilemmas: Messiness at Work

To understand what approaches are best suited to understanding the digital labor force, given the implicit and explicit ethical considerations generated by working in precarious and increasingly transnational, mobile workplaces, we must not rush to simply use the latest digital adaptation of an analogue method, but instead critically interrogate the power dynamics and technological affordances embedded in new modes of digital labor. Moreover, when seeking to uncover the practices of digital labor, rather than idealizing research design and its framework as tidy and clean, a feminist approach wrestles with the two-headed dragon of deploying digital methods to gather data while recognizing that these practices are messy, contingent, and situated.

John Law (2004) develops his notion of "messiness" as a means to seek knowledge in the social world, where things are often elusive and multiple, intentionally countering a tradition of methods that seek absolute clarity and precision. He argues for "methodological assemblages" that "detect, resonate with, and amplify particular patterns of relations in the […] fluxes of the real" (p. 14). In other words, he argues for practices of care that embody a clear ethical stance. Sawchuk (2011) takes up Law's position, reinforcing how we become a part of the processes we study. In our desire to understand contemporary cultural phenomena,

We are constantly stopping the flow of events and activities, and as soon as we write about a subject, it already seems out of date … Law (2004) suggests that the chaotic messiness of the social world demands not just one method, but a knowledge of a set of methods [… to] allow researchers to write about their subjects intelligibly, creatively and rigorously … methods are a delightfully messy business that asks us to take pleasure in uncertainty and to confidently learn to be accountable, even if we are not revealing a truth that will hold for all people, at all times, in all places. (Sawchuk, 2011, p. 339)

This is evident, for example, in the study of DIY production cultures, which are a delightfully messy business of creativity and commerce, of ever-changing technologies and networks, of identities and standpoints, as shown in research on the cultural economy of craft, including the ascendance of Etsy and digitized marketplaces (Wallace, 2014). Such research aims to make visible both the process and the people, and the multiplicity of threads that bind us to one another. It is also evident in adaptations of conventionally analogue methods with digitally enabled tools and techniques, such as digital ethnography, immersive online participant-observation, discourse analysis of social media content and data sets, blogs as a discursive tradition, and hashtagged images and threads, among others. Practices of care move well beyond the quantitative capture of posts, keyword analysis, or the splicing of metadata tables, toward not only recording the qualitative outcomes of digital production but also investigating the labor and technical affordances behind it.

Studying Online Sociality: Reflecting (on) the Human in Digital Traces

French sociologist Dominique Cardon (2015) illustrates how algorithms, which come with very powerful and prescriptive affordances, are ideologically loaded. Google's PageRank, Facebook's EdgeRank, and other systems of reference format content and behavior, ensuring that algorithms reflect political views – mostly neoliberal, capitalist ones – governed by calculation and probability. His analysis finds agreement in political economy analyses of big data such as that of Shepherd and Landry (2013), who take up Lawrence Lessig's argument about how the values of the dominant society are reflected in the writing, sorting, and coding of data itself (p. 262). Shepherd and Landry hail "agency and the absence of restrictions" as a kind of freedom (p. 269), embedded in the modes of resistance they enumerate (e.g. hacktivism, pp. 268–270) in response to their own categorizations of oppression (pp. 260–267). Even so, the elegant simplicity of the typologies that Shepherd and Landry offer is challenged by experiences on the ground, where the participants involved may damage themselves or others, or suffer from a system of oppressions that they themselves reinforce or accept. This can be seen, for example, in the 2016 race for the Republican presidential nomination in the United States, where surges in Donald Trump's popularity with working- and middle-class Americans both correlate with and stand in opposition to what he says about racism, as well as with "deindustrialization and despair" (Frank, 2016). Consequently, more equitable and "lively" (or responsive) digital methods can be developed by opening the "black boxed" processes of data-driven research (Mansell, 2013).

When it comes to sociality as an object of research, there are always two layers involved: the object of study and the researcher as interpreter. The latter includes academic, professional, and personal relationships, and thus includes relationships developed in the field, with interviewees, etc. In digital life, as in "real" life, we paradoxically deal with deterministic views (utopic or dystopic) about technique and its supposed neutrality: there are ongoing debates about the empowering capacities of the internet for civil society. For example, as suggested earlier, digital mediation is never neutral, nor is it necessarily empowering; social media and internet use often reproduce offline power dynamics (e.g. boyd & Crawford, 2012; Mansell, 2016; Sharma, 2013). The most recent research in this field concerns embodied bias in online sociability and algorithms, including posts and likes on Facebook, Snapchat exchanges, or "hashtag-mediated assemblages" on Twitter (Rambukkana, 2015). From an ethics of care perspective, we must keep in mind that the meaning generated by these digital traces can only be grasped if we reconnect it to its social context of production and circulation. As Rambukkana puts it, these traces:

[…] inherit their character neither solely from the social material poured into them nor from their specific nature as technology or code alone but from the singular composition of each of their "continuous locality" (Latour, 1996, p. 6): in how they are crafted; for what purpose; among what other actants and actors, individual, technical, communal or corporate; as well as in what spaces, through what technologies; with what coverage; and finally articulated with which effects, with what temporality and through which history. (p. 5)

By making digital data lively, Rambukkana implies that the black box metaphor developed by Bruno Latour (1987) reinforces how technological devices and scientific facts are presented as autonomous, not open, and usually create the illusion of being ineluctable. Understanding how they were conceived, funded, and deployed is never a given. Twisted by the pressures of dominant ideologies, the methodology is often a part of the black box. By being explicit about our own methodology and methods, we can be open and outspoken about what there is to know about the relationship between our own epistemology and the results generated. Academic research requires us to explain how to reproduce our protocols – but this is often reduced to a limited report: the number of interviews, lists of variables, perhaps time spent in the field; yet all the struggles and failures, the e-mails to manage and make sense of, and all the help we sought are not part of it. A lot of what was challenging, inexplicable, hard, or unfair is left out. Yet these messy details quite often shape the research, what is kept in and left out, and what the results will and will not say.

Similarly, Miller (2012) points out the difficulty of anticipating where the research starts and ends when it comes to the "official" data covered by ethical accreditation. E-mail exchanges can become a more consistent source of information than an initial interview (the "doorknob disclosures" of digital research, p. 33), and sampling imbalances are caused by requirements for participants to self-select into a study. How many times has a researcher understood more about a social experience when an interviewee made a seemingly minor correction in an e-mail, rather than during an interview? For example, Millette (2015) faced this situation when she first contacted interviewees during her research on French-Canadian minority communities, who pointed out via e-mail that they were not sure they could call themselves "communities," preferring to talk about themselves as "francophonies" outside Quebec because of their scattered nature and very different political, historical, and cultural backgrounds. These exchanges indicated her own bias and the politics of her field, but also the need to document data collection and the iterative, often serendipitous (Verhoeven, 2016), and situated nature of data-based research. As discussed below, understanding how a social media dataset is collected, which decision-making processes are used for cleaning and processing the dataset, and where the data originated (including who has access to the platforms used and the affordances of those platforms) are all crucial considerations (Langlois, Redden, & Elmer, 2015).

Not only does the digital environment have situated and embedded power relationships, but the specificity of identities matters more than ever when we conduct digital research on any aspect of social life. Massive quantitative data sets on Facebook and social media may inform us about some trends, networks, and patterns of use. But the meaning of digital uses for the people involved will never be fully understood if we do not trace it back to the particularities of their situation. In 1988, long before big data, mobile phones, and social media, Haraway argued for a plurality of "situated knowledges" (p. 581) instead of a singular, dominant "manufactured-knowledge" (p. 577). Prefiguring the emergence of a digital feminist materialism (Asberg et al., 2015), Haraway (1988) suggested that objectivity could only exist through the embodiment of knowledge, by ensuring that those who are studied are involved in how their lives are being examined. In other words, equitable social science must emerge through dialogue and the observation of situated knowledges. For methodological design, this standpoint requires us to embody both an ethics and practices of care, putting ourselves in vulnerable and often messy positions, where each researcher looks her or his own biases in the eye. Being deeply aware of our own identity and agency is critical to being able to understand marginalized subjects without romanticizing or appropriating their experiences. These are real and present dangers, according to Haraway and others noted above. An ethical feminist methodology takes into consideration the context within which the data was created. This holds true whether for traditional interviews or data collection with users, or for those involving digital traces, where the researcher brings the data collected online back to the people who produced it in order to trigger dialogue about the context of its creation and purpose (Dubois & Ford, 2015; Kozinets, 2010). That is not all that is required, but it is a good beginning.

Conclusion: An Ethical Engagement

To consolidate these three elements (ethics of care, practices of care, and the growth of lively data), it is helpful to consider what Gibson-Graham (2015) has termed an "ethical economical and ecological engagement" (p. 47). While new materialists focus on immanence, including the relationship to spirit, "language, the symbolic order and so on" (Asberg et al., 2015, p. 166), Gibson-Graham rethinks what the economics of research and value-based social engagement could mean, and how that rethinking harks back to a feminist understanding of multiple and fluid identities, requiring mutual commitment and care. Gibson-Graham (2015) refers to self-positionality in the "everyday economy" as informative:

People are involved in many other identities in excess of that of employee, business owner, consumer, property owner, and investor; that is, those associated with capitalist practice that are theorized as motivated by individual self-interest. There are, for example, volunteers who want to offer their services "free"…; workers-owner-cooperators who want to make enough returns to live a good life, sustain the cooperative, and contribute to the community; consumers who want to reduce, recycle, reuse; [and] property owners who want to care for a commons. (p. 56)

Such an approach points towards "the possibility of ethical rather than structural dynamics" (idem, p. 57), which in turn could ameliorate the tensions between demands for open data and ongoing calls to protect big data collections, and the already established pattern of parsing out high-powered computing contracts primarily to STEM projects, in corporate as well as government-funded digital research. Mauthner (2012) describes the increasing tension between maintaining the confidentiality of data and the growing number of public funders with the requirement to make big data reusable and shareable (including through ethical review boards). This is problematic for sensitive or power-laden research, as well as for the development of long-term relationships with participants, communities, and organizations. Even while noting that, above all, "we do research" (p. 171), Mauthner (2012) elucidates these ethical tensions:

Digital data sharing methods are reshaping the research environment and research practices; … seeking informed consent, anonymizing data, and reusing digital data reconfigure the relationships we develop with our respondents, our moral responsibilities and commitments to respondents, [and] our moral ownership rights over the data we produce. (p. 158)

A brief survey of recent innovative feminist digital research suggests several directions for how to expand the disciplinary terrain upon which a feminist ethics of internet studies rests. These include, for example, Bivens on Facebook (2015), Harvey's work on video games (2015), Hogan's examination of data centres and digital archives (2015), and the work of Luka (2014, 2015) and Matthews and Sunderland (2013) on media-based narratives and networks. In the domain of ethnographic digital sociality and digital labor, it includes Latzko-Toth, Bonneau, and Millette (2017) on thick data, as well as Gajjala and Oh (2012), Roberts (2016), and Wallace (2014) on feminized digital labor practices.

As a starting point for developing an epistemology of feminist digital ethics, it is clear that an ethics of care, practices of care, and the situatedness of lively, rather than simply big, data involve ongoing negotiations of power relations. Our preliminary delineation of the current field suggests how the sciences and humanities alike gain from such perspectives in terms of inclusion and studying original knowledge from the field. In many ways, research on emerging digital technologies requires greater academic consideration of the value and, increasingly, necessity of ongoing collaborative engagements with community, business, non-profit, activist, artistic, and other public partners (Luka, 2014; Wallace, 2014; Zeffiro & Hogan, 2015). This aligns with our own commitments to social and epistemic well-being enabled by networked digital technologies. By working together on a feminist epistemological and methodological foundation for such research, we aim in turn to provide new means for civil society, government, and industry to benefit from scholarly research. In this sense, then, the chapter has aimed to elucidate underlying ethical opportunities and challenges in collaborative projects, including those with non-academic partners, such as industry, community, governmental, NGO, activist, artist, and other institutions. A consideration of feminist perspectives in the context of digital methods to address social phenomena and labor practices not only helps to frame a feminist epistemology that can incorporate
considerations of race, gender, queer, trans, decolonial, and disability studies. It also enables us all to move towards a more ethical, reflexive, and situated way to carry out research on contemporary digital life.

Notes

1. Other than the authors, our research group includes Rena Bivens, Alison Harvey, Mél Hogan, Jessalyn Keller, Koen Leurs, Joanna Redden, Sarah Roberts, Tamara Shepherd, Samantha Thrift, and Andrea Zeffiro.

References

Asberg, C., Thiele, K., & van der Tuin, I. (2015). Speculative before the turn: Reintroducing feminist materialist performativity. Cultural Studies Review, 21(2), 145–172. Retrieved from http://
Bailey, M. (2015). #transform(ing)DH writing and research: An autoethnography of digital humanities and feminist ethics. Digital Humanities Quarterly, 9(2), n.p. Retrieved from http://www.
Bivens, R. (2015). Under the hood: The software in your feminist approach. Feminist Media Studies, 15(4), 714–717. doi:10.1080/14680777.2015.1053717
boyd, d., & Crawford, K. (2012). Critical questions for big data. Information, Communication & Society, 15(5), 662–679. Retrieved from http://doi.org/10.1080/1369118X.2012.678878
Cardon, D. (2015). À quoi rêvent les algorithmes? Nos vies à l'heure des big data. Paris: Seuil.
Couldry, N., & van Dijck, J. (2015). Researching social media as if the social mattered. Social Media & Society, 1(7), 1–7. doi:10.1177/2056305115604174
Denzin, N. K. (1997). Interpretive ethnography: Ethnographic practices for the 21st century. Thousand Oaks, CA: Sage Publications.
Dubois, E., & Ford, H. (2015). Qualitative political communication. Trace interviews: An actor-centered approach. International Journal of Communication, 9(0), 2067–2091.
Edwards, R., & Mauthner, M. (2012). Ethics and feminist research: Theory and practice. In T. Miller, M. Birch, M. Mauthner, & J. Jessop (Eds.), Ethics in qualitative research (2nd ed., pp. 14–28). Thousand Oaks, CA: Sage Publications.
Florida, R. (2002). The rise of the creative class and how it's transforming work, leisure, community and everyday life. New York, NY: Basic Books.
Frank, T. (2016, March 8). Millions of ordinary Americans support Donald Trump. The Guardian. Retrieved from http://www.theguardian.com/commentisfree/2016/mar/07/donald-trump-why-americans-support
Gajjala, R., & Oh, Y. J. (Eds.). (2012). Cyberfeminism 2.0. New York, NY: Peter Lang.
Gibson-Graham, J. K. (2015). Ethical economic and ecological engagements in real(ity) time: Experiments with living differently in the anthropocene. Conjunctions: Transdisciplinary Journal of Cultural Participation, 2(1), 47–71. Retrieved from http://www.conjunctions-tjcp.com/article/view/22270
Gillies, V., & Aldred, P. (2012). The ethics of intention: Research as a political tool. In T. Miller, M. Birch, M. Mauthner, & J. Jessop (Eds.), Ethics in qualitative research (2nd ed., pp. 43–60). Thousand Oaks, CA: Sage Publications.
Gilligan, C. (1982). In a different voice. Boston, MA: Harvard University Press.
Haraway, D. (1988). Situated knowledges: The science question in feminism and the privilege of partial perspective. Feminist Studies, 14(3), 575–599. Retrieved from http://doi.org/10.2307/3178066
Harding, S. (1983). Why has the sex/gender system become visible only now? In S. Harding & M. B. Hintikka (Eds.), Discovering reality: Feminist perspectives on epistemology, metaphysics, methodology, and philosophy of science (pp. 311–324). Dordrecht: D. Reidel Publishing Company.
Harvey, A. (2015). Gender, age, and digital games in the domestic context. New York, NY: Routledge.
Hine, C. (2015). Ethnography for the Internet: Embedded, embodied and everyday. London & New York, NY: Bloomsbury Academic.
Hogan, M. (2015). Water woes & data flows: The Utah Data Center. Big Data and Society, 2(2), 1–12. Retrieved from http://dx.doi.org/10.1177/2053951715592429
Housley, W., Procter, R., Edwards, A., Burnap, P., Williams, M., Sloan, L., … Greenhill, A. (2014). Big and broad social data and the sociological imagination: A collaborative response. Big Data and Society, 1(2), 1–15. Retrieved from http://dx.doi.org/10.1177/2053951714545135
Hughes, C., & Cohen, R. L. (2010). Feminists really do count: The complexity of feminist methodologies. International Journal of Social Research Methodology, 13(3), 189–196. Retrieved from http://
Hughes, C., & Lury, C. (2013). Re-turning feminist methodologies: From a social to an ecological epistemology. Gender and Education, 26(6), 786–799. Retrieved from http://dx.doi.org/10.1080
Kim, D. (2014). Social media and academic surveillance: The ethics of digital bodies. Model View Culture. Retrieved from https://modelviewculture.com/pieces/social-media-and-academic-surveillance-the-ethics-of-digital-bodies
Kitchin, R. (2014). Big data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 1–12. Retrieved from http://doi.org/10.1177/2053951714528481
Kozinets, R. (2010). Netnography: Doing ethnographic research online. London: Sage Publications.
Langlois, G., Redden, J., & Elmer, G. (2015). Compromised data: From social media to big data. New York, NY: Bloomsbury.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.
Latour, B. (1996). On actor-network theory: A few clarifications plus more than a few complications. Soziale Welt, 47, 369–381.
Latzko-Toth, G., Bonneau, C., & Millette, M. (2017). Small data, thick data: Thickening strategies for trace-based social media research. In A. Quan-Haase & L. Sloan (Eds.), SAGE handbook of social media research methods (pp. 199–213). Thousand Oaks, CA: Sage Publications.
Law, J. (2004). After method: Mess in social science research. New York, NY: Routledge.
Luka, M. E. (2014). The promise of crowdfunding in documentary filmmaking and audience development. In R. DeFillipi & P. Wikstrom (Eds.), International perspectives on business innovation and disruption in the creative industries: Film, video, photography (pp. 149–176). Northampton, MA & Cheltenham: Edward Elgar Publishing.
Luka, M. E. (2015). CBC ArtSpots and the activation of creative citizenship. In M. Mayer, M. Banks, & B. Conor (Eds.), Production studies, Volume II (pp. 164–172). New York, NY: Routledge.
Mahrt, M., & Scharkow, M. (2013). The value of big data in digital media research. Journal of Broadcasting & Electronic Media, 57(1), 20–33. doi:10.1080/08838151.2012.761700
Mansell, R. (2013). Employing digital crowdsourced information resources: Managing the emerging information commons. International Journal of the Commons, 7(2), 255–277.
Mansell, R. (2016). Power, hierarchy and the internet: Why the internet empowers and disempowers. Global Studies Journal, 9(2), 19–25. Retrieved from http://gsj.cgpublisher.com/
Markham, A., & Buchanan, E. (2012). Ethical decision-making and internet research: Recommendations from the AoIR Ethics Working Committee (Version 2.0). Retrieved from http://aoir.org/reports/
Matthews, N., & Sunderland, N. (2013). Digital life story narratives as data for policy makers and practitioners: Thinking through methodologies for large-scale multimedia qualitative datasets. Journal of Broadcasting and Electronic Media, 57(1), 97–114. doi:10.1080/08838151.2012.761706
Mauthner, M. (2012). Accounting for our part of the entangled webs we weave: Ethical and moral issues in digital data sharing. In T. Miller, M. Birch, M. Mauthner, & J. Jessop (Eds.), Ethics in qualitative research (2nd ed., pp. 157–175). Thousand Oaks, CA: Sage Publications.
McPherson, T. (2009). Response: "Resisting" the utopic-dystopic binary. In K. Blair, R. Gajjala, & C. Tulley (Eds.), Webbing cyberfeminist practice: Communities, pedagogies and social action (pp. 379–384). Cresskill, NJ: Hampton Press.
McRobbie, A. (1998). British fashion design: Rag trade or image industry? New York, NY: Routledge.
McRobbie, A. (2010). Reflections on feminism, immaterial labor and post-Fordist regimes. New Formations, 17, 60–76.
McRobbie, A. (2015). Notes on the perfect. Australian Feminist Studies, 30(83), 1–20. doi:10.1080/08164649.2015.1011485
Miller, T. (2012). Reconfiguring research relationships: Regulation, new technologies & doing ethical research. In T. Miller, M. Birch, M. Mauthner, & J. Jessop (Eds.), Ethics in qualitative research (2nd ed., pp. 29–42). Thousand Oaks, CA: Sage Publications.
Miller, T., Birch, M., Mauthner, M., & Jessop, J. (Eds.). (2012). Ethics in qualitative research (2nd ed.). Thousand Oaks, CA: Sage Publications.
Millette, M. (2015). L'usage des médias sociaux dans les luttes pour la visibilité: Le cas des minorités francophones au Canada anglais (Doctoral dissertation). Université du Québec à Montréal, Montreal, Canada.
Mountz, A., Bonds, A., Mansfield, B., Loyd, J., Hyndman, J., Walton-Roberts, M., … Curran, W. (2015). For slow scholarship: A feminist politics of resistance through collective action in the neoliberal university. ACME: An International E-Journal for Critical Geographies, 14(4), 1235–1259. Retrieved from http://ojs.unbc.ca/index.php/acme/article/view/1058
Rambukkana, N. (2015). Hashtag publics: The power and politics of discursive networks. New York, NY: Peter Lang.
Roberts, S. T. (2016). Commercial content moderation: Digital laborers' dirty work. In S. U. Noble & B. Tynes (Eds.), Intersectional Internet: Race, sex, class and culture online (pp. 147–159). New York, NY: Peter Lang.
Ross, A. (2004). The golden children of razorfish. In No-collar: The humane workplace and its hidden costs (pp. 55–86). Philadelphia, PA: Temple University Press.
Sawchuk, K. (2011). Thinking about methods. In W. Straw, S. Gabriele, & I. Wagman (Eds.), Intersections of media and communications: Concepts, contexts and critical frameworks (pp. 333–350). Toronto, ON: Emond Montgomery.
Sharma, S. (2013). Black Twitter?: Racial hashtags, networks and contagion. New Formations: A Journal of Culture/Theory/Politics, 78, 46–64. doi:10.1353/nfm.2013.0006
Shepherd, T., & Landry, N. (2013). Technology design and power: Freedom and control in communication networks. International Journal of Media & Cultural Politics, 9(3), 259–275. Retrieved from http://dx.doi.org/10.1386/macp.9.3.259_1
Sprague, J. (2005). Feminist methodologies for critical researchers: Bridging differences. Oxford: Rowman Altamira.
Verhoeven, D. (2016). As luck would have it: Serendipity and solace in digital research infrastructure. Feminist Media Histories, 2(1), 7–28. doi:10.1525/fmh.2016.2.1.7
Vikström, L. (2010). Identifying dissonant and complementary data on women through the triangulation of historical sources. International Journal of Social Research Methodology, 13(3), 211–221. doi:10.1080/13645579.2010.482257
Wallace, J. (2014). Handmade 2.0: Women, DIY networks and the cultural economy of craft (Doctoral dissertation). Concordia University, Montreal, Canada.
Weigel, M., & Ahern, M. (2013). Further materials toward a theory of the man-child. The New Inquiry. Retrieved from http://thenewinquiry.com/essays/further-materials-toward-a-theory-of-the-man-child/
Zeffiro, A., & Hogan, M. (2015). Queered by the archives: No more potlucks and the activist potential of archival theory. In A. J. Jolivétte (Ed.), Research justice: Methodologies for social change (pp. 43–55). Bristol: Policy Press.


Reaction by Annette N. Markham

The authors of A Feminist Perspective on Ethical Digital Methods articulate nicely how feminist epistemologies exemplify an ethic of care. The examples in this piece contribute nuance to how ethics are (and can be) enacted by social researchers looking at digital media. Taking ethics to the level of practice, the authors emphasize the importance of situated knowledge (à la Donna Haraway) as a foundation for building "methods that open up critical pathways for an ethical unraveling of the power dynamics of digital labor" (p. 9 draft). From my perspective on internet research ethics, this begins with a bottom-up and context-sensitive rather than top-down approach to decision making throughout all stages of research. Below, I augment the points made in this chapter by mentioning two closely aligned frameworks I have developed in the past few years: ethics as method and ethics as impact.

A method- or practice-centered approach to ethics in researching digitally saturated social contexts (Markham, 2003, 2005a) highlights the epistemological and political privilege of the researcher to make decisions on behalf of other actors in the context of study (Markham, 2005b), as well as the dangers of relying on inflexible disciplinary approaches. A methods-centered approach to ethics allows the researcher to notice and then reflect on critical junctures or decision points rather than the overall conceptual approach to method. Focused reflexivity is strongly associated with consciousness raising, which is one of many ways to bring integrity to the surface for introspection and scrutiny. One of the results of such reflexive practice is to make more transparent the ways in which modernist legacies for the conduct and evaluation of scholarship tend to "flatten, depoliticize, and individualize our research" (Fine & Gorden, 1992, p. 14). Reflexivity facilitates mindfulness "from start to finish. By mindful, I mean present, prepared, honest, reflexive, and adaptable" (Markham, 2003, p. 52). Whether or not this means we focus on issues of labor or questions of what counts as data, a mindful attitude is one that is more readily able to flexibly adapt to the situation, to let both the research question and the larger cultural questions help determine the approach, rather than the discipline, method, or norm. The challenge, I have long argued, is one of recognizing, resisting, and then reframing the situation of inquiry to better match the goals underlying an ethic of care.

To the end of reframing how we might operationalize an "ethics as choices at critical junctures" approach, I have begun to develop a framework that shifts our basic vocabulary from ethics to impact, so that we can take a more future-oriented, speculative perspective on the various ways in which our everyday research, as well as the outcomes of our studies, have impact (Markham, 2015). Adding temporality to the ethic of care, we can begin to experiment with various "what if" scenarios. These speculative fabulations, in the way Haraway (2007) discusses, eventually
allow us to move beyond critical description of "what is" to the critical interventionist ethic of analyzing multiple possibilities for what we might become.

References

Fine, M., & Gorden, M. (1992). Feminist transformations of/despite psychology. In M. Fine (Ed.), Disruptive voices: The possibilities of feminist research (pp. 1–25). Ann Arbor, MI: University of Michigan Press.
Haraway, D. (2007). Speculative fabulations for technoculture's generations: Taking care of unexpected country. Catalog of the ARTIUM Exhibition of Patricia Piccinini's Art, Vitoria-Gasteiz (Spain), pp. 100–107.
Markham, A. (2003). Critical junctures and ethical choices in internet ethnography. In M. Thorseth (Ed.), Applied ethics in internet research (pp. 51–63). Trondheim: NTNU University Press.
Markham, A. (2005a). Ethic as method, method as ethic. Journal of Information Ethics, 15(2), 37–54.
Markham, A. (2005b). The politics, ethics, and methods of representation in online ethnography. In N. Denzin & Y. Lincoln (Eds.), The SAGE handbook of qualitative research (3rd ed., pp. 793–820). Thousand Oaks, CA: Sage.
Markham, A. (2015). Assessing the creepy factor: Shifting from regulatory ethics models to more proactive approaches to 'doing the right thing' in technology research. Colloquium lecture given at Microsoft Research New England, May 8, 2015, Boston, MA. Video available online at:

Annette Markham is Professor of Information Studies at Aarhus University, Denmark, and Affiliate Professor of Digital Ethics in the School of Communication at Loyola University, Chicago. She researches how identity, relationships, and cultural formations are constructed in and influenced by digitally saturated socio-technical contexts.

Chapter Three

Sorting Things Out Ethically

Privacy as a Research Issue beyond the Individual

Tobias Matzner and Carsten Ochs

Introduction

Manifold approaches within the academic social sciences aim at tapping the treasure of "social data" generated on the internet for the purpose of enhancing social knowledge. For example, Bruno Latour (2007) notes that, thanks to the internet:

[t]he precise forces that mould our subjectivities and the precise characters that furnish our imaginations are all open to inquiries by the social sciences. It is as if the inner workings of private worlds have been pried open because their inputs and outputs have become thoroughly traceable. (p. 2)

However, while some researchers have dealt with the problematics that come with this new potential to force open the “inner workings of private worlds” (boyd, 2010; boyd & Crawford, 2011; Heesen & Matzner, 2015; Ochs, 2015a), most undertakings are rather focused on analyzing and tapping the potential that is created by digitally networked interactions and the traces they leave (Manovich, 2012; Rogers, 2014; Venturini, 2012; Venturini & Latour, 2009). This article aims to contribute to a theoretical characterization of this comparatively novel sociotechnical situation by asking which ethical problematics threaten to arise from a non-reflexive application of the methods in question. More precisely, we will analyze the ethical challenges that these developments bring about for the modern social ordering mechanism called privacy. How should we relate to “privacy” in research environments harnessing digital methods? Why are there privacy-related ethical questions to be posed, and how so?


To approach these issues our starting point will be an assessment of the role privacy plays for processes of subject formation. The first step of our argument (Section “Privacy as Individual Control: The Bourgeois Legacy of Privacy Discourse”) will be decidedly sociological in that it specifies (roughly) the entwined social history of privacy and subjectivity from the 18th century onwards. The role of privacy in bourgeois practices of subject formation is to protect the subject from revelation of her innermost personal individuality. Such protection is based on historically quite specific, thoroughly individualistic ideas of subjective personality (having an essential core), but also of data (having a representational character), and privacy (protecting from revelation via individual retreat and/or control). In a second step we will show how the current sociotechnical situation renders increasingly difficult the practicing of a bourgeois subject, who as an individual retreats from the social or controls “his” information (Section “Why Individualistic Privacy Discourse Fails to Match Current Practices”). The third step of the analysis will be more conceptually minded and designate the outlines of the novel problem space as constituted by current sociotechnical practices: We will argue that, while novel problems tend to exceed the individual dimension and transgress well-known privacy practices, such as individual control, informed consent, de-/re-anonymization etc., the latter concepts still retain elements of a bourgeois notion of individuality. As current practices entail novel notions of personality (being performed in sociotechnical relations), data (enacting subjects rather than representing some “truth” about them), and privacy (fostering subject performance rather than revelation), they at the same time effectively shift the very basis for ethical considerations (Section “Post-Individualist Privacy in Internet Research: Data Enacting Subjects”). This shift results in a number of novel ethical orientations internet research is bound to consider, which we will specify in the conclusion (Section “Conclusion: Novel Ethical Orientations for Internet Research”). Our critique of an overly individualist notion of privacy thus does not merely aim at emphasizing the social value of privacy (Regan, 1995) but at the inherently social character of the individual value of privacy – with internet research being bound to take this into consideration.

privacy as individual control: the bourgeois legacy of privacy discourse

Privacy is one half of the public/private distinction, and the practicing of this distinction goes back as far as the city-state of Ancient Greece (e.g. Arendt, 1958). The distinction belongs to the “fundamental ordering categories in everyday life” (Bailey, 2000, p. 384), structuring social aggregates of various kinds. Privacy is not only multidimensional and extremely scalable, in that it may be applied to diverse phenomena from individual minds to social spheres; it also covers the whole of Euro-American cultural history, from antique to contemporary social formations.


Still, though, while social and cultural history lays quite some stress on the role of public/private occurring in the form of oikos/polis in Ancient Greece (Veyne, 1989), there is only little research on the status and practice of the distinction in, say, the years between 600 and 1600 (an exception is Duby, 1990). Moreover, influential socio-historical reconstructions, such as Elias’ (1969) work on the “civilizing process”, at least implicitly suggest that the distinction was for a while of minor relevance before starting to reemerge around 1600, when multiple processes of social differentiation took off and shifted ever more aspects of human bodies and bodily functions to some unobservable “private realm” (Elias, 1969).

We will consider privacy’s genealogy here only, roughly, from the 18th century onwards; i.e., the historical starting point of our reconstruction is the advent of the Bürgertum. The bourgeoisie interests us predominantly as a fraction of modern society that manages to distribute its own idea of subjectivity up to a point where it becomes the dominant subject culture. The reason for setting out from this observation is that the particular understanding of privacy coined at that time still haunts today’s discourses, with fresh notions appropriate for current sociotechnical conditions only having begun to develop recently. In this spirit, we follow Reckwitz (2006, p. 34) in conceiving of subjectivity as a historically contingent product of symbolic regimes. The latter contain “subject codes” (Reckwitz, 2006, p. 42) materially and semiotically performed in the course of everyday practices.

According to Reckwitz, around the 18th century bourgeois culture, including its subject code, became dominant in European societies, and it remained so until the beginning of the 20th century (Reckwitz, 2006, p. 75), followed first by post-bourgeois organized modernity’s employee culture and later by Postmodernity’s “creative class.” Bourgeois culture may be understood as a cultural training program, cultivating subjects who feature three characteristics: autonomy, inwardness, and morality. These characteristics are to be attained, amongst others, by making extensive use of “technologies of the self” (Reckwitz, 2006, p. 58), i.e. script-based and printed media technologies, such as letters, diaries, and novels – leading to the creation of “inward-oriented” subjects (Reckwitz, 2006). The triad autonomy/inwardness/morality and its relationship to media technologies is also highlighted by Habermas (1991), who believes that the practices in question make subjectivity “the innermost core of the private” (p. 49). In sum, the form of privacy occurring here is thus to a certain extent the contingent outcome of a culturally specific style of using media technologies, a style that is part of a more encompassing subject culture. The latter manages to establish its own blueprint of subjectivity as a societal role model: Humans in general are considered to feature the threefold bourgeois characteristics noted above, which is why there is an implicit normative claim directed at everyone to feature these traits (Habermas, 1991). Quite obviously, privacy discourse is massively influenced by such ideas.


E.g., Warren and Brandeis (1890) state at the outset: “That the individual shall have full protection in person and property is a principle as old as the common law” (p. 193), and they explicitly argue for extending this protection to “products and processes of the mind” (p. 194).

What is striking is that the reproduction of basic elements of this bourgeois interlocking of subject formation mode and privacy notion stretches into post-bourgeois historic phases, i.e. into an era in which, according to Reckwitz (2006), bourgeois culture has already ceased to provide the dominant subjectification mode. While organized modernity’s “employee culture” gains dominance in the 1920s, it is the creative class of post-modern culture that provides the blueprint for subject formation from the 1970s onwards (Reckwitz, 2006). However, if we turn to Westin’s privacy theory, we see how basic features of bourgeois subjectivity still inform privacy theory in the late 1960s. For Westin (1967), perfect privacy is achieved in the seemingly non-social situation of solitude, and as far as the secluded individual is concerned Westin holds that “In solitude he will be especially subject to that familiar dialogue with the mind or conscience” (p. 31). Privacy-as-retreat thus allows the individual to constitute inwardness, which in turn enables individualized actors to develop autonomy: “The most serious threat to the individual’s autonomy is the possibility that someone may penetrate the inner zone and learn his ultimate secrets, either by physical or psychological means” (p. 33). While having achieved autonomy via privacy, the individual is called upon to govern itself by becoming a moral human being: “The evaluative function of privacy also has a major moral dimension – the exercise of conscience by which the individual ‘repossesses himself.’ (…) it is primarily in periods of privacy that [people] take a moral inventory” (p. 37).

Thus, there is far-reaching congruity between bourgeois subject culture on the one hand and privacy discourse on the other. In bourgeois culture, privacy seems to concern the essential core of the subject, the “inner zone” of her subjective personality. The latter may be threatened if the outside world learns about the subject’s “ultimate secrets”; hence data about the essential personal core are conceived as representational. The role of privacy, then, is to protect the representational data speaking the truth about subjects’ essential personal core from revelation; as there is a constant threat of revelation, subjects protect themselves by way of retreating from the social (Sennett, 1992), or by way of controlling personal information. In this respect, Westin’s famous nutshell definition of privacy as “the claim of individuals, groups or institutions to determine for themselves when, how, and to what extent information about them is communicated to others” (Westin, 1967, p. 7) is prototypical for theoretical definitions of privacy (e.g. Rössler, 2004, defines individual control over information as one dimension of privacy) as well as for juridical ones (e.g. the German Federal Constitutional Court’s “Census Ruling”, in which the judges specified the Right to Informational Self-Determination as the individual right to determine who knows what about oneself on what occasion).


It is therefore only consistent that influential scholars have characterized privacy as a bourgeois and individualistic concept (Bennett, 2011). Now, is there anything wrong with such an individualism?

why individualistic privacy discourse fails to match current practices

In this section we will render plausible the claim that individualistic privacy notions are inadequate to grasp current practices of digital networking. To illustrate our claim, we will draw on two types of sociotechnical practices.1

The first set of practices that we will deal with here is the one constituting subjectivities in Online Social Networks (OSN). Considering the wide distribution and central role of these practices for subject formation (Paulitz & Carstensen, 2014), they can be deemed paradigmatic when focusing on what Reckwitz (2006), in the style of Foucault, calls “technologies of the self” (p. 58), i.e. media technologies used to constitute the subjectivity of the self. As was explained above, the bourgeois ideal type of the subject withdraws at times from company to generate and consume script-based media. It is especially novels, read and discussed in the “world of letters” (Habermas, 1991, p. 51), that such subjects make use of so as to constitute themselves: collectively developed practices, however, that can be, and are, performed in solitude. This use of media technologies features two traits: The quiet reading of literary products creates a self-referential modern subject, and thus the inner core of subjective personality (Reckwitz, 2006, p. 58); and the semantics of these media are deeply moral (Reckwitz, 2006, p. 67). Thus, if the paradigmatic media practice of the bourgeois subject is the reading of moral content in privacy-as-isolation, this situation at once allows subjects to be trained in autonomy (temporary cutting off of social relations, and emotional release from the burden of playing social roles (Westin, 1967, pp. 34–35)), inwardness (quietly reading in seclusion, a dialogue only with oneself), and morality (consuming the semantics of morally uplifting writings).

Now, if the “writing culture” of the 18th/19th century is paradigmatic for the constitution of the bourgeois subject, today it is rather OSNs, digitally networked sociality par excellence, that may count as a media practice central for subject constitution. And obviously the conditions for constituting subjectivity in OSNs are thoroughly different from those of bourgeois media practice. Right from the outset, current subjectivity does not require a retreat from the social but its complete opposite: social networking by digital means. As a result, the constitution of the subjectivity at hand is not based on self-referential inwardness, but on other-referential expressiveness. It is only consistent, then, that the basis of autonomy is also affected by this shift:


Increasingly, social networking is becoming the condition through which to pursue individual goals, by connecting people with the resources (information, other people, opportunities, etc.) necessary to act autonomously – that is, to be able to follow their particular agenda for life. (…) by providing personal information generously and without too much worrying about privacy, individual autonomy – and the ability to act in groups – can be increased. (Stalder, 2011, pp. 510, 511)

Stalder here has a bourgeois meaning of privacy in mind, precisely to establish that such privacy runs fundamentally counter to the practices of digital networking. However, from an empirical point of view, there are in fact privacy problems within OSNs, insofar as users still tend to perceive and address problems concerning information flows as privacy issues – it is just that the bourgeois conception of privacy is overburdened when it comes to grasping them.

In this sense, privacy as a precondition for autonomy has principally ceased to be about retreat from the social. But what about individual control over information? The second set of practices that we refer to illustrates how practicing individual control becomes difficult, too. Those practices concern what we call Calculating Spaces (Ochs, in press). The latter term is meant to point out the transformed relationship between space and digital networking. In the early stages of the internet, a lot of effort was spent on simulating space within the digital medium. In the 1990s, terms such as “cyber-space” or “web-sites” became common, and the virtual landscape of Second Life attracted a lot of attention at the beginning of the 2000s due to the spatial experience it allowed for. While internet access at this time still predominantly happened via desktop computers, today it is rather mobile technologies (smart phones, tablets) that are used. Networked sensor technologies pervade physical space, a tendency that is likely to be intensified by, say, the Internet of Things, Cyberphysical Systems, Smart Homes, or Wearable Computing.

The question arises how individual control over information in such densely networked environments might still be possible – how to give “informed consent” to all these services? The problem is, of course, well known. Framing the setting in terms of individual rights renders it practically impossible for the individual to effectively exercise her rights, for only legal scholars with sufficient time at their disposal are able to read and understand all the inscriptions materializing in services’ terms and conditions. The advent of Calculating Spaces further aggravates the problem: The technologies in question tend to operate below the threshold of perception, and it is difficult to see how the individual subject could be in a position to make decisions concerning her personal information in such a situation. Moreover, wearable technologies, such as the now-failed Google Glass, seem to take away even more individual control over information. The resistance Glass raised, leading to the freezing of the project as a mass market device, is quite telling. To illustrate the whole affair we may refer to an anecdote documented by the New York Times when Glass was in its introduction phase:


The 5 Point Cafe, a Seattle dive bar, was apparently the first to explicitly ban [Google] Glass. […] the bar’s owner, Dave Meinert, said there was a serious side. The bar, he said, was “kind of a private place.” (Streitfeld, 2013)

The banning of the technology from the bar space implies that the owner does not act on the assumption that anyone will be able to exercise individual control over information once Google Glass (wearers) are present. There is a turn to collective rules instead, safeguarding the particular privacy of the place. Spatial technologies such as this one thus provide a taste of the difficulties that arise when living in completely networked surroundings. boyd (2012) drives this point home eloquently:

Expecting that people can assert individual control when their lives are so interconnected is farcical. Moreover, it reinforces the power of those who already have enough status and privilege to meaningfully assert control over their own networks. In order to address networked privacy, we need to let go of our cultural fetishization with the individual as the unit of analysis. We need to develop models that position networks, groups, and communities at the center of our discussion. (p. 350)

We are well aware that privacy research has already begun to realize the shortcomings of individualistic notions (Nissenbaum, 2010; Rössler & Mokrosinska, 2015), and our ultimate goal is to contribute to this shift. With a view to the thrust of the article, however, we would like to establish at this point only that the bourgeois legacy of privacy, and especially its individualistic framing, as it plays out in ideas such as retreat and individual control, is not adequate to grasp current digital practices; and there are good reasons to expect that, at least in online environments, it will become even more difficult in the future to practice bourgeois privacy. As a result, established strategies of privacy protection also come under pressure, with ethical implications not least for internet research. This is what we will turn to next.

post-individualist privacy in internet research: data enacting subjects

Whereas bourgeois subject conceptions, as implied by the prevalent discourse on privacy, are no longer viable, such conceptions nevertheless still constitute the basis for a range of privacy protection strategies, including those protecting the privacy of research subjects in internet research.

One such strategy is asking the research subject for “informed consent,” a provision to be found in many common research settings. Making sure that the subject actually is informed is usually the responsibility of the researcher, e.g. by providing leaflets, explanations, or conversations.


Consequently, the subject signs a form or otherwise declares consent. The underlying assumption is that the subject is enabled to control his information in the sense that he makes a conscious decision based on what he is told about the research setting by the researcher. Thus, the research setting suggests that the subject is a person who can understand the information provided and the consequences of the use of data for herself and others. Correspondingly, the researcher would be a person able to estimate the probable risks of data collection and processing, and to care for the orderly conduct of the research. Similarly, using pseudonyms for interlocutors and removing directly identifying information from statements enacts an ethnographer as a responsible, privacy-respecting researcher.

We will see below that the control thus suggested is no longer effective, neither for researchers nor their subjects. Still, the practices of informed consent (as well as of anonymization and pseudonymization) are widely accepted, since they cohere with notions of the individual that we have been trained to adopt for several centuries. The force of these established discourses, which makes individual control a matter of course, then structures the subject positions of researcher and subject – however, without actually safeguarding the control they have over the data. From this perspective, provisions which are meant to protect individuals’ privacy actually contribute to who the subjects and the researchers are. So, rather than protecting persons, they contribute to the creation of these subject positions in the first place. This way, an illusion of control, which is essentially an illusion of privacy, is created. The bottom line is that the research setting helps to reproduce bourgeois modes of subject formation, while at the same time failing to grant effective bourgeois privacy (in the sense of control over information) to the subjects. This is related to the way privacy breaches are conceptualized in this context. Usually, affordances to protect the privacy of subjects have to ensure that no “personally identifiable” information leaks. This means that no information about the autonomous individuals who have consented to the use of “their” data should become available.

But why is individual control over information no longer effective? This is above all due to the transformed character of data. Advances in data analytics, sometimes dubbed “Big Data,” promise an exploratory approach to data (Kitchin, 2014). Generally speaking, they promise that the data will tell us what to look for, rather than us looking for something in the data. Thus, data are seen as a resource in which relations, correlations, patterns, rules, and associations can be detected. This entails a shift in the idea of data itself. In the context of information technology,2 data have long been used in the way established by (relational) databases. There are fields for particular entries – name, age, blood type, Twitter handle, etc. Thus, each field contains a particular, determined piece of information, which can be queried, sorted, tagged, and recombined. New methods in data analysis treat data as a resource for very different modes of inquiry – rather than as entries with fixed meanings.
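To make the contrast concrete, consider the following minimal sketch (our illustration only; the tables, column names, and values are invented). A fixed-schema query poses a predetermined question to determined fields, whereas an exploratory pass simply scans the same kind of records for whatever patterns happen to be present:

```python
import sqlite3
from itertools import combinations

import pandas as pd

# Fixed-schema mode: each field holds one determined piece of information,
# and the query poses a question decided in advance.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, age INTEGER, twitter_handle TEXT)")
con.executemany(
    "INSERT INTO users VALUES (?, ?, ?)",
    [("A", 34, "@a"), ("B", 27, "@b"), ("C", 45, "@c")],
)
print(con.execute("SELECT name FROM users WHERE age > 30").fetchall())

# Exploratory mode: the same kind of records become a resource to be "mined".
# We scan every pair of numeric columns and keep whichever correlations
# happen to look interesting -- no prior hypothesis is required.
df = pd.DataFrame({
    "age": [34, 27, 45, 31],
    "posts_per_day": [2, 14, 1, 9],
    "followers": [80, 950, 40, 600],
})
for a, b in combinations(df.columns, 2):
    r = df[a].corr(df[b])
    if abs(r) > 0.8:  # an arbitrary "interestingness" threshold
        print(f"candidate pattern: {a} ~ {b} (r = {r:.2f})")
```

The second half embodies the promise that “the data will tell us what to look for”: the question emerges from whichever correlations clear an arbitrary threshold, not from a prior hypothesis.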


As a resource, data are something that our actions leave in the (digital) environment and which can be appropriated (“mined”) in very different ways. The mode of inquiry, and the context in which the data are put, determine their significance. Cheap computing power allows researchers to experiment with all kinds of algorithms and tools for analysis to see which one brings interesting results. Against this background, the bourgeois concept of privacy – entailing a view of personality with fixed features that are represented by personal data such as names, addresses etc. – is insufficient, for in the right context, using an appropriate tool, any data can become personal information.

This renders, say, anonymizing statements collected from social media or blogs utterly difficult. Any string longer than a few words will suffice to find the original statement using a search engine, no matter how its source is concealed. The ease of searching and parsing the web also makes it very convenient to procure the additional contextual information necessary for data analytics. E.g., it proved possible to identify people in a data set that contained only anonymized movie ratings (Narayanan & Shmatikov, 2008). Other researchers managed to predict users’ sexual orientation by analyzing Facebook friendships (Jernigan & Mistree, 2009). Many more attempts at de-anonymizing datasets attest to this set of problems.

And there are still more issues. What people post on social media not only reveals things about them, but might also allow insights to be derived about their “friends”, followers, or similarly associated users. Even if the individualist setting worked perfectly, with every individual carefully reflecting on what they put on their social media sites before allowing researchers to use these data, the resulting research could create an aggregate that allows information to be inferred that none of the subjects provided individually – and that thus was not part of the things they pondered before giving consent.
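The basic pattern of such aggregate inference can be sketched in a few lines (a toy illustration of the underlying homophily logic, not Jernigan and Mistree’s actual method; all names and attribute values are invented):

```python
from collections import Counter

# A toy friendship graph plus the attribute that *some* users chose to
# disclose. Only the inference pattern matters here: an undisclosed
# attribute is guessed from friends' disclosures.
friends = {
    "dana": ["alex", "bo", "chris"],
    "alex": ["dana", "bo"],
    "bo": ["dana", "alex", "chris"],
    "chris": ["dana", "bo"],
}
declared = {"alex": "X", "bo": "X", "chris": "Y"}  # dana declared nothing

def infer(user):
    """Majority vote over the declared attributes of a user's friends."""
    votes = Counter(declared[f] for f in friends[user] if f in declared)
    return votes.most_common(1)[0][0] if votes else None

# dana never provided this information, yet the aggregate suggests it.
print(infer("dana"))  # -> "X"
```

The guessed attribute never passes through any consent procedure: it is produced by the aggregate of others’ disclosures, not provided by the individual concerned.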
Such possibilities of aggregative, inductive, or explorative data analysis also change the meaning of “privacy breach.” When research data are released (e.g. in anonymized or aggregate form), or when security measures are circumvented and unauthorized persons gain access to the data, third parties still might not be able to identify individuals – in the sense that they know names, addresses etc. However, quite often this is not their aim in the first place. It might be enough if they can direct targeted ads, or perhaps verbal abuse, or tailored viruses at subjects’ (rather anonymous) e-mail address or Twitter handle. Thus, a lot of morally or legally negative actions may happen to persons without their essential personality being identified. Having said this, a lot of data analytics is not about knowing who you are, but what you are like (Matzner, 2014). It is not about personal identification or revealing the essential core of one’s personality, but about sorting people into groups. And depending on who performs the sort, this group may consist of valuable customers or suspected political activists. It is for these reasons that mathematical approaches to maintaining privacy also threaten to fail.

Consider, e.g., concepts such as “differential privacy” (Dwork, 2008) or “k-anonymity” (Sweeney, 2002): these are valuable tools that provide mathematical guarantees that members of a database cannot be identified when information from it is selectively revealed. But they are still tailored to the problem of identifying individuals, and they do not concern other information which can be learned from the aggregate database.
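A toy example (our illustration, not Sweeney’s formal model) shows both the guarantee and its limit: the table below is 3-anonymous, so no row can be linked to an individual, yet one equivalence class still reveals what all of its members are like:

```python
from collections import defaultdict

# A toy "anonymized" table: the quasi-identifiers (age bracket, ZIP prefix)
# are already generalized; the last column is the sensitive attribute.
rows = [
    ("30-39", "481**", "diagnosis_A"),
    ("30-39", "481**", "diagnosis_A"),
    ("30-39", "481**", "diagnosis_A"),
    ("20-29", "530**", "diagnosis_B"),
    ("20-29", "530**", "diagnosis_C"),
    ("20-29", "530**", "diagnosis_B"),
]

def is_k_anonymous(rows, k):
    """Every combination of quasi-identifier values occurs at least k times."""
    groups = defaultdict(int)
    for age, zip_prefix, _ in rows:
        groups[(age, zip_prefix)] += 1
    return all(count >= k for count in groups.values())

print(is_k_anonymous(rows, 3))  # True: each person hides among >= 3 rows

# The limit: identification is blocked, yet the first equivalence class
# still tells us what *all* of its members are like, because they share
# a single sensitive value.
key = ("30-39", "481**")
print(key, {s for a, z, s in rows if (a, z) == key})  # -> one leaked value
```

This is the well-known homogeneity problem: identification is blocked, but the sorting of people into groups – and what can be learned about those groups – remains untouched.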
The thesis to be derived from these observations is that ethical considerations of privacy as regards internet research must be based on novel notions of subjective personality, of data, and also of privacy. For at least some of the problems privacy protection currently faces have nothing to do with personality understood as the essential core of the subject; data are not necessarily problematic because they represent facets of this essential core; and privacy does not generally amount to protecting the core of a subject’s personality from being revealed through public representation of personal data. But how, then, to conceive differently of subjectivity, data, and privacy in internet research settings?

For a start, we may point out that the classification or sorting of people – the way we appear to each other on the internet – is not just determined by information representing facts about subjects; i.e., our understanding of data must not be reduced to the information they contain or allow to be derived. Likewise, algorithms must not be reduced to their functionality. They must be understood as embedded in social practices. Thus Gillespie (2014) writes:
What users of an information algorithm take it to be, and whether they are astute or ignorant, matters. […] This means that, while the algorithm itself may seem to possess an aura of technological neutrality, or to embody populist, meritocratic ideals, how it comes to appear that way depends not just on its design but also on the mundane realities of news cycles, press releases, tech blogs, fan discussion, user rebellion, and the machinations of the algorithm provider’s competitors. (p. 182)

Similar considerations hold concerning data. In particular, social media sites, blogs, and similar resources from the internet come with a sheen of authenticity: they are where persons perform themselves. This sheen of the information is central to the social-networking-based mode of individuality described above. In contrast, using sophisticated analytical tools can add a layer of objectivity to information derived from data (Gillespie, 2014), in particular in the context of science and research. Quite generally, data and the algorithms used to analyze them enact subject positions. And this potential to enact goes beyond the information or facts contained in the data (the veracity of which the critics of big data rightfully contest). In this sense, data have to be conceived as performative: To an increasing extent our personalities are enacted using data and digital communication (Matzner, 2016). When data or results released by researchers make the subjects appear in a different, negative light, the damage is done – even if that appearance is not a fact but depends on “what users of an information algorithm take it to be,” as Gillespie writes.


For instance, when social media research data arouse suspicions concerning sexual orientation, the issue is not only whether this information is correct in a representational sense, but whether and how the research contributes to the appearance of a person.3 Research, then, might put a subject into a position where she has to (publicly) deal with a justified private issue – and this issue is not necessarily about the revelation of a subject’s personal core by way of stripping the subject of her control over personal data that represent facets of this core. In such a case, data, and the way they are manipulated, rather enact a type of subjectivity that is unwanted by the person; preserving privacy in such a case means preserving the person’s subject performance. And as such a type of privacy is beyond the individual control of the subject, forming part of collective data practices instead, ethical considerations have to set out from a relational (subjectivity), enactment (data), and performative (privacy) perspective. The conclusion will translate the outlines of this novel problem space into ethical considerations that internet research is supposed to take into account.

conclusion: novel ethical orientations for internet research

Most of the newly arising issues are not reflected upon in individualistic research settings. Neither researchers nor subjects can sensibly claim to identify the type and extent of information used in internet research. Neither can they foresee all the uses of that data that will potentially be possible. For these reasons, internet research settings should abstain from configuring the subject and the researcher as self-contained individuals who have rational oversight over a determined situation. Instead, they should consciously reflect the openness and transience of data analytics. Rather than asking what will be known about the subject and who might control this knowledge, the ethical question should be: Will the data make it possible to enact a subject in a problematic manner? Thus, researchers should see themselves as persons who potentially interfere with the way in which persons appear to others (peers, advertisers, criminals, etc.). They should take into account what the context of the research, the researcher’s discipline and personality, the institutional setting, etc. add to this appearance, to the performativity of data – beyond the facts and knowledge derivable from it. This means that researchers should

• change their notions of privacy breaches from releasing personal information to releasing enough information to create meaningful aggregates.

• avoid creating the illusion of control by maintaining research settings that perpetuate bourgeois individualist notions of privacy. Rather, they should reflect on and point out the openness and the possibility of permanent resignification of data.


• reflect that avoiding identification is not the only issue with anonymous databases: privacy issues may be constituted not only by knowing who a person is, but also by knowing what this person is like.

• conceive of data not only as digital representations of the world but as performing various meanings according to use and context.

• orient their research not towards facts and personal information (what becomes known about the person) but towards persons’ appearance in a networked world: will the research make it possible to enact the subject in a problematic manner?

This is not to say, of course, that existing provisions to preserve privacy should be abandoned for good, but that they should be supplemented by attention to how the research results allow the subjects to be enacted in a problematic manner. This should also include the consideration that persons who are completely uninvolved in the research are probably among those concerned. Finally, the openness of potential uses of data and of the group of people concerned should be made an explicit part of privacy provisions. While research in times of social data has to be seen as a risk for privacy that cannot be fully mitigated, these orientations can be a first step towards re-inventing privacy-related research ethics for future internet research.

n o t e s

1. Due to space limitations we provide here a very condensed analysis. Elaborated treatments of the issues dealt with can be found in Ochs (2015b, in press).

2. For a more general discussion of the concept of data see Rosenberg (2013).

3. This is not to imply that there is anything suspicious about any sexual orientation; but regarding the discriminatory reality of our societies, sexual orientation is considered private for good reasons.

r e f e r e n c e s

Arendt, H. (1958). The human condition. Chicago, IL: University of Chicago Press.
Bailey, J. (2000). Some meanings of “the private” in sociological thought. Sociology, 34(3), 381–401.
Bennett, C. (2011). In defense of privacy: The concept and the regime. Surveillance and Society, 8(4), 485–496.
boyd, d. (2010). Privacy and publicity in the context of Big Data. Retrieved from http://www.danah.org/
boyd, d. (2012). Networked privacy. Surveillance and Society, 10(3/4), 348–350.
boyd, d., & Crawford, K. (2011). Six provocations for Big Data. Retrieved from http://papers.ssrn.com/
Duby, G. (Ed.). (1990). Geschichte des privaten Lebens. 2. Band: Vom Feudalzeitalter zur Renaissance. Frankfurt a. M.: S. Fischer.


Dwork, C. (2008). Differential privacy: A survey of results. In M. Agrawal, D. Du, Z. Duan, & A. Li (Eds.), Theory and applications of models of computation: 5th International Conference, TAMC 2008, Xi’an, China, April 25–29, 2008. Proceedings (pp. 1–19). Berlin: Springer. Retrieved from http://
Elias, N. (1969). The civilizing process. Vol. I: The history of manners. Oxford: Blackwell.
Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. Boczkowski, & K. Foot (Eds.), Media technologies (pp. 167–194). Cambridge, MA: MIT Press.
Habermas, J. (1991). The structural transformation of the public sphere: An inquiry into a category of bourgeois society (T. Burger, Trans., with F. Lawrence). Cambridge, MA: MIT Press.
Heesen, J., & Matzner, T. (2015). Politische Öffentlichkeit und Big Data. In P. Richter (Ed.), Privatheit, Öffentlichkeit und demokratische Willensbildung in Zeiten von Big Data (pp. 151–167). Baden-Baden: Nomos.
Jernigan, C., & Mistree, B. (2009). Gaydar: Facebook friendships expose sexual orientation. First Monday, 14(10). Retrieved from http://firstmonday.org/article/view/2611/2302
Kitchin, R. (2014). Big Data, new epistemologies and paradigm shifts. Big Data & Society, 1(1), 1–12.
Latour, B. (2007, April 6). Beware, your imagination leaves digital traces. Times Higher Education Literary Supplement. Retrieved from http://www.bruno-latour.fr/sites/default/files/P-129-THES-GB.pdf
Manovich, L. (2012). Trending: The promises and the challenges of big social data. In M. K. Gold (Ed.), Debates in the digital humanities (pp. 460–471). Minneapolis: University of Minnesota Press.
Matzner, T. (2014). Why privacy is not enough privacy in the context of “ubiquitous computing” and “big data.” Journal of Information, Communication and Ethics in Society, 12(2), 93–106.
Matzner, T. (2016). Beyond data as representation: The performativity of Big Data in surveillance. Surveillance and Society, 14(2), 197–210.
Narayanan, A., & Shmatikov, V. (2008). Robust de-anonymization of large sparse datasets. In Proceedings of the 2008 IEEE Symposium on Security and Privacy (pp. 111–125). Oakland, CA: IEEE Computer Society.
Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford University Press.
Ochs, C. (2015a). BIG DATA – little privacy? Eine soziologische Bestandsaufnahme. In P. Richter (Ed.), Privatheit, Öffentlichkeit und demokratische Willensbildung in Zeiten von Big Data (pp. 169–186). Baden-Baden: Nomos.
Ochs, C. (2015b). Die Kontrolle ist tot – lang lebe die Kontrolle! Plädoyer für ein nach-bürgerliches Privatheitsverständnis. Mediale Kontrolle unter Beobachtung, 4(1). Retrieved from http://www.
Ochs, C. (in press). Rechnende Räume: Vier Merkmale der digitalen Transformation von (räumlichen) Privatheit(en). Zeitschrift für theoretische Soziologie, Sonderband 3: “Raum & Zeit”.
Paulitz, T., & Carstensen, T. (Eds.). (2014). Subjektivierung 2.0: Machtverhältnisse digitaler Öffentlichkeiten. Wiesbaden: Springer VS.
Reckwitz, A. (2006). Das hybride Subjekt: Eine Theorie der Subjektkulturen von der bürgerlichen Moderne zur Postmoderne. Weilerswist: Velbrück.
Regan, P. (1995). Legislating privacy: Technology, social values, and public policy. Chapel Hill, NC: The University of North Carolina Press.
Rogers, R. (2014). Digital methods. Cambridge, MA: MIT Press.


Rosenberg, D. (2013). Data before the fact. In L. Gitelman (Ed.), Raw data is an oxymoron (Infrastructures series). Cambridge, MA: MIT Press.
Rössler, B. (2004). The value of privacy. Cambridge: Polity Press.
Rössler, B., & Mokrosinska, D. (Eds.). (2015). Social dimensions of privacy: Interdisciplinary perspectives. Cambridge: Cambridge University Press.
Sennett, R. (1992). The fall of public man. New York, NY: W. W. Norton.
Stalder, F. (2011). Autonomy beyond privacy? A rejoinder to Bennett. Surveillance & Society, 8(4), 508–512.
Streitfeld, D. (2013). Google Glass picks up early signal: Keep out. The New York Times. Retrieved from http://www.nytimes.com/2013/05/07/technology/personaltech/google-glass-picks-up-
Sweeney, L. (2002). k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(5), 557–570.
Venturini, T. (2012). Building on faults: How to represent controversies with digital methods. Public Understanding of Science, 21(7), 796–812.
Venturini, T., & Latour, B. (2009). The social fabric: Digital traces and quali-quantitative methods. Retrieved from http://www.medialab.sciences-po.fr/publications/Venturini_Latour-The_Social_Fabric.pdf
Veyne, P. (Ed.). (1989). Geschichte des privaten Lebens. 1. Band: Vom Römischen Imperium zum Byzantinischen Reich. Frankfurt a. M.: S. Fischer.
Warren, S. D., & Brandeis, L. D. (1890). The right to privacy. Harvard Law Review, 4(5), 193–220.
Westin, A. F. (1967). Privacy and freedom. New York, NY: Atheneum.


reaction by céline ehrwein nihan

Matzner and Ochs forcefully demonstrate the limits and weaknesses of bourgeois privacy used as a moral principle to frame internet research and development. They thereby prompt us to radically rethink the meaning and scope of this notion. In this perspective, it appears to me that the thought of Hannah Arendt, and more specifically her reflections on plurality in its relation to the private realm, might be particularly fruitful.

In her writings, Arendt recognizes and highlights the importance of privacy for subject formation and the construction of his/her autonomy. However, it seems to me that her vision of the private sphere differs quite significantly from that of the bourgeois tradition. According to her, privacy is (1) neither devoid of otherness (2) nor the one and only place of self-fulfilment and self-disclosure.

(1) In the solitude of his/her thoughts (the place of privacy par excellence), the individual is never completely lonely nor detached from the world to which he/she belongs. Namely, his/her personal judgment is forged through a fictitious dialogue with partners on the surrounding world. And even his/her self-consciousness constitutes him/her as a being inhabited by plurality. Yet, it is this inner plurality, as well as the permanent relation to others and to the world (that are both close and distant), which allows the subject to form his/her personality.

(2) This being said, according to Arendt, the subject’s personality is also built and, in fact, only reveals itself in the confrontation with political and public plurality. “In acting and speaking, men show who they are, reveal actively their unique personal identity and thus make their appearance in the human world […]. This disclosure of ‘who’ in contradistinction to ‘what’ somebody is […] implicit in everything somebody says and does. It can be hidden only in complete silence and perfect passivity, but its disclosure can almost never be achieved as a wilful purpose, as though one possessed and could dispose of this ‘who’ in the same manner he has and can dispose of his qualities. On the contrary, it is more than likely that the ‘who’, which appears so clearly and unmistakably to others, remains hidden from the person himself” (Human Condition, University of Chicago Press, 1958, 2nd ed., 179).

Hannah Arendt’s thoughts lead me to think that, in the internet age, it is perhaps not so much privacy that we should seek to preserve as the conditions for the realization of plurality, which permit the revelation of our singularities and common humanity.

Céline Ehrwein Nihan, PhD, is a Professor in Ethics at the School of Management and Engineering Vaud (HEIG-VD), University of Applied Sciences and Arts Western Switzerland (HES-SO).


reaction by christian pentzold

References to privacy cannot, it seems, escape normative generalizations when it comes to defining who should be entitled to which sort of privacy. In principle, the right to privacy pertains to all human beings. Yet the problems already start when it comes to saying what privacy actually is – and thus what historical, social, or cultural frameworks should be mobilized in order to account for its peculiar status and the demands that could be derived from it. Matzner and Ochs therefore develop a poignant critique of the bourgeois legacy within the privacy discourse. But the quaint location of privacy within reclusive realms of autonomous isolation and moral devotion is not only empirically outdated in industrialized societies and a post-modern world. Whether or not this patrician vision once held true for well-to-do people, it never accounted for the reality of privacy practice and privacy concerns among the majority of people at any time.

Of course, this ideal had, and still might have, a considerable force in setting expectations of what privacy is about and through which measures it could be secured. In addition, some studies have shown that internet users in different nations seem to have similar attitudes toward privacy (Dutta, Dutton, & Law, 2011). Yet precisely such spurious dominance should encourage our creativity to seek out as many environments and circumstances as possible for contextualizing concepts of privacy, instead of assuming some homogeneous model. In making the case for a much more nuanced understanding of privacy in terms of the performative qualities of data, Matzner and Ochs aim at providing an adequate and updated understanding and management of the privacy issues of digitally networked people. However, by asking to whom their thoughtful considerations apply, we might also be able to challenge the uneven terms and conditions under which people become enrolled in these dynamics – or are left out. This would allow us to see the new model of privacy as again only one, perhaps particularly Western, model of privacy, which needs to be complemented by experiences and paradigms that would otherwise again be marginalized.

Dutta, S., Dutton, W. H., & Law, G. (2011). The New Internet World. INSEAD Faculty and Working Paper. Retrieved from https://flora.insead.edu/fichiersti_wp/inseadwp2011/2011-89.pdf

Christian Pentzold is an Associate Professor (Junior Professor) for Communication and Media Studies with a focus on Media Society at ZeMKI, Centre for Media, Communication and Information Sciences, University of Bremen, Germany.


reaction by d. e. wittkower

Matzner and Ochs do an excellent job of situating the liberal individualist conception of the research subject in Euro-American culture and history. We can certainly point to other cultural conceptions of persons that have relatedness to others and to communities at their core – relational ideas of the self that are primary in, for example, East Asian cultures influenced by Confucianism, or in African cultures as seen through the community ideals of ubuntu.

From the perspective of feminist ethics of care, we can see how the liberal individualist conception of the self, which remains itself in disconnection from others, is not only a particularly Euro-American conception, but a typically male Euro-American conception. Women in Euro-American culture more often than men describe their fundamental self in relation to others. Who am I? A daughter; a wife; a mother. Men are enculturated to view these relationships as modifications of a separable, core self that exists prior to, and remains essentially unchanged through, the development of these relationships – often to men’s social, psychological, and moral detriment. But while women are enculturated to view their core selves in their caring relationships with others, the standard conception of the “human” in a Euro-American cultural context adopts the typically masculine self-conception. We see this in the conceptualization of the research subject, and in many other places as well – the way that interviews with successful women (but not men) nearly invariably ask about the sacrifices and tensions of having (or not having!) spouses and children, or the way that having children or having had multiple marriages can become a source of concern about female politicians where it is typically a non-issue for male politicians.

The isolatable “core self” is also a typically white conception of self in Euro-American culture. Consider John Rawls’s famous “veil of ignorance” thought experiment, in which we are asked to imagine what kind of distribution of rights, opportunities, and goods we would want if we didn’t know our position in society, including our age, gender, race, socio-economic status, sexual orientation, able/disabled status, etc. This presumes that we would still be ourselves in a meaningful sense apart from these aspects of the self, which are framed, as a hidden assumption, as secondary and accidental to, rather than primary and constitutive of, who we are. Critics have rightly pointed out that this hidden assumption is questionable, and that women, non-white persons, and other marginalized persons often view our identities as inseparably bound up with who we are and how we conceive of ourselves and others.

How can we conceive of the research subject outside of these liberal individualist, Euro-American, white/male/het/cis/able background assumptions? What practices of care and respect are demanded of the ethical researcher by the relational self?

D. E. Wittkower is an Associate Professor of Philosophy & Religious Studies at Old Dominion University. He is a philosopher of technology and culture.

c h a p t e r  f o u r

Chasing ISIS

Network Power, Distributed Ethics and Responsible Social Media Research

jonathon hutchinson, fiona martin, and aim sinpeng

i n t r o d u c t i o n

Of all the ethical dilemmas facing internet researchers in the so-called “social age” (Azua, 2009), the most difficult is negotiating the expansion of actors and risks in distributed communicative environments, particularly where actors have unequal power over the terms and conduct of research. As digital media ethicist Charles Ess (2013) notes, academic research ethics have historically focused on the duties of individual, institutional researchers to protect vulnerable human subjects from harm – a focus that is destabilized in studies of social media environments. Social media users, bots, communities, platform providers, and governments all possess degrees of creative and legal autonomy that pose problems for ensuring traditional procedural research controls such as consent or confidentiality. Some users may deliberately mask their identities and troll their observers. Terms of service and privacy controls may change mid-study. States may surveil and control certain data transactions. The proliferation of non-human actors alone challenges scholars to reimagine the concepts of subject, sociality, and social responsibility. Intellectual property and data protection laws are evolving too, provoking difficult questions about the terms of research access to platforms and their responsibilities to the creators of that information. Thus the contingent nature of internetworked power relations complicates the identification of risks, evaluation of benefits, and assessment of social implications for those involved in any social media study.


In this chapter we review normative, procedural, and practice-based challenges to negotiating this complexity, based on our work in media and communications research and political science. We also discuss the ethical dilemmas scholars face when conducting contemporary and highly controversial research on the Islamic State (IS). Over the past decade academics in these fields, and the ethics review committees that approve their work, have looked to guidelines developed by professional organisations such as the Association of Internet Researchers (AoIR), the European Society for Opinion and Market Research (ESOMAR), and the British Psychological Society (BPS) (BPS, 2013; ESOMAR, 2011; Markham & Buchanan, 2012) for guidance on best-practice ethical approaches to social media research. However, conceptions of what actions might be inappropriate or harmful, for whom and in what context, are not always shared or agreed by all stakeholders of these environments, particularly where proprietary and government interests dominate, consumer protections are limited, and users may have already consented to market research as part of their platform use contracts. Facebook’s “emotional contagion” study (Kramer, Guillory, & Hancock, 2014), which tried to manipulate users’ emotional states without their specific informed consent, is an infamous case in point. It triggered global debate about the principles of corporate social media research, such as the need for opt-out provisions for users who could not anticipate the scope or consequences of that company’s intended research (Bowser & Tsai, 2015; Carter et al., 2016). In practice, then, research reflexivity is central to conceiving and administering network-aware, respectful, and just social media inquiries. It is also critical to informing and revising ethics review processes in light of research recruitment, data collection, and analysis (see Guillemin & Gillam, 2004).

Drawing on Manuel Castells’s (2011) four-fold conception of network power and ideas of “distributed morality” (Floridi, 2013, p. 728) or “distributed responsibility” (Ess, 2013, p. 21) among network agents, we explore how we might apply reflexive ethical thinking to map the shifting ethical roles and relations of the social media systems we are studying, and to negotiate the procedural and practice-based aspects of the ethical process. As a case study we consider how the pursuit of socially contentious knowledge, such as the identities and communicative activities of Islamic State (IS) agents, might lead us to question normative research ethics approaches and processes. Finally, with a mind to innovation in social media studies, we make recommendations to university ethics bodies about productive ways to strengthen ethical consultation, decision-making, and review processes.

normative approaches to social media research ethics

While there are many conceptions of social media research in this collection, we find that academics use social media platforms in four ways:


1. to find and recruit participants;

2. to gather information about people, communities, their activities and/or their discursive engagement;

3. to observe social actor behaviors and relations; or

4. to analyse the environments in which actors interact (e.g. as code, interfaces or systems).

Human ethics research approval is required for studies that involve interaction with identifiable living human subjects or interventions into their activities, social environments, and communication flows. Ethical reviews are a procedural guarantee that normative principles of research integrity have been considered and codified in the research methodology, as well as a forum for ongoing debate about ethics in practice.

In the West, our general understanding of human ethics principles is based on canonical international declarations, including The Belmont Report (Department of Health, Education, and Welfare, 1979), the United Nations Declaration of Human Rights (United Nations, 1948), and the Declaration of Helsinki (World Medical Association, 1964). These documents represent humanity’s understanding of basic human rights, which have emerged from medical science, and have been widely adopted across the scholastic disciplines. They collectively highlight the need for human dignity, justice, safety, protection, respect for persons, and participant autonomy. They also emphasize that human research subjects should always take part in research voluntarily. A core ethical consideration with any research is the maximization of the benefits that the project can potentially produce, alongside the minimization of harm to the participants. These guiding principles provide the foundation for ethical decision-making, from which scholars then need to assess, evaluate, and prioritize their actions.

The primary rights-based ethical framework for social media scholars in media and communications disciplines was developed by the Association of Internet Researchers (AoIR) Ethics Working Committee in 2002 (Ess & AoIR Ethics Working Committee, 2002), and revised in 2012 (Markham & Buchanan, 2012). In AoIR’s Ethical Decision-Making and Internet Research report, internet research encompasses the collection of data from web, mobile, locative, and social media platforms; establishing an understanding of how people access and use the internet and how they design systems (via software, code, and interface studies); the storage of internet research datasets; the analysis (semiotic, visualisation, or other) of images, text, and media; and the large-scale use and regulation of the internet (Markham & Buchanan, 2012). Importantly, the report notes that “the fields of internet research are dynamic and heterogeneous” (Markham & Buchanan, 2012, p. 2), and as such any rules or recommendations for research conduct can never be static.


A field of research whose objects and relations are in a constant state of innovation, therefore, requires perpetually new ethical approaches that accommodate technological and regulatory étalonnage. With that in mind, the report recommends contextual flexibility in ethical decision-making based on key recommendations and rights-based principles. First, it proposes that the more vulnerable the study population, the greater the obligation of the researcher to protect them. Second, it argues that a "one size fits all" approach is unhelpful in internet research because harm should be considered as contextual. Third, it notes that human research principles should be considered even where identifiable individuals' presence or involvement in the research data is not obvious, for example, in platform or interface studies. Fourth, it emphasizes that social media users have rights and agency "as authors, research participants and people" (p. 4), and as such the social benefits of any study should outweigh the risks of the research. Fifth, it holds that ethical decision-making should be followed at each step of the research, from data collection through to dissemination. Finally, it advocates that ethical research should be a deliberative or consultative process that incorporates input from many stakeholders, including other researchers, review boards, and the research participants themselves. This final recommendation is of interest in that it embodies a notion of distributed power and responsibility in ethical decision-making that is fundamental to digital media studies.

distributed morality and ethics

Notions of distributed ethics, exemplified in the work of Luciano Floridi (2013) and Charles Ess (2013), explore the consequences of the interconnected, shared, and reciprocal actions that constitute networked digital communications. Floridi's concept of "distributed morality" considers whether "'big' morally-loaded actions" can also be "the result of many, 'small' morally-neutral or morally-negligible interactions" (2013, p. 729), undertaken by human, artificial, or hybrid agents. In distributed environments, Floridi argues, "good" actions can be aggregated and "bad" actions can be fragmented, but the interdependent operation of moral agents means that the ethical impacts of any actions need to be evaluated in terms of the consequences for systemic and individual well-being.

Many academics and corporate researchers have assumed that publicly visible and legally accessible data streams such as Tweets or Instagram posts can be collected and analysed without the informed consent of the creators or communities. However, the analytical judgements made about this material may not only lack qualitative substance, but also have unseen consequences for those deemed to be part of a contentious social movement (e.g., political extremists or anti-government activists) or responsible for transgressive behaviors (e.g., sexting or racism).


Distributed ethical decision-making relies on a first-order framework of infraethics: agreed expectations, attitudes, values, and practices that can foster coordination for morally good ends, as well as socio-technological mechanisms or moral enablers that support and promote positive actions. For example, social media platforms ordinarily have standards that guide users in how they should communicate with each other, and a community manager who might "block" or "delete" those who deviate from these norms. Here good outcomes are the result of an ensemble of ethical facilitation, with coordinated investment from as many communicative agents as possible.

Ess (2013) describes "distributed responsibility" across a network as "the understanding that, as these networks make us more and more interwoven with and interdependent upon one another, the ethical responsibility for a given act is distributed across a network of actors, not simply attached to a single individual" (Ess, 2013, p. xii). That is, some responsibility for research relations and outcomes may lie beyond academics, with other moral agents in a social network – with those communicating in a network and their choice of platform, privacy tools, and form of address; with the platform provider in enabling public data streams or effective privacy settings; or with regulatory bodies in ensuring user rights are not breached by illegal uses of data. This notion sees unethical practices as the burden of all system agents and, in that respect, demands that researchers recognize and deliberate on their methodology and ethical strategy with all those stakeholders. Ideally then, as ethics is a moving feast, researchers should also share in-progress findings, along with any changes to ethical decision-making. There is an element of action research to this approach in that both researcher and research participant are collaborating to improve process, with the benefits and risks of a study debated and made more apparent to all. Communicative reciprocity, a key concept in the online communities literature (Bonniface, Green, & Swanson, 2005; Papadakis, 2003), is critical to this process of consultation and to researchers' goal of consulting effectively with their network agents.

Distributed models have obvious limits in terms of motivating agent participation (persuading powerful players such as platforms or governments to take part in research design), and engaging non-human participants or vulnerable and hidden populations, such as pro-anorexia communities or hackers. In some cases it may not be practical, appropriate, or safe for researchers to share every ethical decision or all findings with all stakeholders. As a result social media researchers cannot defer or deflect their primary ethical obligations to other agents in a social network. Rather we are challenged to consider the effective moral agency of any participant and the characteristic power relations of specific network contexts to better understand how otherwise seemingly morally neutral research actions may interact/aggregate with unjust and/or damaging outcomes.


Mapping the operation of network power and ethical obligation is a key procedural step in finalizing research design and securing ethics approval, and remains a factor throughout any study.

network power, in ethics procedure and ethical practice

Social media systems involve diverse social identities, many forms of agency and complex, shifting social connections. Often the rights and interests of different parties may be in conflict, particularly where platforms and governments have the power to dictate access to data, the terms under which one can conduct one’s study, or the rights of the participants. In order to define the research relations with one’s study cohort, and to ensure their rights and safety as far as is possible, it is helpful to conceptualize these power relations and to map the scope of the ethical process. Castells (2011) proposes there are four models for exercising power within human actor networks, each of which can be used to consider the ethical roles and responsibilities of those agents involved in a research project during the procedural phase of seeking ethics approval:

1. Networking power operates as a form of gatekeeping, to include or exclude actors based on their potential to add value to or jeopardise the network;

2. Network power is used to coordinate the protocols of communication or the rules for participation within the network;

3. Networked power is the collective and multiple forms of power, referred to by Castells as "states", within the network; and finally,

4. Network-making power, the most critical, as it is "(a) the ability to constitute network(s) and to program/reprogram the network(s) in terms of the goals assigned to the network; and (b) the ability to connect and ensure the cooperation of different networks by sharing common goals and combining resources while fending off competition from other networks by setting up strategic cooperation." (2011)

The last model indicates, for example, the critical institutional role of university ethics review committees in mediating between powerful interests and for the rights of participants and researchers. For Castells, the agents who exercise network-making power are "switchers", who "control the connecting points between various strategic networks," for example, "the connection between the political networks and the media networks to produce and diffuse specific political-ideological discourses" (Castells, 2011, p. 777). Review boards can critically assess researchers' commitment to participant rights and protections, while advocating for academic rights in relation to claims by corporations, governments, or hostile agents.


The review board also provides a forum for investigating contentious fields of study, where normative ethical strategies are unworkable and may threaten academic freedom or participant rights. For example, in the case study below of ISIS and its social media recruitment strategy, distributing some responsibility for ethical conduct to extremist organisations or foreign combatants publishing on social media is unviable, as they are a hidden population being studied without informed consent. Similarly, sharing research strategy or results with those organisations (a common aspect of procedural ethics) may reduce the likelihood of researchers capturing authentic communications and increase risks for other parties identified as being part of an IS network, while adding no demonstrable social advantage.

The difficulty is that ethics review committees may have little or no training in the ethical review of new methodologies (Carter et al., 2016). In the dynamic state of social media research, this places added emphasis on the researcher's capacity to promote shifts in ethical thinking brought about by research practice. As the AoIR guidelines suggest, "ethical concepts are not just hurdles to be jumped through at the beginning stages of research" (Markham & Buchanan, 2012, p. 4); they ground ongoing enquiry during any study and lead to the revision of norms and procedures.

privacy, confidentiality, and social visibility

One important dimension of social media ethics is investigating the privacy and confidentiality of research participants – that is, the extent to which any social media study cohort has made their data open to collection, or assents to being monitored by others outside their social networks with awareness of the attendant risks and benefits of being research subjects. Privacy has a normative ethical dimension in defining constraints on the use of personal information, but as the AoIR guidelines indicate, understandings of "publicness" or "privateness" may vary from person to person and shift as the research progresses. Recent studies suggest cultural differences in the way users express privacy concerns and manage personal information, and changes to their perception of privacy over time (Trepte & Masur, 2016; Tsay-Vogel, Shanahan, & Signorielli, 2016). As Nissenbaum (2010) notes, privacy is also contextually mediated, and its integrity differently framed according to relational expectations, social norms, and contracts.

We propose that a user's intention to disclose information with (or beyond) their social network can be understood in terms of the value they place on social visibility and the benefits they perceive, or power that accrues, through recognition and connectedness. Social visibility refers to the degree of privacy and confidentiality participants might assume in different uses of social media, such as sharing content with close friends, making a statement of political allegiance, or engaging in civil disobedience. Brighenti (2007) notes that visibility "lies at the intersection of the two domains of aesthetics (relations of perception) and politics (relations of power)" (p. 324).


The ways that different stakeholder groups behave on social media platforms reflect how they conceive and value the "publicness" or aesthetic visibility of their activities, and the impact of this attention on their self-perception and network status. Within youth cultures, online celebrity or popularity equates with positive visibility, while negative connotations are associated with less widely lauded activities like sexting or doxing (Berriman & Thompson, 2015). In organisational communications, Uldam (2016) argues, "management of visibility in social media plays a key role in securing corporate legitimacy" (p. 203) and in deflecting crisis narratives. Certain types of visibility, then, are productive, while others may attract opprobrium, particularly where information that was not intended for public consumption is widely distributed through activities like outing, revenge porn, or cloud account hacking (Meikle, 2016, p. 93). As Meikle notes:

The cost of this creativity, sharing and visibility is that the user loses control over what is done with their personal information, loses control over the new contexts into which others may share it, and loses control over to whom the social media firms might sell it. (p. x)

The problem for social media researchers is understanding a user's intention of visibility within any research relationship, alongside their expectations of privacy and information control. Academics cannot presume that social media users are aware of how accessible their communications are to longitudinal tracking and professional scrutiny, or how their activities may be interpreted as part of broader social movements or patterns of use. In our examination of peer-reviewed journal articles on IS use of social media, we find some disagreement about what forms of social media data are verifiable, how users might be identified, and what aspects of their communications are intentionally public and performed for political affect.

researching the islamic state online

Studying the use of social media by the Islamic State (IS, or ISIS for Islamic State of Iraq and al-Sham)1 presents a fascinating focus for understanding the rise and operation of today's transnational terrorist organisations. Social media platforms, once simplistically dubbed a "liberation technology" in connection with the Arab Spring protests (Diamond, 2010, p. 70), are now pivotal to the recruitment strategy and propaganda machine of ISIS, an organisation whose pursuit of a caliphate has resulted in over 1,000 civilian casualties, beyond the hundreds of thousands lost on the battlefields of Syria and Iraq (Yourish, Watkins, & Giratikanon, 2016). IS use of social media far outstrips that of any other terrorist organisation known to date, with each IS-linked Twitter account having an average of 1,000 followers – far higher than regular Twitter users (Berger & Morgan, 2015).


Beyond the Twittersphere, IS also famously employed YouTube to show their disturbing beheading videos, many of which went viral, as a way to recruit supporters. The video "There is No Life without Jihad," released in 2014, was a strategy for online radicalization aimed at recruiting young Western-based future foreign fighters. Abdel Bari Atwan (2015) refers to IS activities online as "the digital caliphate."

After the Paris (2015) and Brussels (2016) attacks, which together led to over 160 deaths, governments in the United States and Europe have been pouring considerable resources into countering so-called homegrown extremism, including working with social media companies to identify individuals with active recruitment accounts. In 2016 Twitter reported having deleted 125,000 accounts believed to be tied to violent extremism – many of which belonged to IS agents (Calamur, 2016). Despite this, extremist accounts continue to proliferate (Karam, 2016). Indeed, a 2015 report by the Brookings Institution argues that closing accounts is unlikely to be very effective in combating extremist activity:

Account suspension isolates ISIS and can increase the speed of radicalization for those who manage to enter the network, and hinder organic social pressures that could lead to de-radicalisation. (p. 3)

With this in mind, researching the Islamic State's online strategy is now a political priority for the West – the European Union alone having increased its counter-terrorism budget from €5.7 million (in 2002) to €93.5 million (in 2009) (European Parliament, 2016). A Google search of the terms "ISIS," "research," and "media" returns 25 million results, while Google Trends shows "ISIS" and "social media" searches peaking at times of key attacks on Western citizens, such as beheadings.2 In recent years governments, foundations, and universities in North America and Europe have funded an increasing number of projects aimed at understanding extremism online.3 Given the continued intensity of military engagements in Iraq and Syria, IS attacks across the Middle East and Europe, and the protracted Syrian civil war, research on IS will remain important for the foreseeable future.

Despite the clear policy importance of studying IS activities online, there are a number of ethical challenges to doing this from a distributed ethical perspective. First, while the research needs and interests of nations and research institutions take priority over those of IS protagonists, there will be little concern for any potential negative research impacts on the broader network of IS communicants, including potential child recruits. Yet, given the degree of security interest in this community, identifying social media users as IS-connected may make them vulnerable to ongoing surveillance and legal prosecution. The Islamic State fighters are among the most wanted terrorists in many countries, and some governments have gone to great lengths to uncover the identities of their supporters. The astonishing revelation of the U.S. National Security Agency's (NSA) "spying" program shows it has access to details of every American's call history and real-time phone and Internet traffic, and that it received cooperation from tech giants like Google and Facebook in obtaining that data (Electronic Frontier Foundation, n.d.).


In March 2016 the UK parliament introduced the Investigatory Powers Bill – condemned by lawyers for breaching international standards on civilian surveillance – which will give the security agencies wide-ranging powers to intercept and access electronic data (Bowcott, 2016). Similarly, new data retention laws in Australia, passed in 2015, permit the intelligence and law enforcement agencies immediate access to all telephone and Internet metadata without a warrant. Under the scope of such government-sanctioned surveillance programs, it is imperative that IS researchers debate how and whether they are able to protect those they study.

Second, given that these subjects are a hidden population, legally and socially stigmatized by membership, and often represented by bots and apps (Berger & Morgan, 2015, p. 42), there is little chance that they will respond to an ethical framing consultation and so share some degree of individual or collective responsibility for the way social media research is framed and conducted. Thus, while an automated social media network analysis of IS Twitter supporters could provide tremendous empirical insights into the organisation's campaign strategy, including identification of its close and wide support bases and the key issues that gain currency among followers and potential recruits, in the absence of informed consent researchers and review committees must still assume the responsibility for weighing the risks to those identified as part of that network. In the most comprehensive analysis of ISIS supporters on Twitter to date (Berger & Morgan, 2015), the authors present a strong public policy case for their research while attending poorly to the risks faced by research subjects. The only concern they note is the potential for negative consequences should Twitter intervene to suspend suspected IS-linked accounts.

Third, in the absence of distributed engagement with research subjects, there is little consensus about what constitutes valid, ethical data collection and analysis in IS studies. Most scholarly research examining IS use of new media technologies has emerged from political science and related disciplines.4 An examination of university and non-profit funded published studies shows that researchers have taken two approaches to discussing ethical research conduct.

The first is choosing to investigate only "publicly available" data in order to avoid any involvement in government surveillance and intervention. Twitter is the most studied platform because of its openly accessible data streams and culture of promotion, networking, and amplification. Yet Twitter studies of IS differ in defining what constitutes verifiable "public" data and ISIS identities. Lorenzo Vidino and Seamus Hughes (2015), from the Center for Cyber and Homeland Security at George Washington University, sought to explain the upsurge in American jihadi recruits by investigating IS activity. They conclude that the majority of online supporters are men and show their support by introducing new pro-ISIS accounts to the online community.


However, the authors conducted a qualitative analysis of "known" ISIS supporters already identified by the U.S. authorities, and did not examine the social networks of these identified individuals, instead relying heavily on information from newspapers and published government reports. In contrast, Jytte Klausen (2015) first identified the Twitter accounts of 59 "potential" foreign fighters in Syria, based on "news stories, blogs, and reports released by law enforcement agencies and think tanks" (2015, p. 5), and then conducted a social network analysis of the 29,000 accounts they followed or that followed them. Carter, Maher, and Neumann (2014) took a similar approach, identifying 190 Syrian foreign fighter Twitter accounts and then applying the snowball method to analyse patterns among their followers.

The second approach to IS studies ethics is to omit that discussion entirely. This was commonplace among the literature surveyed, from both political science and media and communications. It is possible that political scientists perceive the public policy implications of IS research to be so great, and so "evident" beyond the potential risks to the individuals studied, that the consequences do not warrant discussion in the space of an 8,000-word article. Even so, why then would Berger and Morgan's 68-page report, which has a section called "challenges and caveats" (2015, p. 41), lack discussion of ethical risks? Such oversight, intentional or not, points to the need for more debate about distributed ethics principles to inform research design and critique. Centrally, researchers need to interrogate more closely how social media users understand and practice social visibility in different cultural contexts as a means of building or reinforcing their networking power.

conclusion

In researching social media systems we advocate ethical approaches that acknowledge the relational aspects of agency and obligation that characterize our new online communities, and that problematize the assumption that all network agents have equivalent moral agency. In this chapter distributed responsibility functions less as an attributive framework for determining fixed ethical obligations in networked environments and more as a means of locating and analysing the sometimes conflicting expectations of scholars, users, user communities, platform and internet service providers, governments, and research institutions about how inquiries might be designed and conducted. Distributed ethics can be used to conceptualize the scope of networked agents and their relations in globally connected, transcultural, proprietary environments, and to surface conflicts over infraethical meaning or enabling strategies. It cannot provide researchers with a blueprint for fully weighing participant interests, rights, and capacities, or for evaluating research impacts, where those entities are unable to engage in a deliberative ethical consultation. This is especially the case for hidden populations and transgressive agents like IS promoters.


Social media research takes place in commercially oriented environments, governed by less than transparent corporate communication paradigms and variable forms of nation-state regulation. In this respect the job of developing fair and effective procedural research ethics for these complex social systems falls to researchers and university review committees, which can act as forums for ethical debate. It is not uncommon, though, for review boards to be dominated by individuals unfamiliar with trends in social network studies and digital media research methodologies, or to be cautious about approving innovative research projects. This is why we urge social media researchers to promote new professional standards, such as the AoIR guidelines, and to advocate for social media research in context – based on an understanding of the limits of distributed responsibility and the different meanings of social visibility for diverse social media agents, human and non-human.

What we offer in this chapter, based on our empirical examination of research conducted on social media associated with IS, are two recommendations for university ethics boards. First, university boards need to better understand the normative approach towards social media research, building on the AoIR procedural approach to ethical decision-making. Second, university ethics boards should undertake the role of network-making power agents and engage in "switching" activities throughout the conduct of social media research. These two recommendations are designed to be applied to the field of internet research, which remains in constant flux as new cultures and technologies emerge at a rapid pace. They should be applied to existing university ethics board processes, while also beginning a transformation of the role of these boards within the research process. As we see it, university ethics boards should remain active until the completion of the research, which includes the dissemination of findings.

Our first recommendation notes that university boards need to understand the public versus private nature of a social media conversation before approving research projects. Social media research projects span a variety of disciplines, some of which will have significantly different outcomes compared with others. A normative ethics approach to decision-making only works when the university ethics board has a transparent understanding of the specific conversation the researcher is attempting to understand. Our second recommendation calls for university ethics boards to embody an ongoing, active role within the ethical decision-making process by becoming what Castells terms a network-making power agent, or a switcher. As we noted earlier, social media research is often located between corporations and governments, a zone within which an individual researcher has limited power. However, an institutional approach, spearheaded by a university ethics board, has more capacity to "control the connecting points between various strategic networks" (Castells, 2011, p. 777).


We suggest ethics review bodies take a continuing interest in the evolution of social media research and in educating their members in innovative methodologies. Their supportive and constructive engagement will enable scholars to safely navigate disputed research territories, where their work is not fully sanctioned by platform hosts and their independence is not guaranteed by governments.

notes

1. In Arabic, the group is known as Al-Dawla Al-Islamiya fi al-Iraq wa al-Sham, which is a geographical construct that includes territories stretching from southern Turkey through Syria to Egypt. The group seeks to build an Islamic state, or a caliphate, in this area.

2. Google Trends: ISIS + social media. Peak searches: September 2014, February 2015, and November 2015.

3. For example, see the projects funded by the Minerva Initiative, a research arm of the U.S. Department of Defense, from 2013 to 2018.

4. According to CrossSearch results across scholarly databases, the majority of results for keyword searches of "ISIS" and "media" since January 2013 were articles in political science journals, which include the fields of international relations, security, and military studies.

references

Atwan, A. B. (2015). Islamic state: The digital caliphate. London: Saqi.

Azua, M. (2009). The social factor: Innovate, ignite, and win through mass collaboration and social networking. San Francisco, CA: IBM Publishing.

Berger, J. M., & Morgan, J. (2015). The ISIS Twitter census: Defining and describing the population of ISIS supporters on Twitter. The Brookings Project on US Relations with the Islamic World, No. 20, March 2015. Retrieved from https://www.brookings.edu/wp-content/uploads/2016/06/isis_twitter_census_berger_morgan.pdf

Berriman, L., & Thompson, R. (2015). Spectacles of intimacy? Mapping the moral landscape of teenage social media. Journal of Youth Studies, 18(5), 583–597.

Bonniface, L., Green, L., & Swanson, M. (2005). Affect and an effective online therapeutic community. M/C Journal, 8(6), n.p. Retrieved from http://journal.media-culture.org.au/0512/05-bonnifacegreenswanson.php

Bowcott, O. (2016, March 14). Investigatory powers bill not fit for purpose, say 200 senior lawyers. Guardian Online. Retrieved from http://www.theguardian.com/world/2016/mar/14/

Bowser, A., & Tsai, J. Y. (2015). Supporting ethical web research: A new research ethics review. Paper presented at the WWW 2015 conference, Florence, Italy.

Brighenti, A. (2007). Visibility: A category for the social sciences. Current Sociology, 55(3), 323–342.

British Psychological Society. (2013). Ethics guidelines for internet-mediated research. Leicester: British Psychological Society. Retrieved from http://www.bps.org.uk/system/files/Public%20files/


Calamur, K. (2016). Twitter's new ISIS policy. Retrieved from http://www.theatlantic.com/international/