Brennan Center for Justice
at New York University School of Law
Double Standards in Social Media Content Moderation
By Ángel Díaz and Laura Hecht-Felella
PUBLISHED AUGUST 4, 2021
 
ABOUT THE BRENNAN CENTER FOR JUSTICE
The Brennan Center for Justice at NYU School of Law is a nonpartisan law and policy institute that works to reform, revitalize — and when necessary defend — our country’s systems of democracy and justice. The Brennan Center is dedicated to protecting the rule of law and the values of constitutional democracy. We focus on voting rights, campaign finance reform, ending mass incarceration, and preserving our liberties while also maintaining our national security. Part think tank, part advocacy group, part cutting-edge communications hub, we start with rigorous research. We craft innovative policies. And we fight for them — in Congress and the states, in the courts, and in the court of public opinion.

STAY CONNECTED TO THE BRENNAN CENTER
Visit our website at

© 2021. This paper is covered by the Creative Commons Attribution-NonCommercial-NoDerivs license. It may be reproduced in its entirety as long as the Brennan Center for Justice at NYU School of Law is credited, a link to the Center’s web pages is provided, and no charge is imposed. The paper may not be reproduced in part or in altered form, or if a fee is charged, without the Center’s permission. Please let the Center know if you reprint.

Table of Contents
Introduction
I. Content Moderation Policies: Discretion Enabling Inequity
 A. Terrorism and Violent Extremism Policies
 B. Hate Speech
 C. Harassment
II. Enforcement of Content Moderation Policies: Who Is Affected and How?
 A. Latent Biases in How Offending Content Is Identified and Assessed
 B. Intermediate Enforcement and Public Interest Exceptions: Protections for the Powerful
III. Appeals Processes and Transparency Reports: A Start, Not a Solution
 A. User Appeals
 B. Transparency Reports
IV. Recommendations
 A. Legislative Recommendations
 B. Platform Recommendations
Conclusion
Endnotes
Introduction

Social media plays an important role in building community and connecting people with the wider world. At the same time, the private rules that govern access to this service can result in divergent experiences across different populations. While social media companies dress their content moderation policies in the language of human rights, their actions are largely driven by business priorities, the threat of government regulation, and outside pressure from the public and the mainstream media.1 As a result, the veneer of a rule-based system actually conceals a cascade of discretionary decisions. Where platforms are looking to drive growth or facilitate a favorable regulatory environment, content moderation policy is often either an afterthought or a tool employed to curry favor.2 All too often, the viewpoints of communities of color, women, LGBTQ+ communities, and religious minorities are at risk of over-enforcement, while harms targeting them often remain unaddressed.

This report demonstrates the impact of content moderation by analyzing the policies and practices of three platforms: Facebook, YouTube, and Twitter.3 We selected these platforms because they are the largest and the focus of most regulatory efforts and because they tend to influence the practices adopted by other platforms. Our evaluation compares platform policies regarding terrorist content (which often constrict Muslims’ speech) to those on hate speech and harassment (which can affect the speech of powerful constituencies), along with publicly available information about enforcement of those policies.4

In section I, we analyze the policies themselves, showing that despite their ever-increasing detail, they are drafted in a manner that leaves marginalized groups under constant threat of removal for everything from discussing current events to calling out attacks against their communities. At the same time, the rules are crafted narrowly to protect powerful groups and influential accounts that can be the main drivers of online and offline harms.

Section II assesses the effects of enforcement. Although publicly available information is limited, we show that content moderation at times results in mass takedowns of speech from marginalized groups, while more dominant individuals and groups benefit from more nuanced approaches like warning labels or temporary demonetization. Section II also discusses the current regimes for ranking and recommendation engines, user appeals, and transparency reports. These regimes are largely opaque and often deployed by platforms in self-serving ways that can conceal the harmful effects of their policies and practices on marginalized communities. In evaluating impact, our report relies primarily on user reports, civil society research, and investigative journalism because the platforms’ tight grip on information veils answers to systemic questions about the practical ramifications of platform policies and practices.

Section III concludes with a series of recommendations. We propose two legislative reforms, each focused on breaking the black box of content moderation that renders almost everything we know a product of the information that the companies choose to share. First, we propose a framework for legally mandated transparency requirements, expanded beyond statistics on the amount of content removed to include more information on the targets of hate speech and harassment, on government involvement in content moderation, and on the application of intermediate penalties such as demonetization. Second, we recommend that Congress establish a commission to consider a privacy-protective framework for facilitating independent research using platform data, as well as protections for the journalists and whistleblowers who play an essential role in exposing how platforms use their power over speech. In turn, these frameworks will enable evidence-based regulation and remedies.

Finally, we propose a number of improvements to platform policies and practices themselves. We urge platforms to reorient their moderation approach to center the protection of marginalized communities. Achieving this goal will require a reassessment of the connection between speech, power, and marginalization. For example, we recommend addressing the increased potential of public figures to drive online and offline harms. We also recommend further disclosures regarding the government’s role in removals, data sharing through public-private partnerships, and the identities of groups covered under the rules relating to “terrorist” speech.