Digital Journalism

To cite this article: Jennifer R. Henrichsen & Martin Shelton (2022): Expanding the Analytical Boundaries of Mob Censorship: How Technology and Infrastructure Enable Novel Threats to Journalists and Strategies for Mitigation, Digital Journalism, DOI: 10.1080/21670811.2022.2112520

To link to this article: https://doi.org/10.1080/21670811.2022.2112520

Published online: 14 Sep 2022.

Expanding the Analytical Boundaries of Mob Censorship: How Technology and Infrastructure Enable Novel Threats to Journalists and Strategies for Mitigation

Jennifer R. Henrichsen (Department of Journalism and Media Production, Washington State University, Pullman, WA, USA) and Martin Shelton (Freedom of the Press Foundation, Brooklyn, NY, USA)

ABSTRACT
Mob censorship, which “expresses the will of ordinary citizens to exert power over journalists through discursive violence,” is traditionally considered a grassroots phenomenon. However, within technically mediated systems, who is behind the mob is sometimes unclear. We therefore ask how the technical affordances of the Internet and telecommunications networks complicate the identification of attackers and their motivations and multiply the forms of retaliation that attackers level against journalists. We conducted 18 semistructured interviews with seven current or former journalists, as well as 11 professionals with experience defending news organizations, including security specialists, press freedom advocates, and newsroom infrastructure support staff. Through a constructivist grounded theory approach and in conversation with Lewis and Westlund’s (2015) 4A framework, we found that journalists and those defending news organizations do not reliably identify sources and motivations behind attacks, which may be grassroots in nature but may also be instigated by corporate or government actors. Journalists nonetheless infer attribution and motivation from the context surrounding attacks. Systemic issues related to the lack of diversity, ongoing financial constraints, and journalistic norms of engagement, alongside a lack of internal and platform support, exacerbate repercussions from these attacks and harm journalism’s role in a democracy.

KEYWORDS
Information security; journalists; mob censorship; online harassment; technology; trauma

Introduction
Online abuse against journalists is increasing around the world, resulting in self-censorship among journalists and affecting the robustness and plurality of democratic communication in public spheres. A variety of actors are engaged in digital abuse against the media, from ordinary citizens who aim to intimidate and silence the press through mob censorship (Waisbord 2020a) to state and parastate actors who use sophisticated surveillance technologies and other methods to track, harass, and inflict digital and physical harm against journalists (Kirchgaessner et al. 2021). These attacks range in scope and sophistication but can encompass harassment, surveillance, hacking, and physical harm (Brooks 2017; Henrichsen, Betz, and Lisosky 2015; Miller 2021; Posetti et al. 2021; Waisbord 2020a, 2020b). They frequently intersect with misogynistic and racist sentiments (Posetti et al. 2021) and have harmful repercussions for journalists and news organizations, from censorship, intimidation, and trauma to labor precarity, financial strain, and violence (Posetti et al. 2021; Stoycheff 2016).

CONTACT Jennifer R. Henrichsen jennifer.henrichsen@wsu.edu

This article has been corrected with minor changes. These changes do not impact the academic content of the article.

© 2022 Informa UK Limited, trading as Taylor & Francis Group
As Waisbord (2020a) notes, three main developments have contributed to online harassment of journalists in the United States: “easy public access to journalists, the presence of toxic internet right-wing and far-right cultures, and populist demonization of the mainstream press” (p. 1037). Mob censorship is an outgrowth of these three factors and has been framed as “bottom-up citizen vigilantism aimed at disciplining and silencing journalists” (p. 1031). This definition captures a growing trend of discursive attacks lobbed at journalists. The example of then-New York Times reporter Taylor Lorenz is illustrative, with spikes of insults, threats, and toxicity appearing in her social media feeds any time the mob is stoked by right-wing commentators (Brown, Sanderson, and Silva Ortega 2022). To date, this framing of mob censorship does not yet engage explicitly with the underlying technical systems used to organize and enact such attacks. Some forms of mob censorship involve breaches of computers and networks, such as distributed denial-of-service attacks on newsrooms and the use of malware to retaliate against reporting. Such technical attacks may overlap with discursive mob censorship, constituting aggressive actions that exercise power over journalists and that prevent them from doing their work. Yet, little research has examined how mob censorship is enacted through the exploitation of computers, software, and networks, and mitigated or preempted through journalistic digital security practices.
To address this gap, we use Waisbord’s (2020a) notion of mob censorship and Lewis and Westlund’s (2015) 4A analytical framework of social actors (e.g., journalists, technologists, businesspeople), technological actants (e.g., email), audiences (e.g., recipients, commodities, active participants), and activities (access/observation, selection/filtering, processing/editing, distribution, interpretation). The 4A framework provides a way to think through the different technological and sociocultural aspects that are associated with journalism practice in a digital environment.
Drawing on these concepts and frameworks, we ask the following research questions: (1) How does the abuse of technical systems introduce distinct vulnerabilities that further enable mob censorship? (2) How do attackers’ uses of technical systems challenge concepts of motivation and attribution? (3) How have journalists implemented digital security practices to mitigate mob censorship attempts? (4) What are the responsibilities of news organizations and digital platforms in preventing and responding to attacks?

The Growing Landscape of Digital Attacks against the Press


In recent years, online harassment of journalists has become an alarming trend, with the capacity to pressure journalists into redirecting their work or to silence them. The term online harassment has been used in scholarly and practitioner circles to encompass a broad variety of behaviors, including abusive name calling, encouraging others to harass a target, stalking, physical threats, and sexual harassment (Vogels 2021). Waisbord (2020a) has argued that online harassment is mob censorship, or “bottom-up, citizen vigilantism aimed at disciplining and silencing journalists” (p. 1031). Journalistic accounts have begun highlighting how the race and nationality of journalists intersect with racist, xenophobic, and bigoted abuse, alongside misogynistic attacks (Chen et al. 2020; Nelson 2021), while Posetti et al. (2021) have found that women of color and those identifying as lesbian and bisexual were especially likely to report experiences of online attacks.
Mounting evidence suggests that harassment influences journalists’ abilities to participate professionally, with journalists avoiding certain stories or considering leaving the field (Ferrier 2018; Löfgren Nilsson and Örnebring 2016). Women journalists are particularly likely to experience harassment and sexual harassment in their work (Chen et al. 2020; Miller 2020), and female journalists are more likely than male respondents to have considered leaving their jobs or to self-censor (Miller 2020).
While journalists garner professional advantages when using social media, including connections to new sources and promotional opportunities, institutional inertia and a lack of professional training in how to recognize and address these attacks can leave journalists underprepared (Miller 2021; Nelson 2021; Waisbord 2020b). Furthermore, scholars have found that journalists are reluctant to adopt digital security tools and practices for fear of slowing down their work (McGregor et al. 2015). Additionally, journalists’ perceptions of security and their particular beats inform what, if any, digital security precautions they take. McGregor and Watkins (2016) revealed that journalists consider security risks through a mental model of “security by obscurity,” or the belief that they do not need to concern themselves with security risks unless they are working on particularly sensitive beats. Crete-Nishihata et al. (2020) have argued that investigative reporters have mental models of digital security that are distinct from those of non-investigative colleagues and are more likely to cite surveillance, harassment, and legal actions against them as primary concerns.
Against this backdrop, a constellation of individuals within and beyond news organizations are working to develop newsroom capacity to address online and physical threats to their journalists. Security champions (Henrichsen 2020) build capacity alongside colleagues by having conversations and organizing events and resources to promote digital safety education. These connective actors are often interested journalists and IT staff who have taken on the mantle of an “accidental” security trainer (Henrichsen 2021). McGregor (2021) has described measures that journalists can take to better protect themselves and colleagues online, including institutional adoption of cross-functional security teams involving editors, legal, and IT professionals, as well as individual adoption of stronger security practices, including stronger authentication practices and encryption software. Yet, information security cultures within newsrooms remain nascent, ad hoc, or nonexistent, despite increasingly vitriolic environments facing the media (Henrichsen 2021).
In this article, the authors contribute to emerging scholarship on mob censorship by examining how the abuse of technical systems introduces distinct vulnerabilities that further enable mob censorship and how technological affordances challenge concepts of attribution and motivation behind attacks on journalists. Although attribution and motivation should be informed by contextual clues, such as the political climate surrounding attacks against journalists, it is increasingly difficult to assume the identities of attackers who, under the cloak of relative anonymity online, may borrow other identities. In an environment where attackers leverage bots to send targeted hate messages on Twitter, and spam networks to overload journalists’ email servers, technically mediated attacks complicate the assumptions scholars and others may make when describing who is attacking journalists and their motivations for doing so.

Methods
Following IRB approval, and between December 2021 and January 2022, the authors utilized a snowball sample to conduct 18 semistructured interviews with seven current or former journalists, as well as 11 professionals with experience defending news organizations, including security specialists, press freedom advocates, and newsroom infrastructure support staff. A snowball sample was used because of the sensitivity of the topic, the need for trust, and because it facilitates inductive, theory-building analyses (Miles, Huberman, and Saldaña 2014). The authors stopped conducting interviews once theoretical saturation occurred, as saturation is an indication of validity. The authors required that participants have experience (1) being the victim of a technically mediated attack, (2) having worked to support newsroom infrastructure against such attacks, or (3) having worked for a technology company or organization that has developed strategies or services to help journalists counter attacks. Nearly all participants are U.S. nationals, and the majority of the interviewees work in the United States, so the analysis is primarily U.S. focused. Interviews averaged 53 minutes in length and were recorded and transcribed. The authors solicited participation through email, Slack groups, LinkedIn, and private messages on Twitter. Interviewees were given the option to be anonymized, but all chose to be identified for this article.
Through a constructivist grounded theory approach (Charmaz 2006), the authors read through the transcripts multiple times and inductively identified interview themes following open, selective, and theoretical coding using MAXQDA 2022 software. The authors then collaboratively iterated to unite those themes. In constructing this analysis, the authors relied on Lewis and Westlund’s (2015) framework examining actors, actants, audiences, and activities in journalistic work. This framing is especially insightful for orienting journalists’ experiences of harassment in relation to the numerous technologies that journalists use to interface with each other and their audiences, the expansive networks of workers and related business interests within media institutions and technology providers, and the behaviors of media organizations and mob harassers.

Understanding the Technological Dimensions of Mob Censorship and Its Impact on Journalists

The authors’ analysis revealed the characteristics of mob censorship, the difficulty of determining attackers’ motivations, and the challenges of pinpointing attribution. The authors also discovered a lack of support from newsrooms and platforms to mitigate or prevent such attacks, and they conclude with possible solutions for the amelioration of mob censorship.

Characteristics of Mob Censorship

Mob censorship can occur in patterned ways, including retaliatory and endemic forms of harassment. According to former Committee to Protect Journalists Advocacy Director Courtney Radsch, retaliatory harassment is “related to specific reporting on a specific issue or a specific type of topic that the person covers,” whereas endemic harassment focuses on reporters of specific beats, such as those “covering QAnon, or female tech reporters, or, increasingly, COVID reporters.” Harassment also may be endemic to the journalist in part because of their identity. Echoing prior work (Konow-Lund and Høiby 2021; Posetti et al. 2021), female reporters and minority journalists disproportionately experience both types of harassment (Radsch, January 4, 2022).
The forms and types of attacks against journalists are myriad, ranging from amateur efforts to more technically sophisticated mechanisms that may be platform specific. As ProPublica’s Mike Tigas (December 20, 2021) suggests, sometimes attacks against news organizations’ websites are untargeted, and organizations are just “unlucky” the day the attacker decided to exploit a particular vulnerability in their platform. Other times, the attacks are coordinated, planned, and carried out with the express intent to harass, discredit, and silence (Ferrier 2018; Vilk, Vialle, and Bailey 2021). In the context of mob censorship, digital attacks that prevent or slow the progress of journalistic work should be distinguished from surreptitious risks to reporters, such as surveillance and other less overtly disruptive threats. Further, mob censorship may be marked by attacks that are either prolonged or episodic in nature.
Another characteristic of mob censorship is how interconnected physical threats and digital risks can be in a technologically mediated environment. Director of Digital Security for the Freedom of the Press Foundation Harlo Holmes recounts one of the scariest harassment campaigns she has witnessed thus far in her career. The campaign was state-sponsored but resembled a citizen-led grassroots campaign, targeting a particular journalist who wrote a critical article, drawing the ire of a South American government. She says the government responded by weaponizing supporters and launching a campaign over Facebook, WhatsApp, and Twitter simultaneously, encouraging people to dox the journalist (i.e., leak sensitive personal data, such as the names of family members or a home address). Shortly thereafter the journalist moved to a safe house. Despite this harassment and intimidation, the journalist appeared in the media to talk about the article he had written. Following his interview, he was overwhelmed by a violent mob that threatened his life. In response, Holmes’ organization, alongside Reporters Without Borders and the Committee to Protect Journalists, alerted U.S. government resources to assist the journalist in escaping the region (Holmes, January 12, 2022).
A number of factors facilitate mob censorship, including the particular affordances of the technologies that journalists rely on, and the technological infrastructure used by individuals and groups to coordinate and carry out digital attacks. The nature of these underlying affordances allows technologies to be used for positive or negative engagement. Researchers have shown how attackers can repurpose tools built to support users, such as account reporting features, into weapons that can harm the very communities they were designed to protect (Vilk, Vialle, and Bailey 2021). Harassers are increasingly savvy about how to exploit these systems. As TrollBusters Founder Dr. Michelle Ferrier points out, “Even the block, mute, and report functions on Twitter are problematic in that these groups will use those reporting functions to mass report innocuous and neutral content that’s posted by journalists.” The use of this reporting function triggers the platform’s algorithms, which in turn shut down the journalist’s account and require them to go through a process of getting their account reinstated (Ferrier, January 7, 2022).
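The dynamic Ferrier describes can be pictured with a minimal sketch. The following code is purely illustrative and reflects no platform’s real moderation logic; the function names, threshold, and account data are invented assumptions. It shows why a rule keyed to raw report volume is gameable by a brigade, and how discounting reports from newly created accounts (one heuristic moderation systems might use) changes the outcome.

```python
# Toy illustration of mass-reporting abuse; not any platform's real
# moderation logic. Names, thresholds, and data are invented.

def naive_should_suspend(reporters, threshold=50):
    """Naive rule: suspend once raw report volume crosses a threshold."""
    return len(reporters) >= threshold

def wary_should_suspend(reporters, account_age_days, threshold=50, min_age=30):
    """Discount reports filed by very new accounts, a common brigade trait."""
    trusted = [r for r in reporters if account_age_days.get(r, 0) >= min_age]
    return len(trusted) >= threshold

# A brigade of 60 day-old accounts mass-reports an innocuous post.
brigade = [f"troll_{i}" for i in range(60)]
ages = {name: 1 for name in brigade}

print(naive_should_suspend(brigade))       # True: the journalist is suspended
print(wary_should_suspend(brigade, ages))  # False: the brigade is discounted
```

Under the naive rule the journalist loses their account and must pursue reinstatement, exactly the cost the harassers intended to impose.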
Gaming platform algorithms is not new and evolves over time. Shortly after the September 11 attacks, television and culture writer Lorraine Ali began to focus more of her writing on the portrayal of Muslims and Arabs in U.S. media. This reporting drew the attention of trolls, who took advantage of the Google algorithm to derank her work (although companies like Google regularly refactor their algorithms to detect such behavior):

when you would Google my name, the top results were all these guys saying I was a terrorist, a sleeper cell babe … It just felt really isolating. I felt like nobody understood how disturbing this was. (Ali, January 3, 2022)

The abuse was fairly amateur in the beginning, with attackers frequently linking her stories and her Wikipedia page to their blogs and mentioning her name, which had the effect of leading search engines to rank malicious entries higher than genuine ones. She says they were “ahead of the game” and knew how to co-opt the algorithms, including how many times they should mention her name or what to link it to. “Eventually, Wikipedia locked my page because it got so bad [with] all these people messing with it.”
Social media companies are often aware of users gaming these feedback systems, as in Facebook’s work on coordinated inauthentic behavior, but abuse persists (Iosifidis and Nicoli 2020; Keller et al. 2020). When Twitter introduced a content moderation policy intended to prohibit people from posting images without the permission of the depicted individuals, New York Times journalist Kate Conger (December 21, 2021) recounts how it began being weaponized against groups that were reporting on white supremacists because it forced the removal of posts and images that identified them.
Another factor that can facilitate mob censorship is the ability of users to engage in cross-platform coordination. As Ferrier (January 7, 2022) notes, sometimes these are smear campaigns designed with coordinated activity that occurs on platforms like 8kun, 4chan, or subreddits. PEN America Program Director of Digital Safety and Freedom of Expression Viktorya Vilk (January 4, 2022) concurs, pointing to how coordination can occur in “really dark corners of the Internet … where people say, ‘Let’s stalk this person next’” and then point the mob to resources where they can find the personal information necessary to harass that individual. In the end, “it looks organic on Twitter or Facebook, but it’s not, it actually started in various dark corners of the Internet.”

Social media platforms like Twitter operate as interstitial actants that serve as bridges between journalists’ work products (i.e., stories) decided on internally within a newsroom and the promotion of the finished products on Twitter. Journalists leverage the affordances of Twitter to connect with sources and research story ideas, revealing a fluid exchange of activities mediated by the platform. Simultaneously, mob harassers leverage the technological affordances of Twitter and its interstitial nature to game algorithms and reporting functions to carry out coordinated harassment campaigns against journalists. Harassers thus use technological actants to disrupt journalistic activities, punish journalists who are social actors, and shape audiences’ perspectives on journalism.

The Ambiguous Nature of Motivations and Attribution

When describing an attack through the lens of mob censorship, one fundamental challenge is identifying when a mob is truly responsible. Sometimes genuine grassroots activity is replaced by content orchestrated by political actors who usurp the activity and bend it to their benefit, a phenomenon known as the grassroots orchestra (Vergani 2014). State-sponsored attackers may also mimic “bottom-up activity by autonomous individuals,” a concept known as digital astroturfing (see Kovic et al. 2018). Additionally, individual and state-sponsored attackers may work in isolation while posing as a mob through the use of social bots that they leverage to create an inaccurate impression that a specific opinion has widespread support (Zerback, Töpfl, and Knöpfle 2021). Ostensibly grassroots attacks are sometimes state-coordinated, as illustrated by the example of Rappler CEO and Executive Editor Maria Ressa, who for years has suffered online attacks from paid trolls, bot armies, and supporters of former Philippines President Rodrigo Duterte (International Press Institute 2020). Because of the ease with which attackers can impersonate others online and the ways computing systems can be made to automate such attacks, attribution for online attacks may be ambiguous or misleading, requiring researchers and media organizations to also account for the behavioral context surrounding an apparent mob attack. This context is also important for making inferences about the motivations behind such attacks.
Just as the types of attacks against journalists are myriad in nature, so too are attackers’ motivations. They may not like what a journalist said or published, they may be misogynistic and racist and aim to remove female journalists and persons of color from platforms, they may perceive journalists as the enemy, or they may be spiteful individuals looking for an avenue to channel their hate and vitriol (Ferrier 2018; Petersen 2018). In situations where a news organization has published a critical article about hate groups and is then attacked, journalists can make a “pretty good guess” about why and what types of groups might be behind it. In those cases, a news organization might debrief about the attack, including who might have coordinated it and how it happened (Tigas, December 20, 2021). The Citizen Lab’s work highlights opportunities to determine when an attack is not a bottom-up mob attack by closely analyzing how an attack was conducted with forensic indicators, which may point to corporate or state backing. From there, it becomes a bit easier to pinpoint an attacker’s motivation if an attack is larger scale or affects a particular population during a certain period of time, such as when 36 journalists at Al Jazeera were hacked with Pegasus spyware in a short time span by two governments working together (Marczak, December 9, 2021).
The murkiness of identifying abusive actors online stems in part from the way the Internet is designed, including its decentralized infrastructure, which allows for mass spam and denial-of-service attacks, as well as the ability of users to route their communications through anonymized servers (Angwin 2017). Despite this, researchers have been able to trace some attacks by examining the business model of corporations that sell spyware that can be used to track, harass, and hack journalists. According to Marczak, companies like NSO Group, which sell spyware, have to first demo the spyware to new customers and perform quality assurance testing, which requires them to set up infrastructure using their own domain names or websites. Marczak and his fellow researchers can then fingerprint the infected links that have been sent to journalists, scan, and trace them to servers registered to NSO Group, like NSO’s quality assurance and demo servers (Marczak, December 9, 2021). This makes attribution to a well-established company with a wide customer base like NSO Group fairly easy. Marczak acknowledges that attribution becomes more difficult when researchers try to examine “bespoke” threat actors who are “only operating using tools natively developed by a government or by a single group, that are not sold, are not shared, are not used anywhere else” (Marczak, December 9, 2021).
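The fingerprint-and-scan approach Marczak describes can be sketched in miniature. This is not Citizen Lab’s actual tooling; the hosts, TLS issuer, and redirect values below are invented stand-ins for the kinds of forensic observables researchers match against scan results to flag suspected operator servers.

```python
# Minimal, hypothetical sketch of infrastructure fingerprinting; not real
# Citizen Lab tooling. All hosts and observable values are invented.
from dataclasses import dataclass

@dataclass
class ScanResult:
    host: str
    tls_issuer: str       # issuer seen in the server's TLS certificate
    redirect_target: str  # where the server sends unrecognized visitors

# A hypothetical fingerprint: a distinctive certificate issuer plus a
# characteristic decoy redirect, observed on known operator servers.
FINGERPRINT = {
    "tls_issuer": "Example Operator CA",
    "redirect_target": "https://decoy.example.com/",
}

def matches_fingerprint(result: ScanResult, fp: dict) -> bool:
    """True when every observable in the fingerprint matches the scan."""
    return (result.tls_issuer == fp["tls_issuer"]
            and result.redirect_target == fp["redirect_target"])

# Results from an Internet-wide scan (synthetic data).
scans = [
    ScanResult("203.0.113.10", "Example Operator CA", "https://decoy.example.com/"),
    ScanResult("198.51.100.7", "Example Public CA", "https://news.example.org/"),
]

suspected = [s.host for s in scans if matches_fingerprint(s, FINGERPRINT)]
print(suspected)  # only the host matching every observable is flagged
```

Matching on a conjunction of distinctive observables is what makes attribution to a company with shared, reused infrastructure comparatively easy, and why “bespoke” single-operator tooling with no reused infrastructure resists this method.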
Additionally, mob censorship–related attacks can come under the guise of an average citizen, even when they are actually carried out by corporations or by governments. Holmes says, “We have definitely seen that, and it continues to be incredibly effective” (Holmes, January 12, 2022).
Attributing the apparent actors behind mob censorship is complicated by the melding of motivations between state and grassroots actors. Harassment may be state-sponsored or ideologically inspired by state actors. Such state-inspired harassment could be considered “state-aligned,” which “allow[s] for different types and degrees of state or government involvement” (Radsch, January 4, 2022). State-sponsored and state-aligned harassment can also intersect, with state-sponsored harassment involving regular people and state-aligned harassment involving the government. Radsch cites the United States as a prime example of state-aligned harassment because it has independent media with a degree of ideological alignment.
Increasingly, there is evidence of an interplay between individuals carrying out online harassment on social media platforms and the amplification of harassment through the broader media ecosystem (Brown, Sanderson, and Silva Ortega 2022). The harassment may start out with a target saying something on Twitter before going viral across various media platforms. Media pundits, like Fox News’s Tucker Carlson, may amplify it before it gets carried by a top radio circuit and then the podcast circuit. “It’s a whole playbook” that “ricochets,” says Vilk (January 4, 2022). Then-New York Times tech journalist Taylor Lorenz experienced this firsthand when she was berated by Carlson and others, resulting in a gendered disinformation campaign to silence her (Brown, Sanderson, and Silva Ortega 2022).

Pundits who have written Twitter posts to harass and intimidate members of the press may have far-right or conservative platforms, amplifying their harassment to an audience beyond journalists’ traditional orbit (Holmes, January 12, 2022). This amplification points to an underlying feature of the media ecosystem, namely that engagement—regardless of merit or tone—is often lucrative to the business model of the media. As Holmes notes, “as much as it’s hurtful, as much as it’s so psychologically damaging and harmful, it’s very good for business, no matter where on the ideological spectrum you’re coming from.” Such “cycles of amplification” (Phillips 2015) suggest an interplay between trolls who are explicitly trying to instigate and grab media attention and the media who may unwittingly or deliberately play into it.
Assessing motivation for harassment requires looking closely at the context surrounding the attack. Kate Conger describes a story about Indian women in journalism who were targeted by scammers masquerading as Harvard University staff. After extensive conversations with the women, the scammers extended offers for roles at Harvard—roles that didn’t exist (Gettleman, Conger, and Raj 2021). Victims sometimes quit their jobs, only later to learn the reality. Conger (December 21, 2021) concurs that it is difficult to assign attribution for such attacks and notes how attacks can be “a weird crossover between nation-state behavior and state and culture.” The difficulty of determining who is behind an attack leads some defenders to deprioritize attribution. Tigas (December 20, 2021) says, “You’ll never get the explicit link between A and B … I care less about that, I just care more about being able to survive the attacks.”
To the extent mob censorship is conducted by a grassroots collective, that collective may be politically motivated by state or corporate actors. These actors can leverage technical actants to disguise themselves as citizen actors engaged in harassment while they coordinate campaigns across platforms and through a sophisticated relay with the broader media ecosystem. These conditions suggest the concept of a mob as a “bottom-up” phenomenon is porous and requires closely examining the context surrounding a particular mob attack.

Individual-Level Responses to Mob Censorship

In response to trauma, journalists have developed various coping strategies, including ignoring and reporting harassing behavior and setting digital boundaries. Selena Larson (December 21, 2021), a cybersecurity analyst who previously worked as a reporter, describes how in the beginning the harassment and unpleasant comments would “irk” her, but she quickly realized that harassers are “not operating from a place of genuine human interaction,” so now she ignores or blocks and reports the abuse. She realized she “just can’t spend the emotional energy caring about this, or putting any thought into it,” and she thinks her lack of response has led some harassers to leave her alone. First Look Media’s Director of Information Security, Micah Lee, has responded to trauma inflicted by harassment and by reporting on neo-Nazi groups by removing social media applications from his phone, taking a full day off from work each week, and setting boundaries to better bracket his workday. He says, “I have my own internal policies of, ‘Don’t read any news after you’re done with work’. It’s really important to not fixate on terrible things that you’re working on” (Lee, December 15, 2021).
Sometimes journalists choose to keep the harassment they’ve experienced to themselves. Security consultant and former New York Times Information Security Director Runa Sandvik (December 17, 2021) argues that the subjectivity underscoring harassment is partly what makes online risks so dangerous and effective because:

You’ve got reporters that receive a lot of hate, but don’t necessarily say anything, and, at least to colleagues and to the general public, don’t seem affected by it, and don’t really talk about it … It’s just become normal.

Experiences of harassment and abuse are taxing because they require constant evaluation and, in turn, push journalists to either disengage or confront attackers. These constant evaluations accumulate and demand coping responses, such as setting boundaries around participation at work or on social media—a difficult task considering the journalistic norm of engagement with audiences.

Lack of Holistic Support in Newsrooms and on Platforms


The lack of systematic support structures within news organizations compounds the
difficulties that journalists face when they experience harassment. This gap is par-
tially informed by who traditionally has experienced the most harassment, such as
the historically marginalized, and who has not. The individuals who are in positions
of power, including editorial and management roles, skew older, white, and male, a
demographic that is less traditionally victimized by online abuse. The lack of per-
sonal experience with harassment may inform the hands-off approach some manag-
ers take with employees who experience abuse. Charlot (December 22, 2021) says
this may occur because divergent voices are not prioritized. She argues, “If they
don’t feel like they’re directly being harassed or being mobbed in some kind of
way, then it’s not a concern.” Crypto Harlem Founder Matt Mitchell (December 14,
2021) says that another problem facing journalists who experience harassment is
that “the people who are supposed to help you, they don’t look like you. They
don’t understand the problem.” He argues that these distinct experiences can add
onto the trauma journalists experience from the initial harassment. “They actually
create more harm … Where you’re like, ‘Hey, I’m going through the proper channels.
There’s nothing here for me’.”
Another aspect of the trauma is that it is not only external to news organizations.
Mitchell recounts, “It comes from inside and outside. It comes from your colleague
and from a Proud Boy on Twitter.” These experiences are common among underrepre-
sented reporters (Reuters Institute 2021) who report experiences of dismissal among
colleagues, particularly those in Western newsroom leadership, who are also far less
diverse than the general population (Eddy and Nielsen 2022). This imbalance reflects a
lack of opportunity and understanding of the experiences of newsroom colleagues
and audiences that media organizations serve.
The lack of newsroom support around online harassment sometimes ties into
broader conversations about the lack of diversity in newsrooms and whose voices are
heard or prioritized and whose are not. Conversely, sometimes when conversations
about diversity, equity, and inclusion occur within newsrooms, they now include con-
versations around safety, security, and even mental health because journalists of color
and female journalists tend to be disproportionately harassed (Vilk, January 4, 2022).
Incongruent experiences of harassment between reporters and more senior staff
may leave reporters feeling isolated and frustrated. When journalist Yael Grauer
(December 13, 2021) received suspicious emails from a group of readers questioning
the veracity of her reporting about China’s regional surveillance of Uyghurs, she was
willing to point to specific documents to defend her reporting, yet she was told
by individuals at an organization she had freelanced for to just ignore it.
Because experiences of online attacks are deeply exhausting and increasingly visible,
Carew Grovum (January 5, 2022) questions why so many newsrooms have failed to
formulate responses: “There’s just so much visibility for the stuff that is so toxic … we
just haven’t as an industry stopped and looked these staffers in the eye and said, ‘Are
you okay?’”
Even when newsroom leadership anticipates the seriousness of technically mediated
attacks on their reporters, they may be forced to weigh the potential costs of
reasonable defenses against such attacks, implicating the business side of
cross-media newswork. They may also imagine these attacks to be disproportionately
expensive to address, a perception that varies with each news organization's capacity.
Holmes (January 12, 2022) notes:
people are always just thinking, “Oh, man, people are going to tell us to put everybody in
a hotel when they get retweeted.” No, part of that decision-making tree is having
resources and support systems on hand for somebody to just get through a bad day
on Twitter.

Although journalists continue to receive harassment across social media platforms,
these platforms also play a role in journalists' abilities to connect with audiences, pub-
lish stories, and cultivate sources. Engagement with these social media platforms has
become an integral part of journalists’ routines and professional norms. Grauer
(December 13, 2021) remarks, “I check Twitter, I check Facebook, I check Instagram, I
check LinkedIn. It’s a constant part of my day.” This was especially important to her
for maintaining momentum on her articles when she was a freelancer: “I needed to
get some buzz that would help me get more work in the future. It would have just
been shooting myself in the foot to just leave.”
Just as newsrooms encourage reporters to promote their work on social media,
they also hire social media editors and audience engagement professionals to manage
social media handles, newsletters, and to field conversations with audiences. Carew
Grovum (January 5, 2022), previously a social media editor, observes that newsrooms
with which she had worked regularly received a barrage of hateful comments. In her
experience, the responsibility for dealing with them is disproportionately shouldered
by women and people of color. She notes that, “over the past, say, 10 years, you’ve
seen a lot of journalists of color being put on the front lines of social media and audi-
ence engagement.” These journalists are “often young, unsupervised, or undertrained
people of color” who have to see hateful tweets to the newsrooms’ branded accounts
(e.g., @NYTimes, @washingtonpost) and receive hateful tweets on their own
Twitter accounts.
The splintered experiences of harassment among newsroom staff often leave jour-
nalists improvising defensive and psychological strategies, yet platforms also play a
role in how the abuse proliferates. For example, the incentive structures within social
media platforms sometimes promote abusive behavior because toxic content brings
more engagement while the solutions that exist around online abuse on social media
platforms cost money to build and don’t bring in profit (Vilk, January 4, 2022).
Additionally, platforms’ business models often prioritize engagement and lend support
to more high-profile individuals like celebrities rather than journalists because they
attract more ad traffic and money to the platforms. Journalists are rarely superusers,
and, in Mitchell’s experience, it’s that level of following that social media platforms pri-
oritize: “The level of response you get from the company is tied to the level of
engagement around your account. Stats that you’ll never know. Stats that you cannot
see and they do not share.”

Mechanisms to Mitigate Mob Censorship on Platforms and in Newsrooms


Participants described a need for platforms to take a larger role in the development of
tools and techniques that could mitigate mob censorship. Radsch (January 4, 2022)
envisions tools for observing trends that precipitate abusive behavior, while also point-
ing out the need for further human intervention to stem such abuse. She suggests
that tech companies could create a system that involves a series of signals that would
inform their algorithm and trigger human review, such as when a large group of peo-
ple post about the same person around the same time or use suspiciously similar lan-
guage. Some content moderation tools do exist, such as Jigsaw’s Perspective tool and
The Thomson Reuters Foundation’s TRFilter, which both use machine learning to rec-
ognize and flag harmful tweets (Marvin 2019; Thomson Reuters Foundation 2022).
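The kind of coordination signal Radsch describes can be sketched in a few lines: flag for human review any target who receives a burst of posts within a short window whose wording is suspiciously similar. Everything below is an illustrative assumption on our part (the data shape, the thresholds, the Jaccard word-overlap measure), not any platform's actual algorithm:

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two posts (1.0 = identical vocabulary)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def flag_for_review(posts, window_s=600, min_posts=5, min_sim=0.6):
    """Return targets whose posts cluster in time and use similar language.

    `posts` is a list of (timestamp_seconds, target, text) tuples; the
    thresholds are hypothetical values chosen only for illustration.
    """
    flagged = set()
    by_target = {}
    for ts, target, text in posts:
        by_target.setdefault(target, []).append((ts, text))
    for target, items in by_target.items():
        items.sort()  # order posts by timestamp
        for i in range(len(items)):
            # all posts falling inside the window that starts at post i
            burst = [t for t in items if 0 <= t[0] - items[i][0] <= window_s]
            if len(burst) < min_posts:
                continue
            # does any pair of posts in the burst share similar wording?
            if any(jaccard(x[1], y[1]) >= min_sim
                   for x, y in combinations(burst, 2)):
                flagged.add(target)  # queue this target for human review
                break
    return flagged
```

A real system would tune these thresholds empirically and add further signals (account age, follower graphs), but the core pattern of automated signals feeding a human-review queue is the same.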
While technology platforms can provide stronger defensive technologies, users may
not always know about or take advantage of them. Compared to defenses against dis-
cursive harassment, in recent years platforms have developed clear standards to
defend users from the most common types of technical attacks, such as account
hijacking through phishing (Verizon 2021). Technology standards like two-factor
authentication (2FA), which requires users to provide a second piece of information
beyond their password before logging in, are now commonly supported, but not uni-
versally so. Where 2FA is supported, most users do not enable it. For example, in a
2020 Twitter transparency report, the company shared that just 2.3% of users used
this technology to protect their accounts (Twitter 2020).
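To make the mechanics of the second factor concrete, the standard behind most authenticator apps (TOTP, RFC 6238, built on HOTP, RFC 4226) derives a short code from a shared secret and the current time. The sketch below is for illustration only and is not any platform's implementation:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a 30-second time step."""
    return hotp(secret, int(time.time()) // interval, digits)
```

Because the code changes every 30 seconds and never travels alongside the password, a phished password alone is not enough to log in, though, as discussed below, phishing-resistant hardware authentication goes further still.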
Christopher Harrell (January 12, 2022), CTO at Yubico, a company that builds tech-
nology to defend users’ accounts, points out that attacks tend to be unsophisticated
phishing attacks, which is why he pushes strong phishing-resistant authentication
methods that are enabled by default. He says that enterprise platforms commonly
used in newsrooms, such as Google Workspace or Microsoft 365, allow entire organiza-
tions to require conditional access so that multifactor authentication is necessary to
access a particular resource, thereby helping to secure accounts.
However, journalists operate many accounts that they rely on individually, such as
their social media profiles, and may not use the technical protections available. Those
who are most vulnerable would also stand to benefit the most from such interven-
tions. That said, anti-harassment interventions from platforms and policymakers are
rarely obvious and widely deployable, and need to be considered carefully for how
they may be used to inflict further abuse in the contexts where they will be utilized.
Interviewees acknowledged the need for social media platforms to take more
responsibility in combating mob censorship against journalists by building the tooling
and reporting systems necessary to stop the spread of vitriol, such as tools for parsing
account histories and acting on old content in bulk.
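The bulk-acting step such tooling requires can be reduced to a simple filter over an account's history. The sketch below assumes a plain list of (post_id, timestamp) records; the platform-specific API calls that would fetch the history or delete the selected posts are omitted:

```python
from datetime import datetime, timedelta, timezone

def select_older_than(posts, days):
    """Return IDs of posts older than `days`, e.g. to delete or restrict in bulk.

    `posts` is an iterable of (post_id, created_at) pairs, where `created_at`
    is a timezone-aware datetime; this record shape is an assumption for
    illustration, not any platform's API.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=days)
    return [pid for pid, created in posts if created < cutoff]
```

A real tool would paginate through the platform's history endpoints and rate-limit its deletions; the selection logic, however, stays this simple.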

Mitigating Mob Censorship in Newsrooms


Despite the lag in providing interventions, security specialists working with news
organizations note growing concern over harassment, galvanizing some newsrooms to
ask critical questions or formalize internal policies concerning abuse, including when
the newsroom should intervene.
Some newsrooms … at least recognize that these are questions that keep coming up.
Do we support them? When do we support them? What do we say? Which platform do
we say it on? Who writes a message internally? (Sandvik, December 17, 2021)

These policies sometimes include technical interventions to preempt severe harassment, exemplified by doxing tactics. A growing number of news organizations invest
in services to help remove public records from “people search” websites such as
Radaris, Intelius, and others. Ali (January 3, 2022) described how her news organization
reacted after one of its journalists wrote about a famous person and swiftly got doxed
by the celebrity’s fans in response, including the posting of the journalist’s physical
address online. The paper “helped her enroll with a service that helps totally scrub all
her information offline, and then they just helped me do that too. We’re getting there.
They are trying to help.”
Another news organization that has recognized the escalation of doxing and the
need for content removal services is First Look Media's The Intercept. Micah Lee says his
organization has had access to a corporate account for DeleteMe, a service for remov-
ing personal entries from people search websites. While this approach will slow down
most casual harassment, a persistent attacker may still be able to find personal infor-
mation. Likewise, it’s also possible to target the names of family members, prompting
Lee’s newsroom to change their policies to extend DeleteMe subscriptions to the fam-
ily members of journalists who have been targeted (December 15, 2021).
Fundamentally, a culture of supporting colleagues experiencing harassment can
take many forms, from recognizing and validating colleagues’ experiences and utilizing
trauma specific resources (e.g., Dart Center for Journalism and Trauma) to actively
defending one another. When done well, collective assistance recontextualizes experi-
ences of harassment and abuse into experiences of aid and solidarity. Grauer
(December 13, 2021) observes how some news organizations have a supportive cul-
ture for collectively addressing trolls and recounts experiences defending colleagues
when they needed it. The tool Block Party, which allows users to block or mute trolls on
Twitter and enlist friends or colleagues to assist, can be helpful in this regard, but she
argues it's important to have broader newsroom support when responding to attacks:
“I feel there should be pushback. BuzzFeed does this really well if trolls come. It’s not
the writer responding, it’s the editors swatting the trolls on their behalf.” Despite this,
blocking and reporting are sometimes not easy or possible amid massive coordinated
attacks, like those that have targeted Filipino journalist and Nobel Prize winner Maria
Ressa (International Press Institute 2020).
A constellation of policy changes and technical support systems belies a deeper
need for cultural change within news institutions and a more enduring commitment
to prioritizing colleagues’ safety and well-being. In essence, structural changes need to
occur at the individual, institutional, and societal levels because of the technologically
mediated interplay that occurs between news organizations, platforms, and publics.

Conclusion
This article addresses how actors use technological infrastructure and its inherent
properties of ambiguity to reshape the dimensions of mob censorship beyond bot-
tom-up citizen attacks to encompass a broader range of actors, who use the guise of
grassroots action to launch attacks against journalists. When abuse is filtered through
pseudonymous websites and abusers leverage attack techniques that do not provide
obvious attribution, the identities of attackers are often obscured. Journalists and
those who defend newsrooms are ill-equipped to discern the identities and motiva-
tions of their harassers, as unknown trolls and state actors disguise themselves as
grassroots mobs to retaliate against the press (Angwin 2017; Reporters Without
Borders 2018). In turn, state-aligned grassroots actors aim to discredit journalists,
inflamed by the words of political leaders and even media actors. These issues are fur-
ther amplified by ambiguities surrounding coordination, when attackers may work
together through concealed backchannels, may pile on dyadic harassment in an ad-
hoc manner, or may hurl similar attacks at journalists independently of other harassers’
actions.
The boundaries of mob censorship are also expanded because not all mob attacks
on reporters are discursive in nature; consider campaigns to overwhelm newsroom
email servers with mass spam (Angwin 2017) or denial-of-service attacks that take a
news organization's servers offline (Tigas, December 20, 2021). Journalists may none-
theless look to the context surrounding these technical attacks and recognize when
they may be retaliatory. At other times, such attacks are not intended to harass.
Newsrooms may be simply in the crossfire of an ongoing campaign, as with financially
motivated ransomware attacks that lock up newsroom computers so they can only be
unlocked for a hefty payment (Brooks 2017). Many attacks are possible only through
the technical affordances of online platforms, such as coordinated, mass campaigns
that abuse reporting functions on social media sites to deplatform journalists’
accounts (Harwell 2021).
The 4A framework clarifies the ways in which journalism is increasingly intercon-
nected with “technological tools, processes, and ways of thinking as the new organiz-
ing logics of media work” (Lewis and Westlund 2015, 21). Mob harassers using
technological affordances complicate the well-established conceptual notions of audi-
ences as recipients (e.g., Westley and MacLean 1957), commodities (e.g., Smythe 1977),
and active participants (e.g., Singer et al. 2011) because they leverage technological
actants like Twitter to shape the intersection between actors (journalists) and journalis-
tic activities. Although active audiences may be involved in journalistic activities and
innovation (Picard and Westlund 2012, as cited in Lewis and Westlund 2015), scholars
have also shown that journalists view active participants as only reacting to journalistic
work rather than taking an active role in the creation of news (Singer et al. 2011).
While active audiences have generally been conceptualized as prosocial in their reac-
tions to journalistic work (e.g., when news media utilize a participation-centric
approach; Picard and Westlund 2012, as cited in Lewis and Westlund 2015), mob har-
assers may be active audience members with malicious agendas.
Despite this, journalists often follow the journalistic norm of engagement in which
they interact with audiences in good faith. Scholars have conceptualized this prosocial
relationship between journalists and their audiences as “reciprocal journalism” (Lewis,
Holton, and Coddington 2014) or a mutually beneficial exchange that underpins
norms and participatory practices in journalism (Coddington, Lewis, and Holton 2018)
and that seeks to explain the ways in which audiences and journalists interact (see
also Groshek and Tandoc 2017; Russell 2019).
Our findings expand the boundaries of mob censorship by showing how mob har-
assers indirectly affect some of the journalistic activities outlined by Domingo et al.
(2008) and utilized in the 4A framework. For instance, mob harassers may pollute the
typically internal process of selection/filtering if the harassment is severe and the jour-
nalist self-censors or changes beats to avoid harassment. The activity of distribution
through a technological actant like Twitter may also be impacted if individual journal-
ists decide to not promote their stories on the platform in order to mitigate harass-
ment. The activity of interpretation is also on display in the context of mob censorship
with mob harassers commenting on social media platforms following the publication
and/or promotion of a story. The other activities articulated by Domingo et al. (2008)
and utilized in the 4A framework (i.e., Access/Observation, Processing/Editing) are not
as clearly implicated in our study.
Our findings also reveal how the lack of systematic support structures within news
organizations, including among important actors in the newsroom (e.g., technologists
and business people), compounds journalistic difficulties when responding to mob cen-
sorship. Power differentials within the newsroom, which are often informed by gender,
race, and role, contribute to incongruent experiences of harassment between reporters
on the front lines and more senior staff and limit the ways in which attacks
are mediated.
As the practice of digital journalism continues to expand, the roles of new social
actors and technological actants are increasingly important to evaluate in the context
of mob censorship. The emergence of new social actors—from grassroots individuals
to parastate or state actors pretending to be grassroots actors—who leverage inter-
connected technological actants and technological infrastructure to attack and silence
journalists reveals the newly expansive ways that they can influence and change jour-
nalistic activities and journalistic engagement with audiences.
Alongside discursive harassment, technical forms of aggression place even steeper
demands on contemporary journalism. The kaleidoscopic properties of computing
systems enable novel forms of hostility against journalists and newsrooms, contesting
existing ideas of attribution, motivation, and the nature of attacks themselves. While
contemporary attacks may depend on overwhelming newsroom infrastructure with
unwanted data or abusing the features of social media websites to deplatform journal-
ists, future attacks will continue in relation to the affordances of newsroom and plat-
form infrastructure in unforeseen ways.
These forms of aggression have important implications for the practice of journal-
ism in a technologically mediated environment, where journalistic norms necessitate
online engagement, yet where journalists remain undersupported in responding to
attacks. Scholars have shown how female journalists have engaged in culturally gen-
dered strategies to confront intimidation, threats, and violence in their work because
they do not have systematic support from their news organizations (Konow-Lund and
Høiby 2021). Similarly, scholars (Henrichsen 2021) have shown how “security
champions” fill gaps in news organizations that lack systematic solutions to hostile
environments. Sustained harassment and mob censorship have real impacts on jour-
nalists, including trauma, self-doubt, and psychological exhaustion, leading to disen-
gagement, disassociation, self-censorship (Waisbord 2002), and departure from the
profession (Ferrier 2018; Freedom House 2017). Harassment also impacts journalistic
autonomy or “the ability of individual journalists to work and act independently of fac-
tors internal and external to the newsroom" (Löfgren Nilsson and Örnebring 2016; see
also Reich and Hanitzsch 2013, 315), thereby complicating journalists' ability to carry
out their roles in a democratic society.
The technical dimensions of attacks on journalists have implications for the further
study of mob censorship, raising thorny questions: How might researchers better
detect and attribute technically contingent attacks? What policy or technology
changes at the organizational and platform levels could minimize harms? The answers
to these questions are important to investigate amid an increasingly polluted informa-
tion ecosystem rife with mob censorship and other digital attacks against journalists
whose work remains important to democracy.

Acknowledgements
The authors would like to thank their interviewees for sharing their time and expertise.

Disclosure Statement
No potential conflict of interest was reported by the author(s).

ORCID
Jennifer R. Henrichsen http://orcid.org/0000-0003-4527-151X
Martin Shelton http://orcid.org/0000-0002-6130-5823

References
Angwin, J. 2017, November 9. “Cheap Tricks: The Low Cost of Internet Harassment.” ProPublica.
https://www.propublica.org/article/cheap-tricks-the-low-cost-of-internet-harassment
Brooks, J. 2017, November 21. “At KQED, Lessons from a Crippling Ransomware Attack.”
Columbia Journalism Review. https://www.cjr.org/analysis/kqed-npr-lessons-ransomware-attack.
php
Brown, M., Z. Sanderson, and M. A. Silva Ortega. 2022, January 26. "Gender-Based Online
Violence Spikes after Prominent Media Attacks." Tech Stream. Brookings. https://www.brookings.edu/techstream/gender-based-online-violence-spikes-after-prominent-media-attacks/
Charmaz, K. 2006. Constructing Grounded Theory: A Practical Guide through Qualitative Analysis.
London: Sage Publications.
Chen, G. M., P. Pain, V. Y. Chen, M. Mekelburg, N. Springer, and F. Troger. 2020. “‘You Really
Have to Have a Thick Skin’: A Cross-Cultural Perspective on How Online Harassment
Influences Female Journalists.” Journalism 21 (7): 877–895.
Coddington, M., S. C. Lewis, and A. E. Holton. 2018. “Measuring and Evaluating Reciprocal
Journalism as a Concept.” Journalism Practice 12 (8): 1039–1050.
Crete-Nishihata, M., J. Oliver, C. Parsons, D. Walker, L. Tsui, and R. Deibert. 2020. “The
Information Security Cultures of Journalism.” Digital Journalism 8 (8): 1068–1091.
Domingo, D., T. Quandt, A. Heinonen, S. Paulussen, J. B. Singer, and M. Vujnovic. 2008.
“Participatory Journalism Practices in the Media and beyond: An International Comparative
Study of Initiatives in Online Newspapers.” Journalism Practice 2 (3): 326–342.
Eddy, K., and R. K. Nielsen. 2022, March 21. “Race and Leadership in the News Media 2022:
Evidence from Five Markets.” Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/race-
and-leadership-news-media-2022-evidence-five-markets
Ferrier, M. 2018. “Attacks and Harassment: The Impact on Female Journalists and Their
Reporting.” International Women’s Media Foundation and TrollBusters. https://www.iwmf.org/
wp-content/uploads/2018/09/Attacks-and-Harassment.pdf
Freedom House. 2017. “Chasing Stories, Women Journalists Are Pursued by Trolls.” Freedom
House. https://freedomhouse.org/article/chasing-stories-women-journalists-are-pursued-trolls
Gettleman, J., K. Conger, and S. Raj. 2021, December 16. "The Harvard Job Offer No One at
Harvard Ever Heard of." The New York Times. https://www.nytimes.com/2021/12/16/technology/harvard-job-scam-india.html
Groshek, J., and E. Tandoc. 2017. "The Affordance Effect: Gatekeeping and (Non)Reciprocal
Journalism on Twitter." Computers in Human Behavior 66 (1): 201–210.
Harwell, D. 2021, December 3. “Twitter Says It Suspended Accounts in Error Following Flood of
‘Coordinated and Malicious’ Reports.” The Washington Post. https://www.washingtonpost.com/
technology/2021/12/03/twitter-admits-error-in-account-suspensions/
Henrichsen, J. R. 2020. “The Rise of the Security Champion: Beta-Testing Newsroom Security
Cultures.” Tow Center for Digital Journalism at Columbia University. https://www.cjr.org/tow_
center_reports/security-cultures-champions.php
Henrichsen, J. R. 2021. "Understanding Nascent Newsroom Security and Safety Cultures: The
Emergence of the 'Security Champion.'" Journalism Practice, 1–20.
Henrichsen, J. R., M. Betz, and J. M. Lisosky. 2015. Building Digital Safety for Journalism: A Survey
of Selected Issues. Paris, France: UNESCO Publishing. http://unesdoc.unesco.org/images/0023/
002323/232358e.pdf.
International Press Institute. 2020, September 17. “Maria Ressa: Social Media Being Weaponized
to Discredit Journalists Online.” International Press Institute. https://ipi.media/maria-ressa-
social-media-being-weaponized-to-discredit-journalists-online/
Iosifidis, P., and N. Nicoli. 2020. “The Battle to End Fake News: A Qualitative Content Analysis of
Facebook Announcements on How It Combats Disinformation.” International Communication
Gazette 82 (1): 60–81.
Keller, T. R., T. Graham, D. Angus, A. Bruns, N. Marchal, L.-M. Neudert, R. Nijmeijer, et al. 2020.
"'Coordinated Inauthentic Behaviour' and Other Online Influence Operations in Social Media
Spaces." Panel presented at AoIR 2020: The 21st Annual Conference of the Association of
Internet Researchers, Virtual Event, October 28–31. http://spir.aoir.org
Kirchgaessner, S., P. Lewis, D. Pegg, S. Cutler, N. Lakhani, and M. Safi. 2021, July 18. "Revealed:
Leak Uncovers Global Abuse of Cyber-Surveillance Weapon." The Guardian. https://www.theguardian.com/world/2021/jul/18/revealed-leak-uncovers-global-abuse-of-cyber-surveillance-weapon-nso-group-pegasus
Konow-Lund, M., and M. Høiby. 2021. "Female Investigative Journalists: Overcoming Threats,
Intimidation, and Violence with Gendered Strategies." Journalism Practice, 1–16. https://doi.org/10.1080/17512786.2021.2008810.
Kovic, M., A. Rauchfleisch, M. Sele, and C. Caspar. 2018. “Digital Astroturfing in Politics:
Definition, Typology, and Countermeasures.” Studies in Communication Sciences 18 (1): 69–85.
Lewis, S. C., A. E. Holton, and M. Coddington. 2014. “Reciprocal Journalism: A Concept of Mutual
Exchange between Journalists and Audiences.” Journalism Practice 8 (2): 229–241.
Lewis, S. C., and O. Westlund. 2015. “Actors, Actants, Audiences, and Activities in Cross-Media
News Work.” Digital Journalism 3 (1): 19–37.
Löfgren Nilsson, M., and H. Örnebring. 2016. "Journalism under Threat: Intimidation and
Harassment of Swedish Journalists." Journalism Practice 10 (7): 880–890.
Marvin, R. 2019, January 29. “How Google’s Jigsaw Is Trying to Detoxify the Internet.” PC
Magazine. https://www.pcmag.com/news/how-googles-jigsaw-is-trying-to-detoxify-the-internet
McGregor, S. E. 2021. Information Security Essentials: A Guide for Reporters, Editors, and Newsroom
Leaders. New York: Columbia University Press.
McGregor, S., P. Charters, T. Holliday, and F. Roesner. 2015. “Investigating the Computer Security
Practices and Needs of Journalists.” In Proceedings of the 24th USENIX Security Symposium,
Washington, DC.
McGregor, S., and E. Watkins. 2016. "'Security by Obscurity': Journalists' Mental Models of
Information Security." International Symposium on Online Journalism 6 (1): 33–49. https://isojjournal.wordpress.com/2016/04/14/security-by-obscurity-journalists-mental-models-of-information-security/
Miles, M., A. M. Huberman, and J. Saldaña. 2014. Qualitative Data Analysis: A Methods
Sourcebook. Thousand Oaks: Sage Publications.
Miller, K. C. 2020. “Harassing the Fourth Estate: The Prevalence and Effects of Outsider-Initiated
Harassment towards Journalists.” PhD diss., University of Oregon.
Miller, K. C. 2021. “Hostility toward the Press: A Synthesis of Terms, Research, and Future
Directions in Examining Harassment of Journalists.” Digital Journalism, 1–20.
Nelson, J. 2021. "A Twitter Tightrope without a Net: Journalists' Reactions to Newsroom Social
Media Policies." Columbia Journalism Review. https://www.cjr.org/tow_center_reports/newsroom-social-media-policies.php
Petersen, A. H. 2018. "The Cost of Reporting While Female." Columbia Journalism Review. https://www.cjr.org/special_report/reporting-female-harassment-journalism.php
Phillips, W. 2015. This Is Why We Can’t Have Nice Things: Mapping the Relationship between
Online Trolling and Mainstream Culture. Cambridge, MA: The MIT Press.
Picard, R. G., and O. Westlund. 2012. “The Dynamic Innovation Learning Model: A
Conceptualization of Media Innovation.” Paper presented at the 10th World Media Economics
and Management Conference, Thessaloniki, Greece, May 23–27.
Posetti, J., N. Shabbir, D. Maynard, K. Bontcheva, and N. Aboulez. 2021. "The Chilling: Global
Trends in Online Violence Against Women Journalists." UNESCO. https://en.unesco.org/publications/thechilling
Reich, Z., and T. Hanitzsch. 2013. “Determinants of Journalists’ Professional Autonomy: Individual
and National Level Factors Matter More than Organizational Ones.” Mass Communication and
Society 16 (1): 133–156.
Reporters Without Borders. 2018. “Online Harassment of Journalists: Attack of the Trolls.”
Reporters Without Borders. https://rsf.org/sites/default/files/rsf_report_on_online_harassment.
pdf
Reuters Institute. 2021, March 30. “Do You Want to Fix the News Media Race Problem? Put
Fewer White Men at the Top.” Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/news/
do-you-want-fix-news-media-race-problem-put-fewer-white-men-top
Russell, F. M. 2019. “Twitter and News Gatekeeping: Interactivity, Reciprocity, and Promotion in
News Organizations’ Tweets.” Digital Journalism 7 (1): 80–99.
Singer, J. B., D. Domingo, A. Heinonen, A. Hermida, S. Paulussen, T. Quandt, Z. Reich, and M.
Vujnovic. 2011. Participatory Journalism: Guarding Open Gates at Online Newspapers. Malden,
MA: Wiley-Blackwell.
Smythe, D. 1977. “Communications: Blindspot of Western Marxism.” Canadian Journal of Political
and Social Theory 1 (3): 1–28.
Stoycheff, E. 2016. “Under Surveillance: Examining Facebook’s Spiral of Silence Effects in the
Wake of NSA Internet Monitoring.” Journalism & Mass Communication Quarterly 93 (2):
296–311.
Thomson Reuters Foundation. 2022. “TRFilter.” https://www.trfilter.org/
Twitter. 2020. “Account Security—Twitter Transparency Center.” https://transparency.twitter.com/en/reports/account-security.html#2020-jul-dec
Vergani, M. 2014. “Rethinking Grassroots Campaigners in the Digital Media: The ‘Grassroots
Orchestra’ in Italy.” Australian Journal of Political Science 49 (2): 237–251.
Verizon. 2021. “2021 Data Breach Investigations Report.” https://enterprise.verizon.com/content/verizonenterprise/us/en/index/resources/reports/2021-data-breach-investigations-report.pdf
Vilk, V., E. Vialle, and M. Bailey. 2021, March. “No Excuse for Abuse: What Social Media
Companies Can Do Now to Combat Online Harassment and Empower Users.” PEN America.
https://pen.org/report/no-excuse-for-abuse
Vogels, E. A. 2021. “The State of Online Harassment.” Pew Research Center. https://www.pewresearch.org/internet/2021/01/13/the-state-of-online-harassment/
Waisbord, S. 2002. “Antipress Violence and the Crisis of the State.” Harvard International Journal
of Press/Politics 7 (3): 90–109.
Waisbord, S. 2020a. “Mob Censorship: Online Harassment of US Journalists in Times of Digital
Hate and Populism.” Digital Journalism 8 (8): 1030–1046.
Waisbord, S. 2020b. “Trolling Journalists and the Risks of Digital Publicity.” Journalism Practice 16
(5): 984–1000.
Westley, B. H., and M. S. MacLean. 1957. “A Conceptual Model for Communications Research.”
Journalism & Mass Communication Quarterly 34 (1): 31–38.
Zerback, T., F. Töpfl, and M. Knöpfle. 2021. “The Disconcerting Potential of Online
Disinformation: Persuasive Effects of Astroturfing Comments and Three Strategies for
Inoculation against Them.” New Media & Society 23 (5): 1080–1098. https://doi.org/10.1177/
1461444820908530.
Appendix. Interviewees

#  | Name                | Designation                                                  | Affiliation                                  | Race  | Interview date
1  | Bill Marczak        | Research Fellow                                              | The Citizen Lab                              | White | December 9, 2021
2  | Yael Grauer         | Investigative journalist                                     | Consumer Reports                             | White | December 13, 2021
3  | Matt Mitchell       | Co-Founder, hacker                                           | CryptoHarlem                                 | Black | December 14, 2021
4  | Micah Lee           | Director of Information Security                             | First Look Media                             | White | December 15, 2021
5  | Runa Sandvik        | Security researcher                                          | Independent                                  | White | December 17, 2021
6  | Mike Tigas          | News applications developer                                  | ProPublica                                   | AAPI  | December 20, 2021
7  | Kate Conger         | Technology Reporter                                          | New York Times                               | White | December 21, 2021
8  | Łukasz Król         | Digital security trainer                                     | Internews                                    | White | December 21, 2021
9  | Selena Larson       | Former journalist, senior threat intelligence analyst        | Security company                             | White | December 21, 2021
10 | Elodie Vialle       | Affiliate; Consultant                                        | Berkman Klein Center; PEN America            | White | December 22, 2021
11 | Vanessa Charlot     | Photojournalist                                              | Independent                                  | Black | December 22, 2021
12 | Lorraine Ali        | Television and Culture Critic                                | Los Angeles Times                            | MENA  | January 3, 2022
13 | Dr. Courtney Radsch | Former Advocacy Director                                     | Committee to Protect Journalists             | White | January 4, 2022
14 | Viktorya Vilk       | Program Director of Digital Safety and Freedom of Expression | PEN America                                  | White | January 4, 2022
15 | Emma Carew Grovum   | Journalist, product manager, consultant                      | Kimbap Media                                 | AAPI  | January 5, 2022
16 | Dr. Michelle Ferrier| Founder, Executive Director                                  | TrollBusters, Media Innovation Collaboratory | Black | January 7, 2022
17 | Harlo Holmes        | CISO, Director of Digital Security                           | Freedom of the Press Foundation              | Black | January 12, 2022
18 | Christopher Harrell | Chief Technology Officer                                     | Yubico                                       | White | January 12, 2022