fake news sources. The fraction of content from fake news sources varied by day (Fig. 1A), increasing (in all categories) during the final weeks of the campaign (SM S.7). Similar trends were observed in content sharing, with 6.7% of political URLs shared by the panel coming from fake news sources.

However, these aggregate volumes mask the fact that content from fake news sources was highly concentrated, both among a small number of websites and a small number of panel members. Within each category of fake news, 5% of sources accounted for more than 50% of exposures (Fig. 1B). There were far more exposures to red and orange sources than to black sources (2.4, 1.9, and 0.7% of aggregate exposures, respectively), and these differences were largely driven by a handful of popular red and orange sources. The top seven fake news sources—all red and orange—accounted for more than 50% of fake news exposures (SM S.5).

example, on average per day, the median supersharer of fake news (SS-F) tweeted 71.0 times, whereas the median panel member tweeted only 0.1 times. The median SS-F also shared an average of 7.6 political URLs per day, of which 1.7 were from fake news sources. Similarly, the median superconsumer of fake news sources had almost 4700 daily exposures to political URLs, as compared with only 49 for the median panel member (additional statistics in SM S.9). The SS-F members even stood out among the overall supersharers and superconsumers, the most politically active accounts in the panel (Fig. 2). Given the high volume of posts shared or consumed by superspreaders of fake news, as well as indicators that some tweets were authored by apps, we find it likely that many of these accounts were cyborgs: partially automated accounts controlled by humans (15) (SM S.8 and S.9). Their tweets included some self-authored content, such as personal commentary or photos, but also a large volume of political re-

left and right (P < 0.0001). For example, people who had 5% or more of their political exposures from fake news sources constituted 2.5% of individuals on the left (L and L*) and 16.3% of the right (R and R*). See Fig. 3 for the distribution among all political affinity groups and SM S.10 for additional statistics.

According to binomial regressions fit separately to each political affinity group, the strongest predictors of the proportion of fake news sources in an individual's feed were the individual's age and number of political URLs in the individual's feed (Fig. 4, A and B, and SM S.11). A 10-fold increase in overall political exposures was associated with doubling the proportion of exposures to fake news sources (Fig. 4A)—that is, a 20-fold increase in the absolute number of exposures to fake news sources. This superlinear relationship holds for all five political affinity groups and suggests that a stronger selective exposure process exists for individuals with greater interest
[Fig. 1 (plots not shown; y axis: 0% to 8% of all exposures; x axis: 1 August through December). Caption fragment: "… sources. (A) Daily percentage of exposures to black, red, and orange fake news sources, relative to all exposures to political URLs. Exposures were summed across all panel members. (B to D) Empirical cumulative … showing distribution of …"]
[Fig. 2 (bar charts not shown). Panel A: "Supersharers: Top 38 by political URLs shared"; panel B: "Superconsumers: Top 164 by exposures to political URLs"; letters above bars give each account's political affinity (L, L*, C, R, R*).]
Fig. 2. Shares and exposures of political URLs by outlier accounts, many of which were also SS-F accounts. (A) Overall supersharers: top 1% among panelists sharing any political URLs, accounting for 49% of all shares and 82% of fake news shares. Letters above bars indicate political affinities. (B) Overall superconsumers: top 1% among panelists exposed to any political URLs, accounting for 12% of all exposures and 74% of fake news exposures. Black, red, and orange bars represent content from fake news sources; yellow or gray bars denote nonfake content (SS-F accounts are shown in yellow). The rightmost bar shows, for scale, the remainder of the panel's fake news shares (A) or exposures (B).
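The supersharer statistics in the caption (the top 1% of accounts producing 49% of all shares) are a simple concentration measure. A hedged sketch of that computation on synthetic heavy-tailed data (the Pareto draw and all numbers here are illustrative stand-ins, not the panel data):

```python
import random

# Sketch of the concentration statistic behind Fig. 2: what fraction of
# all political shares comes from the top 1% of accounts? Synthetic
# heavy-tailed activity counts stand in for the actual panel data.
random.seed(0)
shares = [random.paretovariate(1.1) for _ in range(10_000)]

shares.sort(reverse=True)             # most active accounts first
top_n = max(1, len(shares) // 100)    # top 1% of accounts
top_share = sum(shares[:top_n]) / sum(shares)
print(f"Top 1% of accounts produced {top_share:.0%} of all shares")
```

With a heavy-tailed activity distribution like this one, the top 1% routinely accounts for a large multiple of its population share, which is the qualitative pattern the figure documents.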
the content seen by even the most avid consumers of fake news but nonetheless formed a distinct cluster of sites, many of them fake, consumed by a heavily overlapping audience of individuals mostly on the right.

Discussion

This study estimated the extent to which people on Twitter shared and were exposed to content from fake news sources during the 2016 election season. Although 6% of people who shared URLs with political content shared content from fake news sources, the vast majority of fake news shares and exposures were attributable to tiny

remaining sample still included partially automated cyborgs. Unlike bots, which may struggle to attract human audiences, these cyborg accounts are embedded within human social networks. Given the increasing sophistication of automation tools available to the average user and the increasing volume and coordination of online campaigns, it will be important to study cyborgs as a distinct category of automated activity (14, 22). Regarding limitations to this study, note that our findings derive from a particular sample on Twitter, so they might not generalize to other platforms (24). Additionally, although our sample roughly reflects the demographics of registered voters on Twitter, it might systematically differ from that population in other ways.

Our findings suggest immediate points of leverage to reduce the spread of misinformation. Social media platforms could discourage users from following or sharing content from the handful of established fake news sources that are most pervasive. They could also adopt policies that disincentivize frequent posting, which would be effective against flooding techniques (25) while affecting only a small number of accounts. For example, platforms could algorithmically demote content from frequent posters or prioritize users who have not posted that day. For illustrative purposes, a simulation of capping political URLs at 20 per day resulted in a reduction of 32% of content from fake news sources while affecting only 1% of content posted by nonsupersharers (SM S.15). Finally, because fake news sources have shared audiences, platforms could establish closer partnerships with fact-checking organizations to proactively watch top producers of misinformation and examine content from new sites that emerge in the vicinity of fake news sources in a coexposure network. Such interventions do raise the question of what roles platforms should play in constraining the information people consume. Nonetheless, the proposed interventions could contribute to delivering corrective information to affected populations, increase the effectiveness of corrections, foster equal balance of voice and attention on social media, and more broadly enhance the resiliency of information systems to misinformation campaigns during key moments of the democratic process.

[Fig. 4 (plots not shown). Panels F to I, under the header "Exposure to fake news sources": paired "Congruent"/"Incongruent" bars for D and R individuals, with significance marks ** and ***.]
Fig. 4. Key individual characteristics associated with exposure to and sharing of fake news sources. The proportion of an individual's political exposures coming from fake news sources as a function of (A) number of political exposures, excluding fake news sources, and (B) age. Estimates are based on binomial regression models fitted separately to each political affinity subgroup. Blue, liberal; black, center; red, conservative. (C to E) An individual's likelihood of sharing one or more URLs from fake news sources as a function of (C) number of shares of political URLs, (D) number of exposures to fake news sources, and (E) political affinity. (F to I) Likelihood of a liberal (D) or conservative (R) individual sharing a political URL to which they have been exposed, depending on the political congruency and veracity of the source: (F) congruent and fake, (G) incongruent and fake, (H) congruent and nonfake, and (I) incongruent and nonfake. Brackets indicate significantly different pairs: **P < 0.01, ***P < 0.001. All estimates and 95% CIs [gray shaded regions in (A) to (D); line segments in (E) to (I)] are based on regression models specified in SM S.11 to S.13, with the remaining model variables held constant to their median or most common level.
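The posting-cap intervention discussed above (at most 20 political URLs per account per day) can be prototyped in a few lines. The sketch below uses a synthetic heavy-tailed activity distribution; the paper's 32% and 1% figures come from the panel data (SM S.15), not from this code:

```python
import random

# Sketch of the posting-cap intervention from the Discussion: limit each
# account to at most CAP political URLs per day and measure how much
# content is removed and how many accounts are touched at all. The
# per-account activity here is synthetic, not the study's panel data.
CAP = 20
random.seed(42)
daily_posts = [int(random.paretovariate(1.0)) + 1 for _ in range(5_000)]

capped = [min(n, CAP) for n in daily_posts]
removed = 1 - sum(capped) / sum(daily_posts)
affected = sum(n > CAP for n in daily_posts) / len(daily_posts)
print(f"content removed: {removed:.1%}; accounts affected: {affected:.1%}")
```

Because posting volume is heavy-tailed, a cap like this trims a disproportionate share of total content while leaving the vast majority of accounts untouched, which is the asymmetry the simulation in SM S.15 exploits.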
Supplementary Materials: http://science.sciencemag.org/content/suppl/2019/01/23/363.6425.374.DC1
Related Content: http://science.sciencemag.org/content/sci/363/6425/348.full
References: This article cites 35 articles, 7 of which you can access for free: http://science.sciencemag.org/content/363/6425/374#BIBL
Permissions: http://www.sciencemag.org/help/reprints-and-permissions

Science (print ISSN 0036-8075; online ISSN 1095-9203) is published by the American Association for the Advancement of Science, 1200 New York Avenue NW, Washington, DC 20005. The title Science is a registered trademark of AAAS. Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.