This is the author's version of an article that has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/MIS.2018.2877280, IEEE Intelligent Systems. The final version of record is available at http://dx.doi.org/10.1109/MIS.2018.2877280

FEATURE ARTICLE: AI and Fake News

AI and Fake News


Anne K. Cybenko
US Air Force Research Laboratory, Dayton, OH, USA

George Cybenko
Dartmouth College, Hanover, NH, USA

Editor-in-Chief: V.S. Subrahmanian

ABSTRACT: Fake news and propaganda are not new phenomena, but when powered by modern information dissemination and AI technologies, they are manifesting themselves at scales and in ways previously not possible. This article describes several human frailties that make today's "fake news" possible, together with several AI-based technologies that can help defeat or defend against those frailties. Our goal is to explore ways in which AI can play a role in the "fake news" arena.

Diversity of thought and opinion is valued in modern society. Often called "cognitive diversity," it can counter groupthink and enable better decision-making.[1] Increasingly, organizations are cultivating and measuring cognitive diversity for competitive advantage. In fact, commercial products now gauge the diversity and inclusiveness of major companies.[2]
Ironically, a population’s cognitive diversity is also being exploited in an entirely dif-
ferent way today. Instead of synthesizing different perspectives and worldviews into a
superior consensus, new information technologies such as online boutique news out-
lets, social networks and microblogs take advantage of cognitive diversity by isolating
subpopulations and catering to their idiosyncratic opinions, often giving people the il-
lusion that they are in the ideological majority. Done effectively, this creates hardened
enclaves of reliable information consumers for the economic, social or political bene-
fits of the information's purveyors.
As such, cognitive diversity can be regarded as the Petri dish in which “fake news”
thrives. Although “fake news” has become a household concept relatively recently, the
idea that cyberspace creates new opportunities for shaping human perception and ac-
tion was recognized years ago.[3]
What is "real" versus what is "fake" is an epistemological question too deep to be addressed here. However, it is indisputable that the labels "real" and "fake" are increasingly being applied to news and news sources in contemporary public discourse.[4] In this article, the term "fake news" does not necessarily refer to news that is demonstrably inaccurate. It refers to news that one community considers "fake," so that one community's "real news" is another community's "fake news," with claims and counterclaims repeatedly asserted in spiraling regress.

1541-1672 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. For any other purposes, permission must be obtained from the IEEE by emailing pubs-permissions@ieee.org.

WHY AND HOW IS THIS HAPPENING?


The why can be explained largely by the value of targeting, retaining, and expanding "audiences" with shared beliefs. That value can be both economic (as in targeted pay-per-click ad revenue) and political (as in support of a specific party or social cause), and it is directly proportional to the size and enthusiasm of the audiences' memberships. Cognitive diversity guarantees that such audiences will exist in open, heterogeneous societies.
As for how it is happening, we turn to established theories in psychology and modern computing, specifically certain recent AI technologies.
Of particular importance is how information comes to be accepted as truth. Our inher-
ent inclinations, in particular confirmation bias (the “unwitting selectivity in the acqui-
sition and use of evidence”), urge us to seek out and interpret new information that
confirms our existing beliefs.[5] Much psychological research has been devoted to un-
derstanding how new information comes to be accepted or rejected. A recent review of
that research concluded that there are four major cognitive safeguards that play key
roles in rejecting information when new information is consciously considered.[6]
Initially, there is temporary acceptance that new information is true in order to comprehend it. Subsequently, as the information is processed at deeper levels, a person will likely reject it if any of the following four cognitive safeguards is triggered: i) the information is incompatible with the existing worldview; ii) it does not comprise a coherent story; iii) it does not come from what is considered a credible source; or iv) it is perceived that others in the same community do not believe it. As a result, it is typically difficult and unusual for information that contradicts someone's prior beliefs and worldviews to be accepted as truthful.
However, various existing and emerging information and AI-based technologies can, in combination, defeat these four cognitive safeguards.

AI'S ROLE IN DEFEATING COGNITIVE SAFEGUARDS


First of all, online users now have a larger choice of news and information sources that
they can self-select to align with whatever niche beliefs they might already have.[7]
This creates audiences with similar, idiosyncratic beliefs, and such audiences can be identified and labeled using AI-based natural language and social network techniques.[8]
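As an illustration of the kind of audience grouping such techniques rely on, the toy sketch below clusters users by bag-of-words cosine similarity over their posts. It is a deliberately minimal, hypothetical example using only the Python standard library; it is not the method of [8], and the threshold and greedy grouping strategy are invented for illustration.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_users(posts: dict[str, str], threshold: float = 0.3) -> list[set[str]]:
    """Greedily group users whose posts share vocabulary above a threshold."""
    vectors = {u: Counter(text.lower().split()) for u, text in posts.items()}
    clusters: list[set[str]] = []
    for user, vec in vectors.items():
        for group in clusters:
            rep = next(iter(group))  # compare against one representative member
            if cosine(vec, vectors[rep]) >= threshold:
                group.add(user)
                break
        else:
            clusters.append({user})
    return clusters

posts = {
    "u1": "the team won the big game last night",
    "u2": "what a game the team played last night",
    "u3": "new tax policy debated in the senate today",
    "u4": "senate votes on the new tax policy",
}
print(cluster_users(posts))  # two clusters: sports talk vs. policy talk
```

Real systems replace the word-overlap similarity with richer language models and add social-graph structure, but the underlying step is the same: partition users into audiences with measurably similar content.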
Secondly, after an audience is identified, information content can be tailored to that au-
dience even if it is niche. While human reporters and writers can populate mainstream
news and information sources, it is now possible to robotically generate plausible news
stories using AI-based software. For example, the Washington Post has already been
experimenting with such technology in relatively narrow domains such as minor events
in the 2016 Summer Olympics.[9] More ambitious projects aimed at passing a short
story writing Turing Test have been attempted and are also showing progress.[10]
Combining such technologies, we can imagine near-future AI-powered systems that
will start with an imagined or dubious fact, embed it into a plausible storyline and write
a news article with minimal or no human intervention.
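The narrow-domain systems mentioned above are commonly understood to be template driven: structured event data is slotted into prewritten prose. The sketch below is a hypothetical illustration of that idea; the templates, field names, and match data are invented and are not drawn from the Post's actual system.

```python
# Minimal sketch of template-based story generation: structured event data
# is slotted into prose templates, the approach behind early "robot reporting".
RESULT_TEMPLATES = {
    "win":  "{winner} defeated {loser} {score_w}-{score_l} in {event} on {day}.",
    "draw": "{team_a} and {team_b} drew {score_w}-{score_l} in {event} on {day}.",
}

def generate_story(event: dict) -> str:
    """Pick a template from the event outcome and fill in its slots."""
    key = "win" if event["score_w"] != event["score_l"] else "draw"
    return RESULT_TEMPLATES[key].format(**event)

game = {
    "winner": "Brazil", "loser": "Denmark", "team_a": "Brazil", "team_b": "Denmark",
    "score_w": 2, "score_l": 1, "event": "the group stage", "day": "Wednesday",
}
print(generate_story(game))
# → Brazil defeated Denmark 2-1 in the group stage on Wednesday.
```

The worrying step is swapping the verified scoreboard feed for an "imagined or dubious fact": the same machinery then emits an equally fluent but false story.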
Thirdly, because users self-select their sources and therefore tend to see content consistent with their beliefs, they gain trust in those sources. The Pew Research Center has found that 76% of U.S. adults turn to the same news sources and 51% say they are loyal to those sources.[11] Other studies show that the sources are, not surprisingly, highly aligned with political partisanship and increasingly online, especially


for younger audiences.[7] Once a target community has been identified, AI technolo-
gies can author professional-looking websites with minimal human effort, catering to
ideological niches.[12]
Finally, and perhaps most importantly, AI-powered bots can populate thousands of user accounts that support, oppose, and/or relay any content the bot controllers target.[13][14] As AI technologies continue to mature and become capable of passing more Turing-like tests, it will become increasingly difficult to distinguish artificial from human participants and commenters.[15][16]
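One simple class of signals used in bot detection is temporal: automated accounts often post at unnaturally regular intervals, while human activity is bursty. The sketch below is a hypothetical illustration of that single cue, not any published detector; it scores regularity as the coefficient of variation of inter-post gaps.

```python
from statistics import mean, stdev

def interval_regularity(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-post gaps: values near 0 indicate
    clockwork posting (bot-like); bursty human activity gives larger values."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too little data to score
    return stdev(gaps) / mean(gaps)

bot_like = [0, 600, 1200, 1800, 2400, 3000]    # a post every 10 minutes
human_like = [0, 45, 2700, 2760, 9000, 9100]   # bursty, irregular activity
print(interval_regularity(bot_like) < interval_regularity(human_like))  # True
```

Deployed detectors such as those surveyed in [13] combine many such features (content, network position, account metadata) in trained classifiers; any single cue like this one is easy for a bot author to evade.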
Although these cognitive underpinnings and recent technological advances might ex-
plain the rise in social polarization and the associated claims of "fake news," we should
ask how these trends can be reversed.

Table 1: A summary of four key cognitive safeguards [6] against accepting "fake news," together with a sampling of AI technologies for defeating and defending those safeguards.

i) The information is incompatible with the existing worldview.
   Defeating the safeguard: Identify communities with specific worldviews of interest.
   AI technologies for defeating: Identify and label social media communities to target [8].
   AI technologies for defending: Linguistic cue analysis [16]; sentiment analysis [15].

ii) The information does not comprise a coherent story.
   Defeating the safeguard: Embed the information into a coherent story and context.
   AI technologies for defeating: Artificial reporting [9]; artificial storytelling [10].
   AI technologies for defending: Discourse analysis [16]; structural and linguistic analysis [16].

iii) The information does not come from what is considered a credible source.
   Defeating the safeguard: Make sources seem more credible; have sources appear in major search engines.
   AI technologies for defeating: AI-powered professional website design [12]; search engine optimization.
   AI technologies for defending: Adversarial stylometry [17]; information provenance and diffusion analysis [18].

iv) The information is not believed in the reader's/viewer's community.
   Defeating the safeguard: Promote the information in targeted communities using social network and social media technology.
   AI technologies for defeating: Social media bots [15]; fake social media accounts and personas [14].
   AI technologies for defending: Social bot detection [15]; social network behavior analysis [16].

AI'S ROLE IN DEFENDING COGNITIVE SAFEGUARDS


Identifying "fake news" is an important potential application of AI, because the scale and scope of fake news claims will probably make human-based assessments of the veracity of information unsustainable.[15] Moreover, the latency that human fact checkers introduce into news dissemination would probably make retractions, corrections, and provenance analyses appear too late to mitigate the damage already done.
So while AI technology can be used to defeat key cognitive safeguards as discussed above, it should also play an important role in defending those safeguards. In fact, this is a growing and highly active area of research today, and Table 1 lists several AI-related technologies for attacking and defending the four key safeguards discussed.


Techniques for classifying news as "real" vs. "fake" (or rumors vs. nonrumors) generally fall into two categories. One class of methods uses linguistic and semantic analysis of written content to discriminate, while the other uses dissemination patterns and rates to classify different types of news. Some approaches use both.[14][16]
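To make the two feature families concrete, the toy scorer below combines one invented linguistic cue (sensational wording) with one invented dissemination cue (spread out of proportion to the source's audience). The word list, weights, and thresholds are illustrative assumptions only; real systems such as those in [14][16] learn far richer features from labeled data.

```python
# Toy rumor scorer combining the article's two feature families:
# linguistic cues from the text and a dissemination (diffusion) cue.
SENSATIONAL = {"shocking", "unbelievable", "secret", "miracle", "exposed"}

def linguistic_score(text: str) -> float:
    """Fraction-style score from sensational words and exclamation marks."""
    words = text.lower().replace("!", " ").split()
    cues = sum(w in SENSATIONAL for w in words) + text.count("!")
    return min(cues / 3.0, 1.0)  # saturate at 1

def diffusion_score(shares_first_hour: int, follower_count: int) -> float:
    """Spread far out of proportion to the source's audience is suspicious."""
    return min(shares_first_hour / max(follower_count, 1), 1.0)

def rumor_score(text, shares, followers, w_lang=0.5, w_diff=0.5) -> float:
    """Weighted blend of the linguistic and dissemination cues."""
    return w_lang * linguistic_score(text) + w_diff * diffusion_score(shares, followers)

print(rumor_score("SHOCKING secret exposed!!", shares=900, followers=1000))
print(rumor_score("City council approves budget for 2019.", shares=12, followers=5000))
```

The point of the sketch is only the architecture: a content channel and a propagation channel, fused into one classification, mirroring the two categories of methods described above.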

NEXT STEPS
Appropriately, there is a growing community of scientists dedicated to understanding the fake news phenomenon.[19] A recent conference on combating fake news resulted in several action items for the scientific community, including: increasing bipartisan participation in the discussion; increasing the strength, visibility, and general accessibility of more subjective "truth"; and increasing data availability for novel research on the topic.[20]
In the end, we do not believe that the question of what is real news versus what is fake news is answerable solely through technological means. Technology can surely assist in answering questions about provenance, consistency, and authorship that might be useful for assigning some measure of objectivity to a news story. However, AI or any other technologies for identifying "truth" will coevolve with technologies for subverting "truth," just as attack and defense technologies in cyber security and the spam wars have coevolved. We cannot predict when or how such coevolution will ultimately converge or stabilize. Moreover, there are complex ethical issues about computers deciding for humans what is true and what is false, not to mention what biases the software will inherit from its programmers or learn from the data it ingests. Ultimately, it is up to consumers to determine what they believe to be real or fake and what they decide to disseminate to others; awareness and attempted understanding of human behavior and cognition are therefore important aspects of the fight against fake news. The "fake news" phenomenon is a highly dynamic and socially relevant area for AI research and implementation. Without doubt, there will be many exciting opportunities, investments, and advances for AI in this and related areas over the coming years.

REFERENCES
[1] F.J. Milliken and L.L. Martins, "Searching for common threads: Understanding the multiple effects of diversity in organizational groups," Academy of Management Review, vol. 21, pp. 402-433, 1996.

[2] Thomson Reuters Diversity and Inclusion Index, https://financial.thomsonreuters.com/en/products/data-analytics/market-data/indices/diversity-index.html. Visited May 17, 2017.

[3] G. Cybenko, A. Giani, and P. Thompson, "Cognitive hacking: A battle for the mind," IEEE Computer, vol. 35, pp. 50-56, 2002.

[4] E.C. Tandoc Jr., Z.W. Lim, and R. Ling, "Defining 'fake news': A typology of scholarly definitions," Digital Journalism, vol. 6, no. 2, pp. 137-153, 2018.

[5] R.S. Nickerson, "Confirmation bias: A ubiquitous phenomenon in many guises," Review of General Psychology, vol. 2, pp. 175-220, 1998.

[6] S. Lewandowsky, U.K. Ecker, C.M. Seifert, N. Schwarz, and J. Cook, "Misinformation and its correction: Continued influence and successful debiasing," Psychological Science in the Public Interest, vol. 13, pp. 106-131, 2012.


[7] J.M. Carey, B. Nyhan, B. Valentino, and M. Liu, "An inflated view of the facts? How preferences and predispositions shape conspiracy beliefs about the Deflategate scandal," Research & Politics, vol. 3, pp. 1-9, 2016.

[8] D. Hemavathi, M. Kavitha, and N. Begum Ahmed, "Information extraction from social media: Clustering and labelling microblogs," 2017 International Conference on IoT and Application (ICIOT), Nagapattinam, pp. 1-10, 2017.

[9] WashPostPR, "The Washington Post experiments with automated storytelling to help power 2016 Rio Olympics coverage," 2016. https://www.washingtonpost.com/pr/wp/2016/08/05/the-washington-post-experiments-with-automated-storytelling-to-help-power-2016-rio-olympics-coverage/?utm_term=.e9ae3ebfe006

[10] M. Casey, Neukom Institute 2017 "Turing Tests in Creativity." http://bregman.dartmouth.edu/turingtests/node/57. Visited May 17, 2017.

[11] A. Mitchell, J. Gottfried, M. Barthel, and E. Shearer, "Loyalty and source attention," Pew Research Center, 2017.

[12] J. Tselentis, "When websites design themselves," Wired, 2017. https://www.wired.com/story/when-websites-design-themselves/

[13] E. Ferrara, O. Varol, C. Davis, F. Menczer, and A. Flammini, "The rise of social bots," Communications of the ACM, vol. 59, pp. 96-104, 2016.

[14] V.S. Subrahmanian, A. Azaria, S. Durst, V. Kagan, A. Galstyan, K. Lerman, L. Zhu, E. Ferrara, A. Flammini, and F. Menczer, "The DARPA Twitter bot challenge," IEEE Computer, vol. 49, pp. 38-46, 2016.

[15] E. Alvarez, "Facebook's approach to fighting fake news is half-hearted," July 13, 2018. https://www.engadget.com/2018/07/13/facebook-fake-news-half-hearted/

[16] S. Kwon, M. Cha, and K. Jung, "Rumor detection over varying time windows," PLoS ONE, vol. 12, no. 1, 2017.

[17] T. Joachims, "Text categorization with support vector machines: Learning with many relevant features," European Conference on Machine Learning, Springer, Berlin, Heidelberg, pp. 137-142, 1998.

[18] M. Brennan, S. Afroz, and R. Greenstadt, "Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity," ACM Transactions on Information and System Security (TISSEC), vol. 15, no. 3, article 12, 2012.

[19] D. Lazer et al., "The science of fake news," Science, vol. 359, no. 6380, pp. 1094-1096, 2018.

[20] D. Lazer, M. Baum, N. Grinberg, L. Friedland, K. Joseph, W. Hobbs, and C. Mattsson, "Combating fake news: An agenda for research and action," Conference Final Report, Northeastern University and Harvard University, 2017. https://shorensteincenter.org/wp-content/uploads/2017/05/Combating-Fake-News-Agenda-for-Research-1.pdf?x78124


ABOUT THE AUTHORS


Anne Cybenko is a Research Psychologist with the Air Force Research Laboratory in Dayton, Ohio. Her research interests include cognitive and cultural psychology, especially in man-machine teaming settings. She received a PhD in cognitive psychology from the University of California, Riverside. Contact her at anne.cybenko.1@us.af.mil.

George Cybenko (IEEE Fellow) is the Dorothy and Walter Gramm Professor of Engineering at Dartmouth College. His research interests include machine learning, cyber security, and information fusion. He received a PhD in mathematics from Princeton University. Contact him at gvc@dartmouth.edu.

