
CONTENT MODERATION:
The Future is Bionic

Is that email offer from a big brand name really a phishing scam? When is a seemingly benign social media post really cyberbullying? In the absence of context, content moderation can be a shot in the dark.

Since the birth of the Web, but especially since the rise of social media, content moderation has become one of the largest and most secretive operational functions in the industry. An army of moderators around the world—in Manila and Bangalore but also in the U.S., Canada and Europe—screens out violence, pornography, hate speech and mountains of other illicit, illegal or inappropriate content. Estimates vary widely, but most indicate that more than 100,000 people are moderating content globally. The work is difficult to stomach. Attrition is high.

Artificial intelligence offers the promise, and the threat, of taking over much of this work. At some social platforms, AI now catches more inappropriate images than people do. Others are working on AI that can understand what is happening in a live streaming video. Others still are working to identify the kind of harassment and bullying that requires contextual understanding rather than just spotting an inappropriate word or image. There is no question that the way companies monitor digital content and remove offensive material is on the brink of change.

Contrary to the headlines predicting that AI will supplant all human moderation, Accenture believes that the future is built on a synergistic relationship between human moderators and AI. This "bionic" model will change both the AI used and the people required. No longer will there be a need for thousands of low-skilled positions to do content review. Instead, a new role will emerge, augmented by artificial intelligence (AI).

[Figure: the bionic moderation pipeline. Incoming content of all types (text, pictures, websites, video, audio, apps and much more) passes through an AI pre-screen; the data is split by queue for human review, with a sample routed to manual review, and user flags and reviewer decisions feed training data back to the AI before content is posted.]

Content is returned securely with approximate action and category identification, and can often be resolved automatically. Manual reviews are also completed for flagged content.
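The flow in the figure can be sketched in a few lines of Python. This is a minimal illustration of the pre-screen/queue/feedback loop only; every class name, threshold and category below is an invented assumption, not any platform's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    risk: float    # 0.0 (benign) .. 1.0 (certainly bad), from the AI pre-screen
    category: str  # e.g. "abuse" -- illustrative label only

@dataclass
class Pipeline:
    training_data: list = field(default_factory=list)  # feedback loop to retrain the AI
    queues: dict = field(default_factory=dict)          # human-review queues, split by category

    def ai_prescreen(self, item: str) -> Verdict:
        # Stand-in for a real model: flag items containing a known bad marker.
        bad = "badword" in item
        return Verdict(risk=0.9 if bad else 0.1, category="abuse" if bad else "none")

    def submit(self, item: str, user_flagged: bool = False) -> str:
        v = self.ai_prescreen(item)
        if v.risk < 0.2 and not user_flagged:
            return "posted"  # low risk: resolved automatically, no human review
        # Split by queue: route risky or user-flagged content to the right reviewers.
        self.queues.setdefault(v.category, []).append(item)
        decision = self.human_review(item)
        # Reviewer decisions become training data for the next pre-screen model.
        self.training_data.append((item, decision))
        return decision

    def human_review(self, item: str) -> str:
        # Stand-in for a human moderator's judgment.
        return "removed" if "badword" in item else "posted"
```

In this sketch, only content the AI cannot clear on its own reaches a human queue, which is the volume reduction the paper describes.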

Highly skilled content forensic investigators will garner senior positions in Internet-based companies and enjoy lucrative career paths. This transformation will help rapidly growing digital platform providers scale content moderation at an acceptable risk tolerance and at an affordable cost.

THE CONTENT MODERATION IMPERATIVE

Every industry is now digital. Retailers are growing their online shopping channels, health care companies are managing electronic health records, and makers of products from technology to industrial equipment and consumer goods are promoting their wares on the web and through digital advertising. Digital platforms are growing exponentially as consumers and companies self-publish content in social media channels at astonishing rates. The increasingly heterogeneous mix of content adds complexity to the review process, while the pressure to quickly recognize and respond to unacceptable content intensifies.

These issues are straining the screening protocols and processes of both content producers and platforms, and the associated risks are rising.

Internet companies have a responsibility to adhere to diverse global regulations and to protect consumers from fraud, spam, phishing, malware, malicious networks and abusive content. A failure by the producer or platform to catch inappropriate content can result in financial risk, brand degradation, loss of consumer trust and personal risk for viewers or recipients. Upcoming regulations, such as the EU's General Data Protection Regulation, will heighten these challenges, placing even greater responsibility on Internet companies to take actions that protect individual privacy. Current content moderation methods are unsustainable. Something has to change.

16.4 billion display ads served in the United States DAILY

Sources: comScore and Accenture analysis.

A SYNERGISTIC ARTIFICIAL INTELLIGENCE RELATIONSHIP

Many Internet companies are already implementing tools, such as image recognition, that help with content moderation by identifying objects within images. Weighing factors such as user experience and risk, these tools determine whether images should be human-reviewed. This process is eliminating large volumes of content from the investigator's queue. Investigators look only at content flagged by AI and make a publishing decision. However, while that decision feeds back into the algorithm, the investigator's reasons for the decision do not, making the current process very linear and literal. More dynamic algorithms are needed to keep pace with evolving bad-actor tactics.

A next generation of AI tools will be able to identify and score a much larger range of attributes than just objects within the content itself. The content source (such as various attributes of the publisher) and the context (such as geography, time of day, or relevant social, political or market events) both carry a relative risk that the content is illicit, illegal or inappropriate. In the not-too-distant future, algorithms will incorporate a huge multivariate set of content and context attributes. Based on attribute scoring, a relative risk score will be calculated that determines whether something should be posted immediately, posted but still reviewed, reviewed before posting, or not posted at all. Attributes will be tracked over time, and the feedback loop that tracks bad-actor activity will become more accurate and nearly instantaneous.

Yet even as AI becomes hyper-accurate and more contextual at assessing content, the need for investigators will remain. Investigators bring the subject-matter knowledge to make decisions that lie in the complex gray areas. They bring empathy and contextual understanding that are important to content assessment. For example, they can view content from the subjective perspective of the content creator. They can also bring an understanding of cultural context to assess how content will be perceived by people in different demographics and geographies. But unlike today, this new investigator will be highly trained and laser-focused on investigating complex, difficult content forensics.

Armed with dynamic algorithms, advanced analytics and an amalgamation of data (such as historical activity, account profiles, and the volumes of personal and behavioral data resident in most platform companies), these forensic investigators can develop a deep understanding of "bad actor" personas and begin to detect, and even predict, bad behavior at the source. For example, as AI becomes more sophisticated at detecting fraudulent content, fraudsters become more sophisticated at evading detection. Once AI can provide the investigator with intelligence on the content producer, rather than merely flagging the content itself, the investigator can evaluate the producer (for example, validating and risk-scoring the creator of an app, digital ad, email or other content) and take action (such as banning the profile) that protects against the fraud before it occurs. When the AI and the forensic investigator work together, investigators can attack the "root cause" of all kinds of problems before they occur, dramatically reducing the need to moderate digital content at all.

WHAT THE FUTURE HOLDS

Efficient and effective content moderation at scale will be defined by a more synergistic relationship between humans and AI. AI will focus on its strength: evaluating massive amounts of data across multiple dimensions in near-real time. Humans will focus on their strength: reading between the lines and understanding the cultural context around the content. Today's content adjudicators will be supplanted by investigators with analytical thinking and techniques akin to those of an actuary or financial crime investigator. This evolving role will require investigators to develop specialized skills in product, market, legal and regulatory domains. It will also require native-level cultural understanding and training in forensics and psychology. The ability to attract and retain talent in these functions will become a critical source of competitive advantage. So will access to the latest AI technologies. We predict that content investigation will become a respected and rewarding career path, considered a strategic role in the organization.

AI will indeed turn content moderation on its head—not by eliminating the role but by turbocharging it. Certainly the total number of people working in content moderation can be reduced, but only if Internet companies stay at the forefront of AI technology. Companies providing content moderation services will need to maintain an investment in predictive analytics, workflow AI and more so that, as the content moderation game gets more complicated, they can keep up. Only then will Internet companies truly solve the challenge of content moderation at scale.
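The attribute-scoring idea described in this paper, in which a multivariate risk score decides whether content is posted immediately, posted but reviewed, reviewed before posting, or not posted at all, can be sketched as follows. Every weight, threshold and attribute name here is an illustrative assumption, not a real platform's model.

```python
# Illustrative sketch of multivariate risk scoring for content moderation.
# All attributes, weights and thresholds are invented for demonstration.

def risk_score(attributes: dict) -> float:
    """Combine content, source and context attributes into one 0..1 risk score."""
    weights = {
        "content_flag_score": 0.5,  # the AI's score for the content itself
        "publisher_risk":     0.3,  # history and reputation of the source
        "context_risk":       0.2,  # geography, timing, surrounding events
    }
    return sum(weights[k] * attributes.get(k, 0.0) for k in weights)

def decide(score: float) -> str:
    """Map a relative risk score to one of the four publishing actions."""
    if score < 0.2:
        return "post immediately"
    if score < 0.5:
        return "post, but review"
    if score < 0.8:
        return "review before posting"
    return "do not post"
```

A uniformly low-risk item (all attributes near 0.1) scores about 0.1 and is posted immediately, while a uniformly high-risk item (all near 0.9) scores about 0.9 and is blocked; the feedback loop the paper describes would adjust these weights as reviewer decisions accumulate.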

TO LEARN MORE ABOUT HOW TO COST-EFFECTIVELY SCALE YOUR CONTENT MODERATION CAPABILITY, CONTACT:

Kevin J. Collins
Managing Director
kevin.j.collins@accenture.com
Mobile: 1.650.303.4633

Saurabh Mohanty
Managing Director
saurabh.mohanty@accenture.com
Mobile: 1.408.857.1304

ABOUT ACCENTURE

Accenture is a leading global professional services company, providing a broad range of services and solutions in strategy, consulting, digital, technology and operations. Combining unmatched experience and specialized skills across more than 40 industries and all business functions—underpinned by the world's largest delivery network—Accenture works at the intersection of business and technology to help clients improve their performance and create sustainable value for their stakeholders. With approximately 401,000 people serving clients in more than 120 countries, Accenture drives innovation to improve the way the world works and lives. Visit us at www.accenture.com.

Copyright © 2017 Accenture. All rights reserved.

Accenture, its logo, and High Performance Delivered are trademarks of Accenture.

This document makes descriptive reference to trademarks that may be owned by others. The use of such trademarks herein is not an assertion of ownership of such trademarks by Accenture and is not intended to represent or imply the existence of an association between Accenture and the lawful owners of such trademarks.
