BCA510 Honors Paper 1
Grant Polmanteer
BCA510
Table of Contents
Introduction
Video - YouTube
Livestreaming - Twitch
References
Introduction
Media law governs the content piped through underground lines and over-the-air signals,
but one paramount method of communication is not regulated to the same degree. The Internet
and the content platforms that have grown into global superpowers on the World Wide Web are
the forefront drivers of mass communication for billions of humans across the world. In
America, content platforms like Twitter, Facebook, YouTube, and Twitch are all ways in which
people receive entertainment, knowledge, and anything under the sun. However, these platforms are not regulated in the way a television broadcast or radio show is.
At one point, there was a proposal by former FCC (Federal Communications Commission)
chairman Tom Wheeler for the Internet to be considered a public utility. Gas, electricity,
telephone service – these things are all considered public utilities, and therefore fall under
heavier governmental scrutiny and regulation. As of 2019, Internet service is not classified as a public utility by the US government, and it therefore operates under a much different structure than its content-delivery rivals in TV and radio. With this in mind, the content platforms dominating cyberspace have slowly crafted their own legality surrounding the content on their platforms. With hefty Terms of Service agreements, privacy policies, and content moderation (or lack thereof) laden throughout the platforms, there has largely been no single, carefully enacted universal policy for content platforms on the Internet. While Safe Harbor laws and First Amendment protections can interact with content on the Internet, it is often at the discretion of the platforms themselves what happens to the content they host.
Twitter has become a hub of short-form communication, and infamously has a shaky moderation policy. On the platform, Twitter users can reply to the President of the United States with an opinion, interact with a scientist talking about climate change, and call for mass genocide, all within a swift five minutes. Twitter has largely
contained all of its content and security rules and actions via its Terms of Service1 and its
Privacy Policy2. According to Terms of Service; Didn’t Read3 (a non-profit organization that
analyzes online platforms’ Terms of Service policies), Twitter’s Terms of Service are broad-
reaching and not user-friendly. Twitter, as a platform, can remove any piece of content for any
reason without notification. The company can also sell off personal data in the event of
bankruptcy and can sell users’ personal data to third parties. These overarching powers, which users consent to at sign-up, allow Twitter to act broadly rather than limit itself to particular circumstances. This often leads to the platform deciding (as a private business) what content is allowed on its service. Different cases of content moderation, and of what kind of content is permissible on the microblogging service, have arisen, especially in recent years.
An overwhelming part of Twitter revolves around the political sphere. For the purposes of this paper, the discussion of Twitter’s politics will stay within the realm of US politics. Out of fractured political leanings have risen extreme left-wing and right-wing propagandists who often thrive on Twitter’s rough “ask for forgiveness, not for permission” content policies. Right-wing propagandists, often referred to as the “alt-right,” are pervasive across Twitter. JM Berger, in
1. https://twitter.com/en/tos
2. https://twitter.com/en/privacy
3. https://tosdr.org/about.html
The Atlantic, details how the “alt-right” work on Twitter after taking a data sample of roughly
30,000 right-leaning Twitter accounts and analyzing the dataset: “The alt-right bloc synchronizes
activity that starts on the far-right edge of mainstream conservatism and continues through the
far reaches of genocidal white supremacy. There are common goals threaded through its various
factions, including undermining the purveyors of real information about the world with a barrage [...] groups, and, most visibly, providing political support to Trump,” (Berger). The alt-right often blends truth and opinion into a malformed hybrid that relies on truth-bending and outright lying. Berger details the falsehoods that the alt-right
methodology builds upon, “Accounts for prominent conspiracy websites and their associated
personalities ranked among the top influencers. QAnon, a far-right conspiracy theory, was the
third-most-tweeted hashtag in the data set, although this ranking was exaggerated by coordinated [...] people with sometimes very divergent views,” (Berger). Taking a look at Twitter’s Rules and Policies4, one finds many policies outlining the platform’s stances on false information, abusive
behavior, and other strategies that are employed by extremist Twitter users. Jason Koebler and
Joseph Cox of Vice relay this overarching point into a summary that explains how content
moderation on Twitter largely works, “Rather than act decisively by banning certain types of
behavior and allowing others, Twitter's policy and engineering teams sometimes de-emphasize
content and allow users to hide content that may be offensive but not explicitly against the
platform's terms of service. In doing so, Twitter says it gives more freedom to users, while critics
argue it places more burden on users and more trust in software solutions (or in some cases,
4. https://help.twitter.com/en/rules-and-policies#general-policies
band-aids) to police hateful or otherwise violating content on the site,” (Koebler and Cox). With
the tools of blocking, muting, and reporting at hand, Twitter content moderation is largely left to the user to complete. This type of moderation has muddied the waters on what is allowed, and it often allows repeat offenders to continue promoting their content, as Twitter stays true to its vision of a “public space, where even highly offensive voices are allowed to be heard,” (Koebler and Cox).
The President of the United States has recently created a situation in which many of Twitter’s users question whether the President can infringe upon the platform’s Terms without consequence. Twitter even went so far as to publish a policy on world leaders ahead of future elections5. Critics have largely called for the President’s removal from his largest platform, his Twitter account, for multiple infringements: spreading dangerous false
information and skating upon abusive behavior via tweets. In response, Twitter has largely implied that world leaders will only be actioned against in narrow circumstances: if “...a Tweet from a world leader does violate the Twitter Rules but there is a clear public interest value to keeping the Tweet on the service, we may place it behind a notice that provides context about the violation and allows people to click through should they wish to see the content.” A protection like this can be used in many cases and offers a way for Twitter to justify leaving the President’s tweets on the platform.
Furthermore, to stray away from political interference, CEO Jack Dorsey announced that Twitter would ban political ads outright globally6, an unprecedented move for the company.
5. https://blog.twitter.com/en_us/topics/company/2019/worldleaders2019.html
6. https://twitter.com/jack/status/1189634360472829952
All of these actions by Twitter and its community emphasize how an Internet content platform operates in 2019. Most of the policy is justified in thought, but it consistently seems to fail in practice. Koebler and Cox interviewed Becca Lewis, a researcher of
white nationalism on Twitter, in which she said “Twitter’s responses, even those that move
beyond a binary approach, show how they are actually playing an active role in the type of
content that appears and surfaces on their platform...and hiding content instead of removing it
can lead to unintended consequences. Among other issues, it can generate a conspiratorial
mindset among content creators who feel that their content is being suppressed but cannot always
prove it. In short, it shows a lack of transparency that breeds distrust on the platform while still
failing to grapple with the root issues at work,” (Koebler and Cox). This cluttered series of actions and unclear principles has led Twitter into an increasingly fraught atmosphere as it enters another presidential election cycle that will shape the future of the company as it knows it.
Facebook is the unwavering face of social media and all it stands for. The company, with net profits in the billions, relies on a massive user base to create a revenue model via advertising. That model has proved successful to the continued tune of billions of dollars in revenue, and that scale has left Facebook with an enormous amount of user content to moderate.
As many content platforms look to moderate the content that lives on their platform, they
often rely on outside consulting, especially with the company Cognizant. Casey Newton, a reporter for The Verge, outlines in stark detail how the moderators hired to sort right from wrong on Facebook have endured brutal working conditions and even more brutal work itself. Newton pens, “Collectively, the employees described a workplace that is perpetually
teetering on the brink of chaos. It is an environment where workers cope by telling dark jokes
about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place
where employees can be fired for making just a few errors a week — and where those who
remain live in fear of the former colleagues who return seeking vengeance...it’s a place where, in
stark contrast to the perks lavished on Facebook employees, team leaders micromanage content
moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush
amid the misery, have been found having sex inside stairwells and a room reserved for lactating
mothers; where people develop severe anxiety while still in training, and continue to struggle
with trauma symptoms long after they leave; and where the counseling that Cognizant offers
them ends the moment they quit — or are simply let go,” (Newton). Facebook’s content
moderation revolves around a manual review process by humans, and as a result the content those reviewers see is often disturbing and PTSD-inducing. This handling of media differs from Twitter’s hybrid approach of artificial-intelligence flagging,
human content moderation, and user reporting. Queenie Wong, of CNET, goes on to name five different companies that have consulted with Facebook to offer moderation services: the aforementioned Cognizant, PRO Unlimited, Accenture, Arvato, and Genpact (Wong). Each contracted firm has been reported to underpay staff, maximize productivity with few breaks for moderators, and ultimately create mentally unhealthy workplace conditions.
Facebook also had to make tough content decisions when its newly-launched Facebook
Live feature had been used for atrocities. In 2017, according to Samuel Gibbs of The Guardian,
Facebook had to hire nearly 3,000 outside content moderators to monitor their livestreaming
service alone. The climate in 2017 had bubbled to scalding conditions: “...footage of
shootings, murders, rapes and assaults has been streamed on Facebook. The live broadcasts have
then been viewable as recorded videos by the social network’s users, often for days before being
taken down,” (Gibbs). With more content moderators viewing this type of content appearing on
Facebook Live, even more horror stories likely occurred as a direct result.
Facebook continues to deal with a multitude of complex issues surrounding the content
that lives on its services, and it will have to continue monitoring that content in great detail as the platform grows.
YouTube
YouTube allows users to upload user-generated videos to their own channels. YouTube creators can often garner massive success if the right amount of virality, community, and ad-friendliness exists. One of the platform’s most persistent content problems, however, is copyright infringement7. With the ability for any user to upload any type of video, it’s bound to
cause a massive headache for YouTube with nearly 500 hours being uploaded to the service
every minute (a statistic that has likely risen since it was reported in early 2019). Tom McKay of
Gizmodo explains YouTube’s bevy of complexities and issues in a sentence or two: “The number
of issues plaguing YouTube at any one time boggles the mind, and range from accusations it
promotes extremist content to reports its nightmare algorithm recommended home videos of
children to the pedophiles infesting its comments sections. One of the less overtly alarming but
still widespread issues has been the shoddy state of its copyright infringement claims system,
which report after report have repeatedly indicated is trivially abused to file false claims, extort
creators, and generally make YouTubers’ lives hell,” (McKay). YouTube’s Terms of Service
were also examined by Terms of Service; Didn’t Read8, and the results largely indicated a lot of power, predictably, being left in YouTube’s hands. Most notable is the ability for any content to be taken down on suspicion of copyright infringement. The video-sharing service can also remove any content it deems infringing on its Terms of Service, and it can retain content that was deleted by a creator or by YouTube itself. These terms ultimately signify a complex mechanism protecting the company from potential lawsuits and governmental action. Susan Wojcicki, YouTube’s CEO, identified handling copyright claims as one of the company’s biggest focuses in 2019, including making it
7. https://support.google.com/youtube/answer/6005900?hl=en
8. https://tosdr.org/#youtube
more creator-friendly9. One key change Wojcicki identified involved making the strike system clearer and more consistent: “Since so many creators have told us that the
community guidelines strike system felt inconsistent and confusing, we updated our policies to a
simpler and more transparent system. Every creator now gets a one-time warning that allows
them to learn about our policies before they face penalties on their channel. Each strike, no
matter if it comes from the videos, thumbnails, or links, gets the same penalty. On top of adding
new mobile and in-product notifications about a strike, our email and desktop notifications will
provide more details on which policy was violated.” Especially as YouTube uses products of its
own like YouTube Music to diversify its business strategy and content offerings, monitoring the
content on its platform for infringement on copyright grounds will be as important as ever.
YouTube has also dealt with severe, public outcries over what constitutes a strike or ban
based on their Terms of Service. In the summer of 2019, Vox creator Carlos Maza crafted a minute-long supercut of conservative YouTuber Steven Crowder’s remarks about him; in the clips, Crowder disparages Maza based on his gender identity, physical demeanor, and political beliefs. It brought up a few key arguments: 1) can a compilation of
violating content be viewed the same as the content spread normally over many pieces of content
(as Crowder’s was)?; 2) what crosses the line of protected free speech to hate speech in online
content?; and 3) had Maza not gotten the public outcry behind him about this issue, would
YouTube had even heard? Benjamin Goggin of Business Insider notes that Maza claimed Crowder infringed three separate points in YouTube’s community guidelines: “Specifically, Maza pointed to the following types of content that YouTube discourages in its harassment policy: 1.
Content that is deliberately posted in order to humiliate someone, 2. Content that makes hurtful
9. https://youtube-creators.googleblog.com/2019/04/addressing-creator-feedback-and-update.html
and negative personal comments/videos about another person, and 3. Content that incites others
to harass or threaten individuals on or off YouTube,” (Goggin). After reviewing the video and
content internally, YouTube identified that Crowder had not infringed against its community
guidelines, outlining their argument in a tweeted reply10: “Our teams spent the last few days
conducting an in-depth review of the videos flagged to us, and while we found language that was
clearly hurtful, the videos as posted don't violate our policies. We've included more info below to
explain this decision. As an open platform, it's crucial for us to allow everyone–from creators to
journalists to late-night TV hosts–to express their opinions w/in the scope of our policies.
Opinions can be deeply offensive, but if they don't violate our policies, they'll remain on our site.
Even if a video remains on our site, it doesn't mean we endorse/support that viewpoint. There are
other aspects of the channel that we're still evaluating–we'll be in touch with any further
updates." A day after YouTube’s findings, Goggin wrote that YouTube had actually reversed course slightly and decided to demonetize (remove advertising from) Crowder’s videos11. This left both sides, Maza and Crowder, largely unresolved in their feelings, and it further muddied YouTube’s guideline enforcement, as well as how online platforms overall treat their users. The line between free speech and hate speech was blurred in this case, and in future cases YouTube will have to continue into unprecedented territory, making decisions that affect the future of the company.
10. https://twitter.com/TeamYouTube/status/1136055311486210048
11. https://youtube-creators.googleblog.com/2018/02/preventing-harm-to-broader-youtube.html
Twitch
Twitch is a livestreaming service centered largely on video games. The company was purchased by Amazon in 2014 and has since turned into a multi-billion-dollar business that has received its fair share of content moderation cases.
Even as the Facebook Live debacle ensued, Twitch was constantly dealing with public cases on a monthly cadence, as livestreamers continued to push the limits of Twitch’s moderation. A recent case involved the streamer “Alinity,” an oft-controversial broadcaster known for pushing the boundaries of visual appearance for her Twitch viewers and for making outrageous, hyperbolic statements to fellow Twitch streamers. In the summer of 2019, a
debate ensued over how Twitch should handle Alinity, after a July livestream captured an
angered Alinity throwing her cat from her computer desk to the back of her room. On top of this
single incident, viewers also dredged up videos of Alinity mouth-feeding vodka to the same cat, as well as kicking her dog in a separate video12. Viewers and other streamers alike called for action from Twitch’s moderation team, which never took action. This also created an extremely dangerous situation for Alinity herself, as outraged viewers revealed her personal address, defamed her in numerous ways on social media, and reported her to animal abuse organizations. Situations like Alinity’s continue to happen among Twitch’s most-viewed streamers.
Popular Twitch streamer “Ninja” (who created a huge moment by switching from Twitch to the newly created streaming service Mixer) was angered in the summer of 2019 as well. Tyler Blevins, Ninja’s real name, met Twitch with outrage after Twitch’s in-platform recommendations reportedly surfaced an explicit stream on his now-dormant
12. https://twitter.com/AlinityTwitch/status/1152303851929833472
channel. This raised further questions for Twitch regarding how a stream such as this was made popular, and how it stayed on the platform long enough to land a recommendation on one of its most popular channels.
Worse yet is Twitch’s movement into an all-“IRL” section, meaning in-real-life streams
of real people interacting in the real world. As Facebook Live’s conundrum indicated,
livestreaming civilian life has proven its troubles. Notorious streamer Ice_Poseidon, whose real
name is Paul Denino, paved the way for questionable content in this section on Twitch. Julia Alexander of gaming publication Polygon writes that Denino was the first true test for Twitch in this realm, as the streamer gained notoriety for boundary-pushing interactions and for cultivating a community notorious for “swatting” (in which a stream viewer anonymously calls the local police to report an intense violent crime at the streamer’s location, so that a SWAT team descends on the streamer’s premises). Repeated instances
of situations like this led to Denino’s banning from the platform outright, even though many said
that he hadn’t actually broken Twitch’s terms of service. “Ice’s ban sparked one of the biggest
complaints Twitch members in and outside of the IRL community have sent to the company
since the section was launched: The rules aren’t clear enough. In Ice’s follow up video, the
streamer noted that Twitch doesn’t outline what’s really against its terms of service, arguing that
the rules are too vague for specific cases, like swatting,” (Alexander).
Twitch will continue veering into cases unlike any faced by its peers at Twitter and YouTube, as livestreaming continues to craft unusual scenarios involving copyright infringement, terms of service, and moderation.
What does the future of content moderation look like on the Internet? It is not clear. As
evidenced by each of the aforementioned platforms, their actions on the content living in their
platforms are varied. It seems to still be the early days of content moderation, where the most effective ways of keeping prohibited material off the Internet are still being discovered. However, this also raises the question of free speech and how companies built on user-generated content should handle that content. The argument some have taken is that because these companies are largely based in the United States and are often used like a public utility, free speech protections should apply. On the other side, the argument is that each company is its own private business responsible for its own actions, and every single user has technically signed the user agreements they are presented with when creating an account on each platform.
It is my opinion that the current system will likely stay the way it is in the next decade.
While the surge of TikTok, a Chinese-owned platform popular in the United States, might affect the government’s consideration of content platforms’ power and regulation, I believe that companies like Twitter, Facebook, and YouTube will be permitted to continue as they have, dealing on a case-by-case basis with the public cases they face and attempting to head off the next possible public case through moderation methods. The GDPR, a European Union law, will force companies to continue working towards better practices surrounding data, privacy, and content, and it is likely that further legislation against content platforms on the Internet will come from Europe before the United States. Seeing as each platform has already dealt with issues under the attention of millions, and even tens of millions, the only true change would likely come from the United States government changing how it approaches the Internet as a whole.
References
Alexander, Julia. “Twitch's Contentious IRL Section Sparked the Platform's Biggest Debate.” Polygon, https://www.polygon.com/2018/1/3/16845362/twitch-irl-iceposeidon-trainwrecks-female-streamers.

Berger, J.M. “Trump Is the Glue That Binds the Far Right.” The Atlantic, Atlantic Media, right-twitter/574219/.

Downes, Larry. “On Internet Regulation, The FCC Goes Back To The Future.” Forbes, https://www.forbes.com/sites/larrydownes/2018/03/12/the-fcc-goes-back-to-the-future/#48ea17345b2e.

Gibbs, Samuel. “Facebook Live: Zuckerberg Adds 3,000 Moderators in Wake of Murders.” The Guardian, https://www.theguardian.com/technology/2017/may/03/facebook-live-zuckerberg-adds-3000-moderators-murders.

Goggin, Benjamin. “YouTube's Week from Hell: How the Debate over Free Speech Online...” Business Insider, https://www.businessinsider.com/steven-crowder-youtube-speech-carlos-maza-explained-youtube-2019-6.

Koebler, Jason, and Joseph Cox. “How Twitter Sees Itself.” Vice, 7 Oct. 2019, https://www.vice.com/en_us/article/a35nbj/twitter-content-moderation.

McKay, Tom. “YouTube Announces Some Changes to Its Infamously Awful Copyright...” Gizmodo, announces-some-changes-to-its-infamously-screwe-1836233860.

Newton, Casey. “The Secret Lives of Facebook Moderators in America.” The Verge, content-moderator-interviews-trauma-working-conditions-arizona.

Wong, Queenie. “Murders and Suicides: Here's Who Keeps Them off Your Facebook Feed.” CNET, ugly-business-heres-who-does-it/.