

A Digital World: Negative Repercussions of Misinformation on Social Media and Potential Policy Refinements

Akshaya Shyamsundar Rekha


English 138T: Rhetoric and Civic Life
April 8, 2024

Overview
Significance of the Problem
As substantial technological advances have been made in society, social media has come to occupy huge parts of citizens’ daily lives. Facebook has about 3 billion monthly users, TikTok has 1 billion, Twitter has roughly 350 million, and Reddit has roughly 250 million1. Added together, these figures rival roughly half the world’s population in magnitude (even allowing for people who use several platforms), showcasing how widespread social media’s reach is. Social media has shown what it means to globally unite or be on the same page about a certain topic, creating a shared global consciousness unlike ever before.
As social media has gained popularity, the need to obtain news specifically from dedicated news outlets such as CNN and the New York Times has decreased. “In 2019, more than 70 percent of American adults consumed news on social media, compared to fewer than one in eight Americans in 2008.2” The transformation of social media into a news platform has significantly reshaped the definition of journalism and blurred the lines between fact and fiction.

1. Pew Research Center. 2023. “Social Media and News Fact Sheet.” Pew Research Center. November 15, 2023. https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/
2. Levy, Ro’ee. 2021. “Social Media, News Consumption, and Polarization: Evidence from a Field Experiment.” American Economic Review 111 (3): 831–70. https://doi.org/10.1257/aer.20191777

Figure Showing that Regular News Consumption on Social Media Sites Is the New Norm3

A concern arises because advanced technologies have made it much easier to spread misinformation on social media. Tools such as artificial intelligence, deepfakes, and highly detailed editing software can cause false information to be interpreted as true. This is not just a hypothetical concern but a demonstrated harm. During the Covid-19 pandemic, medical misinformation ran rampant on social media. Medical misinformation is arguably the most dangerous form and represents the worst-case scenario of misinformation. “During COVID-19, falsifications of information led people to decline vaccines, reject public health measures, and use unproven treatments. Health misinformation has also led to harassment and violence against health workers.4” In certain cases, misinformation has even led to death, when people trusted erroneous medical advice and fell gravely ill without proper treatment. With human lives at stake, it is imperative to reduce or completely eliminate the problem of misinformation. Information on social media platforms must be regulated more strictly, and through multiple means. Social media platforms must enact stronger rules for users who spread misinformation, such as banning repeat offenders from the app. The question of free speech arises when the government is involved, so this topic must be treated carefully, with clear-cut limitations targeting only harmful misinformation.
3. Ibid.
4. U.S. Department of Health and Human Services. 2021. “Health Misinformation—Current Priorities of the U.S. Surgeon General.” HHS.gov. 2021. https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html.


Social Media As a Misinformation Hub


In certain cases, social media is guilty of perpetuating misinformation, or at the very least, rewarding users who habitually spread false information. Social media and its short-form content are built upon an ‘algorithm’ that prioritizes engagement with the app. In layman’s terms, if a certain video spreads, gains millions of views and likes, and keeps more people on the app, the platform will keep pushing that video, even if it spreads misinformation. Our social feeds are not geared toward accurate, high-quality information, as is the goal of accredited news sources. The goal of social media’s ‘for you page’ is “engagement, allowing outrageous stories and opinions to find a broad audience quickly.5” At the heart of these social media sites are the profits they stand to make. Misinformation rarely becomes a genuine concern for them unless it lands the platform in heavy legal trouble (which is rare, given the right of free speech in the United States) or severely decreases engagement. This further fuels the amplification of false content: social media has become a breeding ground for misinformation rather than a neutral platform on which all content can thrive.
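
To make this mechanism concrete, the sketch below shows, in simplified Python, how a ranking rule that scores only engagement keeps surfacing a viral post even after it has been flagged as false. The post fields, weights, and numbers are hypothetical illustrations, not a description of any real platform’s algorithm.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    views: int
    likes: int
    shares: int
    flagged_as_false: bool  # set by fact-checkers; ignored by this ranker

def engagement_score(post: Post) -> float:
    """Rank purely by engagement: views, likes, and shares count; accuracy does not."""
    return 1.0 * post.views + 5.0 * post.likes + 20.0 * post.shares

feed = [
    Post("Verified public-health update", views=10_000, likes=300, shares=50, flagged_as_false=False),
    Post("Outrageous miracle-cure claim", views=900_000, likes=40_000, shares=15_000, flagged_as_false=True),
]

# The flagged post still wins the top slot because only engagement is scored.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>12,.0f}  {post.text}")
```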

5. Review of How Misinformation on Social Media Has Changed News. 2023. US PIRG Education Fund. August 14, 2023. https://pirg.org/edfund/articles/misinformation-on-social-media/#:~:text=By%20nudging%20frequent%20users%20to,fake%20news%20circulating%20on%20Facebook.

Image Showing How Misinformation is Spread as Long as it Is Popular 6

Stakeholders
Because social media is so prevalent, it is not a far-off claim that every individual with access to technology and social media (billions of people) is a stakeholder in this misinformation epidemic. Especially with fatalities resulting from medical misinformation, lives are at stake. These individuals view and share media content; they can unintentionally spread false information or fall victim to deceptive information, which affects their beliefs, behaviors, and decisions. Social media companies are also essential stakeholders, and arguably the most powerful ones, as they host and disseminate content, making them responsible for the spread of misinformation on their platforms. These companies will also bear the most responsibility for regulating content on their private platforms, since national and local governments are sharply limited in how far they can restrict the right to free speech under the First Amendment.

Policy Deliberation
Existing Government Policies Addressing Misinformation
Because the rise of social media is relatively recent, clear-cut laws and regulations have not yet been established; those in the early stages of development may still be years away from approval. For now, existing policies can help cover parts of the social media misinformation problem, and expanding upon them is a sensible place to start.
Currently, Section 230 of the Communications Act of 1934 is highly relevant to curbing the effects of misinformation. This provision shields providers of interactive computer services from being held legally responsible for harmful information developed by others on their services7. Rather than shifting blame onto the hosting service, it leaves the individuals who actually create and spread misinformation accountable for the damage their speech causes. The act does not protect companies that create illegal or harmful content themselves, meaning platforms remain responsible for any misinformation they produce. This framework is useful because it keeps legal consequences pointed at those disseminating harmful speech, which can include misinformation. Nevertheless, because it shields small blogs, websites, major platforms, and individual users alike from most liability, only a limited subset of extreme cases of misinformation results in legal repercussions.

6. Menczer, Filippo. 2021. Review of Here’s Exactly How Social Media Algorithms Can Manipulate You. Big Think. Freethink Media. October 7, 2021. https://bigthink.com/the-present/social-media-algorithms-manipulate-you/
7. “Section 230: An Overview.” 2021. https://crsreports.congress.gov/product/pdf/R/R46751#:~:text=Section%20230%20of%20the%20Communications.

Limitations of the Government


The problem with regulating misinformation is that the government does not have sweeping power here. The First Amendment prevents federal agencies from exercising greater authority over content posted online8. Because everyone in the United States is entitled to free speech, any social media regulation can only be so imposing on users. Additionally, any regulation would need an extremely clear-cut definition of misinformation, and of the extent to which it is harmful, before it could restrict a user’s freedom of speech. Many such obstacles stand in the way of regulating through the government. Therefore, social media platforms themselves must take much-needed action against misinformation.

Potential Regulations
Existing Social Media Platform Rules
Currently, social media platforms have established methods of taking down false information. TikTok has ‘Community Guidelines’ that protect users from various kinds of harmful content, from harassment and bullying to misinformation. Videos that do not meet TikTok’s Community Guidelines are promptly taken down or marked with indicators warning that they might be spreading false information. Though these efforts by TikTok may seem sufficient, they are far from adequate. This multi-billion-dollar platform is under investigation by the European Commission “due to concerns that it exposes young users to inappropriate content9,” which suggests that TikTok is not effective enough at removing misinformation; with billions of hours of content posted daily, there is always room for improvement.

8. D’Virgilio, Allegra. 2022. “The US Government’s Role in Regulating Social Media Disinformation – Northeastern University Political Review.” Northeastern University Political Review. May 19, 2022. https://nupoliticalreview.org/2022/05/19/the-us-governments-role-in-regulating-social-media-disinformation/
9. “TikTok Shares Latest Data on Content Removals and Enforcement Actions.” n.d. Social Media Today. https://www.tiktok.com/transparency/en-us/combating-misinformation/

Figure Showing that Removal of Content Has Decreased Despite More TikTok Users10

Improvements Platforms can Implement

Social Media Reward Structure Changes


The social media reward structure works by rewarding the users whose videos get the most views, shares, and likes. At heart, a user is rewarded for any popularity a video generates, because the platform can monetize and profit from that video. If the reward structure were altered to incentivize accurate videos instead, the accuracy of posted content would increase substantially. Social media users crave the dopamine rush from likes and comments on their videos, so they post videos they hope will become popular, whether through a well-known song, dance, or trend. If users realized that accuracy were the new key to making their videos go viral, that is what they would post, and misinformation could decrease significantly. This potential is supported by research: a study by USC researchers found that incentives for accuracy rather than popularity doubled the amount of accurate news that users share on social platforms.11 Social media can therefore take a more active role in preventing misinformation. Overall, one tried-and-true strategy for building accurate sharing practices is to incentivize accuracy, which platforms have the power to do through their reward algorithms, as sketched below.

10. Ibid.
11. Madrid, Pamela. 2023. “Study Reveals Key Reason Why Fake News Spreads on Social Media.” USC Today. 2023. https://today.usc.edu/usc-study-reveals-the-key-reason-why-fake-news-spreads-on-social-media/.



Incentivizing accurate content sharing is an effective way to prevent the spread of false information, but the opposite lever can also be pulled: users can be discouraged from sharing false information if they are punished for it. Social media companies control their own platforms and can put consequences in place. These repercussions could range from a simple ‘slap on the wrist,’ such as a warning on the user’s account, to severe punishments such as banning the user’s account and any new accounts they create. The punishment could even stretch to legal action against the spreader of misinformation if their speech falls outside the protections of free speech in an extreme way. This way, the government does not need to get involved or worry about impeding freedom of speech.

User Feedback
Another useful tool is user feedback. Most social media platforms, including TikTok, Instagram, and Facebook, provide buttons to report content. Reports help narrow down which videos moderation algorithms and moderation staff should check, especially when content is reported specifically for misinformation. Additionally, a technological route can be taken: more algorithms and procedures can be developed using machine learning and artificial intelligence to scan through thousands upon thousands of pieces of content and check for falsifications, far more efficiently than a human worker could. Technological advances can thus be put to better purposes and improve the quality of information put into the world.
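
As a rough illustration of this machine-learning route, the sketch below trains a tiny text classifier with scikit-learn on invented, labeled example posts; a real moderation system would need vastly larger datasets, stronger models, and human review of borderline cases, but the basic flag-for-review pattern is the same.

```python
# A toy misinformation classifier: TF-IDF features + logistic regression (scikit-learn).
# The training examples and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "Vaccines were tested in large clinical trials",            # reliable
    "Drinking bleach cures the virus overnight",                 # false
    "Health officials recommend washing hands regularly",        # reliable
    "Secret miracle pill doctors don't want you to know about",  # false
]
labels = [0, 1, 0, 1]  # 1 = likely misinformation, 0 = likely reliable

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Flag new content above a probability threshold for human moderators to review.
new_post = "This miracle pill cures everything overnight"
prob_false = model.predict_proba([new_post])[0][1]
if prob_false > 0.5:
    print(f"Flag for review (p = {prob_false:.2f})")
```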

Government Related Policies


Federal Trade Commission
Amid a surge in social media fraud, the Federal Trade Commission has issued orders to eight social media and video streaming platforms requesting details on their practices for scrutinizing and controlling deceptive advertising. These measures aim to address fraudulent healthcare products, financial scams, and other schemes that not only spread misinformation but may also harm consumers12. The FTC will use this information to analyze how social media companies screen their advertising, whom they let through and authorize, and whether additional protections or regulations can be added.

Limiting Protections
Certain states have taken steps forward, for example by limiting the protections that Section 230 gives social media companies. The Texas Supreme Court ruled that in cases of sex trafficking and sexual violence, social media sites are not afforded those protections. Expanding the set of cases in which social media platforms are not protected is a strong step in the right direction. Clearly defining the specific harms of misinformation (among the many that exist) makes it possible to draw limits compatible with the First Amendment, and it gives social media companies more incentive to stop misinformation. When a platform loses some of its protections, it loses immunity against the legal trouble that misinformation-related harm can cause. No platform willingly lands in legal trouble or wants to be held accountable for its users’ actions, so companies will adopt stricter rules against users who spread disinformation.

12. “FTC Issues Orders to Social Media and Video Streaming Platforms Regarding Efforts to Address Surge in Advertising for Fraudulent Products and Scams.” 2023. Federal Trade Commission. March 16, 2023. https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-issues-orders-social-media-video-streaming-platforms-regarding-efforts-address-surge-advertising.


International Recommendations
The United States can also take recommendations from other countries. In Germany, the Network Enforcement Act (NetzDG) exists to combat hate speech and fake news on social media networks strictly and promptly. The Act obligates covered social media networks to remove content that is “clearly illegal” within 24 hours of receiving a user complaint. If the illegality of the content is not obvious on its face, the social network has seven days to investigate and delete it. A social media network may be fined up to 50 million euros (about US$59.2 million) for noncompliance13. Fifty million euros is a hefty fee, one that leaves social media companies rushing to remove falsified content immediately. Punishments on such a scale should be implemented in the US to hold companies in check, especially since a small handful of companies dominate the US social media market.
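
To show how mechanical the NetzDG timeline is, the sketch below encodes the two statutory deadlines as a simple helper; the function name and inputs are hypothetical, but the 24-hour and seven-day windows and the fine ceiling follow the description above.

```python
from datetime import datetime, timedelta

MAX_FINE_EUR = 50_000_000  # statutory ceiling for noncompliance under NetzDG

def removal_deadline(complaint_received: datetime, clearly_illegal: bool) -> datetime:
    """Return the date by which a covered network must remove the reported content:
    24 hours for clearly illegal content, otherwise 7 days to investigate."""
    window = timedelta(hours=24) if clearly_illegal else timedelta(days=7)
    return complaint_received + window

received = datetime(2024, 4, 1, 9, 0)
print(removal_deadline(received, clearly_illegal=True))   # 2024-04-02 09:00
print(removal_deadline(received, clearly_illegal=False))  # 2024-04-08 09:00
```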

Conclusion/Call to Action
In conclusion, the prevalence of misinformation on social media platforms presents a critical problem in today’s digital-dependent era. Because social media sites are substituted for news sources, medical misinformation runs rampant, along with other dangerous forms of false information that can result in death. While governmental policies like Section 230 of the Communications Act provide a partial framework, limits on regulatory power hinder comprehensive solutions. To combat this issue, a multi-pronged approach should be used, involving not only the government but also social media platforms and their internal regulations.
The government must recognize the dangers of misinformation and use the authority it does have to its advantage. Under the First Amendment, certain narrow categories of speech that cause direct harm or danger are not protected, allowing for somewhat greater government oversight of online speech. Moreover, the government should explore mandates inspired by international laws that place liability and legal repercussions on social media sites, prompting platforms to strengthen their rules for users in order to avoid legal trouble.

13. Gesley, Jenny. 2021. “Germany: Network Enforcement Act Amended to Better Fight Online Hate Speech.” Library of Congress. July 6, 2021. https://www.loc.gov/item/global-legal-monitor/2021-07-06/germany-network-enforcement-act-amended-to-better-fight-online-hate-speech/.

Simultaneously, social media sites must change their own reward structures. These changes must incentivize accurate content over merely popular content, naturally encouraging users to post accurate material. Furthermore, sites should strictly punish users for spreading false information and leverage technological advances for content moderation. Ultimately, only through collaborative effort can we mitigate the impact of misinformation and uphold the integrity of information dissemination on social media platforms.

Works Cited
D’Virgilio, Allegra. 2022. “The US Government’s Role in Regulating Social Media
Disinformation – Northeastern University Political Review.” Northeastern University
Political Review. May 19, 2022.
https://nupoliticalreview.org/2022/05/19/the-us-governments-role-in-regulating-social-me
dia-disinformation/

“FTC Issues Orders to Social Media and Video Streaming Platforms Regarding Efforts to
Address Surge in Advertising for Fraudulent Products and Scams.” 2023. Federal Trade
Commission. March 16, 2023.
https://www.ftc.gov/news-events/news/press-releases/2023/03/ftc-issues-orders-social-me
dia-video-streaming-platforms-regarding-efforts-address-surge-advertising.

Gesley, Jenny. 2021. “Germany: Network Enforcement Act Amended to Better Fight Online
Hate Speech.” Library of Congress, Washington, D.C. 20540 USA. July 6, 2021.
https://www.loc.gov/item/global-legal-monitor/2021-07-06/germany-network-enforceme
nt-act-amended-to-better-fight-online-hate-speech/.

Levy, Ro’ee. 2021. “Social Media, News Consumption, and Polarization: Evidence from a Field
Experiment.” American Economic Review 111 (3): 831–70.
https://doi.org/10.1257/aer.20191777

Madrid, Pamela. 2023. “Study Reveals Key Reason Why Fake News Spreads on Social Media.”
USC Today. 2023.
https://today.usc.edu/usc-study-reveals-the-key-reason-why-fake-news-spreads-on-social-
media/.

Menczer, Filippo. 2021. Review of Here’s Exactly How Social Media Algorithms Can
Manipulate You. Big Think. Freethink Media. October 7, 2021.
https://bigthink.com/the-present/social-media-algorithms-manipulate-you/

Pew Research Center. 2023. “Social Media and News Fact Sheet.” Pew Research Center.
November 15, 2023.
https://www.pewresearch.org/journalism/fact-sheet/social-media-and-news-fact-sheet/

Review of How Misinformation on Social Media Has Changed News. 2023. US PIRG. US PIRG
Education Fund. August 14, 2023.
https://pirg.org/edfund/articles/misinformation-on-social-media/#:~:text=By%20nudging
%20frequent%20users%20to,fake%20news%20circulating%20on%20Facebook

“Section 230: An Overview.” 2021.


https://crsreports.congress.gov/product/pdf/R/R46751#:~:text=Section%20230%20of%2
0the%20Communications.

“TikTok Shares Latest Data on Content Removals and Enforcement Actions.” n.d. Social Media
Today. https://www.tiktok.com/transparency/en-us/combating-misinformation/

U.S. Department of Health and Human Services. 2021. “Health Misinformation—Current
Priorities of the U.S. Surgeon General.” HHS.gov. 2021.
https://www.hhs.gov/surgeongeneral/priorities/health-misinformation/index.html.
