Guzman III 1

Francisco Guzman III

ENGL 1302 – 207

Ana Mendoza

9 March 2024

AI, Social Media, and Their Influence on People

Since the World Wide Web was released to the public in 1993, the internet has given people across the world access to a nearly limitless library of information as well as the ability to share information worldwide. This is especially true in the modern age; however, the rise of major platforms such as Facebook, Google, and Twitter has radically changed the way in which we receive our information through the use of recommender algorithms, which gather information on users at a personal level in order to show them ads or content that the algorithm believes matches their interests. Although this may seem harmless at face value, these recommender algorithms can pose the risk of violating users' data privacy and may also lead to the formation of dangerous echo chambers in online communities. In addition, AI chatbots, a fairly recent technology, have also begun to change the internet landscape for better or for worse. Both of these technologies have the potential to shape the world's beliefs as they come to dominate the internet, influencing what people see and believe.

According to De and Imine, Facebook, one of the most popular social media sites used today, collects data from users in a variety of ways, such as a user's likes, posts, and watch time on certain videos, and feeds it into a built-in recommender algorithm that uses this data to build a profile of the user, which Facebook can then use to better tailor ads to that user. While on the surface this may seem non-invasive of the user's privacy (in theory, users can simply opt out of this data collection through the app's settings), Facebook uses third-party apps that can bypass the restrictions a user places on data collection to gather sensitive information about that user, such as medical records, criminal records, and even the geolocation logins on their devices. De and Imine argue in their article that this is a violation of GDPR regulations, as the user has no control over the data shared by this third-party software. While privacy rights are certainly a major concern in the debate over recommender algorithms, the concern that Bojić, Bulatović, and Žikić raise in their paper is how these algorithms can lead to the formation of bias and, worse still, echo chambers in our modern social media climate. They argue that popular social media sites can perpetuate misinformation and bias through recommender algorithms that ensure that if a user holds a certain political viewpoint, they will be shown more content relating to that political belief. The content shown can range from verifiable information to disinformation and even misinformation, so long as it fits the user's personalized recommendations and in turn generates more viewing and usage time in the app or website. Both of these concerns, that recommender systems can violate user privacy and that they can perpetuate echo chambers in the online ecosystem, are valid in their own right, especially in the case of Facebook.
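The profile-building process described above can be pictured with a small, purely illustrative sketch (this is not Facebook's actual system; the topics, weights, and functions are all invented for illustration): engagement signals such as likes and watch time are tallied into a per-user profile, and new posts are then ranked by how well they match what the profile already favors.

```python
# Illustrative sketch of a content-based recommender (hypothetical,
# not any real platform's algorithm). Engagement signals build a
# per-user "profile"; posts are then ranked against that profile.
from collections import Counter

def build_profile(interactions):
    """Weight each topic by how strongly the user engaged with it."""
    profile = Counter()
    for topic, watch_seconds, liked in interactions:
        # Assumed weighting: watch time plus a flat bonus for a like.
        profile[topic] += watch_seconds + (30 if liked else 0)
    return profile

def rank_posts(profile, posts):
    """Show the user more of whatever their profile already favors."""
    return sorted(posts, key=lambda p: profile[p["topic"]], reverse=True)

interactions = [("politics", 120, True), ("cooking", 15, False),
                ("politics", 90, True)]
posts = [{"id": 1, "topic": "cooking"}, {"id": 2, "topic": "politics"}]
feed = rank_posts(build_profile(interactions), posts)
print([p["id"] for p in feed])  # the politics post ranks first
```

The feedback loop the essay warns about is visible even here: the more a user watches one topic, the higher that topic ranks, which produces more watching of it.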

While Facebook is a prime example of the dangers that may come from the unregulated collection of user data, such as the spread of misinformation and the creation of echo chambers, research by Mark Ledwich and Anna Zaitsev shows that not all social media platforms are prone to such problems if they take the right precautions. Their study of the YouTube recommendation algorithm shows that YouTube has taken precautions to alleviate one of the previous concerns, the fear of online echo chambers forming through a recommender algorithm. This is achieved through careful moderation of the site and of the media produced on YouTube by moderation staff and AI systems. In addition, YouTube often limits exposure to radical channels through efforts such as demonetizing videos that may contain radical content and restricting the comments allowed on those videos. It may even go so far as to exclude these videos from its recommendation tab entirely, thus limiting the spread of these harmful ideas. Other social media sites could adopt similar regulations to ensure that the online space is not prone to the formation of dangerous echo chambers and the spread of misinformation.
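The kind of precaution Ledwich and Zaitsev describe can be sketched in miniature (a hypothetical illustration only; the channel names and the `FLAGGED_CHANNELS` set are invented, and real moderation pipelines are far more complex): candidate videos are checked against the output of a moderation review before they are ever recommended.

```python
# Hypothetical sketch of recommendation-side moderation: videos from
# channels flagged by a (human or AI) review pipeline are simply
# excluded from the candidate pool before ranking.
FLAGGED_CHANNELS = {"radical_channel_42"}  # assumed moderation output

def filter_recommendations(candidates):
    """Keep only videos whose channel passed moderation review."""
    return [v for v in candidates if v["channel"] not in FLAGGED_CHANNELS]

candidates = [
    {"title": "How to bake bread", "channel": "cooking_daily"},
    {"title": "Extremist manifesto", "channel": "radical_channel_42"},
]
safe_feed = filter_recommendations(candidates)
print([v["title"] for v in safe_feed])  # the flagged channel never surfaces
```

The design point is that the filter sits upstream of ranking: a flagged video cannot be amplified no matter how well it would otherwise match a user's profile.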

Social media sites are not the only platforms prone to the dangers of recommendation algorithms, as popular news sites and apps have recently begun to employ these systems to increase user engagement as well as profit. Ying Roselyn Du discusses this in their paper and details how users from a variety of backgrounds, across different age groups, genders, races, and levels of knowledge about recommender algorithms, feel about these systems in their news apps. Their study showed that an alarming number of people, despite knowing the risks that come with this technology, especially in news-providing software, choose to ignore them and are happy with the results of this personalization. The personalization from the algorithms is used by the websites and apps to increase traffic on their pages in order to generate more revenue for the companies running them. This comes at the cost of the previously mentioned bias in the news shown to users of these apps and sites and could lead to the formation of dangerous echo chambers, as in the case of Facebook. As with social media apps, the employment of recommendation algorithms, as this paper argues, can lead to echo chambers forming from the information these apps may or may not provide to users.

Although recommender algorithms, along with their issues, have been present in our technology for quite some time, the recent rise of AI chatbots on social media has begun to pose an arguably bigger threat to public safety than the previously mentioned technology. Nick Hajli, Usman Saeed, Mina Tajvidi, and Farid Shirazi, as well as Victor Galaz, Hannah Metzler, Stefan Daume, Andreas Olsson, Bjorn Lindstrom, and Arvid Marklund, argue in their respective papers that AI bots can easily be used maliciously to spread misinformation on the web in a short amount of time across a wide variety of topics, such as climate change, political views, or socio-economic debates. One such example appears in Elia Gabarron, Sunday Oluwafemi Oyeyemi, and Rolf Wynn's article, which discusses how AI bots were used during the COVID-19 pandemic to spread misinformation about the virus and to generate discourse and division between the groups who were educated about the virus and those who were not. AI bots can easily spread or create misinformation through bot accounts and through AI prompts that can generate believable misinformation in a matter of seconds. These articles share a common idea: AI bots can be used maliciously to spread misinformation on social media in a fast and unregulated manner.

In closing, the introduction of technologies such as recommendation algorithms and AI deep-learning bots has changed the internet landscape in ways that can be both positive and negative for each individual user. The negative aspects of these technologies must be addressed, however, as failure to do so could lead to the formation of bias in communities as well as the spread of misinformation on the internet, which could ultimately lead to unnecessary division and discourse and the formation of the dangerous bubbles known as echo chambers.

Works Cited

Bojić, Ljubiša, et al. "The Scary Black Box: AI Driven Recommender Algorithms as the Most Powerful Social Force." Etnoantropološki problemi / Issues in Ethnology and Anthropology, vol. 17, no. 2, Oct. 2022, pp. 719–744, doi:10.21301/eap.v17i2.11.

Cavallo, David, et al. "Effectiveness of Social Media Approaches to Recruiting Young Adult Cigarillo Smokers: Cross-Sectional Study." Journal of Medical Internet Research, vol. 22, no. 7, 22 July 2020, e12619, doi:10.2196/12619.

De, S. J., and A. Imine. "Consent for Targeted Advertising: The Case of Facebook." AI & Society, vol. 35, 2020, pp. 1055–1064, doi:10.1007/s00146-020-00981-5.

Du, Ying Roselyn. "Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy: A Qualitative Study of AI-Powered News App Users." Journal of Broadcasting & Electronic Media, vol. 67, no. 3, 2023, pp. 246–273, doi:10.1080/08838151.2023.2182787.

Gabarron, Elia, et al. "COVID-19-Related Misinformation on Social Media: A Systematic Review." Bulletin of the World Health Organization, vol. 99, no. 6, 1 June 2021, pp. 455–463A, doi:10.2471/BLT.20.276782.

Galaz, Victor, et al. "AI Could Create a Perfect Storm of Climate Misinformation." arXiv preprint arXiv:2306.12807, 2023.

Hajli, Nick, et al. "Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence." British Journal of Management, vol. 33, 2022, pp. 1238–1253, doi:10.1111/1467-8551.12554.

Ledwich, Mark, and Anna Zaitsev. "Algorithmic Extremism: Examining YouTube's Rabbit Hole of Radicalization." arXiv preprint arXiv:1912.11211, 2019.
