Ana Mendoza
11 April 2024
Algorithm and AI Chatbot Technologies and the Threats They Pose
Since its conception, the internet has drastically changed the course of human history, allowing us
to seamlessly connect and share our ideas and information across the globe. Nowadays, the internet is
populated by big companies who seek to profit from their sites through the usage of personalization and
through the allure of having the newest and innovative technologies for all to use. The former has led to
the heavy use of algorithmic technology in many popular websites while the latter has led to the inclusion
of the newest and most controversial piece of technology: AI chatbots. Without proper regulation of
these algorithmic technologies and AI chatbots, we may be facing widespread misinformation, serious
privacy risks, and online radicalization.
As algorithmic technologies continue to improve, big companies such as Google and
Facebook have begun to use them to personalize their apps and websites for a better user experience.
At face value this may seem harmless, since a more detailed profile of a user means a more tailored
experience on these sites, but the risks this technology creates cannot be ignored, as S.J. De and
A. Imine argue in their article examining whether Facebook's recommender algorithm complies with
GDPR requirements. These requirements were put in place by the EU to ensure that users know how a
website uses their data and can opt out of that sharing without negative repercussions to their
experience on the site. In the article, the authors find that Facebook bypasses the requirements
meant to keep users' data safe by using third-party software that collects data whether or not a
user opts out of collection on the website itself. They also note that Facebook does not make clear
how a user's data is used, and when
Guzman III 2
it comes to data leaks, the company does not take responsibility for the theft of personal data,
claiming that users always had the choice to opt out of data collection, which is not true. As the
article argues, the collection of personal data by algorithmic recommendation systems can create
privacy risks that users may not even be aware of, which is why it is important that we regulate the
usage of these technologies.
While data risk is certainly a major concern with the algorithmic recommendation systems
used on most modern social media sites, an even greater risk they pose is the spread of
misinformation, amplified by the sheer popularity of those sites. We need look no further than the
COVID-19 pandemic of 2020, when the world was plunged into fear and uncertainty by the lack of
reliable information about the virus. Many people trapped in their homes turned to the internet, and
more specifically to social media, to gather information that might help them protect themselves. As
panic spread, many took advantage of the chaos and began to spread misinformation online about the
nature of the virus, as Elia Gabarron, Sunday O. Oyeyemi, and Rolf Wynn document in their review.
The recommendation algorithms on these sites picked up this misinformation and circulated it
rapidly, showing it to users based on profiles built from characteristics such as age, race,
ethnicity, gender, and political leaning. Over time, bubbles of shared beliefs about the virus,
known as echo chambers, formed and grew as the algorithms continued to spread the false claims.
While CDC officials tried to counter the problem by publishing factual information about the virus,
the damage was already done and lingers to this day. If we wish to prevent this level of panic and
uncertainty from happening again, we must regulate these algorithmic technologies to ensure they do
not inadvertently spread misinformation that fuels fearmongering and chaos among the public.
However, not all companies are unaware of the dangers that come with using algorithmic
technologies for personalization. Research by Mark Ledwich and Anna Zaitsev in their paper
"Algorithmic extremism: Examining YouTube's rabbit hole of radicalization" shows that YouTube has
taken precautions against the formation of echo chambers and, more importantly, against
radicalization on its platform. Radicalization occurs when certain groups of people, especially
online, form small echo chambers that feed not only misinformation but hatred within their networks;
this can range from political radicalization around a particular party to outright hate groups.
YouTube works to keep radicalization off its site through careful moderation by staff, who ensure
that radical content is not present on the platform, and through deliberate tweaking of its
algorithm, which actively steers users away from harmful misinformation lurking in the far corners
of the site. Videos and channels are carefully moderated so that they do not threaten public safety
by spreading dangerous or unhealthy content; when one does, YouTube can tweak its algorithm to hide
that content from recommendations or even remove the account altogether. With careful moderation and
consideration, it is possible to combat the dangers that come with algorithmic technologies without
removing them entirely.
While such measures may ease the issues on social media sites, a new danger has emerged:
news sites and apps have begun to employ algorithmic technology on their platforms, with the
potential to be even more harmful than the problems social media faces. In the paper
"Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy: A Qualitative Study of
AI-Powered News App Users," published in the Journal of Broadcasting & Electronic Media, Ying
Roselyn Du shows that many people are not aware of the heavy personalization found in modern news
sites, and the people who are
aware tend to brush it off, feeling it simply improves the user experience. The problem is that,
unlike social media sites, a news site's sole purpose is to provide what readers assume to be
factual, largely unbiased information about current world events and issues. Believing this, many
people assume the information on these sites is completely accurate, which is dangerous when
algorithms may show them only what they want to see rather than the whole picture. A user can end up
with half the information on a story or, worse, completely false information, on the very site that
is supposed to provide the news. The misinformation and disinformation inherent in algorithmic
technologies on news sites can only be resolved with strict moderation of these platforms.
The dangers of algorithmic technologies are not completely new or foreign to us, as they
have been around for over a decade; the same cannot be said for AI chatbots, which took the internet
by storm in 2022 with the introduction of ChatGPT. Although this technology is still in its infancy,
the impact it has had on the world cannot be overstated. From generating entire texts to creating
realistic images and art that cannot easily be identified as AI-generated, the possibilities of this
technology are endless. With endless possibilities, however, come endless problems, as Victor Galaz
and colleagues show in their paper "AI could create a perfect storm of climate misinformation,"
which demonstrates how this technology can be used to spread believable misinformation about
important topics such as climate change, both by creating fake climate-related images and by easily
generating plausible false text about the subject. Another danger of these chatbots is described in
"Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial
Intelligence" by Nick Hajli, Usman Saeed, Mina Tajvidi, and Farid Shirazi, who find that chatbots
can be made to run fake social media accounts en masse in order to spread large amounts of
disinformation on these sites. This poses a huge issue, not only because of the damage these bots
can do in a short time but also because, as they multiply, the ratio of real users to bots becomes
skewed,
creating, for lack of a better term, a "dead internet," in which sites are so flooded with bots and
AI that real users are drowned out. While this scenario is still a long way off and could be
prevented with careful monitoring of social media sites, it is a very real possibility that the very
technology meant to connect us could be used by malicious entities to divide us, by generating
misinformation or by flooding the internet with fake accounts and bots until we can no longer
distinguish what or who is real online. For these reasons, it is imperative that we properly
regulate the usage of chatbots on the internet, as the dangers they present are too great to ignore.
It can reasonably be inferred from the evidence presented above about algorithmic technologies and
AI chatbots that it is vital we properly regulate these technologies, as failure to do so can lead
to an uncontrollable spread of misinformation and to privacy risks across the internet. If we can
properly regulate these technologies and solve the problems that come with them, we can use their
limitless potential to push humanity into a new age of technological innovation and discovery.
Citations
De, S.J., and A. Imine. "Consent for Targeted Advertising: The Case of Facebook." AI & Society,
vol. 35, 2020, pp. 1055–1064. https://doi.org/10.1007/s00146-020-00981-5

Du, Ying Roselyn. "Personalization, Echo Chambers, News Literacy, and Algorithmic Literacy: A
Qualitative Study of AI-Powered News App Users." Journal of Broadcasting & Electronic Media, 2023.

Gabarron, Elia, Sunday O. Oyeyemi, and Rolf Wynn. "COVID-19-Related Misinformation on Social Media:
A Systematic Review." Bulletin of the World Health Organization, vol. 99, no. 6, 2021,
pp. 455–463A. https://doi.org/10.2471/BLT.20.276782

Galaz, Victor, et al. "AI Could Create a Perfect Storm of Climate Misinformation." arXiv preprint
arXiv:2306.12807, 2023.

Hajli, Nick, Usman Saeed, Mina Tajvidi, and Farid Shirazi. "Social Bots and the Spread of
Disinformation in Social Media: The Challenges of Artificial Intelligence." British Journal of
Management, vol. 33, 2022, pp. 1238–1253. https://doi.org/10.1111/1467-8551.12554

Ledwich, Mark, and Anna Zaitsev. "Algorithmic Extremism: Examining YouTube's Rabbit Hole of
Radicalization." First Monday, vol. 25, no. 3, 2020.