
Ethics in AI

End Semester Exam

Submitted by - Nishchay Sandhu


Roll number - 2019373

4. Algorithmic systems undermine our autonomy and the control that we can exercise over our
choices. Critically discuss this statement with reference to the readings shared in the course.

Autonomy refers to the ability to make decisions and act on one's own values, beliefs, and
preferences without being unduly influenced or compelled by others. It entails the capacity to
self-determine, self-govern, and pursue one's own goals and interests. Algorithmic systems have
the potential to erode this autonomy and our control over our decisions. In "Technology,
Autonomy, and Manipulation," Susser et al. (2019) argue that autonomy is critical to our ability
to make free and informed judgments based on our own beliefs and preferences. Autonomy
enables us to take control of our lives and make decisions that are in line with our goals and
wishes. Algorithmic systems, however, can undermine this autonomy in a variety of ways.

Manipulation is one way in which algorithmic systems can weaken autonomy. Manipulation is
any attempt to influence or control our choices in a way that bypasses our deliberation and
undermines our autonomy. By selectively presenting information or choices, exploiting our
cognitive biases, and molding our preferences and views over time, algorithmic systems can
steer human behavior. For example, social media algorithms may show us content that reinforces
our existing opinions and biases. Advertisers use these algorithms to target specific users based
on their browsing history, search queries, and other online activity, limiting their exposure to
alternative viewpoints. The resulting echo chambers can strengthen our biases and make it
harder to evaluate opposing points of view, leading us to make judgments that may not be in
line with our own values.

Selective recommendation is one of the primary issues with algorithmic systems because it can
produce echo chambers: virtual settings in which users engage only with people who share their
beliefs and are exposed to a limited range of perspectives and opinions. By offering information
that matches a user's tastes and biases, selective recommendation systems reinforce these echo
chambers. As people grow increasingly alienated from varied perspectives and alternative
viewpoints, this can contribute to societal polarization and the propagation of disinformation.
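
To make this feedback loop concrete, consider a minimal sketch of a preference-matching
recommender (the data and field names here are purely hypothetical):

```python
from collections import Counter

def recommend(posts, user_history, k=3):
    """Rank candidate posts by overlap with topics the user already engaged with."""
    seen_topics = Counter(t for post in user_history for t in post["topics"])
    def score(post):
        return sum(seen_topics[t] for t in post["topics"])
    return sorted(posts, key=score, reverse=True)[:k]

history = [{"topics": ["politics_left"]}, {"topics": ["politics_left"]}]
candidates = [
    {"id": 1, "topics": ["politics_left"]},
    {"id": 2, "topics": ["politics_right"]},
    {"id": 3, "topics": ["cooking"]},
]
# Each click feeds back into history, so familiar topics score ever higher
# and unfamiliar viewpoints drop out of the top-k feed.
print([p["id"] for p in recommend(candidates, history, k=2)])  # [1, 2]
```

Because every engagement raises the score of similar future content, the feed narrows over time
without any single decision ever looking objectionable on its own.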

The authors propose that algorithmic systems can be used to steer people in a variety of ways.
Online platforms, for example, can use algorithms to shape the information users see and
thereby shape their beliefs and opinions. This can create filter bubbles, which lead to a
narrowing of vision and a lack of critical thinking, since people are less likely to encounter and
engage with ideas that challenge their existing beliefs. It reinforces existing biases and
stereotypes and can also inhibit people from developing empathy and compassion for people
from different backgrounds and experiences. One telling example is the criminal justice system,
where AI algorithms are used to predict a defendant's likelihood of committing future crimes or
to inform sentencing. These algorithms may be biased against certain groups, such as people of
color or those from lower socio-economic backgrounds, and can limit their autonomy and
control over their own lives.
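
A toy illustration shows how such bias can propagate: if a model simply learns re-arrest rates
from historically skewed policing data, new defendants inherit the skew. The groups, rates, and
data below are entirely hypothetical and not drawn from any deployed system:

```python
def fit_group_rates(records):
    """'Train' by memorizing the historical re-arrest rate per group."""
    totals, hits = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + r["rearrested"]
    return {g: hits[g] / totals[g] for g in totals}

# If group B was historically over-policed, its recorded re-arrest rate is
# inflated, and every new defendant from group B inherits a higher "risk".
history = [{"group": "A", "rearrested": 0}] * 8 + [{"group": "A", "rearrested": 1}] * 2 \
        + [{"group": "B", "rearrested": 0}] * 5 + [{"group": "B", "rearrested": 1}] * 5
print(fit_group_rates(history))  # {'A': 0.2, 'B': 0.5}
```

The model is "accurate" with respect to its data, yet the data encode past enforcement patterns
rather than actual behavior, so the defendant's score reflects decisions made about their group
rather than choices they made themselves.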

Furthermore, algorithmic systems can limit our autonomy by making judgments for us based on
data and pre-defined criteria, rather than allowing us to make our own choices. This is especially
concerning in fields like healthcare, where algorithms may be used to determine medical
treatments or diagnoses. According to the authors, this can lead to a loss of control over our own
health and well-being.

Algorithmic systems can also impair autonomy through opaque decision-making procedures.
When algorithmic systems make decisions without providing transparent explanations of their
reasoning, individuals may find it difficult to understand why particular judgments were made
or to challenge conclusions they believe are inaccurate or unjust. This lack of openness can
erode trust in the system and reduce people's sense of control over their decisions.

However, the authors acknowledge that not all algorithmic systems are fundamentally
manipulative or autonomy-undermining. As AI algorithms are incorporated into our daily lives,
it is critical for developers and platform owners to address and mitigate these concerns. The
authors contend that the design and implementation of these systems can have a significant
impact on whether they promote or degrade our autonomy. For example, algorithmic systems
can be designed to provide transparent explanations of their decision-making processes, allow
for user control and input, and be subject to ethical oversight and accountability. Furthermore,
giving users the ability to customize and manage their recommendations can help prevent the
formation of echo chambers and filter bubbles, exposing users to a diverse range of perspectives
and fostering critical thinking and understanding. These design elements can help ensure that
individuals understand the reasons behind algorithmic conclusions and can make informed
choices that are consistent with their beliefs and preferences.
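
As a rough sketch of these remedies, a recommender could expose a user-tunable diversity
setting and attach a human-readable explanation to each item. The interface and parameter
names below are illustrative assumptions, not an existing API:

```python
import random

def recommend_with_controls(posts, seen_topics, diversity=0.3, k=3):
    """Mix familiar and unfamiliar posts; explain why each one was shown."""
    familiar = [p for p in posts if set(p["topics"]) & seen_topics]
    novel = [p for p in posts if not set(p["topics"]) & seen_topics]
    n_novel = min(round(k * diversity), len(novel))
    picks = familiar[: k - n_novel] + random.sample(novel, n_novel)
    return [
        (p["id"],
         "matches your interests" if set(p["topics"]) & seen_topics
         else "shown to broaden your feed (diversity setting)")
        for p in picks
    ]

posts = [
    {"id": "a", "topics": ["politics_left"]},
    {"id": "b", "topics": ["politics_right"]},
    {"id": "c", "topics": ["cooking"]},
]
# Raising `diversity` lets the user trade personalization for exposure to
# viewpoints outside their usual feed; the attached explanation keeps the
# ranking logic legible rather than opaque.
print(recommend_with_controls(posts, {"politics_left"}, diversity=0.5, k=2))
```

The point is not the specific mechanism but that the diversity knob and the explanation string
return some of the control and transparency that the opaque variants above take away.
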
3. Hate speech and polarization is a direct result of the design of social media platforms.
Discuss.

Social media platforms give us the power to express our opinions and beliefs conveniently and
with few restrictions. But this freedom of expression can come at the cost of harming others.
Udanor and Anyanwu (2019) investigate the issue of hate speech on social media and propose a
remedy based on a Twitter ego-lexalytics approach. However, it is critical to understand the
underlying roots of this problem, as well as the role of social media platform design in
promoting hate speech and polarization.

While social media can be an excellent instrument for encouraging dialogue and understanding,
it can also drive the propagation of hate speech and polarization. The architecture of social
media platforms has a considerable impact on user behavior and content creation. Platforms are
designed to maximize engagement and retain users for as long as possible. This is accomplished
by employing algorithms that prioritize content likely to interest users, in particular content that
receives high levels of engagement in the form of likes, comments, and shares. These algorithms
are intended to surface material that is relevant to the user's interests and preferences.

This algorithmic approach to content curation can have unintended consequences, such as the
promotion of controversial and inflammatory material, including hate speech. Hate speech is
more likely to be shared and to engage users than other sorts of content because it is often more
inflammatory and elicits stronger emotional responses. As a result, hate speech on social media
platforms can spread swiftly and create a polarized climate.

Social media companies' algorithms also tend to create echo chambers, in which users are
exposed only to content that validates their pre-existing views and perspectives. This reinforces
existing biases and can help build polarized communities on social media platforms. Polarization
occurs when people are exposed to one-sided information and have little opportunity to examine
opposing viewpoints.

Another way that social media platforms contribute to hate speech and polarization is by
providing users with anonymity. People may feel empowered to say things they would not say in
person when they can voice their thoughts without disclosing their real identities. Users are more
inclined to engage in hate speech when they can do so anonymously and without fear of
repercussions, and anonymity makes it difficult to identify and hold accountable the sources of
hate speech. This can result in the propagation of harmful and inflammatory speech, further
polarizing and dividing communities.

Twitter, for example, has been criticized for failing to adequately moderate hate speech on its
platform. In 2016, the actress and comedian Leslie Jones was subjected to a deluge of racist and
sexist abuse on the site, prompting her to leave it temporarily. While Twitter eventually banned
some of the abusers, many criticized the company for not doing enough to prevent such
situations in the first place.

The Twitter ego-lexalytics approach proposed by Udanor and Anyanwu (2019) can help identify
and categorize instances of hate speech on Twitter, but it does not address the root causes of the
problem. To effectively combat hate speech and polarization, social media platforms must
address the design features that contribute to it.
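
To see both the value and the limits of lexicon-based detection, consider a deliberately
simplified matching routine. This is not the authors' ego-lexalytics pipeline, only an illustration
of the general idea, and the lexicon terms are placeholders:

```python
HATE_LEXICON = {"slur1", "slur2"}  # placeholder terms, not real data

def flag_tweet(text, lexicon=HATE_LEXICON):
    """Flag a tweet if any token appears in the hate lexicon."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    matched = tokens & lexicon
    return {"flagged": bool(matched), "matched_terms": sorted(matched)}

print(flag_tweet("An example tweet containing slur1!"))
# {'flagged': True, 'matched_terms': ['slur1']}
```

Detection of this kind acts after the content exists; it says nothing about the ranking and
anonymity incentives that produced the content, which is why design-level remedies are still
needed.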

For example, social media platforms could modify their algorithms to prioritize less polarizing
and more informative content. Platforms could also implement techniques that encourage
conversation and prompt users to examine alternative viewpoints, for instance by fostering
dialogue between users with opposing points of view or by giving users access to a greater
diversity of content and ideas. Social media sites could also improve transparency and
accountability, for example by requiring users to verify their identities and by taking stronger
action against individuals who engage in hate speech or other harmful behavior.

References:

Susser, D., Roessler, B., & Nissenbaum, H. (2019). Technology, autonomy, and manipulation.
Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1410

Udanor, C., & Anyanwu, C. (2019). Combating the challenges of social media hate speech in a
polarized society: A Twitter ego lexalytics approach. Data Technologies and Applications,
ahead-of-print. https://doi.org/10.1108/DTA-01-2019-0007
