
ARTIFICIAL INTELLIGENCE WITH RESPECT TO THE VIOLATION OF INTERNATIONAL HUMAN RIGHTS OF THE LGBTQIA+ COMMUNITY

1. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3259344

Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online: LGBTQIA+ freedom of speech is being restricted by Artificial Intelligence.
• AI tools developed to analyze text-based content are not yet able to understand context. Unlike other studies, however, this article approaches the issue from the perspective of the LGBTQ community to highlight how content moderation technologies could affect LGBTQ visibility.
• This is not an implementation flaw but an inherent limitation, since these algorithms make their decisions based on language alone, irrespective of who says it and in what context.
• This is particularly interesting when one of the main reasons behind the development of these tools is to support vulnerable communities by dealing with hate speech targeting such groups. If these tools prevent LGBTQ people from expressing themselves and speaking up against what they themselves consider to be toxic, harmful, or hateful, their net impact may be disempowering rather than helpful.
• In this paper we can see that artificial intelligence blocks the use of certain terms such as "terrorism", "bitch", and "LGBTQ", because it often cannot assess the context in which such terms are being used, even though members of the LGBTQ community use some of these words to self-empower. The paper analyses the toxicity of data taken from Twitter, where some tweets that are not toxic are nevertheless declared toxic or offensive by the artificial intelligence due to the presence of common words like "bitch", "gay", or "lesbian".
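The failure mode described above, a decision driven by the mere presence of a word rather than by its context, can be illustrated with a minimal sketch in Python. The word list, function name, and example tweets below are hypothetical and are not taken from the paper; they only show why a purely lexical check flags reclaimed terms used for self-empowerment just as readily as actual abuse.

    # Minimal illustrative sketch (assumed, not the paper's method): a purely
    # lexical toxicity check that flags any tweet containing a listed word,
    # regardless of who wrote it or in what context.
    FLAGGED_WORDS = {"bitch", "gay", "lesbian", "queer"}

    def is_flagged_as_toxic(tweet: str) -> bool:
        # The decision is based on language alone: any flagged word triggers a hit.
        words = {w.strip(".,!?'\"").lower() for w in tweet.split()}
        return bool(words & FLAGGED_WORDS)

    # A self-empowering tweet is flagged exactly like abuse, because the check
    # never looks at the speaker or the surrounding context.
    print(is_flagged_as_toxic("Proud to be a gay man and loving it"))  # True (false positive)
    print(is_flagged_as_toxic("Wishing everyone a lovely day"))        # False

Real moderation systems use statistical models rather than a fixed word list, but the article's finding is that they can behave in much the same way when certain identity terms dominate the prediction.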

2. https://academic.oup.com/hrlr/article/20/4/607/6023108
