The document discusses how artificial intelligence used for content moderation can unintentionally restrict the freedom of speech of LGBTQIA+ communities by not understanding context. AI tools trained only on language are unable to discern whether terms like "bitch" or references to gender/sexual identities are being used in an empowering or hateful way. This poses risks to LGBTQIA+ visibility and expression online, potentially having a disempowering impact despite the goal of protecting vulnerable groups from hate speech. The document analyzes how AI has incorrectly flagged non-toxic tweets as offensive due simply to the presence of common words sometimes used by the LGBTQIA+ community.
Original Title
Artificial Intelligence With Respect to Violation of International Human Rights of the LGBTQIA+
Fighting Hate Speech, Silencing Drag Queens? Artificial Intelligence in Content Moderation and Risks to LGBTQ Voices Online (Dias Oliva et al.)

LGBTQIA+ freedom of speech can be restricted by artificial intelligence: AI tools developed to analyze text-based content are not yet able to understand context. Unlike other studies, this article approaches the issue from the perspective of the LGBTQ community to highlight how content moderation technologies could affect LGBTQ visibility. Because these algorithms make their decisions based on language alone, irrespective of who is speaking and in what context, they cannot distinguish empowering uses of a term from hateful ones. This is particularly troubling given that one of the main reasons behind the development of these tools is to support vulnerable communities by dealing with hate speech targeting such groups. If these tools prevent LGBTQ people from expressing themselves and speaking up against what they themselves consider toxic, harmful, or hateful, their net impact may be disempowering rather than helpful. The paper shows that the AI blocked certain terms such as "terrorism", "bitch", and "LGBTQ" without being able to assess the context in which they were used, even though such words are used by members of the LGBTQ community to self-empower. The paper analyzes the toxicity of data taken from Twitter, where some tweets that are not toxic were nevertheless declared toxic or offensive by the AI simply because they contain common words such as "bitch", "gay", or "lesbian".
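The context-blindness the article describes can be illustrated with a minimal sketch. The word list and scoring rule below are invented for illustration only; they are not the paper's methodology or any real platform's moderation system. The sketch shows how a filter that keys purely on the presence of words, with no regard for speaker or context, produces false positives on self-empowering speech:

```python
# Hypothetical sketch of a naive, keyword-based toxicity filter.
# The term list is an assumption made for illustration; real moderation
# systems are far more complex, but the failure mode is the same:
# decisions based on language alone, irrespective of context.

FLAGGED_TERMS = {"bitch", "gay", "lesbian"}

def naive_toxicity_flag(text: str) -> bool:
    """Flag text as 'toxic' if it contains any listed term,
    regardless of who says it or in what context."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not FLAGGED_TERMS.isdisjoint(words)

# A self-empowering, non-toxic tweet is still flagged:
print(naive_toxicity_flag("Proud to be a gay artist!"))  # True (false positive)
print(naive_toxicity_flag("Lovely weather today"))       # False
```

A context-blind rule like this cannot tell reclamation or self-description apart from abuse, which is exactly the risk to LGBTQ visibility the article highlights.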