
INDIA’S INTERNATIONAL MOVEMENT TO UNITE NATIONS

STUDY GUIDE
COMMITTEE: INFLUENCER SUMMIT

AGENDA: THE QUESTION OF CONTENT MODERATION AND CENSORSHIP ACROSS SOCIAL MEDIA PLATFORMS

INTRODUCTION
Content moderation and censorship are two related but distinct concepts that refer to how
online platforms regulate the information and expression of their users. Content moderation is
the process of enforcing the platform’s own rules and policies, such as removing hate speech,
harassment, or illegal content. Censorship is the suppression or alteration of information or
expression that is deemed objectionable, harmful, or inconvenient by a government,
corporation, or other authority.

The question of content moderation and censorship across social media platforms is complex
and controversial, as it involves balancing the rights and interests of various stakeholders,
such as platform owners, users, advertisers, regulators, and the public.

The tremendous rise of social media and the democratisation of content creation in recent
years have made the need for competent content moderation even more critical. Every day,
billions of people create and share content on social media sites, making it nearly impossible
to monitor and moderate everything manually. Consequently, there is a growing need for
automated systems that can scan and filter massive amounts of content in real time.

Content moderation and censorship raise significant moral and legal questions. For instance,
the effect of censorship on the right to free speech, and the possibility of bias and
discrimination in automated content moderation, are issues that need to be addressed. It is
therefore critical to strike a balance between shielding people from harmful material and
ensuring that their freedom of speech is not unfairly curtailed.

Furthermore, content moderation and censorship are both technical challenges and social and
political ones, as they involve competing values, interests, and power dynamics among
different actors and stakeholders. For example, social media giants may have different
incentives and preferences than users, advertisers, regulators, or the public, and may use their
control over content to advance their agendas or interests. Similarly, governments may use
their authority or influence to pressure platforms to remove or restrict content that they deem
harmful or undesirable or to protect content they favour or endorse.

Content moderation and censorship can impact the quality and diversity of information and
the freedom of expression on social media platforms, as well as the public sphere and
democracy more broadly. For instance, content moderation and censorship can affect the availability and
accessibility of information and expression, as well as the visibility and reach of certain
content, perspectives, or voices. This can have consequences for the formation and
dissemination of public opinion, the diversity and pluralism of views and sources, and the
deliberation and participation of citizens in democratic processes. Moreover, content
moderation and censorship can also affect the trust and credibility of information and
expression, as well as the accountability and transparency of platforms and authorities.

Content moderation and censorship are not static or uniform, but dynamic and contextual, as
they vary depending on the platform, the content, the user, the time, and the place. Different
platforms may have different rules and policies, as well as different mechanisms and
techniques, for moderating and censoring content. Likewise, different content may have
different meanings and effects, as well as different legal and ethical implications, depending
on the context and the audience. Furthermore, different users may have different expectations
and experiences, as well as different rights and responsibilities, regarding content moderation
and censorship. Additionally, content moderation and censorship may change over time, as
platforms and authorities adapt to new developments and challenges, such as emerging
technologies, trends, or events.

Traditional Content Moderation Methods


Conventional content moderation techniques depend on human moderators who manually
examine and curate user-generated content against established legal guidelines and community
standards. Although these techniques have been in use for decades, their inherent limitations
make them insufficient for moderating the vast amount of user-generated content on websites
and social media platforms. Chief among their drawbacks is that they are slow,
labour-intensive, and costly.

The Role of AI in Content Moderation


The demand for automated solutions to supplement conventional content moderation techniques is
rising. This is where Artificial Intelligence (AI) can be used to advantage: it can provide
real-time, scalable, and consistent content filtering while reducing human error and bias,
addressing many of the shortcomings of traditional content moderation.

The fundamental concept underlying AI-powered content moderation is the automatic
analysis and filtering of user-generated content on websites and social media platforms
through the use of machine learning algorithms. To be trained and to function properly, these
algorithms require large datasets of labelled content that reflect the particular social norms
and legal requirements they are intended to uphold. For instance, an algorithm intended to
identify hate speech may be trained on a dataset of social media posts and comments that
have been classified as either hateful or non-hateful.
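To make this concrete, the sketch below illustrates what such a training workflow might look like. It is a minimal illustration, not a production system: the labelled posts are hypothetical, and the simple bag-of-words classifier (built with scikit-learn) stands in for the far larger datasets and more sophisticated models that platforms actually use.

```python
# Minimal sketch of the training workflow described above.
# Assumption: a small, hypothetical dataset labelled hateful (1) / non-hateful (0).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (real systems train on millions of posts).
posts = [
    "I hate people from group X, they should all leave",
    "Had a great time at the concert last night",
    "Group X members do not deserve rights",
    "Looking forward to the weekend!",
]
labels = [1, 0, 1, 0]

# Vectorise the text and fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post: the probability can drive an automatic action
# (remove, flag for human review, or allow).
prob_hateful = model.predict_proba(["people from group X should leave"])[0][1]
print(f"P(hateful) = {prob_hateful:.2f}")
```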

A key advantage of AI-powered content moderation is its ability to scale to the massive
volume of user-generated content on websites and social media platforms. Because AI
algorithms can analyse and filter content in real time, they can swiftly identify and remove
harmful content even where the volume is far too large for human moderators to handle.
Nevertheless, AI-powered content filtering has drawbacks, including bias, false positives, and
false negatives. To ensure that AI algorithms are both effective and fair, while balancing the
demands of safety and free speech, it is crucial to design and test them thoroughly.
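The trade-off between false positives and false negatives can be made concrete with a toy example. The sketch below uses invented classifier scores and ground-truth labels to show how moving the decision threshold shifts errors between legitimate posts wrongly removed (a free-speech cost) and harmful posts missed (a safety cost).

```python
# Toy illustration of threshold tuning in automated moderation.
# Scores and true labels are invented for demonstration (1 = harmful).
scores = [0.95, 0.80, 0.72, 0.65, 0.55, 0.40, 0.35, 0.20, 0.10, 0.05]
truth  = [1,    1,    0,    1,    0,    1,    0,    0,    0,    0]

for threshold in (0.3, 0.5, 0.7):
    flagged = [s >= threshold for s in scores]
    false_pos = sum(f and not t for f, t in zip(flagged, truth))  # legitimate posts removed
    false_neg = sum(t and not f for f, t in zip(flagged, truth))  # harmful posts missed
    print(f"threshold={threshold}: {false_pos} false positives, {false_neg} false negatives")
```

A lower threshold removes more harmful content but suppresses more legitimate speech; a higher threshold does the reverse. Choosing the operating point is as much a policy decision as a technical one.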

CURRENT SCENARIO

Facebook Oversight Board [1]


The Facebook Oversight Board is an independent body that reviews the platform’s content
moderation decisions, especially those that involve the removal of content or accounts that
violate the platform’s community standards. The board consists of 20 members from various
backgrounds and regions, who are selected by a trust that is funded by Facebook. The board
has the power to overturn Facebook’s decisions and issue policy recommendations, but it can
only review a limited number of cases that are referred by Facebook or by users who have
exhausted the platform’s appeal process. The board issued its first set of rulings in January
2021, overturning four out of five cases that involved the removal of content related to
nudity, hate speech, misinformation, and dangerous organizations. The board also upheld the
suspension of former US President Donald Trump’s account, which was imposed by
Facebook after the Capitol riot on January 6, 2021, citing the risk of inciting violence.
However, the board also asked Facebook to review its indefinite ban and apply a clear and
proportionate penalty, as well as to assess its role in contributing to the situation. The board’s
rulings are binding for Facebook, but its policy recommendations are not, although Facebook
has to publicly respond to them.

[1] https://www.oversightboard.com/

The Indian Government
The Indian government introduced new rules in February 2021, requiring social media
platforms to appoint local officers, remove unlawful content within 36 hours, and disclose the
origin of messages when asked by authorities. The rules are part of the Information
Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 [2], which
aim to regulate online intermediaries, digital media, and over-the-top (OTT) platforms in
India. However, the rules have been criticised by digital rights groups and platforms for
violating the freedom of expression and privacy of users, as well as for imposing excessive
obligations and liabilities on the platforms. The rules also face legal challenges from various
parties, such as WhatsApp, which has filed a lawsuit against the government for requiring it
to break its end-to-end encryption and trace the origin of messages. The government has
defended the rules as necessary to protect the sovereignty, security, and public order of India,
and to empower the users and the digital media industry.

European Union
The European Commission proposed the Digital Services Act and the Digital Markets Act [3] in
December 2020, aiming to create a new legal framework for digital platforms in the EU. The
proposals include new rules and obligations for platforms to ensure the safety, transparency,
and accountability of their content moderation practices, as well as to prevent the abuse of
market power and promote fair competition. The Digital Services Act applies to all online
intermediaries that offer their services in the EU, such as social media, online marketplaces,
or cloud services, and requires them to remove illegal content, provide clear and transparent
terms and conditions, cooperate with authorities, and protect the rights and interests of users.
The Digital Markets Act applies to platforms designated as gatekeepers, meaning that they
have a significant impact on the internal market and act as an important gateway for business
users to reach consumers, such as Google, Facebook, or Amazon. It requires them to comply
with a set of prohibitions and obligations, such as not favouring their own services, allowing
interoperability and data portability, and reporting any acquisitions.

[2] https://www.meity.gov.in/writereaddata/files/Information%20Technology%20%28Intermediary%20Guidelines%20and%20Digital%20Media%20Ethics%20Code%29%20Rules%2C%202021%20%28updated%2006.04.2023%29-.pdf
[3] https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package

BLOC POSITIONS

Tech Giants
Big Tech companies, such as Meta, X (formerly Twitter), and Google, generally favour a
more permissive and self-regulatory approach to content moderation and censorship, based
on the principles of free speech and innovation. They argue that online platforms should have
broad immunity from liability for the content posted by their users and that they should have
the discretion to enforce their community standards and policies, without undue interference
from the government or other actors. They also claim that they are committed to removing
illegal or harmful content, such as hate speech, terrorism, or child abuse, and to improving
the transparency and accountability of their content moderation practices. However, they face
increasing criticism and pressure from various quarters, such as civil society, media,
academia, and lawmakers, who accuse them of failing to prevent or address the spread of
misinformation, disinformation, extremism, or violence on their platforms, and of abusing
their market power and influence over the public sphere and democracy.

European Union
The European Union and its member states generally favour a more restrictive and
regulatory approach to content moderation and censorship, based on the principles of human
rights and democracy. They argue that online platforms should have more responsibility and
accountability for the content posted by their users and that they should comply with the laws
and norms of the countries where they operate, as well as with international standards and
obligations. They also claim that they are committed to protecting and promoting the freedom
of expression, privacy, and diversity of users and content creators, and to preventing or
mitigating the negative impacts of content moderation and censorship, such as chilling
effects, bias, discrimination, or polarization. However, they face various challenges and
dilemmas, such as defining and identifying harmful or objectionable content, designing and
implementing effective and transparent content moderation policies and practices, and
balancing the rights and interests of different stakeholders and actors.
China and its allies
China and some of its allies, such as Pakistan, generally favour a more authoritarian and
censorial approach to content moderation and censorship, based on the principles of national
security and social stability. They argue that online platforms should have strict liability and
compliance for the content posted by their users and that they should remove or restrict any
content that is deemed objectionable, harmful, or inconvenient by the government or other
authorities. They also claim that they are committed to safeguarding the sovereignty, security,
and public order of their countries, and to empowering and educating their users and content
creators. However, they face widespread condemnation and resistance from various quarters,
such as human rights groups, dissidents, activists, and journalists, who accuse them of
violating the freedom of expression and privacy of users and content creators, and of
suppressing or altering information or expression that is critical, independent, or diverse.

Russia
Based on the ideas of sovereignty and security, Russia and some of its allies, like Belarus,
typically support a more nationalist and protectionist approach to content regulation and
control. They contend that internet companies ought to honour and abide by the rules and
laws of the nations in which they conduct business and that they ought to work with law
enforcement to remove or filter any content deemed to be dangerous, unlawful, or foreign. In
addition, they assert that they are dedicated to opposing outside meddling and hostile actors'
influence while upholding and advancing their nations' national identities, cultures, and
values. However, a number of groups and organisations, including human rights
organisations, opposition parties, civil society, and international organisations, have strongly
criticised and opposed them. These groups claim that they violate the privacy and freedom of
expression of users and content creators and that they use censorship and content moderation
as instruments of political repression and propaganda.

Brazil
Based on the ideas of pluralism and diversity, Brazil and some of its neighbours, including
Argentina, generally support a more democratic and participatory approach to content
regulation and restriction. They believe that in addition to incorporating a wide range of
stakeholders in the development and application of their content moderation policies and
procedures, online platforms ought to uphold the rights and interests of both users and
content providers. Additionally, they assert that they are dedicated to promoting
communication and collaboration across various players and sectors, as well as to nurturing
and supporting the diversity and quality of information and expression on social media
platforms. However, they face several difficulties and tensions, including defining and
weighing the obligations and limits of free speech, countering the spread and impact of
false information, and addressing the social and economic disparities that shape people's
ability to access and use social media platforms.

SUGGESTED MODERATED CAUCUS TOPICS

1. Discussing the ethical and legal implications of content moderation and censorship
2. Discussing the social and political impacts of content moderation and censorship
3. Discussing the technical and operational challenges and opportunities of content
moderation and censorship
4. Discussing the user and content creator perspectives and experiences of content
moderation and censorship
5. Discussing the role of alternative media platforms in promoting responsible content
creation and consumption
6. Discussing strategies for developing best practices for content moderation
7. Discussing strategies for tackling misinformation and disinformation
8. Discussing the crucial role of governments in content moderation and censorship
9. Discussing the sociological impacts of harmful content
10. Discussing norms that should be followed by social media platforms to censor content

RESEARCH LINKS

(Note: Delegates, some of the links below are intended only for light reading and are
therefore not highlighted. Only the highlighted sources will be accepted as valid proof; the
others may or may not be accepted as sources of proof in the Council. The decision of the
Presiding Officer regarding acceptable sources is final and binding.)

1. https://www.pnas.org/doi/abs/10.1073/pnas.2210666120
2. https://www.tandfonline.com/doi/full/10.1080/1369118X.2021.1874040
3. https://journals.sagepub.com/doi/abs/10.1177/1461444818773059
4. https://onlinelibrary.wiley.com/doi/full/10.1002/poi3.372
5. https://www.eff.org/press/releases/eff-launches-tracking-global-online-censorship-project-shine-light-how-content
6. https://aicontentfy.com/en/blog/role-of-ai-in-content-moderation-and-censorship
7. https://hbr.org/2022/11/content-moderation-is-terrible-by-design
8. https://journals.sagepub.com/doi/10.1177/1461444818773059
9. https://www.cjr.org/special_report/disrupting-journalism-how-platforms-have-upended-the-news-part-6.php
10. https://publicknowledge.org/content-moderation-is-not-synonymous-with-censorship/
11. https://www.researchgate.net/publication/343798653_Content_moderation_AI_and_the_question_of_scale
12. https://phys.org/news/2023-02-free-speech-misinformation-people-dilemmas.html
13. https://link.springer.com/article/10.1007/s13347-020-00429-0
14. https://www.ohchr.org/en/stories/2021/07/moderating-online-content-fighting-harm-or-silencing-dissent
15. https://cdt.org/area-of-focus/free-expression/transparency-accountability/
16. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/652718/IPOL_STU(2020)652718_EN.pdf
17. https://dl.acm.org/doi/10.1007/978-3-031-25460-4_34
18. https://www.brookings.edu/articles/history-explains-why-global-content-moderation-cannot-work/
19. https://onlinelibrary.wiley.com/doi/abs/10.1002/poi3.372
