
The Impact of Children’s Content Moderation Policies: A Game-Theoretic Approach


Mahdi Kiaee
Mahdi.kiaee@ucalgary.ca

This research navigates the intricate dynamics between regulators, digital platforms, and users, focusing
notably on safeguarding children’s online experiences. Integrating foundational theories in regulatory eco-
nomics and platform markets with a game-theoretical approach, we offer an in-depth exploration of the
nuanced decision-making processes involved in policy implementation and market responses. Our analysis
delineates how regulatory fines can significantly influence platforms’ strategic decisions, potentially guiding
them toward heightened adherence to safety norms. Moreover, this study elucidates the complex interplay
between compliance costs, pricing strategies, and moderation efforts undertaken by platforms. By decipher-
ing how these platforms balance the intricacies of regulatory compliance against potential revenues, we shed
light on the evolving market equilibrium. Furthermore, we illustrate how user preferences, especially con-
cerning price sensitivities and content moderation expectations, can markedly influence platform strategies
and decisions. In doing so, this research aims to help policymakers, digital platform designers, and
guardians formulate strategies that enhance children’s digital welfare, providing nuanced insight into
the subtle forces steering digital platforms’ strategic orientations.

Key words : Digital Platforms, Analytical Modeling, Digital Content Regulation, Content Moderation.

1. Introduction
In the contemporary digital landscape, concerns about children’s online safety are increasingly
prominent. Data from recent studies indicates that in the US, over 70% of children own a smart
device, with an alarming 40% reporting online interactions with strangers (Holloway et al., 2019).
Such statistics amplify the importance of adopting stringent measures to protect young users in
the digital realm.
While parental controls offer one layer of defence (Wisniewski et al., 2017), these software solu-
tions are not foolproof, especially given the technical acumen of today’s youth and the pervasive
issue of inappropriate content (O’Keeffe et al., 2011). Recognizing these limitations, regulatory
bodies have introduced measures such as the Children’s Online Privacy Protection Act (COPPA), which mandates
parental consent for the collection of information from children under 13 (Marwick et al., 2017). Yet, even with such laws, digital
platforms might not always be adequately incentivized to prioritize young user safety (Gillespie,
2018).


To better understand the strategic decision-making among regulators, digital platforms, and
users concerning children’s online safety, our research employs a game-theoretic approach within a
two-stage model. The model features three primary actors: a regulator formulating policies and
penalties, platforms aligning their compliance and parental control offerings with economic and
regulatory parameters, and users selecting platforms based on the resulting platform decisions
and their own preferences.
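To fix ideas, the following stylized sketch illustrates, by backward induction, how such a two-stage structure can be written down. The notation here (baseline value v, moderation sensitivity θ, price sensitivity α, price p, compliance level c, compliance cost K(c), and fine f) is hypothetical and introduced only for illustration; it is not the formal model developed in this paper. In the second stage, a user with moderation sensitivity θ joins a platform charging price p and complying at level c ∈ [0, 1] whenever
\[ U(\theta) = v + \theta\, m(c) - \alpha\, p \;\ge\; 0, \]
where m(c) denotes the perceived quality of moderation induced by compliance level c. In the first stage, anticipating the resulting demand D(p, c) and facing an expected fine that grows with non-compliance, the platform solves
\[ \max_{p,\,c}\; \Pi(p,c) \;=\; p\, D(p,c) \;-\; K(c) \;-\; f\,(1-c), \]
with the regulator moving first by setting f. Comparative statics on f, K, and α then capture, in reduced form, the trade-offs among regulatory pressure, compliance costs, and user price sensitivity discussed below.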
Building on foundational work in regulatory economics and platform markets (Rochet and Tirole,
2003), we propose that while higher compliance costs may reduce platforms’ likelihood of full com-
pliance, they might not deter the introduction or enhancement of parental controls. In contrast,
regulatory penalties could indirectly shape platform decisions, nudging them towards stricter adher-
ence. Moreover, we delve into how user preferences can act as additional motivators for platforms
in determining their compliance levels and parental control features.
This research offers a nuanced understanding of how multiple factors, including regulatory pres-
sures, compliance costs, and user preferences, collectively determine the strategic decisions of digital
platforms. We highlight the intricate balance that platforms strike in responding to these diverse
forces and how this balance affects the digital experience for young users.
Ultimately, by shedding light on these dynamics, we hope to provide a clearer blueprint for
policymakers and platform designers, pointing towards strategies that can lead to safer online
ecosystems for children. We trust this study will serve as a foundation for future inquiries and
policy considerations in the digital platform domain, with a particular focus on enhancing digital
safety for the next generation.

2. Literature Review
2.1. Overview of Related Literature
This literature review critically examines prior research on digital platforms, focusing particularly
on those hosting children’s content. It explores seminal works, outlining key themes, findings, and
gaps in existing scholarship, and positions our research in relation to established knowledge.
Overview of Main Themes: The review is organized around three crucial themes: Two-sided
Market Platforms & Network Externalities, Content Moderation, and Regulatory Frameworks and
Compliance. Each theme offers insights and frameworks relevant to digital platforms, serving as
foundational knowledge for this research.
Critical Reflection and Organization: The existing literature, while insightful, does not fully
address the challenges of content moderation and children’s safety on digital platforms. This review
not only synthesizes the findings of each study but also highlights the contributions and positioning
of the present study within the academic discourse.

2.2. Two-sided Market Platforms & Network Externalities


The literature on two-sided platforms and network externalities provides a foundational framework
for understanding the dynamics of digital platforms. Katz and Shapiro (1985) offer a static oligopoly
model that introduces the Fulfilled Expectations Equilibrium (FEE), but their focus does not
extend to content moderation or safeguarding children’s online experiences. This research critically
engages with and extends these foundational theories by incorporating considerations of content
moderation, children’s online safety, and user behaviour.
Our research uses a dynamic sequential approach, diverging methodologically from the static
model used by Katz and Shapiro. This dynamic approach allows for a nuanced exploration of the
interactions and strategic considerations among digital platforms, regulators, and users, offering a
comprehensive understanding of the ecosystem with a focus on children’s safety.
Shavell (1987) explores the optimal employment of nonmonetary sanctions as deterrents, empha-
sizing the associated social costs and developing a model that considers sanctions as socially costly
yet beneficial for deterrence. This work provides a baseline understanding of the strategic consid-
erations that digital platforms might undertake in response to regulatory policies.
This study extends Shavell’s insights by focusing on the unique dynamics of digital platforms,
including user preferences and content moderation strategies. It offers a nuanced understanding of
how platforms navigate compliance costs, pricing strategies, and moderation efforts in the face of
regulatory fines.
Polinsky and Shavell’s (2000) work presents a comprehensive analysis of the public enforcement of
law against corporate wrongdoers. They provide a theoretical framework for understanding regulatory
enforcement, but it does not directly address the challenges faced
by digital platforms in content moderation. Our study expands on these insights by examining
the specific context of digital platforms, offering a deeper understanding of how these platforms
respond strategically to regulatory measures while ensuring children’s online safety.
Their model, moreover, requires significant adaptation to address the complexities
of digital platform regulation and enforcement. Our work acknowledges and addresses this gap
by introducing enhancements to their model to account for the multifaceted relationships within
digital platforms, the varied nature of content to be regulated, and the imperative for effective and
efficient content moderation and children’s protection.
Chen and Png’s (2003) study investigates the welfare effects of government policies on copyright
enforcement and pricing of information goods, with a case study focusing on computer software.
However, their analysis requires extension and adaptation to be applicable to various types of
content on digital platforms, considering diverse user behaviours and characteristics of different
content types. Our research builds upon and extends Chen and Png’s work by analyzing enforce-
ment and pricing strategies for diverse content types on digital platforms, incorporating a nuanced
understanding of user motivations and behaviours.
Dellarocas’ (2003) study explores the dynamics and challenges of online feedback mechanisms
(OFMs), emphasizing their transformative impact on fostering trust, cooperation, and strategic
business operations in virtual marketplaces. However, the study does not address the varied dynam-
ics of different digital platforms, particularly those dealing with children’s content, which may have
unique feedback mechanisms and user interactions that necessitate a separate analytical framework.
The present research provides insights and guidance for understanding and implementing effec-
tive feedback mechanisms on platforms dealing with children’s content.
Rochet and Tirole’s (2003) study on two-sided markets provides a theoretical framework for
understanding platform competition within these markets, but it focuses primarily on pricing
strategies and competition and does not address the complexities of platforms’ content
and users’ interactions, which are crucial in the context of digital content for children.
Bhargava and Choudhary’s (2004) study on information intermediaries provides insights into
their pricing, versioning strategies, and product line designs. However, their analysis does not
consider the complexities of content moderation, user protection, and regulatory compliance, which
are crucial in the contemporary landscape of digital platforms. Our research builds upon and
extends Bhargava and Choudhary’s insights by considering the new challenges and regulatory
requirements of platforms.
Parker and Van Alstyne’s (2005) work on two-sided network effects and their implications for
information product design is also critical. Their model serves as a foundational piece for under-
standing relationships and transactions on digital platforms, especially those functioning as inter-
mediaries between content providers and end-users. This study advances Parker and Van Alstyne’s
work by integrating considerations related to regulatory compliance and child online safety into
the established relationship between content providers and end-users.
Anderson et al. (2014) advance our understanding of platform performance investment strategies
in the context of two-sided markets. They highlight the delicate trade-offs between investing in
superior platform performance and facilitating third-party content development, offering insights
that are crucial for digital platforms operating in a competitive environment. However, their
setup omits the role of regulatory intervention, even though neglecting mandated investments
may carry financial consequences such as fines and penalties. Our approach provides a more
comprehensive perspective on all influential players in digital platform investment decisions.
Hagiu and Wright (2015) provide a pivotal analysis of the strategic decisions faced by digital plat-
forms regarding operating as marketplaces or resellers. They scrutinize the economic implications,
strategic considerations, and trade-offs inherent in these operational models, offering invaluable
insights for digital platforms in a rapidly evolving online landscape. The authors, however, do not
model the regulator as a player who can steer platforms toward one of these options. Moreover,
growing concern about inappropriate content, and the question of who bears responsibility for it,
makes it increasingly difficult for platforms to operate as mere limited-liability marketplaces. Finally,
in the current environment, users’ demand leans toward more responsible platforms. These neglected
points are all incorporated into our research design.
Zhang et al. (2022) delve into the complexities of competition among two-sided platforms within
the sharing economy, where workers adjust their supply schedules in response to prevailing wages.
The authors scrutinize various wage schemes, including fixed and dynamic commission rates and
fixed wages, utilized by platforms to attract workers. Two omissions in their work should be noted.
First, a platform’s compliance level influences its financial capacity to pay competitive rates and
strengthen its workforce. Second, beyond monetary concerns, once regulators intervene and users
demand better content moderation, some skilled workers may prefer to work for a compliant platform
with a better reputation among users and, therefore, a brighter outlook. Our study accounts for these
human-resource aspects of the content moderation system.

2.3. Content Moderation & Children’s Safety


This section examines prior studies on content moderation and digital platform safety for children.
Liu, Yildirim, and Zhang (2022) developed a robust theoretical model elucidating the economic
incentives steering social media platforms in moderating user-generated content. Their research
shows how moderation can enlarge user bases, increase user utility, and influence platforms’
positioning in terms of content extremeness, providing pivotal insights into content moderation as a
marketing tool. Although this work is the most closely related analytical-modelling study of content
moderation and a key inspiration for the present research, it focuses on maximizing the platform’s
revenue and treats content moderation as a marketing tool rather than a safety feature. We therefore
shift the focus toward maximizing users’ utility and the effectiveness of the regulations that protect
them, envisioning an environment in which users interact with platforms equipped with the best
content moderation systems.
Roberts (2019) discusses the ethical and human considerations behind content moderation systems,
critically evaluating the effectiveness, ethical concerns, and challenges of both manual and automated
moderation aimed at protecting users from harmful content. The differing impacts of these activities
across demographic groups, however, are not thoroughly addressed. Our study seeks to offer a more
precise and reliable assessment than Roberts’ by focusing on the effects of content moderation
approaches on children as one such demographic group.
Gillespie (2018) examines the governance techniques platforms use to monitor and control user
behaviour in anonymous networks. Gillespie’s investigation offers valuable insight into platform
governance and user behaviour, but it does not examine in detail how these governance processes
may affect the quality of content moderation systems and users’ online safety. Moreover, the author
uses a qualitative, intuitive, and descriptive approach, which may not help regulators establish
precise requirements for content moderation systems. Our study models the interaction of all players
with a rigorous analytical and theoretical method. This approach can form a solid foundation for
policymakers and platform designers to use in their strategic decision-making processes.
Alcici (2021) illuminates the precarious balance that social media platforms must maintain
between content moderation and safeguarding users’ right to freedom of expression. Focusing
sharply on the implications of utilizing artificial intelligence (AI) for content moderation, Alcici
underscores the human rights obligations incumbent upon social media companies. The study takes
a legal lens with a concentration on human rights, whereas our research uses an analytical modelling
method to capture the process of strategic decision-making among players. Moreover, the appropriate
sensitivity and severity of content moderation differ across age categories. Our study expands on
Alcici’s work by addressing the trade-off between ensuring children’s rights and ensuring their safe
interaction with digital platforms.
Fagan (2020) presents a compelling model delineating the dynamics of lawmakers’ decisions
concerning platform liabilities for user-generated content and platform immunity. Exploring the
legal framework surrounding content moderation, the article provides insights into the trade-offs
and considerations lawmakers contemplate when deciding between maintaining platform immunity
and imposing platform liabilities. This study offers significant insights for regulators seeking to define
the most effective fines and content moderation system requirements. However, the author uses a
static analytical model that captures the interaction between regulators and digital platforms only;
users’ demands, expectations, and pressure on digital platforms are not incorporated. The present
study, with its two-stage sequential game, rectifies this issue and provides a broader perspective.
Lefouili and Madio (2022) embark on a meticulous investigation into the economics of platform
liability. They shed light on how the proliferation of illegal content online influences both platform
strategy and user behaviour in a digitized landscape. Their analysis has some notable limitations.
First, the authors use a one-stage game, which cannot capture the strategic interaction among
regulators, digital platforms, and users. Second, they investigate only the platform’s compliance
decision and omit the pricing decision, even though the impact of mandated content moderation
systems on pricing strategies is one of the most important concerns of policymakers, platform
designers, and users. Finally, compliance with content moderation requirements is not a binary
decision; each platform complies to the extent its resources and other considerations allow. Our
work resolves these issues.
Madio and Quinn (2023) embark on an exploration of social media platforms’ incentives to
mitigate unsafe content, devising balanced content moderation policies and advertising strategies.
Utilizing a two-sided market model, their study meticulously dissects the relationship between
users and advertisers mediated by online intermediaries, providing a granular understanding of the
implications of content moderation intensity. However, the authors consider only the relationship
between content providers and users, which is neither a holistic nor an entirely accurate picture of the
content moderation domain: regulators, digital platforms, and users are the ultimate players engaged
with one another, and content providers reach users only indirectly through the platform channel.
Moreover, in their model, users’ utility depends solely on whether content is safe or unsafe, which is
not realistic; price plays a crucial role in user utility, particularly under content moderation
regulations. Our study extends their work by addressing these points.
Dave (2020) illuminates the immediate responses by major social media platforms like Facebook,
YouTube, and Twitter as they transition towards AI and automated tools for content moderation
due to the constraints imposed by the COVID-19 pandemic.
Gagliordi’s (2020) piece sheds light on Facebook’s endeavour to enhance content moderation
through the integration of Artificial Intelligence (AI). The efficiency of detecting and removing
content that violates community standards, including hate speech and COVID-19 misinformation,
reportedly improved with AI’s assistance.
While Dave (2020) provides a snapshot of the immediate strategies adopted by platforms, the
article does not delve into the long-term implications. Moreover, both Dave (2020) and Gagliordi
(2020) make the common mistake of equating AI-driven moderation with the whole content
moderation system (CMS); AI, powerful as it is, is only one part of the CMS. In addition, their
journalistic approach, although it offers useful descriptive information about industry leaders’
responses to pressure and demand for stronger content moderation, cannot capture the strategic
decision-making process. Their work also makes evident that COVID-19 has boosted the importance
of content moderation systems in users’ demand and utility. Our study draws fruitful insights from
these leading platforms’ behaviour while emphasizing that content moderation systems are more than
just an artificial intelligence tool.
Gershgorn (2020) sheds light on the future role of AI in detecting hate speech online, as envisioned
by Facebook CEO Mark Zuckerberg. The article suggests that within five to ten years, AI tools will
significantly improve in accuracy and efficiency in understanding the nuances of language, crucial
for identifying various forms of content, including hate speech. Although Gershgorn’s article gives
us significant insight into the variety of content moderation systems and, as a result, into treating
compliance with the defined CMS requirements as a continuous level, the author again mistakenly
treats AI as the whole content moderation system.
Lomas (2017) illuminates the intricacies and hurdles Facebook encounters in moderating content.
The article unveils Facebook’s internal moderation guidelines, which have drawn criticism from
child safety charities for their ’alarming’ stipulations. Lomas’s work can be considered a pioneering
study of children’s content moderation and helps us understand what is specific about this
age category. However, as a case study of Facebook alone, its findings are not generalizable.
Our study departs from Lomas’s investigative journalism technique, which gathers insights from
internal documents and stakeholder remarks, by adopting a game-theoretical framework within a
two-stage model. This methodological change enables a methodical analysis of the strategic moves
made by authorities, platforms, and users, all with an eye toward children’s online safety.
de Keulenaar, Magalhães, and Ganesh (2023) furnish an empirical approach to the evolution
and multifaceted nature of content moderation practices on Twitter. Their study meticulously
unravels Twitter’s journey through the precarious landscape of content regulation, amidst fluctuating
public speech norms, external critiques, and crisis responses. Again, the authors employ a case study
method based on evidence from a single digital platform and analyze limited data. Our study instead
applies a comprehensive analytical modelling approach that enables generalizability.
Gorwa et al. (2020) propose an exhaustive scrutiny of algorithmic content moderation, demarcat-
ing the technical and political challenges entwined within. With platforms like Facebook, YouTube,
and Twitter leaning heavily on algorithms to navigate the sea of user-generated content, the authors
shed light on the murky waters of transparency, accountability, and understanding regarding the
workings of these algorithmic systems. Gorwa et al.’s work helps regulators define the technical
aspects of content moderation systems more accurately and suggests procedures to prevent politically
motivated or biased moderation. Again, however, content moderation systems are mistakenly treated
as if they were merely automated algorithms. The authors also use an empirical method covering
only three digital platforms, all effectively in the treatment group, with no control groups and
insufficient data for time series analysis. Our work provides a robust theoretical model of content
moderation regulation that future empirical studies can assess once enough data becomes available.

2.4. Regulatory Frameworks & Compliance


This section explores prior studies about complex dynamics between digital platforms, regulatory
frameworks, and compliance that might have illuminating insights for our research.
Tiwana, Konsynski, and Bush (2010) give a complete picture of how platforms change over time
by focusing on the complex interactions between platform design, governance, and environmental
factors. Their research shows how rapidly growing platform-centric environments change and adapt,
creating strong barriers to competition while also fostering innovation. While Tiwana et al. (2010)
offer insightful perspectives on platform evolution, their focus lies predominantly on the technical
and strategic dimensions of platform architecture and governance, neglecting the role of external
players. Regulators and users are two key players in the shaping and evolution of digital platforms.
Building upon the foundational framework provided by Tiwana et al., this research contributes by
examining how platforms strategically respond to regulatory pressures and user safety concerns. This
expanded focus and incorporation of additional dimensions provide a more comprehensive
understanding of the challenges and strategies adopted by platforms in today’s digital landscape.
Parker and Van Alstyne (2018) unveil an enlightening sequential innovation model that metic-
ulously delineates the intricate trade-offs firms encounter within business ecosystems and microe-
conomies. Their model, which sheds light on pivotal aspects of openness, intellectual property dura-
tion, and the inherent trade-offs in a platform-controlled ecosystem, offers a valuable scaffold for
comprehending the nuances of platform innovation and management. Their insightful exploration
of the interaction between platform sponsors, third-party developers, and end-users is instrumental
for our research, providing essential groundwork for investigating the dynamics among regulators,
digital platforms, and users in the context of children’s online safety. However, the Parker and
Van Alstyne (2018) study assumes an unregulated environment, and the explicit role of regulators
and their interventions is missing, even though innovation can be boosted by good policy or stifled
by bad policy. In our research, we highlight the role of regulators, exploring how regulatory
interventions can subtly yet profoundly influence platform strategies and user experiences.
Nault and Zimmermann (2019) critically analyze the advent of a two-tier Internet, envisioning a
landscape where a fast lane, anchored on fee-based prioritization, seamlessly coexists with the open
Internet. Through their lens, this arrangement is not only a potential alleviator of congestion but
also a mechanism that safeguards the Internet’s openness, stimulates innovation, and guarantees
a consistent quality of service (QoS) on the open Internet. In our research, the idea of a two-tier
Internet inspired the content moderation systems that complement the primary filtering system to
ensure content quality. Although the concepts in Nault and Zimmermann’s work and in ours differ,
the methodology and approach toward digital platform regulation are the same. Despite its
similarities and strengths, their work neglects content safety; the whole concentration is on
safeguarding openness, which is not the only thing that appeals to digital platform users, particularly
after COVID-19.
Gopal et al. (2018) make a substantial contribution to the literature by meticulously analyzing
the enhancement of network design for optimal message and idea dissemination within Enterprise
and Consumer Social Networks (ESN and CSN). They employ the Hop-Constrained Minimum
Spanning Tree (HMST) model to develop heuristic algorithms designed to improve cascade
propagation at minimal cost. This work offers crucial insights into message seeding and the
establishment of beneficial network connections to improve propagation dynamics, helping platform
designers and policymakers define the technical aspects of content moderation systems more
efficiently and helping regulators set their requirements more accurately. Their work can provide the
foundation for an integrated, standard framework for content moderation systems. In the present
study, we extend it by investigating the effects of the technological efficiency of content moderation
systems on pricing strategies and users’ behaviour.
Nan et al. (2023) conduct a thorough examination of optimal regulation strategies for user-
generated content (UGC) platforms, considering both monopoly and duopoly market structures.
Utilizing game theoretic analysis, they meticulously explore the dynamics between platforms and
government regulatory strategies, providing valuable insights into the interactions and strategic
decisions made within this context. While Nan et al. (2023) make noteworthy contributions, their
work focuses predominantly on the dynamics between governments and platforms without paying
adequate attention to users. Their study provides a crucial starting point but requires further
refinement and expansion to address how users’ preferences and demand influence digital platforms’
compliance strategies.
Vijairaghavan et al. (2021) conduct a pivotal analysis of Data Portability Regulations (DPR),
shedding light on regulations enacted by various jurisdictions to enhance competition and reduce
industry concentration by bolstering data subject choices. Their work stands as a crucial reference
for understanding the impact of DPR on the decision-making processes of data controllers (DCs)
and the subsequent market dynamics. Their study has given us significant insights into how to
model regulatory compliance. However, the authors overlook users’ utility and its pressure on, and
effects on, compliance. Moreover, in their model the compliance decision is decoupled from platforms’
pricing decisions, which makes the scenario unrealistic. Finally, as mentioned earlier, platforms can
comply partially depending on their resources and other considerations, whereas the authors treat
compliance as a binary decision. Our study extends their work by addressing these issues.

2.5. Conclusion of Literature Review


This literature review meticulously navigates studies on two-sided digital platforms, content mod-
eration, children’s online safety, and digital platform regulation and compliance, illuminating the
multifaceted dynamics and challenges within digital platforms. Despite insightful findings, the
literature presents gaps, predominantly focusing on economic and strategic aspects, often overlook-
ing in-depth discussions on content moderation and children’s online safety. Rapid technological
advancements further necessitate the updating of existing analytical frameworks and methodologies
to capture the dynamics of modern digital platforms accurately. The present research addresses
these gaps, providing a nuanced analysis of content moderation strategies and children’s online
safety while introducing innovative analytical frameworks. It extends existing theoretical frame-
works, offering insights beneficial for academics, policymakers, and practitioners in the field, thereby
making significant contributions to understanding digital platforms, content moderation, and chil-
dren’s online safety in an era of increased regulatory scrutiny.

References
[1] Alcici, Lucas Moreira. 2021. The use of artificial intelligence by social media companies for content
moderation: a look at the international human rights framework on the right to freedom of expression, in
Artificial intelligence and human rights, 1st ed., 169–179. Dykinson, S.L. (Accessed: 25 September 2023).

[2] 2020. Fraught Platform Governmentality: Anonymity, Content Moderation, and Regulatory Strategies
over Yik Yak, in Book of Anonymity, 255–274. Punctum Books. (Accessed: 25 September 2023).

[3] Anderson, Edward G., Geoffrey G. Parker, Burcu Tan. 2014. Platform Performance Investment in the
Presence of Network Externalities. Information Systems Research 25(1) 152–172.

[4] Bhargava, Hemant K., Vidyanand Choudhary. 2004. Economics of an Information Intermediary with
Aggregation Benefits. Information Systems Research 15(1) 22–36.

[5] Chen, Y.-n., I. Png. 2003. Information Goods Pricing and Copyright Enforcement: Welfare Analysis.
Information Systems Research 14(1) 107–123.

[6] Dellarocas, Chrysanthos. 2003. The Digitization of Word of Mouth: Promise and Challenges of Online
Feedback Mechanisms. Management Science 49(10) 1407–1424.

[7] Dave, P. 2020. Social Media Giants Warn of AI Content Moderation Errors, as Employees
Sent Home. World Economic Forum. Available at: https://www.weforum.org/agenda/2020/03/
social-media-giants-ai-moderation-errors-coronavirus/ (Accessed: 25 September 2023).

[8] de Keulenaar, Emillie, Magalhães, João C, Ganesh, Bharath. 2023. Modulating Moderation: A History
of Objectionability in Twitter Moderation Practices. Journal of Communication 73(3) 273–287.

[9] Fagan, Frank. 2020. Optimal social media content moderation and platform immunities. European Journal
of Law and Economics 50(3) 437–449.

[10] Gagliordi, Natalie. 2020. Facebook Says AI Enhancements Have Bolstered its Con-
tent Moderation Efforts. ZDNet. Available at: https://www.zdnet.com/article/
facebook-says-ai-enhancements-have-bolstered-its-content-moderation-efforts/ (Accessed:
25 September 2023).

[11] Gershgorn, Dave. 2020. Mark Zuckerberg Just Gave a Timeline for AI to Take
Over Detecting Internet Hate Speech. Quartz. Available at: https://qz.com/1249273/
facebook-ceo-mark-zuckerberg-says-ai-will-detect-hate-speech-in-5-10-years/ (Accessed:
25 September 2023).

[12] Gillespie, T. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions
That Shape Social Media. London: Yale University Press.

[13] Gopal, Ram D., Hooman Hidaji, Raymond A. Patterson, E. Rolland, Dmitry Zhdanov. 2018. How Much
to Share with Third Parties? User Privacy Concerns and Website Dilemmas. MIS Quarterly 42.

[14] Gorwa, Robert, Binns, Reuben, Katzenbach, Christian. 2020. Algorithmic Content Moderation: Tech-
nical and Political Challenges in the Automation of Platform Governance. Big Data & Society 7(1).

[15] Hagiu, Andrei, Julian Wright. 2015. Marketplace or Reseller? Management Science 61(1), 184–203.

[16] Holloway, Donell. 2019. Surveillance capitalism and children’s data: the Internet of toys and things for
children. Media International Australia 170(1) 27–36.

[17] Katz, M.L., C. Shapiro. 1985. Network Externalities, Competition, and Compatibility. The American
Economic Review 75(3) 424–440.

[18] Lefouili, Yassine, Leonardo Madio. 2022. The Economics of Platform Liability. European Journal of Law
and Economics 53(3) 319–351.

[19] Liu, Yi, Pinar Yildirim, Z. John Zhang. 2022. Implications of Revenue Models and Technology for
Content Moderation Strategies. Marketing Science 41(4) 831–847.

[20] Lomas, Natasha. 2017. Facebook’s Content Moderation Rules Dubbed “Alarming” by Child
Safety Charity. TechCrunch. Available at: https://techcrunch.com/2017/05/22/
facebooks-content-moderation-rules-dubbed-alarming-by-child-safety-charity/ (Accessed: 25
September 2023).

[21] Madio, Leonardo, Martin Quinn. 2023. Content Moderation and Advertising in Social Media Platforms.
Available at: https://ssrn.com/abstract=3551103 or http://dx.doi.org/10.2139/ssrn.3551103.

[22] Marwick, A., Fontaine, C., & boyd, danah. 2017. “Nobody Sees It, Nobody Gets Mad”: Social Media,
Privacy, and Personal Responsibility Among Low-SES Youth. Social Media + Society 3(2).

[23] Nan, Guofang, Ding, Ning, Li, Guangyu, Li, Zhiyong, Li, Dahui. 2023. Two-Tier Regulation Models for
the User-Generated Content Platform: A Game Theoretic Analysis. Decision Support Systems 114034.

[24] Nault, Barrie R, Zimmermann, Steffen. 2019. Balancing Openness and Prioritization in a Two-Tier
Internet. Information Systems Research 30(3), 745–763.

[25] O’Keeffe, Gwenn Schurgin, Kathleen Clarke-Pearson, Council on Communications and Media. 2011. The
impact of social media on children, adolescents, and families. Pediatrics 127(4) 800–804.

[26] Parker, Geoffrey G., Marshall W. Van Alstyne. 2005. Two-Sided Network Effects: A Theory of Infor-
mation Product Design. Management Science 51(10) 1494–1504.

[27] Parker, Geoffrey, Van Alstyne, Marshall. 2018. Innovation, Openness, and Platform Control. Manage-
ment Science 64(7), 3015–3032.

[28] Polinsky, A.M., S. Shavell. 2000. The Economic Theory of Public Enforcement of Law. Journal of
Economic Literature 38(1) 45–76.

[29] Roberts, Sarah T. 2019. Behind the Screen: Content Moderation in the Shadows of Social Media. Yale
University Press. (Accessed: 25 September 2023).

[30] Rochet, Jean-Charles, Jean Tirole. 2003. Platform competition in two-sided markets. Journal of the
European Economic Association 1(4) 990–1029.

[31] Shavell, Steven. 1987. The Optimal Use of Nonmonetary Sanctions as a Deterrent. The American
Economic Review 77(4) 584–592.

[32] Tiwana, Amrit, Benn Konsynski, Ashley A. Bush. 2010. Research Commentary-Platform Evolution:
Coevolution of Platform Architecture, Governance, and Environmental Dynamics. Information Systems
Research 21(4) 675–687.

[33] Vijairaghavan, V., B. R. Nault, H. Hidaji. 2021. Winner Take All with Data Portability. Working paper
(MS-INS-21-01102.R1), under review at Management Science.

[34] Wisniewski, Pamela, Arup Kumar Ghosh, Heng Xu, Mary Beth Rosson, John M. Carroll. 2017. Parental
Control vs. Teen Self-Regulation: Is There a Middle Ground for Mobile Online Safety? Proceedings of the
2017 ACM Conference on Computer Supported Cooperative Work and Social Computing 17, 51–69.

[35] Zhang, Chenglong, Jianqing Chen, Srinivasan Raghunathan. 2022. Two-Sided Platform Competition in
a Sharing Economy. Management Science 68(12), 8909–8932.
