
Letter from the TARMUN Team
1. Introduction to the Committee
2. Introduction to the Agenda Item: International Governance of Artificial Intelligence
2.1 AI Governance
2.2 Principles for the Ethical Use of Artificial Intelligence
2.3 Legal and Regulatory Considerations
2.4 The EU Artificial Intelligence Act
2.5 The Collingridge Dilemma
2.6 The Case for a Global AI Observatory
2.7 Interim Report: Governing AI for Humanity
3. Conclusion
4. Questions To Be Addressed
5. Further Reading and Research

Letter from the TARMUN Team

As the academic team of TARMUN, we are delighted to extend a warm welcome to you at Yeditepe University on March 6th. This occasion is especially significant as it represents the inaugural MUN conference aimed at imparting knowledge of Model UN procedures to the delegates within an academic framework.

Our discussions will take place within the Legal Committee, focusing on the pivotal agenda item of AI Governance. We are committed to ensuring a stimulating and enlightening experience for all participants. We look forward to a conference that is both productive and enjoyable, and we wish all delegates success in their deliberations.

All the best,

TARMUN Academic Team

You can contact us via mun@munpoint for your inquiries.

Note: This study guide has been written by Yasin Yıldırım.

1. Introduction to the Committee

The Legal Committee, also known as the Sixth Committee or C6, is one of the six main committees of the United Nations General Assembly.

The promotion of justice and international law, accountability and internal UN justice concerns, drug control, crime prevention, and the fight against international terrorism are among the issues assigned to the Legal Committee. Other UN agencies, not all of which report to the GA, also handle counterterrorism-related matters. The Committee also considers requests for observer status in the GA.

The reports of the many subsidiary organs, ad hoc committees, and expert bodies handling legal issues under the GA's jurisdiction are essential components of the Committee's activity. Some items are evaluated every year, while others are evaluated every three, five, or ten years. As needed, the Committee forms working groups.

The following are some of the subsidiary bodies that report through the Sixth Committee:

● International Law Commission;
● Special Committee on the Charter of the United Nations;
● United Nations Commission on International Trade Law;
● United Nations Programme of Assistance in the Teaching, Study, Dissemination and Wider Appreciation of International Law. 1

1 Ruder, N., & Aeschlimann, J. (2011). The PGA Handbook: A Practical Guide to the United Nations General Assembly.

The United Nations General Assembly has an express mandate to support the gradual development of public international law, as outlined in the United Nations Charter. Article 13 of the Charter expressly empowers the General Assembly to "initiate studies and make recommendations for the purpose of: (a) promoting international cooperation in the political field and encouraging the progressive development of international law and its codification." Following precedent, this provision has been understood as a broad power to develop new treaties on a wide range of subjects, adopt them, and recommend them to governments for signature, ratification, or accession. 2

2 United Nations. (1945). Charter of the United Nations: Together with the Statute of the International Court of Justice.

2. Introduction to the Agenda Item: International Governance of Artificial Intelligence

Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, particularly computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction. AI systems are designed to mimic cognitive functions such as problem-solving, decision-making, perception, and language understanding, often with the goal of performing tasks that would typically require human intelligence.

At its core, AI involves the development of algorithms and models that enable machines to analyze data, recognize patterns, and make informed decisions or predictions. This is often achieved through techniques such as machine learning, where algorithms are trained on large datasets to identify patterns and relationships, and deep learning, which involves neural networks with multiple layers of interconnected nodes that can extract increasingly complex features from data.
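To make this concrete, the short sketch below shows, in Python, what "training on a dataset to identify patterns" can look like in practice. It is purely illustrative: it assumes the open-source scikit-learn library and its bundled iris toy dataset, and it does not represent any particular system discussed in this guide.

# Minimal, illustrative sketch of supervised machine learning: an algorithm is
# fitted to labelled examples and then used to predict labels for unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# A small, well-known toy dataset (flower measurements and species labels).
X, y = load_iris(return_X_y=True)

# Hold out part of the data to check how well the learned patterns generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# "Training" = estimating model parameters from the labelled examples.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# "Prediction" = applying the learned patterns to inputs the model has not seen.
predictions = model.predict(X_test)
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")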

AI encompasses a broad spectrum of applications, ranging from virtual assistants and recommendation systems to autonomous vehicles and medical diagnosis tools. These applications leverage AI capabilities to automate tasks, optimize processes, and augment human capabilities, leading to improvements in efficiency, productivity, and decision-making across various domains.

AI is often described as the interdisciplinary field of study that seeks to develop intelligent systems capable of performing tasks that would typically require human intelligence. It draws upon principles and techniques from computer science, mathematics, statistics, cognitive science, neuroscience, and other related disciplines to create algorithms and models that can exhibit intelligent behavior. 3

While AI has made significant advancements in recent years, achieving human-like intelligence remains an ongoing challenge. Researchers continue to explore new methodologies, algorithms, and architectures to push the boundaries of AI capabilities and address limitations such as interpretability, scalability, and ethical concerns. As AI technologies continue to evolve, their impact on society, economy, and ethics becomes increasingly profound, necessitating careful consideration and responsible stewardship in their development and deployment.

3 Jackson, P. C. (2019). Introduction to Artificial Intelligence: Third Edition. Courier Dover Publications.

The potential benefits of AI are huge, ranging from helping the world tackle complex challenges such as climate change and major diseases to improving efficiency in the workplace. But the risks are equally great, including the use of AI to amplify disinformation, carry out cyberattacks and further entrench prejudice and injustice, not to mention apocalyptic claims that AI will surpass human intelligence. These debates – some of which are hyperbolic and binary – all take place against a backdrop of fierce geopolitical competition, where AI is valued by all but in the hands of a few, and amid the dizzying pace of technological change.

At the heart of these debates is a fundamental dilemma: how to harness the enormous potential of AI for good while minimizing the risks and ensuring equitable access to this technology. In our view, this delicate balance can only be achieved through appropriate AI governance at national, regional and global levels. Crucially, compliance with international law should be the starting point. 4

"If we are to harness the benefits of artificial intelligence and address the risks, we must all work together - governments, industry, academia and civil society - to develop the frameworks and systems that enable responsible innovation. […] We must seize the moment, in partnership, to deliver on the promise of technological advances and harness them for the common good." 5

4 De Souza Dias, T. (2024, January 2). AI governance in the age of uncertainty: international law as a starting point. Just Security. https://www.justsecurity.org/90903/ai-governance-in-the-age-of-uncertainty-international-law-as-a-starting-point/

5 UN Secretary-General António Guterres, AI for Good Global Summit, Geneva, 2019.

2.1 AI Governance

AI governance refers to the establishment of frameworks and principles to regulate the development, deployment, and use of artificial intelligence technologies. As AI becomes increasingly pervasive in society, the need for effective governance mechanisms to address ethical, legal, and societal implications becomes paramount. Understanding the historical context and evolution of AI governance frameworks provides an essential foundation for navigating the complex landscape of AI regulation. Ethical principles serve as foundational guidelines for AI governance, guiding decisions and actions to ensure that AI technologies are developed and deployed responsibly.

2.2 Principles for the Ethical Use of Artificial Intelligence

Understanding the ethical principles guiding the use of artificial intelligence (AI) systems is essential. Several of these principles have been adopted by the United Nations for the use of AI systems. They provide a framework for international AI governance, outlining the challenges to be addressed and the standards to be followed.

Do No Harm: AI systems should not cause harm to individuals or groups, and their lifecycle should align with the United Nations Charter's purposes, principles, and commitments. They should be designed, developed, deployed, and operated in a way that respects human rights and fundamental freedoms. Monitoring the intended and unintended impacts of AI systems is crucial to prevent harm, including violations of human rights and freedoms.

Defined purpose, necessity and proportionality: The use of AI systems should be justified, appropriate, and proportionate to achieve legitimate aims, in line with United Nations system organizations' mandates and governing instruments, rules, regulations, and procedures.

Safety and security: Safety and security risks should be identified, addressed and mitigated throughout the lifecycle of AI systems to prevent and/or limit potential or actual harm to people, the environment and ecosystems wherever possible. Safe and secure AI systems should be enabled by robust frameworks.

Fairness and non-discrimination: International law mandates that United Nations system organizations promote fairness, ensure the equal distribution of benefits, risks, and costs, and prevent bias, discrimination, and stigmatization, while AI systems should not deceive users or impair human rights and fundamental freedoms.

Right to privacy, data protection and data governance: The use of AI systems requires respect for individual privacy and data subjects' rights, ensuring adequate data protection frameworks and governance mechanisms aligned with the United Nations Personal Data Protection and Privacy Principles to maintain data integrity.

Transparency and explainability: United Nations system organizations must ensure transparency and explainability of AI systems throughout their lifecycle and decision-making processes. Technical explainability means that decisions made by AI systems can be understood and traced by humans. Individuals should be informed about decisions affecting their rights, fundamental freedoms, entitlements, services, or benefits, and have access to the reasons and logic involved.

Responsibility and accountability: The United Nations should establish impact assessment mechanisms and legal frameworks to ensure accountability for the use of AI systems throughout their lifecycle. These mechanisms should include whistle-blower protection and ethical and legal responsibility for AI-based decisions. The organization should investigate and take appropriate action in response to harms caused by AI systems, fostering shared knowledge resources and capacities.

Inclusion and participation: A participatory approach is crucial in identifying underlying assumptions and risks in AI systems, involving stakeholders in the process of defining their purpose, determining benefits, harms, and adverse impacts, and implementing prevention and mitigation measures. 6

6 Principles for the ethical use of artificial intelligence in the United Nations system | United Nations - CEB. (n.d.). https://unsceb.org/principles-ethical-use-artificial-intelligence-united-nations-system

2.3 Legal and Regulatory Considerations

Legal frameworks for AI governance vary across jurisdictions, encompassing a wide range of laws, regulations, and policies aimed at addressing issues such as data privacy, cybersecurity, and algorithmic transparency. Challenges in AI regulation include balancing innovation with risk mitigation, ensuring accountability and transparency, and addressing ethical concerns in AI development and deployment.

2.4 The EU Artificial Intelligence Act

The AI Act is a proposed European law on artificial intelligence (AI) – the first comprehensive law on AI by a major regulator. The Act aims to regulate the development, deployment, and use of AI systems within the EU. It covers both public and private sector actors and applies to AI systems placed on the EU market or used within the EU, regardless of where they are developed.

The Act adopts a risk-based approach to AI regulation, categorizing AI systems into four risk categories: unacceptable risk, high risk, limited risk, and minimal risk. The regulatory requirements vary depending on the risk level of the AI system, with stricter obligations imposed on high-risk systems.

High-risk AI systems include those used in critical infrastructure and in sectors such as healthcare, transportation, and law enforcement, as well as AI systems with potential safety, security, or fundamental rights implications. High-risk AI systems are subject to mandatory requirements, including data quality, transparency, documentation, human oversight, robustness, and accuracy. 7

7 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts. EUR-Lex, 52021PC0206. (n.d.). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206
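As an illustration only, and not the legal text of the Act, the sketch below models the four-tier, risk-based logic described above as a simple Python data structure. The tier names follow the paragraph above; the example obligations and the classify_system helper are hypothetical simplifications introduced here for clarity.

# Illustrative sketch of a risk-based classification scheme inspired by the
# EU AI Act's four tiers. The obligations listed and classify_system() are
# hypothetical simplifications, not the Act's actual legal requirements.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"   # prohibited practices
    HIGH = "high risk"                   # strict mandatory requirements
    LIMITED = "limited risk"             # mainly transparency duties
    MINIMAL = "minimal risk"             # largely unregulated

# Example (simplified) obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["data quality", "documentation", "human oversight",
                    "robustness and accuracy", "transparency"],
    RiskTier.LIMITED: ["disclose that users are interacting with an AI system"],
    RiskTier.MINIMAL: ["no additional obligations"],
}

def classify_system(use_case: str) -> RiskTier:
    """Toy helper mapping a described use case to a risk tier."""
    prohibited = {"social scoring by public authorities", "manipulative techniques"}
    high_risk_domains = {"healthcare", "transportation", "law enforcement"}
    if use_case in prohibited:
        return RiskTier.UNACCEPTABLE
    if use_case in high_risk_domains:
        return RiskTier.HIGH
    if use_case == "chatbot":
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

tier = classify_system("law enforcement")
print(tier.value, "->", OBLIGATIONS[tier])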

The Act prohibits certain AI practices that pose significant risks to individuals' rights, safety, and dignity. These prohibited practices include AI systems that manipulate human behavior in a deceptive manner, exploit vulnerabilities of specific groups, or enable social scoring by public authorities.

It is important to note that the EU Artificial Intelligence Act is still in the proposal stage and subject to review and approval by the European Parliament and the Council of the European Union. Once adopted, the Act is expected to significantly impact the development and deployment of AI technologies within the EU, setting a precedent for AI regulation globally. 8

"Artificial intelligence is already changing our everyday lives. And this is just the beginning. Used wisely and widely, AI promises huge benefits to our economy and society. Therefore, I very much welcome today's political agreement by the European Parliament and the Council on the Artificial Intelligence Act. The EU's AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era. By focusing regulation on identifiable risks, today's agreement will foster responsible innovation in Europe. By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI." 9

8 Neuwirth, R. J. (2022). The EU Artificial Intelligence Act. https://doi.org/10.4324/9781003319436

9 Ursula von der Leyen, President of the European Commission. The Commission welcomes political agreement on AI Act. (2023, December 9). European Commission. https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

2.5 The Collingridge Dilemma

The Collingridge dilemma refers to a concept in technology governance that highlights the challenges associated with regulating emerging technologies, particularly in their early stages of development. The dilemma holds that it is difficult to control the societal impacts of a technology once it has been widely adopted, yet it is also challenging to regulate it effectively during its initial stages of development, when its impacts are uncertain and difficult to anticipate. This concept has significant implications for AI governance, as AI technologies continue to evolve rapidly and their societal impacts become increasingly pronounced. 10

In the context of AI governance, the Collingridge dilemma underscores the importance of adopting proactive and adaptive regulatory approaches that balance innovation with risk mitigation. On one hand, delaying regulatory intervention until the impacts of AI technologies become more evident risks allowing harmful consequences to emerge unchecked. On the other hand, implementing overly restrictive regulations too early in the development process may stifle innovation and impede the potential benefits of AI.

10 Chinen, M. (2023). The International Governance of Artificial Intelligence. https://doi.org/10.4337/9781800379220

2.6 The Case for a Global AI Observatory

"The Case for a Global AI Observatory (GAIO), 2023" describes the urgent need to establish an international platform dedicated to monitoring and analyzing developments in the field of artificial intelligence (AI) on a global scale. Written by leading experts in AI governance and policy, the report emphasizes the rapid expansion of AI technologies and their profound impact on societies worldwide, and it highlights the need for improved oversight and coordination.

The report begins by highlighting the exponential growth of AI technologies in various sectors, from healthcare and finance to transportation and education. It underlines the transformative potential of AI in driving innovation, increasing productivity and tackling complex societal challenges. However, it also warns of the risks and uncertainties associated with unchecked AI development, including ethical concerns, bias and discrimination, and geopolitical tensions.

Against this backdrop, the report argues for the establishment of GAIO as a dedicated platform for monitoring, analyzing and disseminating information on AI developments on a global scale. GAIO would serve as a central repository of data, insights and best practices and provide valuable resources for policymakers, researchers, industry stakeholders and civil society organizations for decision-making and policy formulation.

Key objectives of GAIO include:

Monitoring AI trends: GAIO would track developments in AI research, innovation and adoption across different regions and sectors and provide real-time updates on new technologies, applications and trends.

Impact and risk assessment: GAIO would analyze the societal, economic and ethical impacts of AI technologies, including their potential risks and benefits, to enable evidence-based policy decisions and regulatory action.

Promoting transparency and accountability: GAIO would promote transparency and accountability in the development and deployment of AI by supporting open data sharing, facilitating peer reviews, and promoting best practices in AI governance.

Facilitating international cooperation: GAIO would facilitate international cooperation and knowledge sharing on AI governance, allowing countries to learn from each other's experiences, harmonize standards, and address cross-border challenges.

Empowering stakeholders: GAIO would provide stakeholders, including policymakers, researchers, industry representatives and civil society organizations, with the information and tools they need to effectively engage in AI governance processes and shape the future of AI in a responsible and inclusive manner.

The report concludes with a call to governments, international organizations and other stakeholders to support the establishment of GAIO as a crucial step towards improving the global governance of AI and ensuring that AI technologies are developed and deployed in a way that benefits all of humanity. 11

11 Carnegie Council for Ethics in International Affairs. (2023, July 6). The Case for a Global AI Observatory (GAIO), 2023. https://www.carnegiecouncil.org/media/article/the-case-for-a-global-ai-observatory-gaio-2023

2.7 Interim Report: Governing AI for Humanity

The interim report "Governing AI for Humanity" provides an overview of progress and challenges in the governance of artificial intelligence (AI) through December 2023. The report was written by an international panel of experts and provides a snapshot of current efforts to create an effective AI governance framework that upholds ethical principles, protects human rights and promotes societal well-being.

Policymakers, industry stakeholders and civil society organizations are increasingly recognizing the importance of AI governance in addressing the ethical, social and economic impacts of AI technologies. However, translating this recognition into concrete regulatory measures and international cooperation remains a major challenge. The report identifies a variety of approaches to AI governance in different countries and regions, reflecting different cultural, political and regulatory contexts. While some countries have taken proactive measures to regulate AI technologies, others are lagging behind, highlighting the need for greater harmonization and coordination at the international level.

The report highlights emerging best practices and regulatory models for AI governance, including risk-based approaches, human rights frameworks and stakeholder engagement mechanisms. These models provide valuable insights into effective strategies for overcoming the complex challenges posed by AI technologies, while promoting innovation and competitiveness. Despite progress in the development of AI governance frameworks, challenges remain in their implementation and enforcement. These challenges include the lack of technical expertise, resource scarcity and the rapid pace of technological change outpacing regulatory responses.

The report also stresses the importance of international cooperation and coordination in AI governance to address cross-border challenges and ensure consistency and interoperability of different regulatory regimes. Joint efforts are essential to share best practices, harmonize standards and address emerging issues such as AI-driven disinformation and geopolitical tensions.

The report recommends increased collaboration between governments, international organizations, industry stakeholders, academia and civil society organizations to develop and implement effective frameworks for AI that balance innovation and risk mitigation. There is a need for capacity building and knowledge sharing initiatives to support policymakers and regulators in developing AI governance expertise and implementing best practices. Training programs, workshops and knowledge-sharing platforms can facilitate peer learning and promote the adoption of effective regulatory approaches.

Robust monitoring and evaluation mechanisms are essential to assess the effectiveness of AI governance frameworks and identify areas for improvement. Regular evaluations, stakeholder consultations and impact assessments can help ensure that regulatory interventions are evidence-based and responsive to evolving challenges. Promoting ethical AI design and responsible innovation is crucial to address concerns about bias, discrimination and misuse of AI technologies. Ethical guidelines, certification schemes and ethical impact assessments can help incentivize responsible behavior by AI developers and users.

The interim report "Governing AI for Humanity" provides a comprehensive overview of the progress and challenges in AI governance and sets the stage for further reflection and action in the coming years. As AI technologies continue to advance and permeate all aspects of society, it is imperative that policymakers, industry stakeholders and civil society organizations work together to ensure that AI is developed and deployed in a way that serves the best interests of humanity.

3. Conclusion

International law plays a central role in the governance of AI. It provides states with a common vocabulary and greater clarity, predictability and trust in tackling this global and complex challenge. International rules and principles already apply to AI technologies: they can be interpreted to accommodate the different uses and applications of AI technologies by different actors around the globe. The task of interpreting international law in the context of AI is not easy and requires a joint effort that brings together different stakeholders and areas of expertise.

At this stage, there are more questions than answers. However, given the rapid development of AI and the risk of it overtaking any regulation, it is clear that AI governance should be flexible and dynamic, covering all phases of AI development. While this flexibility lies in the universality of international law, states and other stakeholders still need to think together about how it can be applied in practice, including through existing or new processes, forums or institutions. This is where the discussion on global AI governance should start.

4. Questions To Be Addressed

1: What are the current challenges and risks associated with the global proliferation of AI technologies, and how can international governance frameworks address them effectively?

2: What principles should underpin international governance frameworks for AI to ensure that they uphold human rights, promote ethical standards, and foster global cooperation?

3: How can international governance frameworks address the cross-border implications of AI technologies, including issues related to data protection, privacy, and cybersecurity?

4: How can international governance frameworks adapt to the rapid pace of technological innovation and emerging ethical challenges in the field of AI?

5: What role should industry stakeholders, civil society organizations, and academia play in shaping international governance frameworks for AI?

6: What mechanisms should be put in place to ensure transparency, accountability, and oversight in AI development and deployment at the international level?

7: How can international governance frameworks promote inclusivity and equitable access to AI technologies, particularly for marginalized communities and developing countries?

5. Further Reading and Research

The Oxford Process on International Law Protections in Cyberspace

https://www.reuters.com/technology/united-nations-creates-advisory-body-address-ai-governance-2023-10-26/

https://ec.europa.eu/commission/presscorner/detail/en/ip_23_6473

https://academic.oup.com/isr/article/25/3/viad040/7259354

https://www.un.org/en/ai-advisory-body

https://aiindex.stanford.edu/report/

https://www.governance.ai

https://www.aisafetysummit.gov.uk

https://initiatives.weforum.org/ai-governance-alliance/home

https://unu.edu/cpr/blog-post/us-executive-order-ai-takeaways-global-ai-governance

Steps That You Need To Follow

1. Prepare your opening speech


o Read the study guide.
o Research your country.
o Finally, write your opening speech (maximum of one minute).

Sample Opening Speech

Honorable chair and fellow delegates, the delegation of China stands before
this august assembly to address the paramount issue of AI Governance, a
cornerstone for ensuring the ethical development, deployment, and utilization
of Artificial Intelligence. As we navigate through the digital revolution, the
significance of establishing robust, equitable, and universally applicable
regulatory frameworks cannot be overstressed. China, home to over a billion
individuals, has witnessed firsthand the transformative power of AI, with more
than 60% of our industries integrating AI technologies, thereby enhancing
efficiency, but also raising substantial ethical and security concerns.
Recognizing the dual nature of AI, where innovation must be balanced with
accountability, China proposes a multilateral approach to governance. This
includes the establishment of international standards for AI ethics,
transparency, and data protection, coupled with a mechanism for global
cooperation in AI research and development, ensuring that the benefits of AI
are shared broadly and do not exacerbate global inequalities. By fostering an
environment of collaboration rather than competition, we can harness the full
potential of AI to address humanity’s most pressing challenges while
safeguarding the rights and dignity of all individuals.
2. Sample Speech From YouTube

https://www.youtube.com/watch?v=twPsPZfVPmk

3. Prepare Your General Speaker’s List Speech (GSL)

GSL is a platform where delegates take the floor and give speeches about the agenda item in
general. You can talk about:

• Topics
• Solutions
• Facts & Information
• Statistics
• Statements
• Previous and possible or future policies
• Previous and possible treaties

Sample GSL Speech


Honourable Chair and Fellow delegates,
As the delegation of China, we want to talk about how important Artificial
Intelligence, or AI, is for our world. In China, we use AI to make many things better,
like schools and hospitals. But we know we must be careful with it too.
We think the best way to handle AI is by making sure it's used in a good and safe
way. We want to work with other countries to make rules that everyone can follow.
This way, AI can help everyone without causing problems.
We're asking everyone here to help make these rules together. If we do this, we can
make sure AI is used to make the world better for all of us.
Thank you.
4. Get Ready for Motions!
Motions are usually derived from the Questions To Be Addressed section above!
Sample Motion!
As the delegate of China, we would like to propose a motion to discuss the main concerns and risks of using AI worldwide in a moderated caucus. Individual speaking time is 1 minute, and total time is 10 minutes.

