
CSB IAS ACADEMY

-------------------------------------------------------------------------------------------------------------------------------
TOPIC OF THE DAY (DATE: 03.11.2023)
AI SAFETY SUMMIT: BLETCHLEY DECLARATION
WHY IN NEWS?
Recently, the Artificial Intelligence (AI) Safety Summit was held at Bletchley Park, near Milton Keynes, United Kingdom, on 1–2 November 2023.
Global AI Summit London – Key Highlights
• The Global AI Summit 2023 was convened by British Prime Minister Rishi Sunak at Bletchley Park. Early research on computing and AI was pioneered at Bletchley Park by Alan Turing, who is widely considered the "father of AI". Turing and his team of mathematicians helped crack "Enigma", a German cipher, during World War II, giving the Allies a huge advantage in their military operations.
• It is the first-ever global summit on artificial intelligence.
• Key emphasis was placed on regulating "Frontier AI". The summit aimed to create a framework for mitigating AI risks while maximising its potential, and resulted in the "Bletchley Declaration".
What is Frontier AI?
o "Frontier AI" is defined as highly capable foundation generative AI models that could possess dangerous capabilities posing severe risks to public safety, for example when combined with robotics, 3D printing, and the Internet of Things.
Bletchley Declaration
• The central objective of the Bletchley Declaration is to address risks and responsibilities
associated with frontier AI in a comprehensive and collaborative manner. The document
emphasizes the necessity of aligning AI systems with human intent and urges a deeper
exploration of AI’s full capabilities.
• Signatory countries: Australia, Brazil, Canada, Chile, China, France, Germany, India, Indonesia, Ireland, Israel, Italy, Japan, Kenya, Saudi Arabia, the Netherlands, Nigeria, the Philippines, the Republic of Korea, Rwanda, Singapore, Spain, Switzerland, Turkey, Ukraine, the United Arab Emirates, the United Kingdom of Great Britain and Northern Ireland, the United States of America, and the European Union.
• The Declaration fulfils key summit objectives by establishing shared agreement and responsibility on the risks and opportunities of frontier AI, and a forward process for international collaboration on frontier AI safety and research, particularly through greater scientific cooperation.
• The Declaration sets out agreement that there is “potential for serious, even catastrophic, harm,
either deliberate or unintentional, stemming from the most significant capabilities of
these AI models.” Countries also noted the risks beyond frontier AI, including bias and privacy.
• The Declaration states that these risks are "best addressed through international cooperation", and the countries agreed on a forward process for international collaboration on frontier AI safety.

Phone No: 9966436874, 8374232308


India at AI Summit
• Union Minister of State Rajeev Chandrasekhar, representing the Government of India, stressed the importance of international conversations on AI.
• He suggested a sustained approach to regulating technology, driven by a coalition of countries to
prevent innovation from outpacing regulation.
• India currently chairs the Global Partnership on AI (GPAI), a multi-stakeholder initiative founded by 15 governments in 2020.
Need of Global AI Governance
• Lack of laws to regulate data scraping: Web scrapers for generative AI gather data to train the models, and this data collection needs regulation. Currently, there is no uniform global law to regulate it.
• Dominance of The AI Big Three: China, the European Union (EU) and the US are shaping the new
global order of governance, development of AI and the data-driven digital economy in support of
their interests.
• At an industry level, the AI Big Three host the headquarters of the top 200 most influential digital technology companies worldwide, and they shape current industry-led global AI governance.
• Widening social disparities: Today, AI development is concentrated in large digital corporations. The concentration of AI expertise in a few companies and nations could exacerbate global inequalities and widen digital divides.
• Cybercrime: The potential risks of AI include online harassment, hate speech and abuse, and threats to children's safety and privacy.
• Curbs on individual rights: The risks posed by AI to freedom of expression include, among others, excessive content blocking and restriction, and opaque dissemination of information.
• Exclusionary AI governance frameworks: Current transnational AI governance frameworks do not adequately consider the perspectives of the Global South.
• Without active participation in the multidimensional global AI governance discourse, countries in
the Global South will likely find it challenging to limit the harm caused by AI-based disruption.
Other Initiatives at the International Level to Regulate AI
• US: "Blueprint for an AI Bill of Rights" released by the US government.
• European Union: European Artificial Intelligence Board, proposed under the EU AI Act.
• G7 Hiroshima AI Process: An effort to determine a way forward to regulate artificial intelligence (AI). Under it, the G7 published guiding principles and an 11-point voluntary code of conduct to "promote safe, secure, and trustworthy AI worldwide".
Steps taken by Government of India to regulate AI
• National Strategy for Artificial Intelligence: Issued by NITI Aayog in 2018, it had a chapter dedicated to responsible AI. In 2021, NITI Aayog also issued a paper, "Principles for Responsible AI".
• Global Framework on Expansion of "Ethical" AI: Emphasised by India during the B20 meeting. This implies the establishment of a regulatory body to oversee the responsible use of AI, akin to international bodies for nuclear non-proliferation.
• G20 Meeting: In the recently concluded meeting, India suggested international collaboration to develop a framework for responsible, human-centric AI.

