
ISTANBUL TECHNICAL UNIVERSITY « GRADUATE SCHOOL

HOW TO GOVERN MILITARY AI: ON THE GLOBAL GOVERNANCE OF


ARTIFICIAL INTELLIGENCE FROM AN INTERNATIONAL SECURITY
PERSPECTIVE

M.Sc. THESIS

Onur TÜRK

Department of Science, Technology and Society

Science, Technology and Society Programme

JULY 2021
ISTANBUL TECHNICAL UNIVERSITY « GRADUATE SCHOOL

HOW TO GOVERN MILITARY AI: ON THE GLOBAL GOVERNANCE OF


ARTIFICIAL INTELLIGENCE FROM AN INTERNATIONAL SECURITY
PERSPECTIVE

M.Sc. THESIS

Onur TÜRK
(422181015)

Department of Science, Technology and Society

Science, Technology and Society Programme

Thesis Advisor: Assoc. Prof. Dr. Emine Aslı ÇALKIVİK

JULY 2021
İSTANBUL TEKNİK ÜNİVERSİTESİ « LİSANSÜSTÜ EĞİTİM ENSTİTÜSÜ

ASKERİ YAPAY ZEKA NASIL YÖNETİLMELİ: ULUSLARARASI


GÜVENLİK PERSPEKTİFİNDEN YAPAY ZEKANIN KÜRESEL YÖNETİMİ

YÜKSEK LİSANS TEZİ

Onur TÜRK
(422181015)

Bilim, Teknoloji ve Toplum Ana Bilim Dalı

Bilim, Teknoloji ve Toplum Yüksek Lisans Programı

Tez Danışmanı: Doç. Dr. Emine Aslı ÇALKIVİK

TEMMUZ 2021
Onur Türk, an M.Sc. student of the ITU Graduate School with student ID 422181015,
successfully defended the thesis/dissertation entitled “HOW TO GOVERN
MILITARY AI: ON THE GLOBAL GOVERNANCE OF ARTIFICIAL
INTELLIGENCE FROM AN INTERNATIONAL SECURITY PERSPECTIVE”,
which he prepared after fulfilling the requirements specified in the associated
legislation, before the jury whose signatures are below.

Thesis Advisor : Assoc. Prof. Dr. Emine Aslı ÇALKIVİK .............................


Istanbul Technical University

Jury Members : Assoc. Prof. Dr. Aslı ÖĞÜT ERBİL .............................


Istanbul Technical University
Assoc. Prof. Dr. Ahmet Salih BIÇAKÇI ............................
Kadir Has University

Date of Submission : 30 June 2021


Date of Defense : 12 July 2021

To my family,

FOREWORD

I want to thank Claudio Palestini and Tacan İldem from NATO, Kate Saslow from
Stiftung Neue Verantwortung and Nicholas Davis from World Economic Forum for
their help and input during the ideation phase of this thesis.
I would like to express my utmost gratitude to my supervisor Assoc. Prof. Dr. Emine
Aslı Çalkıvik for her insightful comments, patience, and encouragement. This thesis
would not have been completed without her guidance.
I feel deeply indebted to Assoc. Prof. Dr. Aslı Öğüt Erbil for her support and guidance
both for this thesis and throughout the programme, as well as to Prof. Dr. Aydan
Turanlı for the elegantly designed programme that helped us understand science and
technology through the STS perspective.
I am profoundly grateful to my father, my mother, my sisters and my brother for their
unconditional support and love.

June 2021 Onur TÜRK

TABLE OF CONTENTS

Page

FOREWORD ............................................................................................................. ix
TABLE OF CONTENTS.......................................................................................... xi
ABBREVIATIONS ................................................................................................. xiii
LIST OF TABLES ................................................................................................... xv
LIST OF FIGURES ............................................................................................... xvii
SUMMARY ............................................................................................................. xix
ÖZET ............................................................................................................... xxi
1. INTRODUCTION .................................................................................................. 1
1.1 Motivation and Research Questions ................................................................... 2
1.2 Literature Review and Contribution of This Thesis ........................................... 4
1.3 Theoretical Framework ...................................................................................... 8
1.4 Research Methodology ..................................................................................... 10
2. IDENTIFYING THE PROBLEM ...................................................................... 15
2.1 Definitional Ambiguity of AI ........................................................................... 15
2.1.1 Defining intelligence from psychological and technical perspective ........... 16
2.1.2 Defining artificial intelligence...................................................................... 18
2.1.2.1 Academic definitions of AI ................................................................ 19
2.1.2.2 International organization’s definition of AI ..................................... 22
2.1.2.3 Government’s definition of AI ........................................................... 26
2.1.2.4 Summarizing AI definitions ............................................................... 29
2.1.3 Building block I: It is hard to find a common definition for AI ................... 31
2.2 Ethical Issues with AI in Civilian Use ............................................................. 32
2.2.1 Automated decisions .................................................................................... 32
2.2.1.1 Explainable AI ................................................................................... 33
2.2.1.2 Human-in-the-loop systems ............................................................... 34
2.2.2 Behavior prediction ...................................................................................... 36
2.2.2.1 Discriminatory predictions ................................................................. 35
2.2.2.2 Unfair predictions............................................................................... 37
2.2.3 Data governance ........................................................................................... 37
2.2.4 Building block II: AI is ethically problematic.............................................. 38
2.3 Threats of AI from the Individual, National and International Perspectives ... 40
2.3.1 Malicious use of AI ...................................................................................... 41
2.3.1.1 Digital security ................................................................................... 41
2.3.1.2 Physical security................................................................................. 42
2.3.1.3 Political security ................................................................................. 43
2.3.2 Lethal autonomous weapon systems ............................................................ 45
2.3.2.1 Military advantages of AWS .............................................................. 46
2.3.2.2 Technical issues with autonomous weapons ...................................... 47
2.3.2.3 Legal issues with AWS ...................................................................... 48
2.3.2.4 Ethical issues with LAWS ................................................................. 54
2.3.3 The AI arms race .......................................................................................... 55
2.3.3.1 United States ...................................................................................... 56

2.3.3.2 European Union.................................................................................. 57
2.3.3.3 China .................................................................................................. 59
2.3.3.4 Russia ................................................................................................. 60
2.3.4 Building block III: AI will dramatically impact global security .................. 61
2.4 Existing Governance Frameworks on AI ......................................................... 62
2.4.1 Ethical frameworks ....................................................................................... 64
2.4.2 Arms control agreements .............................................................................. 67
2.4.3 Banning lethal autonomous weapon systems ............................................... 70
2.4.4 Building block IV: There is a significant gap in the global governance of AI
............................................................................................................................... 73
2.5 Putting It All Together...................................................................................... 74
3. SCIENCE AND POLICY INTERFACE ........................................................... 77
3.1 Boundary Organization Concept: Definition and New Concepts .................... 78
3.1.1 Hybrid management ................................................................................... 81
3.1.2 Landscape of tensions ................................................................................ 82
3.1.3 Boundary chains ......................................................................................... 82
3.2 Boundary organizations in practice .................................................................. 83
3.3 IPCC: An Illustrative Case Study ..................................................................... 85
3.3.1 Structure of IPCC ......................................................................................... 89
3.3.2 How Does IPCC Produce Reports? .............................................................. 91
3.3.3 Who writes IPCC reports? ............................................................................ 93
3.3.4 Impact of IPCC’s work ................................................................................. 94
3.4.5 Criticisms of IPCC ....................................................................................... 95
4. CONCLUDING DISCUSSION ........................................................................... 97
REFERENCES ....................................................................................................... 107
CURRICULUM VITAE ........................................................................................ 120

ABBREVIATIONS

AI : Artificial Intelligence
IEEE : Institute of Electrical and Electronics Engineers
ANI : Artificial Narrow Intelligence
AGI : Artificial General Intelligence
LAWS : Lethal Autonomous Weapons Systems
UN : United Nations
IPCC : Intergovernmental Panel on Climate Change
STS : Science and Technology Studies
ML : Machine Learning
CNN : Convolutional Neural Networks
NATO : North Atlantic Treaty Organization
OECD : Organization for Economic Co-operation and Development
EC : European Commission
HLEG : High-Level Expert Group on AI
WEF : World Economic Forum
ISO : International Organization for Standardization
HITL : Human in the Loop
XAI : Explainable AI
AWS : Autonomous Weapons Systems
IHL : International Humanitarian Law
ICRC : International Committee of the Red Cross
CCW : Convention on Certain Conventional Weapons
ICT : Information and Communications Technology
UNFCCC : United Nations Framework Convention on Climate Change
SBSTA : Subsidiary Body for Scientific and Technological Advice

LIST OF TABLES

Page

Table 2.1 : AI domains and subdomains constituting operational definition…..24


Table 2.2 : Summary of examined definitions of AI…………….…….…….....31
Table 2.3 : Lethal AI arms race by numbers…………………….…….………..56
Table 2.4 : Key military technology profiles……………….………..………....68
Table 3.1 : Number of authors by responsibility and region in the special report
on climate change and land…………………..….……………..….....94

LIST OF FIGURES

Page

Figure 2.1 : European public knowledge about consumer data collection and
sharing……………………………………….………………………38
Figure 3.1 : Organizational structure of IPCC…………………………………..90
Figure 3.2 : Report production process of IPCC………………………………...92

HOW TO GOVERN MILITARY AI: ON THE GLOBAL GOVERNANCE OF
ARTIFICIAL INTELLIGENCE FROM AN INTERNATIONAL SECURITY
PERSPECTIVE

SUMMARY

The growing capabilities of artificial intelligence have made it feasible to deploy the
technology as part of military systems. So much so that more than forty-five
governments have published AI development plans, promising striking amounts
of investment in the military and commercial development of AI. Indeed, AI-enabled
military systems promise a dramatic competitive edge while raising grave
humanitarian concerns, which leads to opposing views on how to govern them. This thesis
summarizes existing governance frameworks to identify the governance gap, shows
the underlying causes of this gap by analyzing problematic aspects of AI systems, and
proposes a new way of thinking about how to govern AI on the international level. It
is argued that AI produces ethically problematic results even in non-military fields,
so implementing these technologies in military settings without proper regulation could
lead to catastrophic scenarios. It is shown that AI poses digital, physical, and
political security threats and that current international laws are limited in addressing
these problems. This thesis categorizes current policies (and policy proposals) for the
global governance of AI and analyzes their advantages and shortcomings. Its main
argument is that current governance works and proposals on AI are based on
positions rather than on commonly agreed upon, policy-relevant scientific knowledge. This
thesis focuses on the co-production of knowledge for better-informed policymaking
and argues that it is this lack of a science-policy interface on AI that prevents finding
common ground between stakeholders. As both the social and the technical literature on AI
has grown drastically, it will be more helpful to employ the organizational structure
and tools that the boundary organization concept offers, bringing experts, governments,
international organizations, corporations, and NGOs together to perform a policy-relevant
yet policy-neutral scientific review of AI that illuminates the governance path ahead.
Rather than looking for policy-specific inspiration in previous treaties, forming a
boundary organization on AI will allow technical experts, ethicists, decision makers,
civil society, and business to interact and co-produce relevant knowledge that would
eventually lead to commonly accepted policies. An illustrative case study was built on
the IPCC to show how an organization was formed to bridge the gap between science
and policy and to facilitate knowledge-driven policymaking in climate governance.
This thesis concludes by arguing that an IPCC-type boundary organization is needed
for the global governance of AI to bridge the gap between social scientists, technical
experts, and decision-makers. Such a platform is needed to ensure that policy-relevant
knowledge is produced, that AI systems work in accordance with international
humanitarian law, and that the scientific development of the field is encouraged.

ASKERİ YAPAY ZEKA NASIL YÖNETİLMELİ: ULUSLARARASI
GÜVENLİK PERSPEKTİFİNDEN YAPAY ZEKANIN KÜRESEL
YÖNETİMİ

ÖZET

The potential effects of artificial intelligence on humanity have long been one of the
captivating subjects of science fiction cinema and literature. Today, as smart devices
have become cheaper and more widespread and every individual has become both a
producer of data and a daily user of the algorithms trained on that data, the possibility
of science fiction scenarios coming true is closer than ever. Lethal autonomous robots,
among the most frightening actors of that dystopian future, are now part of serious
political debates. These systems, commonly referred to as lethal autonomous weapon
systems, have become technically possible and politically and financially feasible.
Indeed, major states such as China, Russia, and the United States have announced that
they will add drones with autonomous flight and autonomous firing capabilities to their
inventories by 2035. Plans to rapidly integrate artificial intelligence into the military
domain without commonly agreed regulation have provoked reactions, especially in
the form of opposition campaigns by civil society and international organizations.
This thesis aims to present a framework for the international governance of artificial
intelligence. To that end, it first analyzes the existing problematic aspects of AI and
the governance and regulation frameworks proposed for it. Foremost among the
problematic aspects of AI is definitional ambiguity. As a broad and constantly evolving
technical field, AI has no commonly agreed definition. Policy-oriented documents
define AI by analogy to human qualities, whereas definitions in scientific documents
describe AI in terms of functionality and technical capabilities. Because political
definitions present AI as a technical manifestation of human intelligence, they regard
artificial superintelligence as the end point of the technology's development. Academic
definitions, by contrast, refer to the techniques in use today and define an artificial
“narrow” intelligence: systems that can autonomously accomplish limited, specific
goals in a bounded environment without human intervention. The gulf between these
definitional methodologies prevents the two sides from speaking a common language,
and until such a common language is found, decision-makers who wish to regulate AI
in technical terms are expected to face insurmountable barriers in bureaucratic
communication.
The second leg of the problem consists of ethical issues. Even civilian uses of AI give
rise to serious problems, and there is not sufficient governance capacity to eliminate
these problems entirely. Human resources systems entrusted to algorithms making
sexist choices, address-based pricing algorithms being accused of racism, and data
security never being fully guaranteed even when managed with the best of intentions
all show that AI systems are unreliable when it comes to equality, privacy, and fairness.
Beyond these ethical problems, the deliberate use of AI systems for malicious purposes
is also possible. Automating the decision-making scenarios behind existing physical
and cyber threats will become easier with AI. Because AI systems are scalable and can
be continuously retrained, they have the ability to render every new countermeasure
obsolete. In physical attacks, the physical distance between the decision-maker and the
target also removes the psychological burden, raising concern that lethal attacks will
increase. In addition, producing misleading content such as fake news and deepfakes
at scale will become even easier with AI, and this has the potential to emerge as a
political threat.
The fact that lethal autonomous weapon systems are closer to reality than ever before
has also given rise to complex legal debates. Weapons whose navigation and firing
systems are controlled by AI raise problems that are very difficult to resolve under the
international law of war and international humanitarian law. One group of researchers
claims that AI-enabled systems will reduce deaths in armed conflict, while another
argues that AI systems cannot parse the complex and instantaneous nature of armed
conflict and will cause more deaths. Indeed, whether a human being is a legitimate
target is highly contextual and is argued to be too complex to automate. For this reason,
it is argued that under the existing law of war such weapons could never even be put
into operation. Nevertheless, an AI arms race appears to have begun among the major
powers. The United States, China, Russia, the European Union, and South Korea are
among the countries that invest, and commit to invest, most heavily in autonomous
armed drone technologies in particular. In this accelerating arms race, legal debates
that remain at an ambiguous level are not binding on states.
Even so, frameworks have been proposed for the global governance of AI. The first of
these are ethical frameworks: documents that specify, in terms of values, how AI
systems should behave. These documents design ideal AI systems through abstract
concepts. The most prominent concept is explainable AI (XAI), which states that AI
should be able to explain the decisions it makes. Beyond this, ethical frameworks
remain too abstract: that AI should be trustworthy is a property that is difficult to define
technically. The second type of framework argues that lethal autonomous weapon
systems can be governed within existing disarmament agreements. These proposals
claim that the Treaty on the Non-Proliferation of Nuclear Weapons could be applied to
lethal autonomous weapon systems. However, tracking the production of nuclear
weapons is comparatively easier. That uranium is a rare element whose trade can be
tracked, that nuclear plants are large structures that cannot be hidden from satellites,
and that producing and operating nuclear weapons requires highly qualified personnel
distinguish these weapons from AI-enabled systems. AI-enabled lethal weapons can be
produced from commercially available components, contain no special material that
can be tracked, and do not require large facilities for production. As systems that do
not require qualified operators once the initial algorithm has been produced, lethal
autonomous weapon systems are able to fly under the radar of existing disarmament
efforts. The third category consists of those who argue that lethal autonomous weapon
systems must be preemptively and unconditionally banned. Under this framework, led
by Human Rights Watch and the International Committee of the Red Cross, lobbying
is carried out to persuade countries to join this call. Arguing that lethal autonomous
weapons can never comply with human rights and the law of war, this group advocates
a complete ban on the use of AI for lethal purposes. By 2020, thirty governments had
responded positively to this call for a ban, but most of these countries lack the technical
and financial infrastructure to develop the weapons in question. The countries that do
have this capacity, on the other hand, state that the weapon systems in question still
have no precise real-world counterpart and that they therefore do not consider it right
to make a banning decision over systems that do not yet exist.
Moving beyond the existing governance frameworks and ready-made rules and
regulations, this thesis proposes to treat AI not through a disarmament lens but as a
global governance problem. For this reason, instead of looking at other disarmament
agreements, it focuses on global governance problems with similar characteristics. The
thesis takes global climate governance as its case. Climate governance was chosen
because global climate governance and global AI governance share similar
characteristics. In both scenarios there is a global north/south divide: in climate change,
the countries of the global north emit the most greenhouse gases, and in AI, the
countries of the global north hold the greatest production capacity. In both scenarios,
actors at every level are actively involved in the process. Finally, in both scenarios
governance and decision-making take place on multiple levels. In light of this, it was
deemed appropriate to conduct a case study of the Intergovernmental Panel on Climate
Change (IPCC), the most important organization in global climate governance. The
aim of this analysis is to see what role an organization plays in resolving a
multi-stakeholder crisis and to draw lessons from it for AI governance.
The IPCC is an institution that does not conduct its own academic research but carries
out literature reviews of existing scientific studies. It produces scientific knowledge
that is independent of politics yet suitable for political use, and it steers global climate
governance with this knowledge. The most important features of the IPCC are that it
bridges the scientific world and decision-makers, that it takes all decisions collectively,
and that the reports it prepares are widely accepted. These are precisely what is missing
in the field of AI: there is no platform where technical experts, social scientists, and
decision-makers come together, and in a scenario without such interaction it is not
possible to produce inclusive knowledge that can be turned into policy. Since 2015, a
large volume of academic work on AI has been produced from both technical and
social perspectives. It is essential that this work be taken up at the international level
and turned into usable knowledge. Gathering all governments around the light of
scientific knowledge, as the IPCC does, will also ensure that the decisions taken are
more widely accepted and implemented. In short, this thesis argues that the global
governance of AI can be achieved through an effective international boundary
organization.

1. INTRODUCTION

The video starts with a man on stage giving a presentation about his company's line of
products. A mini drone flies into the scene, circles around him, and hovers over the
stage. He describes it as a hundred times more responsive than a human being, able to
fly itself intelligently, and equipped with an anti-sniper feature. The drone lands on his
hand and, as he continues, he likens it to the smartphones that everybody has: it is
equipped with wide-field cameras, tactical sensors, and facial recognition technology.
“Inside here is three grams of shaped explosive,” he says, followed by an audible gasp
from the audience. He throws the mini drone like a rock; it stabilizes itself instantly
and attacks the head of a dummy target placed on stage, with a tiny explosion sound
like a firecracker.

“Did you see that?” he says, “that little bang is enough to penetrate the skull and
destroy the contents.” He claims that people get emotional, disobey orders, and aim
high in high-pressure moments; when weapons make the decisions, they can perform
airstrikes with surgical precision. “They can penetrate buildings, cars, trains; evade
people, bullets, pretty much any countermeasure: they cannot be stopped. Just
characterize your enemy, release the swarm, and your enemy is eliminated,” he says.
The following scenes of the video present a world where such tiny drone weaponry is
commonly available. An unnamed woman has a video call with her son Oliver, a
university student who is involved in a human rights activist group working to expose
oppression. The video shows snippets of news: one mentions an increase in aerial
alerts, another urges people to stay inside and cover their windows with security
shutters. Then two men release a swarm of tiny drones from the back of a van, and the
swarm immediately flies off towards its target. Larger drones penetrate the walls of
university buildings for smaller ones to enter through. Mini drones fly into classrooms,
causing chaos and horror among the students; random explosion sounds are heard.
Oliver, calling his mom in horror while trying to hide from the drones, is recognized
as a target and eliminated on the spot.

The fictional short film “Slaughterbots,” produced by the Future of Life Institute and
released in 2017, caused a mix of responses. It portrays a dystopian future in which
autonomous killer robots are massacring innocent people. The film shows that it would
be relatively quick for terror groups and non-state actors to adopt the technology and
use it in their favor. Stuart Russell, a prominent AI researcher, appears at the end of
the film and says that the video we saw is not speculation and that “allowing machines
to kill humans will be devastating to our security and freedom” (Cussins, 2017, p. 1).

The film has been heavily criticized. Some researchers argue that the video takes a
plausible scenario, i.e., terror groups using drones to attack civilian targets, and scales
it up without factoring in how others would respond (Anderson & Waxman, 2017, p. 5).
Other criticisms converge on three points: first, there is no evidence that governments
are planning to produce swarms of mini killer drones at mass scale; second, the claim
that the drones can defeat any countermeasure is false, as every military technology
has a countermeasure; and third, militaries are capable of preventing terrorists from
getting their hands on military-grade weapons (Scharre, 2017, p. 2). Indeed, the short
film uses hype and fear, aiming to scare the viewer into action.

“If you want to drum up fears of ‘killer robots,’ the video is great. However, as
a substantive analysis of the issue, it falls apart under even the most casual
scrutiny. The video does not put forward an argument. It is sensationalist fear-
mongering,”

argues Paul Scharre in his article in IEEE Spectrum in response to the short film
(2017). Regardless of the methodology and reasoning presented in the short film, it
represents one end of the debate on AI and the military.

The militarization of AI, materialized as killer robots, lethal autonomous weapons, or
slaughterbots, to name a few terms from the literature, has gained the attention of
policymakers, AI researchers, and social scientists, as it has become evident that, with
its dual-use nature, AI will make its way into military technology. This growing
interest has resulted in extensive AI vision papers by governments, policy
recommendations by think tanks, and academic works on the social impacts of AI
technology, especially since 2015.

1.1 Motivation and Research Questions

The social construction of technology argues that it is not just that technology
determines human action but rather that human action shapes technology as well
(Bijker, 2008). Considering that what makes AI is an algorithm trained on real-life
data, it seems a good example of how human action shapes technology, albeit
somewhat differently than the original definition intended. Additionally, the very same
traits of AI can support the deterministic view of technology as well. Available
technology at a given point shapes what is possible to create. In other words, existing
technologies play a key role in the design process of new technologies (Hallström,
2020). According to the deterministic view of technology, data collection, processing
power, and algorithm design capabilities will shape how AI technologies develop.
Nevertheless, the deterministic view still acknowledges that humans are in control of
the design and development of such technologies.

AI is also an integral part of debates on the Autonomous Technology (AT) view. Going
beyond the deterministic view, in which technology determines the structure of the
rest of society and culture, the AT view holds that technology is not under human
control: it develops with a logic of its own. Considering the reality of self-learning and
self-improving algorithms, the AT view also seems relevant for AI. Nevertheless, these
views of AI in particular are not without criticism either. Some researchers call this
enchanted determinism: the idea that deep learning techniques are magical, self-
evolving tools that lie outside the scope of present scientific knowledge yet are also
deterministic. They argue that as social and technical research on AI develops, we
become more capable of achieving the Weberian concept of disenchantment:
“encompassing a widespread decline in mystical or religious forces and their
replacement by processes of rationalization and intellectualization” (Campolo &
Crawford, 2020, p. 12).

These debates on AI have shaped my primary motivation: rather than taking a
philosophical stance on the nature of AI, I wanted to focus on where power and
responsibility lie and to understand the boundaries of the social, political, and technical
structures surrounding AI technologies. My secondary motivation comes from all the
AI-related news that has gradually accumulated in the back of my mind over the years.
As a technology enthusiast, every piece of news, announcement, or publication about
AI has stuck with me, and this turned out to be a slow process of literature collection.
I was surprised by the increasing number of governments announcing AI strategies
and by how different they were from each other. The third motivation came from my
observation of the non-governmental campaigns that either call for a preemptive ban
on killer robots or argue that military AI will cause less violence. My interest in
military AI was piqued by curiosity about how such a technology can accommodate
opposing views and whether it is possible to find common ground to regulate AI for
the greater good of our societies.

Stemming from these motivations, this thesis has two main research questions:

1) Why does AI need to be governed?

2) How should AI be governed?

Additional questions include:

- What are the causes of significant differences between policy proposals?

- How do current AI governance frameworks compare with each other?

- What characteristics of AI cause differences in governance approaches?

- Is it possible to derive parallels from other global governance challenges?

1.2 Literature Review and Contribution of This Thesis

Based on the questions above, my initial approach was to look for common ground
between existing governance proposals to highlight essential aspects of AI governance
that are widely accepted by all relevant stakeholders. The literature scan revealed that
this would not be possible for two reasons: the first is that existing proposals are
diverse in approach and methodology, with opposing results that are challenging to
bring together, and the second is that the lack of common ground between existing
proposals is caused not only by what their goals are but also by what type of scientific
understanding they base their proposals on. I conducted a thorough literature analysis
to better understand the landscape of existing governance proposals and the reasons for
such differences.

For the first question of why AI needs to be governed, literature analysis resulted in an
array of works which I categorized under three topics in section two: definitional
ambiguity, ethical issues, and threats of AI. Definitional ambiguity refers to how the
definition of AI differs by the organizations, sectors, fields, and expertise. Academic
definitions of AI mostly focus on technical definitions and capabilities of existing
artificial narrow intelligence systems (Kaplan & Haenlein, 2019; Kurzweil, 2014;
Poole & Mackworth, 2017), whereas political definitions envision AI with human-like
capabilities and refer to the capabilities of artificial superintelligence systems (The
Federal Government, 2018; The Presidential Executive Office, 2018; Trump, 2017;
Villani et al., 2018). This lack of conceptual clarity, detailed in subsection 2.1 and
especially evident on the topic of autonomous weapons, presents a significant
challenge to reaching a consensus on which types of systems fall under which laws.
Ethical issues under subsection 2.2 refer to the widespread use of AI systems in civilian
cases without proper oversight. Literature shows that AI has resulted in discriminatory
and unfair consequences by automated decisions (Adadi & Berrada, 2018; Howard et
al., 2017; O’neil, 2016), by behavior prediction (Angwin et al., 2015; Bellamy et al.,
2018; Dawson et al., 2019; Narayanan, 2018; Stobbs et al., 2017), and by failure in
data governance (Barnes, 2006; Krauth, 2018; Trepte et al., 2015). Ethical issues with
AI show that AI, even without any military use, is ethically problematic and many
other unintended consequences are still to be discovered. Subsection 2.3 shows the
threats of AI from personal, national, and international security perspectives, mostly
focusing on military aspect of AI. Malicious use of AI shows how state and non-state
actors can utilize AI systems to amplify existing cyber, physical, and political security
threats, and present novel threats that will need countermeasures to be created against.
These threats can take the shape of self-learning and distributed cyber-attack systems
(Surber, 2018), drones with autonomous launch and targeting capabilities (Csernatoni,
2019; Geist, 2016), and fast production and dissemination of fake news to create
political instability (Brundage et al., 2018). Subsection 2.3.2 specifically focuses on
lethal autonomous weapon systems, their military advantages (Marchant et al., 2015;
Surber, 2018; US Department of Defense, 2007), whether they are technically capable
of distinguishing combatants from non-combatants in complex military situations
(Ludovic Righetti et al., 2014; Sharkey, 2019; UK Ministry of Defense, 2018), whether
it is possible to assign responsibility when things go wrong (Anderson & Waxman,
2017; Ludovic Righetti et al., 2014; Solis, 2016), and whether it is morally right to
delegate the decision to kill to the machines (Anderson & Waxman, 2017; Sharkey,
2019). As the last part of the threats of AI, subsection 2.3.3 focuses on how major
military powers have already laid out their plans to utilize AI systems to dramatically
enhance their military capabilities, leading to an AI arms race (Future of Life Institute,
2015; Haner & Garcia, 2019; SIPRI, 2019).

The dual-use nature of AI allows the same technologies to be utilized for both civilian
and military purposes. Due to this nature, the subcategories of Section 2, Identifying
the Problem, move seamlessly between civilian-focused problems, military-related
problems, and political problems. The common thread that holds them together is that
all the specific problems listed there are relevant to international security, meaning that
they have impacts beyond borders that might require an international response.
Additionally, it is necessary to show how the same problem (e.g., behavior prediction)
unfolds in a civilian use case and in a military use case to see that the boundary between
civilian and military AI is highly blurred. Because of that, throughout the thesis the
issue is regarded as AI governance, not just as military AI governance.

For the questions of how AI should be governed and how current AI governance
frameworks compare with each other, the literature is divided into three categories:
ethical frameworks, governing AI with existing international laws, and banning lethal
autonomous weapon systems. Subsection 2.4, Governance of AI, shows that some
researchers suggest creating ethical frameworks, sets of ethical rules such as
explainability, trust, and transparency, to shape AI from the development phase onward
and make sure it complies with international laws (Daly et al., 2020; High Level Expert
Group on Artificial Intelligence, 2019; OECD, 2019b; Schiff et al., 2020); some argue
that, in the case of lethal autonomous weapon systems, existing international arms
control laws on nuclear and ICT technologies are already suited to govern AI (G. Allen
& Chan, 2017; Borghard & Lonergan, 2018; M. M. Maas, 2019; Payne, 2018; Scharre,
2017); and others argue that LAWS cannot comply with international humanitarian law
and that their development, production, and use need to be preemptively banned (Future
of Life Institute, 2015; Human Rights Watch, 2018; ICRC, 2018).

For the question of whether it is possible to derive parallels from other global
governance challenges, I reviewed the literature on global climate governance and
boundary organizations. This literature analysis shows the historical evolution of the
concept of boundary organization, the main pillars that define a boundary organization,
three significant improvements on the concept, and the effectiveness of boundary
organizations in governance. Boundary organizations sit between the worlds of
scientists and decision-makers, engaging knowledge producers, users of that
knowledge, and moderators/mediators to build a legitimate place for interaction
(Kirchhoff et al., 2015; Parker & Crona, 2012). They create products and strategies to
encourage dialogue between relevant actors with lines of accountability to all. Once a
boundary organization is formed to provide a credible platform, it helps achieve four
goals in governance (Gustafsson & Lidskog, 2018; Hoppe et al., 2013). First, it
facilitates eliminating cultural and organizational differences. Second, as the
interaction happens within a boundary organization context, it leads to forming trust
and legitimacy. Third, as this relationship and interactions go on sustainably, this
convergence shapes the nature of information production as experts gain a more
profound understanding of policy-makers' decision-making processes. Fourth,
interaction also helps to connect the knowledge with other types of knowledge that
policymakers already utilize (Kirchhoff et al., 2015; Lemos & Morehouse, 2005;
Moser, 2009). Nevertheless, there is not a single formula to create a perfect boundary
organization to stimulate the co-production of knowledge between scientists and
policymakers for any global governance challenge. This validates the idea that there
is a need to study a boundary organization that works on a global governance challenge
with characteristics similar to those of AI, to better understand how the boundary
organization concept can be used for AI governance.

It is necessary to state that when talking about the global governance of AI, this thesis
does not cover the full process of governance from problem definition to policy
making. It summarizes the problematic aspects of AI that are relevant to international
security, categorizes current policies (and policy proposals) for the global governance
of AI, and analyzes their advantages and shortcomings. Existing proposals have clear
answers about what should be done about AI and who is responsible. The main
argument of this thesis is that current governance works and proposals on AI are based
on positions rather than on commonly agreed upon, policy-relevant scientific
knowledge. This thesis focuses only on the co-production of knowledge for
better-informed policy making and argues that it is this lack of a science-policy
interface on AI that prevents finding common ground between stakeholders. As both
the social and the technical literature on AI has grown drastically, it will be more
helpful to employ the organizational structure and tools that the boundary organization
concept offers, bringing experts, governments, international organizations,
corporations, and NGOs together to perform a policy-relevant yet policy-neutral
scientific review of AI that illuminates the governance path ahead. Rather than looking
for policy-specific inspiration in previous treaties, forming a boundary organization on
AI will allow technical experts, ethicists, decision makers, civil society, and business
to interact and co-produce relevant knowledge that would eventually lead to commonly
accepted policies.

1.3 Theoretical Framework: STS, Security and AI Connection

The topics of artificial intelligence and security converge on multiple points of concern
for STS studies. As AI technologies become significantly more viable to develop and
deploy commercially and militarily, these points require more focus and research.
Security is often used as an umbrella term to validate governments' actions and is
regarded as an unquestionable public good that should be defended. Therefore, the
framing of security matters significantly for the definition of security and for the power
and influence of security enterprises (Vogel et al., 2017, p. 346). As I will show in
more detail in later sections, discussions regarding developing AI systems for military
purposes are shaped narrowly around the notion of enhancing national security without
paying proper attention to collateral matters. AI-enabled military systems are already
in use in limited capacities, and many governments have already announced visions
of heavy investment in AI from social, economic, and defense perspectives (Geist,
2016, p. 5). It is stated that deploying AI systems in national security practices will
create an exponential leap forward for countries with the technical and financial
capacity to invest in them; therefore, this great potential will inevitably lead to a global
AI arms race (G. Allen & Chan, 2017, p. 2). Global military superpowers have already
defined artificial intelligence technologies as a national security priority, potentially
blocking any further scrutiny of the larger implications of military AI systems.

Another point of interaction is the socio-technical complexity of AI and the risks
inherent in large-scale systems: how we can know them and how security is inherently
related to them. As systems become increasingly complex in social and technical
realms, the security risks they pose increase in correlation (Vogel et al., 2017, p. 348).
Especially in the case of lethal autonomous weapon systems (LAWS), the degree of
uncertainty is significantly high, so much so that it is challenging even for the
developers of the systems to explain why they behave in a certain way, especially when
the systems malfunction (Ludovic Righetti et al., 2014, p. 23). As I will show in detail
later, the socio-technical complexity concerning LAWS is so vast and concerning that
researchers are calling for an outright ban, as LAWS are argued to be unable to comply
with the law of armed conflict (Human Rights Watch, 2018, p. 4). Nevertheless, from
the STS perspective, getting involved in the development and testing of such systems
and asking the questions of “what type of knowledge is considered most credible,”
“what real-world use would involve,” and “what type of system design is deemed best
and why” are crucial for understanding the complexity inherent in these systems
(Vogel et al., 2017, p. 352).

The most significant interaction point, especially for this thesis's purpose, is the dual-
use nature of AI and of innovation and knowledge production in general. Dual use
initially referred to repurposing military technologies for civilian use (spin-off);
current debates on dual use almost always refer to turning civilian research to military
or hostile uses (spin-on) (Vogel et al., 2017, p. 340). AI technologies fit both the
spin-off and the spin-on dual-use definitions. The most notable AI capabilities, such as
image detection, face recognition, and autonomous driving, although developed in
civilian settings, have direct military uses (Surber, 2018, p. 5). Apart from this dual
utility of technology in civilian and military settings, the more pressing issue is how
one can know when to identify research, objects, or people as security threats (Vogel
et al., 2017, p. 359). From the STS perspective, the
discussion focuses on the politics of knowledge: how can we know a technological
artifact is benign or malign? The same question applies not only to AI-enabled systems
but to people as well. Especially in the cases of strikes carried out by unmanned aerial
vehicles (UAVs), or drones, in countries such as Afghanistan, it is crucial to ask how
the categorization of enemy or friend occurs and how people are deemed a military
target (Anderson & Waxman, 2017, p. 3). As will be discussed in detail later, the rule
of distinction, a dynamic rule that needs to be interpreted in each context, becomes
more problematic when human operators are out of the loop of the decision-making
schemes of LAWS (Ludovic Righetti et al., 2014, p. 32). The same human being in
the same environment can be a protected civilian or legitimate military target based on
context (i.e., a nearby enemy tank that is about to fire), therefore increasing the level
of complexity required in decision-making (Human Rights Watch, 2020, p. 45). These
debates inadvertently lead the discussion towards profound ethical questions regarding
the “killer robots”: should it even be permissible to develop such robots with lethal
capabilities, or should a robot only have pacifist capabilities? (Vogel et al., 2017, p.
368). Additional questions concern the responsibility for deaths caused by LAWS and
whether having the ability to kill via robots would lead to ethical slippage in AI and
security (Vogel et al., 2017, p. 370).

AI, security, and ethics can also be examined from a boundary-work perspective.
Literature shows that there is minimal multi-disciplinary work on AI that involves
technical experts and ethicists together. Additionally, policy experts and technical
experts have significant disparities between them regarding the understanding of AI
capabilities (Barrett et al., 2018, p. 4).

1.4 Research Methodology

I started by conducting a thorough literature analysis to better understand the landscape
of existing governance proposals. I ran searches on the keywords “AI Ethics, AI
Governance, AI Regulation, AI Strategy, AI Policy, Autonomous Weapons
Governance, LAWS Regulations, Military AI Regulations” and used databases such
as Google, Google Scholar, Web of Science, arXiv, SSRN, ScienceDirect, and ACM
Digital Library. I also went through the references of those results to identify further
resources. The initial results yielded one hundred and fifty-five documents. I chose
2015 as the oldest publication date to be considered, to keep the research up-to-date
(with only one exception). 2015 is considered a breakthrough year for AI, as the pace
of technological advancement is argued to have sped up, attracting more technical and
non-technical research on AI (Clarck, 2015). The results included forty-eight national
AI strategies published by governments. I scanned all of them but analyzed only those
of the G20 countries for detailed comparison in the relevant section, as these countries
have the significant economic (80% of global GDP) and technical (90% of AI
publications) capabilities needed to pursue their visions. Non-G20 papers contributed
to the general analysis of the governmental approach to AI governance. I filtered the
remaining non-academic resources (i.e., policy recommendations and proposals from
international organizations, NGOs, and think tanks) according to their relevance and
ranked them by the number of citations they received for greater academic credibility.

These three types of resources (academic, governmental, and non-academic &
non-governmental) are analyzed to display two parts of the problem: the problematic
characteristics of AI, and the proposed solutions with their shortcomings. The
problematic characteristics are categorized as (1) definitional ambiguity of AI, (2)
ethical issues with AI, and (3) threats of AI. The main purpose of this thesis is not to
figure out whether AI has socio-technical problems: I start with the assumption that AI
is a socio-technical complexity and aim to show which specific traits of AI require or
attract a regulatory response. This analysis builds toward understanding the differences
between governance approaches, which brings us to the second part of the problem:
what are the proposed governance solutions, and what are their shortcomings? For this
part, I have divided the current literature on AI governance into three categories: (1)
ethical frameworks, (2) governing AI with existing international laws, and (3) banning
lethal autonomous weapons. I have analyzed them against the problems of AI they aim
to address and compared them with each other to figure out their strengths and
weaknesses.
weaknesses.

Formulating my research questions and doing the background research mentioned
above helped materialize my hypothesis: apart from its benefits, artificial intelligence
possesses a significant malevolent potential that threatens people's security,
democracy, freedom, and human rights on an international scale, and neither abstract
sets of rules, existing international laws, nor an outright ban on certain AI-enabled
systems will be enough to address the whole matter. Current governance efforts on AI
are based on the positions of individual states, not on commonly agreed upon scientific
knowledge. Formulating the problem as a lack of a science-policy interface allows us
to examine tools and concepts that have not been used before on AI, namely the
boundary organization concept, while examining other global governance challenges
(i.e., climate governance) that have successfully embraced the concept in the shape of
an international boundary organization (the IPCC). This approach will provide a better
foundation on which to build a robust global governance structure for AI.

My analysis of existing governance approaches shows that these approaches are
limited and do not entirely address the problems posed by AI. Firstly, the ethical
frameworks are extremely hard to translate into technical instructions: what does it
mean that AI should be “accountable” or “trustworthy”? Secondly, existing arms
control treaties do not exactly correspond to the characteristics of LAWS: comparing
nuclear technologies with AI shows that even though both have high destructive
potential, AI deployment in armed conflict will be much cheaper, more frequent, and
will not require highly skilled operators. Thirdly, a preemptive ban on AI-enabled
weapons does not have wide approval in the international arena, especially among the
countries with the greatest potential to develop them, as these calls are based on
hypothetical and exaggerated scenarios. My hypothesis is based on changing the
perspective to focus on how to build organizational systems that would create policy
tools to govern AI with wide acceptance and approval on an international level, rather
than figuring out what exactly the stakeholders (ethicists, politicians, technical experts)
should do about it.

As I formulate the problem as a lack of a science-policy interface, global climate
governance has emerged as the most viable example to analyze, as the boundary
organization concept is heavily used at multiple levels of climate governance
worldwide. Additionally, climate governance and AI governance share some key
characteristics that justify further study of the structure and methodology of climate
governance for actionable insights. Firstly, the north-south divide exists in both
scenarios: it is the countries of the global north that produce higher emissions and that
have more industrial automation and weaponization capabilities. Secondly, both
climate and AI governance occur at multiple scales: the complex web of relations
between these scales in both scenarios begets critical questions about where the
governing power and authority lie. Thirdly, both climate and AI governance have
multiple actors: the roles that state and non-state actors play do not have clear
boundaries, creating various uncertainties and complexities for both. I argue that, given
these similar characteristics, the success of boundary organizations in climate
governance provides an incentive to analyze the concept and to study a specific
boundary organization in order to draw insights for constructing the science-policy
interface of AI governance.

Global climate governance happens at the UN level, and one of the most significant
organizations for climate governance is the Intergovernmental Panel on Climate
Change (IPCC), as it provides an internationally accepted authority on climate change,
with its reports widely accepted by leading climate scientists and participating
governments. The IPCC's scientific and intergovernmental nature requires it to produce
rigorous and balanced scientific output for decision-makers; therefore, it claims to be
policy-relevant yet policy-neutral, never policy-prescriptive (IPCC Secretariat, 2013c).

The IPCC is frequently put forward as an ideal example of a boundary organization. It
is described as one of the most successful boundary organizations based on its
achievements in balancing the science-policy interface and providing credible
assessments that have resulted in concrete global agreements (Schleussner et al., 2016).
Therefore, I deduced that it is necessary to examine the organizational structure of
IPCC, starting with the core idea behind it. For this purpose, I first performed a
literature scan on boundary organizations to show the origin, evolution, and the current
state of the idea, while keeping the global governance perspective as a compass. I used
“boundary organization” and “climate governance” keywords on databases (e.g., Web
of Science, Scopus, arXiv, SAGE, Google Scholar, ITU Library) and filtered the results
based on whether they were peer-reviewed and written in social science subject areas.
I scanned abstracts of sixty publications to identify twenty-eight relevant papers that
focus on the boundary organization concept’s empirical use and theoretical
development.

I have built an illustrative case study on IPCC. I have chosen the illustrative case study as a methodology to see how the boundary organization concept solidifies as an organization at the international level, what tools and strategies such an organization uses to make progress, and how effective it is in addressing the needs of the global governance challenge that is climate change. The illustrative case study details the organizational structure of IPCC, how it selects experts, what scientific methodology it employs, how it produces its reports, how it handles scientific and political disputes, and how it builds consensus among participating governments. The case study reveals that an international boundary organization can be successful at building the scientific credibility needed to produce political common ground in addressing a global threat. It shows that this way of addressing a global governance challenge is optimal when the challenge is highly complex and the actors involved in it are highly diverse.

2. IDENTIFYING THE PROBLEM

The use and development of AI systems are rapidly growing, especially in the last ten
years. Some of these systems have immensely helped progress medical capabilities
and provided many valuable tools for everyday use. However, the malicious use cases of AI, although growing in the last five years, have received less attention. Unregulated
and unsupervised growth in research, development, and deployment of AI systems
poses severe global peace and security threats. Malicious AI systems, which will be
detailed later in this section, are expected to lead to the expansion of existing threats,
the introduction of new threats, and changes to the typical character of threats
(Brundage et al., 2018, p. 5).

This section starts with a literature review on the global governance of AI: what kind
of suggestions are presented in the literature, how they can be categorized, what kind
of advantages and disadvantages they possess, and why they fail to grasp the whole
picture fully. Then it will continue to analyze critical problems of AI technologies that
require policy action and how this vast array of problematic points feeds differing
views on governance. It will show the definitional ambiguity of AI, then continue to
describe ethical issues with AI, and show how countries' AI ambitions would lead to
another arms race.

2.1 Definitional Ambiguity of AI

In order to regulate the development and applications of artificial intelligence, any regulatory body must first be able to define what artificial intelligence is. However, the international debate and literature will not quickly come to regulatory bodies' aid, as there is no single widely accepted answer to what artificial intelligence is. This definitional ambiguity prevents meaningful debate around the development and regulation of AI. Concerning headlines about the potential harms of AI applications encourage regulatory action to limit or completely ban certain developments in the AI field, such as face recognition. Nevertheless, researchers primarily focusing on technical capabilities might not share the same grim perception of AI. Interestingly, some argue that this definitional ambiguity has helped the artificial intelligence field grow and expand beyond prediction (Agre et al., 1997, p. 10).

This thesis will not go into the abyss of trying to develop a novel definition of AI. Still,
it will present the differences between the approaches and the problematic points in
the debate and give context-specific descriptions of AI applications to define the
problem.

2.1.1 Defining intelligence from psychological and technical perspectives


Artificial intelligence is a two-word term that has created an enormous amount of hype in business, government, and academia, especially in the last 5-10 years. Of those two words, intelligence had long been discussed well before the artificial intelligence field's foundation. Intelligence (and autonomy) has been used as a characteristic that belongs to humans. However, no general understanding of intelligence nor a standard definition has come out of this long history and debate (Surber, 2018, p. 8). The growth in the field of AI has presented a growing incentive to define what intelligence is. There is a belief that intelligence cannot be precisely defined; it may only be described approximately (Legg & Hutter, 2007, p. 43). Some researchers find this pessimism unwarranted and have attempted to produce a definition of universal intelligence.

Researchers at the Swiss AI Lab IDSIA have collected seventy informal definitions to
formulate a standard definition. According to their research, dictionary definitions of
intelligence primarily focus on “the ability to learn, understand and make judgments
or have opinions that are based on reason” (Cambridge Advanced Learner’s
Dictionary) or “…the ability to adapt to the environment” (World Book Encyclopedia).
Dictionary definitions emphasize the ability to process and transform information. Psychologists have diverse definitions of intelligence. Anne Anastasi defines intelligence not as a single ability but as a collection and combination of various functionalities. When defining intelligence, she focuses on two distinct aspects: ‘survival and advancement within a particular culture’ (1992). Howard Gardner argues that intelligence is for solving problems or creating relevant products within one or multiple cultural settings (Poole & Mackworth, 2017, p. 18). Dean Simonton’s description is more reflective of the common elements found across descriptions of intelligence. He argues that “. . . a certain set of cognitive capacities that enable an individual to adapt and thrive in any given environment they find themselves in, and those cognitive capacities include things like memory and retrieval, and problem-solving and so forth. There is a cluster of cognitive abilities that lead to successful adaptation to a wide range of environments.” (Legg & Hutter, 2007, p. 76).

Many other definitions are along the lines of environmental adaptation, gathering and processing knowledge, sensory capacity, optimization, and problem-solving. These traits can also easily be associated with an ‘intelligent’ machine. Even though machines cannot feel emotions or imagine, researchers can create “artificial” intelligence that satisfies the psychological definitions above. The general claim for the improbability of replicating human intelligence in digital form usually derives from the assumption that it is impossible to decode and encode the capacity to imagine and to feel emotions. Nevertheless, from psychologists’ perspective, intelligence requires neither emotions nor imagination. However, for some psychologists, it requires one key component: abstract thinking. Lewis Terman defined intelligence solely as the ability to carry on abstract thinking (“Cambridge Handb. Intell.,” 2011, p. 643). Abstract thinking is simply the ability to think about things, objects, and ideas that are not physically within one’s grasp (Pothier, 2014). It is the ability that allows us to think about ideas and concepts and turn them into concrete objects, to think about the emotional impact of things, or to comprehend abstract concepts like liberty and absurdity. For example, understanding the sentimental value of an otherwise worthless object is possible by abstraction. It is the kind of thing that does not have concrete data points to be analyzed or processed.

The possibility of a neural network learning abstract reasoning, rather than merely picking up superficial statistics, is still open for debate. AI researchers, one important side of that debate, define intelligence slightly differently than psychologists do. For some researchers, intelligence is simply the ability to solve challenging problems (Minsky,
1991, p. 26). For others like Kurzweil, it is an agent’s capability to use limited
resources optimally to achieve given goals (Kurzweil, 2014). Roger Schank, an AI
theorist, states that intelligence means getting better over time (1991). John McCarthy
provides a more computer science-friendly definition, stating that “intelligence is the
computational part of the ability to achieve goals in the world. Varying kinds and
degrees of intelligence occur in people, many animals and some machines.” (1989).

Various definitions by experts in psychology and technology do not make it much easier to create a unified definition of intelligence. Furthermore, it is hard to claim that one or more of these definitions are the correct ones. Nevertheless, we can see some commonalities within this pool of definitions and pinpoint common features. Firstly, as all definitions point out, intelligence is an agent's ability to achieve a goal or produce a benefit. An intelligent agent should be able to succeed in achieving an objective in
various degrees. Secondly, intelligence is specific to the environment. An intelligent
agent executes its tasks within an environment, and it engages with its environment.
Thirdly, intelligence's degree and complexity depend on how successful it is in
adapting to different environments and objectives. Distillation of definitions of
intelligence will help us understand and categorize artificial intelligence systems more
precisely.

2.1.2 Defining artificial intelligence


When talking about artificial intelligence, terms like neural networks, machine learning, and deep learning are often used synonymously. Without going into technical
details of how an intelligent system works, artificial intelligence refers to creating
systems using digital technology capable of performing tasks commonly thought to
require intelligence. Machine learning is variously characterized as either a sub-field
of AI or a separate field. It refers to the development of digital systems that improve
their performance on a given task over time through experience (Brundage et al., 2018,
p. 9). It will not be a surprise to state that these definitions, although they provide a
general explanation of AI and machine learning, are not the standard definition
accepted by policymakers or AI researchers. Marvin Minsky, a pioneer in the artificial
intelligence field, stated that the artificial intelligence field is “the science of making
machines do things that would require intelligence if done by men” (Minsky, 1982).
However, when a system can execute a task that otherwise requires human
intelligence, it is argued that the task at hand did not require intelligence to be executed
after all. John McCarthy, who coined the term “artificial intelligence” in 1956, calls this the “AI effect” and states that as long as a system works, no one calls it AI anymore (Vardi, 2012, p. 5). Regardless of what it is called, machine learning has
brought a dramatic shift in AI development. In traditional programming, computer
codes are written to set the rules and methodologies to process inputs, and the program
would create results based on the input. In machine learning, however, the system
receives the data and the expected answers together to come up with the set of rules
and methodologies (Craglia et al., 2018, p. 20). These rules can then be used to produce novel answers to novel data. Deep learning is a subfield of ML employing multiple
neural layers and neurons to create higher-level features from given raw data
progressively. Deep learning has enabled image processing and object recognition, and
it is being used in a variety of fields. Reinforcement learning is another subfield of
ML, focusing on how an agent should take actions in each environment to maximize
and optimize the cumulative reward. ML systems' development has increased
dramatically, especially since 2010, with the help of multiple factors such as large data
availability, reduction in the cost of cloud computing and processing powers, and wide
availability of specialized open-source ML software libraries.
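
To make the contrast between traditional programming and machine learning concrete, the following minimal sketch (in Python with scikit-learn; the message-length task, the toy data, and the threshold are hypothetical illustrations, not drawn from any cited source) places a hand-written rule next to a rule inferred from examples.

```python
# A minimal sketch: a hand-coded rule versus a rule learned from data.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: the developer writes the rule explicitly.
def is_long_message(length_in_words: int) -> bool:
    return length_in_words > 100  # hand-coded threshold

# Machine learning: inputs and expected answers are given together,
# and the system infers the rule (here, a split threshold) on its own.
lengths = [[10], [25], [40], [120], [150], [300]]   # inputs (data)
labels  = [0, 0, 0, 1, 1, 1]                        # expected answers
model = DecisionTreeClassifier(max_depth=1).fit(lengths, labels)

print(is_long_message(80), is_long_message(200))    # hand-coded rule on new inputs
print(model.predict([[80], [200]]))                 # learned rule on new inputs
```

The learned rule can then be applied to novel inputs, which is the shift the paragraph above describes.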

Depending on the complexity and the capability of ML algorithms, they are


categorized either as “artificial narrow intelligence,” “artificial general intelligence,” or “artificial superintelligence.” Artificial narrow intelligence (ANI) refers to algorithms that specialize in achieving limited goals in rigid environments. These algorithms are trained with precise data to achieve targets that apply only to specific circumstances. ANI algorithms operate within a predetermined, pre-defined range and constitute most AI algorithms in use today. Regardless of how complicated and nuanced they may seem, many ANI applications can only achieve specific tasks.
Virtual digital assistants are good examples of ANI. They are specialized to process
particular speech patterns and perform specific tasks within their ecosystem. Artificial
general intelligence (AGI) refers to algorithms that can perform various complicated
tasks in various complex environments. The idea behind these algorithms is that they would display the same sort of general intelligence as humans. Environment and goal complexity do
not constrain AGI algorithms. Expectations from an AGI are that they can generalize
the information or data they have acquired, including generalization to contexts
qualitatively different from those they have experienced before. They should have the
ability to interpret a goal in the world's context at large (Goertzel, 2015, p. 21).
Artificial superintelligence (ASI) refers to the level of intelligence that surpasses
general human intelligence capabilities. It is a type of intellect that is much smarter
than the highest human intellect benchmark in various fields such as social skills,
scientific creativity, and general wisdom. A superintelligence could be implemented in systems ranging from an ensemble of networked computers to cultured cortical tissue (Bostrom, 2006, p. 2). There seems to be a long way between
ANI and AGI in terms of software and hardware capabilities. According to Bostrom, however, the gap between AGI and ASI, although still enormous, can be narrowed just
by the effect of Moore’s Law, without needing any additional software development
(2006). Moore’s law states that a processor's speed doubles every 18 to 24 months,
leading developers to expect double performance at the same hardware cost.
Theoretically, once human-level intelligence is achieved by AI algorithms, it will only
be a matter of time for processors to get better according to Moore’s Law so that
algorithms can process much more data in a shorter time.
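
As a rough, back-of-the-envelope sketch of the doubling schedule described above (the ten-year horizon and the assumption of uninterrupted doubling are illustrative only, not a claim about actual hardware trends):

```python
# Expected processing-capability multiplier under an assumed doubling schedule.
def expected_speedup(years: float, doubling_period_months: float) -> float:
    """Capability multiplier after `years`, doubling every given number of months."""
    return 2 ** (years * 12 / doubling_period_months)

for months in (18, 24):
    print(f"After 10 years, doubling every {months} months: "
          f"~{expected_speedup(10, months):.0f}x")
# doubling every 18 months -> roughly 102x; every 24 months -> 32x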

Issues of artificial general intelligence and artificial superintelligence recall the


discussion on the definition of intelligence. One of the essential traits of human
intelligence is, as previously mentioned, abstract thinking. Therefore, an AGI system
is expected to display some level of abstract thinking to qualify as “human-like
intelligence.”

Edwin Boring’s 1923 definition of intelligence, “intelligence is what intelligence tests measure,” although criticized by many academics along the way (Hunt & Jaeggi, 2013, p. 38), has regained its popularity in recent years (H. van der Maas et al.,
2014, p. 14). Based on this premise, DeepMind (an AI company and research
laboratory by Google Inc.) researchers have subjected ResNet (residual neural
network) and CNN (convolutional neural network) algorithms, two of the advanced
artificial neural network algorithms available today, to the Raven’s Progressive
Matrices (RPMs), a simple yet effective test to measure humans’ abstract reasoning
(Barrett et al., 2018, p. 54). Their research aimed to answer whether a novel neural
network could find solutions, if any, to complex human-challenging abstract reasoning
tasks provided with extensive training data and whether this capacity would generalize
while explicitly controlling the abstract content of training data. Their findings showed
that standard visual-processing models like ResNets and CNNs had performed very
poorly in dealing with abstract reasoning tests. One striking outcome of the research
is that algorithms fail to transfer experience from one test dataset to slightly different
ones. Humans have no problem applying XOR (exclusive or: one is true, other is false)
logic to questions, such as the color of lines and the color of shapes simultaneously.
However, even that would be hard to solve by algorithms as they performed strikingly
poorly when required to extrapolate to inputs beyond their experience or to deal with
entirely unfamiliar attributes (Barrett et al., 2018, p. 61).

All these discussions on the definition of intelligence and AI’s capabilities lead us to
a couple of critical points. Firstly, an artificial intelligence system functioning in a
rigid environment, performing single or very few tasks, can still be called an intelligent
agent. Knowing about the details of the environment and the tasks it is programmed to
perform can tell us about the complexity of its intelligence. Secondly, intelligence in
the human sense is still not achievable by algorithmic systems. If we take abstract
reasoning as the hallmark of human intelligence, even the most advanced AI algorithms still lag far behind minimal human intelligence levels. It shows us that artificial general intelligence and artificial superintelligence are not going to materialize soon. Thirdly, we can categorize all the AI algorithms
available today as “artificial narrow intelligence.” All AI systems in use today are
designed to perform highly specific tasks in highly controlled environments, regardless
of how complicated or advanced they might seem. Virtual voice assistants,
autonomous driving, object detection, multi-language translation, advanced-level Chess and Go playing, and many more applications fall under the definition of ANI.
Distinguishing that these AI agents are far from being an AGI or ASI will help us ask
the right questions about their attributed targets and defined environments to
understand their functions better.

Depending on the subject, environment, and stakeholders, formal definitions of


artificial intelligence vary slightly from institution to institution. Depending on their
motivation, i.e., how they want to utilize AI, or focus area, i.e., which sort of AI
applications they base their assumptions on, we see differing definitions from
academics, international organizations, large corporations, and governments. These
definitions are critical as they constitute the base for these institutions to create
policies, applications, and products on AI.

2.1.2.1 Academic definitions of AI

When we look at the earlier definitions of AI by the researchers who made


technological breakthroughs in that field, we see that they base their definitions on
human-like abilities and rationality. Researchers like Richard Bellman and Ray
Kurzweil used human analogies while defining AI, which to some extent would enable the comprehension of a newly developing field. However, as we saw in our
discussion on the definition of intelligence, using human-like intelligence analogies
might stir the debate into semantic contradictions.

Other pioneering researchers like Patrick Winston, George Luger, and William Stubblefield used rationality as the basis for their definitions and, rather than referring to human intelligence, talk about intelligent behavior. Using the more general notion of intelligent behavior rather than human intelligence enables the discussion of intelligence levels within the AI field.

Modern definitions from researchers like Kaplan & Haenlein diverge from biopsychological explanations and focus on data, goals, and environment: “Artificial
intelligence (AI)—defined as a system’s ability to correctly interpret external data, to
learn from such data, and to use those learnings to achieve specific goals and tasks
through flexible adaptation.” (2019, p. 10).

David Poole, in his book with Alan Mackworth titled “Artificial Intelligence:
Foundations of Computational Agents,” defines AI as “the field that studies the
synthesis and analysis of computational agents that act intelligently.” (Poole &
Mackworth, 2017, p. 25). Modern definitions draw a clear line between human and
machine intelligence. They prefer to regard AI as computational intelligence and focus
on the definitions of agents, actions, reactions to the environment, and learning
(Samoili et al., 2020, p. 11).

In its report about AI development in China, Tsinghua University takes a


comprehensive approach to defining AI by dividing the topic into two categories: brain-like intelligence research and machine learning, acknowledging that the latter has made dramatic progress in comparison with the former. Their definition is simple: “AI is
made up of modern algorithms, supported by historical data, and forms artificial
programs or systems capable of perception, cognition, decision making and
implementation.” (Yunhe et al., 2019, p. 22). Although it is an academic work, the report captures the Chinese AI ecosystem in multiple dimensions, and as we will see
in the following chapters, this report has become the cornerstone of China’s national
AI strategy.

2.1.2.2 International organizations’ definition of AI

Arguably the most challenging task an international organization faces is to produce a


definition on any issue that has many stakeholders with differing interests in any given
topic. Additionally, if the said organization has regulatory powers, definitions become
more general and unspecific. This section will look at definitions from the United Nations (UN), Organization for Economic Cooperation and Development (OECD),
North Atlantic Treaty Organization (NATO), European Commission (EC), and some
other non-governmental international organizations.

European Commission has set up multiple working groups and produced frameworks
and reports to kickstart EU-wide cooperation on AI. Its first document was released in
2018 with the title of “Artificial Intelligence – European Perspective” to present a
European view of AI, based on independent research and analysis by the European
Commission Joint Research Centre (JRC) to inform the debate at the European level.
In the same year, EC released a communication titled “European AI Strategy: EC
Communication - Artificial Intelligence for Europe.” This EC AI COMM aimed to set
a European initiative on AI, and in both documents, EC has the same definition for AI:
“Artificial intelligence (AI) refers to systems that display intelligent behavior by
analyzing their environment and taking actions – with some degree of autonomy – to
achieve specific goals.” (EC, 2018, pp. 3–4). In 2019, European Commission
appointed The High-Level Expert Group (HLEG) on Artificial Intelligence, consisting
of 52 representatives from academia, civil society, and industry, to contribute to the
implementation process of the European AI strategy (Samoili et al., 2020, p. 66).
HLEG’s elaborate academic work on AI can serve as an excellent scheme for putting all AI definitions in perspective. HLEG groups AI subdomains under two categories: (1) reasoning and decision making, and (2) learning and perception, based on the systematic capabilities mentioned in the given definitions (AI HLEG, 2019, p. 14).
The first category is about the capabilities of turning data into knowledge via the
transformation of real-world perceptions into machine-readable data and using that
data to make decisions following an algorithmic path of planning, optimization, and
problem-solving. The second category is about the capabilities of making sense of the
environment in the absence of symbolic rules, involving the extraction of information,
creating solutions on structured and unstructured perceived data, adaptation and
reaction to changes, and behavioral prediction (AI HLEG, 2019, pp. 16–17).

Domains and subdomains mentioned in Table 2.1 do not provide a complete or rigid classification: they are related, not disjoint, subsets of AI. Based on this work,
HLEG provides a holistic definition for AI:

“Artificial intelligence (AI) systems are software (and possibly also hardware)
systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition,
interpreting the collected structured or unstructured data, reasoning on the
knowledge, or processing the information, derived from this data and deciding
the best action(s) to take to achieve the given goal. AI systems can either use
symbolic rules or learn a numeric model, and they can also adapt their
behavior by analyzing how the environment is affected by their previous
actions.” (AI HLEG, 2019, p. 22).

Table 2.1 : AI domains and subdomains constituting part of the operational definition.

AI Domain        AI Subdomain
Reasoning        Knowledge representation; automated reasoning; common sense reasoning
Planning         Planning and scheduling; searching; optimization
Learning         Machine learning
Communication    Natural language processing
Perception       Computer vision; audio processing

HLEG’s definition is comprehensive and practical because it does not use ambiguous human-like analogies; it touches on the key technical aspects of AI and mentions its goal and environment dependencies.

Organization for Economic Cooperation and Development (OECD) has published a


document in 2019 titled “Principles on Artificial Intelligence.” This document has
become one of the first international agreements on trustworthy AI standards, including policy and strategy recommendations for governments. In this document,
OECD defines AI as: “An AI system is a machine-based system that can, for a given
set of human-defined objectives, make predictions, recommendations, or decisions
influencing real or virtual environments. AI systems are designed to operate with
varying levels of autonomy.” (OECD, 2019b, p. 19). Rather than mentioning essential
AI subdomains or keywords, OECD emphasizes “human-defined objectives” in its
definition. Especially in discussions of the ethics and philosophy of AI, where degrees of autonomy and human control of AI are much-contested issues, OECD takes a stance for human responsibility over AI systems' actions. Later sections will provide an in-depth discussion on this matter.

World Economic Forum (WEF), self-described as an international organization for


public-private cooperation, defines AI from the fourth industrial revolution perspective.
WEF mentions that AI is “the collective term for machines that replicate human’s
cognitive abilities. Within the broader technological landscape, predictive
maintenance in the cognitive era has the potential to transform global production
systems.” (Schwab & Davis, 2018, p. 9). WEF mentions the political, social, and
business challenges AI poses and affirms the requirement of global cooperation to
ensure accountability, transparency, and privacy matters. Like OECD, WEF’s
definition mostly comes from an ethics perspective, yet unlike OECD and EC, it is not
necessarily adopted officially by governments.

International standards institutions such as European Telecommunications Standards


Institute (ETSI) and International Organization for Standardization (ISO) also have
their AI definitions. ISO defines AI as a “branch of computer science devoted to
developing data processing systems that perform functions normally associated with
human intelligence, such as reasoning, learning, and self-improvement” (Samoili et
al., 2020). Similarly, ETSI’s definition goes like this: “Computerized system that uses
cognition to understand information and solve problems.” (ETSI, 2018). The two definitions focus only on the knowledge reasoning and knowledge representation parts of AI and thus remain general (Samoili et al., 2020, p. 42).

Although the United Nations (UN) and the North Atlantic Treaty Organization
(NATO) have produced strategy papers on AI adoption, they do not include a specific
definition for AI; thus, they were not included.

2.1.2.3 Governments’ definition of AI

Since 2016, many governments have announced their AI policies for the next 10 to 15 years. These documents provide each government's perspective on emerging technologies such as AI and its aim to leverage them for social wellbeing, economic prosperity, and military power. Each document is unique in its methodology and perspective, on which later sections will provide detailed analysis; the definitions they prefer therefore provide a glimpse of their perception of AI. This section will look at definitions from Germany, France, Russia, the USA, India, and Australia, comparing them with the template provided by EC HLEG.

France developed a national strategy through a parliamentary mission in 2018, aiming to make France a global AI leader. Parliamentarian and mathematician Cédric Villani
led these efforts and published the document under the title of “For a Meaningful
Artificial Intelligence.” In this report, the definition they chose for AI is as follows:
“[AI] refers to a program whose ambitious objective is to understand and reproduce
human cognition; creating cognitive processes comparable to those found in human
beings….There is a great variety of approaches when it comes to AI: ontological,
reinforcement learning, adversarial learning, and neural networks, to name just a few.”
(Villani et al., 2018, p. 15). The further explanations of AI mention APIs, robotics,
industry-specific use cases, and NLP subdomains of AI (Villani et al., 2018, pp. 17–
19).

In the same year, the Federal Government of Germany published its artificial intelligence strategy with the slogan “AI made in Germany” (The Federal Government,
2018). Federal Government’s definition starts by reaffirming that there is no
universally agreed-upon definition of AI and focuses on weak/strong AI distinction.
German Federal Government’s definition of AI is “in highly abstract terms, AI
researchers can be assigned to two groups: “strong” and “weak” AI[….]The aspects of
human intelligence are mapped and formally described, and systems are designed to
simulate and support human thinking.” (The Federal Government, 2018, pp. 4–5).
They avoid discussing AGI and focus solely on properties of weak AI, or ANI, for
economic and industrial objectives. Their explanation of AI includes subdomains such
as deductive reasoning, human-machine interaction, pattern deduction and analysis,
and responsible development of AI. Germany regards AI as a tool that would solve specific problems in industry and shares similar views on the social and ethical
concerns of AI with the European Commission.

The United States has released a variety of reports and working papers on AI starting
in May 2016. The first report came out of the White House under the Obama
administration, titled “Preparing for the future of Artificial Intelligence,” and the report
aimed to facilitate coordination between agencies, provide advice on technical and
policy matters regarding AI, and monitor technical developments in industry and
research communities (Executive Office of the President & NSTC CT, 2016). This
first report also starts with affirming the absence of a single definition and gives an
overview of the taxonomy of AI definitions from the academic field. It continues to
discuss the differentiation between ANI and AGI, arriving at the notion that the
speculations regarding AGI should not affect the policy work on AI (2016, pp. 17–18).
The first document with a clear definition of AI came from US Congress, titled “US
National Defense Authorization Act” (115th Congress, 2018). Congress provides an
extensive definition starting with “[AI is] an artificial system developed in computer
software, physical hardware or another context that solves tasks requiring human-like
perception, cognition, planning, learning, communication, or physical action” and
affirming that “an artificial system designed to act rationally, including an intelligent
software agent or embodied robot that achieves goals using perception, planning,
reasoning, learning, communicating, decision-making, and acting” (115th Congress,
2018, sec. 238). Policy and strategy documents from US institutions primarily focus
on AI's capabilities rather than give a single definition. Although there are many
activities, initiatives, and reports on AI, one cannot find any specific definition in any
of them. Nevertheless, deducing from these explanations, the US’s understanding of AI covers a small number of the AI domains and subfields, mostly focusing on how
institutions can leverage it.

India released its national strategy for AI in 2018, regarding AI as the single most
massive technology revolution of our lifetimes (Aayog, 2018, p. 12). India’s definition
goes like this: “AI refers to the ability of machines to perform cognitive tasks like
thinking, perceiving, learning, problem-solving and decision making. […] With
incredible advances made in data collection, processing, and computation power,
intelligent systems can now be deployed to take over a variety of tasks, enable
connectivity and enhance productivity.” (Aayog, 2018, p. 13). They define AI in three categories: sense (computer vision, audio processing), comprehend (natural language processing, knowledge representation), and act (machine learning, expert systems).
The definition also includes debates regarding ANI and AGI, giving a very low
probability of achieving superintelligence but not ruling it out completely (Aayog,
2018, p. 14).

Not having a national AI strategy, Australia took a different approach, and the
Australian Government Department of Industry Innovation and Science funded
research conducted by the Commonwealth Scientific and Industrial Research
Organization (CSIRO) on the ethical aspects of AI. In their discussion paper titled
“Artificial Intelligence: Australia’s Ethics Framework,” CSIRO defines AI as a
collection of technologies that “solve problems autonomously and perform tasks to
achieve defined objectives without explicit guidance from a human being. […] This
definition of AI encompasses both recent, powerful advances in AI such as neural nets
and deep learning, […] automated decision systems.” (Dawson et al., 2019, p. 18). The
paper categorizes AI as ANI and AGI and describes a couple of AI-related fields such
as robotics and autonomous vehicles (Dawson et al., 2019, p. 19). The paper’s
description looks at AI from a privacy and transparency perspective and takes the
current AI technologies that have an imminent effect on society into consideration to
explain AI further (Dawson et al., 2019, p. 25). Thus the paper, like the Government’s “Australia 2030: prosperity through innovation” report, discards the possibility of AGI at least until 2030, the end date of the development plan
(Australian Government, 2017, p. 46).

The Russian Federation adopted its national strategy for AI in October 2019 to serve as the
basis for developing and enhancing state programs, projects, and strategic documents
of state-owned corporations and companies that support AI development in the country
(OECD, 2019a). In presidential decree number 490, titled “On the development of artificial intelligence in the Russian Federation,” AI is defined as “a set of
technological solutions that allow simulating human cognitive functions (including
self-learning and finding solutions without a predetermined algorithm) and obtaining
results when performing specific tasks that are comparable, at least, with the results of
human intellectual activity.” (On the Development of Artificial Intelligence in Russian
Federation, 2019, sec. 5). Russian definition of AI relates it to human intelligence and
considers AGI as a feasible target for the development of the field. The description of AI includes subdomains such as computer vision, natural language processing, speech
recognition and synthesis, decision support systems. It sets robotics and autonomous
vehicles as primary related fields for AI development and investment (On the
Development of Artificial Intelligence in Russian Federation, 2019, sec. 5b).

Many other countries, such as the United Kingdom, Japan, Canada, and China, and international organizations, such as the UN, UNESCO, and NATO, have published their national strategies and priorities for AI development. However, none of them provided
a clear definition of AI on which they based their strategies. Their policies will be
examined in-depth in a later section on national AI visions.

2.1.2.4 Summarizing AI definitions

Definition of AI matters as it helps us understand the given institution’s perception of


AI. Looking at the definitions mentioned above, it does not seem that a single thread
might go through all the definitions and provide a uniform definition that would apply
to all scenarios. Definitions change based on the priorities of the institutions. For
example, countries that aim to utilize AI for military objectives emphasize the AI’s
effect on logistics, robotics, and autonomous vehicles. Others aim to utilize AI in
economic and industrial activities, focusing on decision support systems, pattern
recognition, and optimization. The distinction between ANI and AGI also plays a role
in the direction to which the strategy papers point. Countries such as Germany,
Australia, and France, which see artificial general intelligence or superintelligence as
highly unlikely, put more emphasis and investment commitments on ANI
developments. Countries such as Russia, the USA, and China, which define AGI as
the inevitable or feasible future of AI development, also share their commitments to
funding advanced research on superintelligence. Additionally, some definitions
include potential ethical implications of AI and related fields of robotics and
autonomous vehicles.

As mentioned previously, this section took HLEG’s definition as a base definition and
used it as a framework to look at other definitions. Out of all the definitions examined
above, HLEG’s definition stands out with its detail, and it covers all related fields and
domains of AI. Being an independent research group that aimed to provide a
comprehensive definition of AI for the twenty-eight member states of the European Union, it is relevant for theoretical and practical work on AI. Expanding on the AI taxonomy presented in Table 2.2, we will add two transversal domains (and
subdomains) such as integration and interaction (multi-agent systems, robotics, and
automation, autonomous vehicles) and ethics to see all AI definitions in one
perspective (Samoili et al., 2020, p. 18).

Although it is hard to call it the definition of AI, and it is certainly not officially accepted (or refuted, for that matter) by all governments and institutions, the European Commission’s High-Level Expert Group on Artificial Intelligence provides the most comprehensive definition. For the remainder of this thesis, AI HLEG’s definition will be used as the definition of AI:

“Artificial intelligence (AI) systems are software (and possibly also hardware)
systems designed by humans that, given a complex goal, act in the physical or
digital dimension by perceiving their environment through data acquisition,
interpreting the collected structured or unstructured data, reasoning on the
knowledge, or processing the information, derived from this data and deciding
the best action(s) to take to achieve the given goal. AI systems can either use
symbolic rules or learn a numeric model, and they can also adapt their
behavior by analyzing how the environment is affected by their previous
actions.” (AI HLEG, 2019).

Table 2.2 : Summary of examined definitions of AI. The table cross-tabulates the examined sources against the AI domains covered by each definition (reasoning, planning, learning, communication, perception, integration & interaction, ethics, other). The sources, grouped by field, are: academic — Kaplan & Haenlein (2019), Tsinghua University (2018), Poole & Mackworth (2017), Russell & Norvig (2010), McCarthy (2007); international organizations — EC HLEG (2019), OECD (2019), WEF (2018), ETSI (2018), ISO (2015); governments — Russia (2019), Australia (2019), France (2018), USA (2018), Germany (2018), India (2018). Among the examined sources, EC HLEG (2019) covers the widest range of domains.

2.1.3 Building block I: It is hard to find a common definition for AI
Table 2.2 shows how the definition of AI differs across domains and institutions. One
striking outcome of comparing definitions from various fields is that the policy-related
definitions, even though they are referencing many research-related definitions, differ
from academic ones, especially in integration & interaction and ethics subdomains.
Many policy documents favor human-like definitions of AI, whereas AI researchers
prefer specifying functionality and technical problems. Understandably, policy-related
definitions put more emphasis on the social implications of AI. However, although not
the case for all the policy-related documents, definitions that favor human-like aspects
of AI tend to focus on hypothetical future technologies, whereas the academic
definitions focus on the AI technologies that are in use today (Krafft et al., 2020, p. 9).
It is not to say that the academic approach is better or worse than the policymaker
approach to defining AI.

Nevertheless, if policymakers fail to have a clear definition for AI on bureaucratic


levels, it will be even harder to figure out which systems fall under which new laws.
Lack of conceptual clarity, evident in debates on autonomous weapons, presents a
significant challenge to reaching a consensus (Krafft et al., 2020, pp. 3–5).

The first building block of the problem is that AI is tough to define. This does not mean that we do not know what it is or how it is used, or that we think it is magical. It means that AI is such an extensive field that it is not feasible to develop one definition that fits all. Most definitions shown above either take a single perspective, i.e., political, academic, or technical, or come from a group that does not include all relevant stakeholders. A definition shows the given group's perception, assumptions, and even expectations of AI, and groups build their systems, policies, and visions on those definitions. All those works will continue to be partial and less likely to be embraced by many. I argue that, as AI is now of global concern, generalized, less inclusive definitions will not help the global governance of AI.

2.2 Ethical Issues with AI in Civilian Use

The social impacts of AI we have witnessed so far have proved that AI is not just a
tool that would help humanity progress faster. It is a web of complex systems that
affect millions of people's daily lives by making decisions on humans’ behalf,
categorizing people based on their own learned rules, and shaping the environment in which we are allowed to live. This section will present major ethical issues with AI, with examples, under three subsections: automated decisions, behavior prediction, and data governance.

2.2.1 Automated decisions


An average human makes thousands of decisions every day. These decisions are
shaped by intellect, information, emotional state, internal biases, and external
influences. Our decisions may affect only ourselves but, depending on our social and professional sphere of influence, they can also affect the larger society. A judge’s decision on a
particular matter, for example, is going to define the subsequent rulings on similar
judicial matters. Nevertheless, humans are responsible for the decisions they make and
the actions they take. The matter of responsibility is becoming increasingly opaque as
we delegate some of the decision-making processes to the AI systems. The AI systems
mentioned here are still ANI systems, performing specific tasks in specific
environments. Moreover, those systems are already in use globally in government,
finance, medical and insurance sectors on a large scale.

There are many concerns around letting AI systems make decisions on delicate
matters. Mathematician and data scientist Cathy O’Neil argues that big data can only
represent the history and may not necessarily be used as a decisive, pivotal point to
invent the future: doing that requires moral imagination, and that is something only
humans can provide (O’neil, 2016). Putting the philosophical discussion on whether
machines should be allowed to make decisions on humanity's behalf aside, it is a big
challenge to reason and justify the decisions made by AI systems. This problem is
often referred to as the black-box problem. A black-box system or a component is a
type of artifact that does not provide any explanation about its internal design,
structure, and implementation (Adadi & Berrada, 2018, p. 52141). The difficulty for
an AI system to provide a sensible explanation about how it reached a particular
conclusion is called the black-box problem in the literature. The black-box problem
partly occurs due to technical limitations of AI systems; having to process a large
amount of data and provide extensive outputs in a short amount of time makes it
challenging to provide reasoning for every decision. Enterprises also perpetuate it to
maintain competitiveness and protect intellectual property rights (Adadi & Berrada,
2018, p. 52142). The black-box problem has led to the birth of new concepts like

33
humans in the loop (HITL) and explainable artificial intelligence (XAI) (Dawson et
al., 2019, p. 35).

2.2.1.1 Explainable AI

Although it is not as hard to define XAI as AI, it is safe to say that there is no unified
definition of XAI either. DARPA defines the aim of XAI as “[…] to produce more
explainable models while maintaining a high level of learning performance (prediction
accuracy); and enable human users to understand, appropriately, trust, and effectively
manage the emerging generation of artificially intelligent partners.’’ (Gunning, 2017,
p. 4). XAI discussions are focused on the challenges to demystify the black-box
properties of AI. It is also used interchangeably, or at least in a parallel fashion, with
responsible AI that focuses on three pillars of transparency, accountability, and
responsibility (AI HLEG, 2019, p. 12). Ideally, it aims to achieve more transparent AI systems without decreasing the models' efficiency and accuracy; in practice, however, a tradeoff must often be made between accuracy and interpretability. There is an obvious need
for XAI. For regulatory purposes, ethical concerns, or even commercial benefits, we
need to understand the inner workings of an AI system and be able to trust its outputs
appropriately. According to Adadi & Berrada, this need for XAI rests on four pillars: explain to justify, explain to control, explain to improve, and explain to discover
(2018, p. 52143).

Explain to justify comes into focus mainly when an AI system is thought to be


producing biased and discriminatory outputs. Many recent cases have made public
headlines, like lack of diversity in training data leading to misidentification of people
of a particular age, gender, and skin colors (Howard et al., 2017, p. 1). Explanation
here means that the given AI system should provide reasons and justifications for the
particular outcome rather than explaining the logic of reasoning behind the process.
Explain to control is for the prevention of actions before they go wrong. It is vital to
understand systems behavior to have preparedness for unknown vulnerabilities and
flaws. It will help identify and rectify errors before reaching critical levels, thus
enabling enhanced control. Another reason for XAI is the need to improve the systems
continuously. An explainable model is easier to understand and can be more easily improved, because users and developers will have information about why an output was created and will know how to make the model better. Explain to
discover is for users to be able to gain insight into how certain things work. An explainable system, which is assumed to excel in one field, can teach us about new
and invisible laws in that field. Thus it is desirable that the system can explain its
learned strategy to us (Adadi & Berrada, 2018, p. 52144). XAI is undoubtedly a
significant step towards more transparency in the AI field and has immense potential
to move towards the center of attention in the debates. However, it is not the perfect
concept that would solve all our problems regarding AI.
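
As a minimal illustration of one common post-hoc explanation technique (permutation importance, here computed with scikit-learn on synthetic data; this is a generic sketch, not the specific approach of DARPA or Adadi & Berrada), the snippet below estimates how much each input feature contributes to a trained model's predictions by measuring the accuracy drop when that feature is shuffled.

```python
# A minimal post-hoc explanation sketch on synthetic data (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 5 features, only 2 of which are actually informative.
X, y = make_classification(n_samples=500, n_features=5, n_informative=2,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much performance degrades.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Such scores give users a rough, model-agnostic sense of which inputs drive the output, which addresses the "explain to justify" and "explain to improve" motivations above without opening the black box itself.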

2.2.1.2 Human-in-the-loop systems

A human-in-the-loop (HITL) system refers to a system that has a human operator as a crucial component. The human operator is a vital part of optimization, maintenance,
handling the challenging task of supervision, automated control process, and exception
control (Rahwan, 2018, p. 2). HITL systems require human touch in two distinct
stages: continuous development of an AI system and supervising its decisions and
actions. A simple form of HITL is being used in labeling data for training machine
learning algorithms. Another usage area is the interactive feedback loop for continuous
algorithm training, like email spam detection (Amershi et al., 2014, p. 5). HITL is not
only for improving the efficiency and accuracy of an AI system. It is also thought to
be a powerful tool for the regulation of the behavior of such systems. A HITL AI
system is expected to have two primary functions: Firstly, a human operator can
identify an autonomous system's misbehavior and prevent them. The most significant
example of this case is lethal autonomous weapon systems (LAWS). If a human operator is in the loop when the computer vision system on a weaponized drone misidentifies a civilian as a combatant, the operator is expected to catch and correct such a deadly mistake. Secondly, once humans are in the loop, they can be the accountable
entity in the case that the system misbehaves. Again, in the case of LAWS, if the
system causes any harm, the person in the loop would be expected to bear the
consequences and thus have the incentive to minimize any potential errors. This person
may be a human within a tight control loop, like an operator of a drone, or a much
slower loop, like programmers in a multi-year development cycle of an autonomous
vehicle (Rahwan, 2018, p. 3). I will expand the cases of LAWS in the following
sections.
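
A minimal sketch of the supervision idea described above: an automated recommendation is only acted upon after an operator's confirmation, and low-confidence outputs are never acted on autonomously. The threshold, the callback, and the threat-probability framing are illustrative assumptions, not a description of any fielded system.

```python
# A toy human-in-the-loop gate: the system recommends, the human decides.
from typing import Callable

def decide(probability_of_threat: float,
           ask_human: Callable[[], bool],
           threshold: float = 0.99) -> bool:
    """Act only if the system is confident AND the human operator signs off."""
    if probability_of_threat >= threshold:
        return ask_human()   # even high-confidence outputs require human approval
    return False             # low confidence: never act autonomously

# Usage: the lambda stands in for a real operator-review interface.
print(decide(0.995, ask_human=lambda: True))   # human confirms -> action allowed
print(decide(0.80,  ask_human=lambda: True))   # below threshold -> no action
```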

2.2.2 Behavior prediction


In conjunction with automated decisions, as AI systems can process information and
use it to make decisions, they can also recognize patterns that can be used to make predictions about future events and human behaviors. The matter of predicting human behavior can be addressed under the black-box and HITL topics as long as we treat every prediction as a decision output; however, the capability to predict future potential actions poses additional specific ethical concerns related to bias and fairness that require consideration (Dawson et al., 2019, p. 38). In fairness, AI systems’ capability
to predict human behavior can be a great asset when used appropriately to alleviate the
bias and fatigue of human-generated judgments and predictions. This technology is
believed to be instrumental in industries that require decision-makers to generate
frequent, accurate, and replicable predictions and judgments such as the areas of
justice, policing, and medicine.

2.2.2.1 Discriminatory predictions

A behavior prediction algorithm can yield discriminatory results in its predictions without having explicitly discriminatory variables included in the model. An
algorithm, for example, might not include the race as a parameter in the dataset, but it
might have a postal code parameter, and that might yield unfavorable predictions
against a neighborhood filled almost entirely with people of one race, hence leading to
discrimination (Dawson et al., 2019, p. 40). The ability of AI to recognize patterns
poses serious ethical challenges. To ensure that predictive systems are not indirectly
biased, all variables that are used to develop and train the algorithms must be
rigorously assessed and tested. It is illegal to discriminate based on religion, race,
gender, age, relationship status, and many more categories in Europe and many other
countries. However, AI systems explicitly use these categories to make predictions.
There is a thin line between discrimination and non-discrimination when using such
data. The color of skin, for example, is essential data for training an algorithm to
predict the likeliness of a person developing skin cancer. So, when is the use of a
particular input variable considered discrimination? Researchers point out that
indicators like education, ZIP codes, and family history can be extrapolated into a race
indicator. Based on some cases in the US, even location-based dynamic pricing is
being considered discriminatory. Similarly, racial bias cases have surfaced in the US when security forces used predictive systems to identify crime hotspots and potential
suspects (Angwin et al., 2015; Stobbs et al., 2017).
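
The proxy-variable mechanism described above can be demonstrated on purely synthetic data (the sketch below assumes Python with NumPy and scikit-learn; every value is fabricated for illustration): the protected attribute is never given to the model, yet its scores still differ across groups because the postal-code feature stands in for it.

```python
# Toy demonstration of indirect discrimination through a proxy variable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                  # protected attribute, hidden from model
postal_code = group + rng.normal(0, 0.3, n)    # proxy strongly correlated with group
income = rng.normal(5, 1, n)
label = (income + 0.8 * group + rng.normal(0, 1, n) > 5.4).astype(int)

X = np.column_stack([postal_code, income])     # the group itself is NOT a feature
model = LogisticRegression().fit(X, label)
scores = model.predict_proba(X)[:, 1]

print("mean score, group 0:", scores[group == 0].mean().round(3))
print("mean score, group 1:", scores[group == 1].mean().round(3))
```

The gap between the two mean scores appears even though the sensitive attribute was excluded, which is exactly why variable-by-variable auditing of training data is insufficient on its own.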

2.2.2.2 Unfair predictions

Unfair predictions occur because of biased datasets and because the real world's data
is not necessarily fair. People of different ages, gender, and socioeconomic status are
likely to be represented differently in datasets. If an algorithm is trained with such data,
it will produce different outputs for various sub-populations, even with the same input.
No dataset is perfect; they have varying levels of accuracy. As in the cases of
discriminatory predictions, if a facial recognition system’s errors concentrate on misidentifying one specific racial group, this is racial discrimination. It occurs due to the poor representation of that racial group in the dataset. However, even when the
datasets are reasonably accurate, they still can incorporate the unfair situation present
in the real world. Disadvantaged groups may be disadvantaged for historical or
institutional reasons, but that information is not necessarily understood or
compensated for by an AI, which will assess the situation based on the data alone
(Dawson et al., 2019, p. 41).

Fairness as a concept is multifaceted and complex. Research in 2018 shows that there are at least 21 different mathematical definitions of fairness in the literature (Narayanan, 2018). These differences are not merely philosophical; each of the various definitions produces an entirely different outcome. Putting different fairness
definitions aside, when there are so many different bias handling algorithms addressing
various parts of a model’s life cycle, it is particularly challenging for researchers in
algorithmic fairness to figure out when and why to use each one. As a direct result, the public, policymakers, and AI researchers need clarity on how to proceed.
Currently, this problem is being addressed by the developers, asking questions such as
“Should the data be debiased?”, “Should we create new classifiers that learn unbiased
models?” and “Is it better to correct predictions from the model?” (Bellamy et al.,
2018, p. 52).
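
As a minimal illustration of how a single fairness criterion can be made operational (here the demographic parity difference, only one of the many definitions cited above; the predictions and group labels below are placeholders invented for the example), the sketch compares positive-prediction rates across two groups.

```python
# One of many possible fairness metrics: demographic parity difference.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.6 - 0.2 = 0.4
```

Choosing this metric over, say, equalized error rates is itself a normative decision, which is precisely why the multiplicity of definitions creates the confusion described above.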

2.2.3 Data governance


Data governance is an integral part of AI ethics. Through various web services and highly capable personal devices, people’s opinions, emotions, bio-medical data, interactions, and behaviors are more easily tracked than ever. However, the public is mostly
unaware of the scale and degree to which their data is being collected, sold, shared,
collated, and used. Research shows that there are disparities in internet users' online
privacy attitudes and online privacy behaviors (Barnes, 2006, p. 21). When asked, users express their concerns about sharing personal data on the internet;
nevertheless, they share private and personal information about themselves and their
close family and social circles on various web platforms daily. Researchers refer to
this inconsistency as the privacy paradox (Barnes, 2006). Figure 2.1 (Trepte et al.,
2015, p. 23) shows that people have provided several reasons as to why their behaviors
online are inconsistent with their beliefs. One reason is a lack of knowledge about
online privacy, how to protect their data online from being re-shared, sold, used to
train models. It is assumed that users want to recalibrate their online activities to
protect their privacy, but they are having trouble doing so due to a lack of privacy
literacy (Trepte et al., 2015, p. 26).

Figure 2.1 : European public knowledge about consumer data collection and sharing.

Data governance is crucial despite the low levels of public understanding. Many public examples have highlighted unclear consent policies and data breaches that lead to the development of malicious AI systems (Krauth, 2018). Protecting user data privacy should be of utmost importance at each level of the AI life cycle. Privacy is explicitly stated to be a human right under Article 12 of the Universal Declaration of Human Rights. When handling data, protecting the consent process is crucial for protecting
personal privacy (Dawson et al., 2019). European Commission has produced the most
comprehensive policy work in that regard. EU’s General Data Protection Regulation
states that consent must abide by four key terms: “the individual is adequately
informed before giving consent, the individual gives consent voluntarily, the consent

38
is current and specific, and the individual has the capacity to understand and
communicate their consent” (Voigt & von dem Bussche, 2017, p. 10).
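
As an illustration only, the sketch below shows how these four conditions might be captured in a consent record so that each can be checked before data is processed; the field names, the one-year recency window, and the validation logic are hypothetical assumptions, not a prescribed GDPR data model.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class ConsentRecord:
        informed: bool        # adequately informed before giving consent
        voluntary: bool       # given freely, without coercion or bundling
        purpose: str          # the specific purpose consented to
        given_at: datetime    # when the consent was recorded
        has_capacity: bool    # able to understand and communicate consent

    def consent_is_valid(record: ConsentRecord, purpose: str,
                         max_age_days: int = 365) -> bool:
        # Checks the record against the four conditions for a given purpose.
        current = datetime.utcnow() - record.given_at < timedelta(days=max_age_days)
        specific = record.purpose == purpose
        return (record.informed and record.voluntary and current
                and specific and record.has_capacity)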

Even when consent is thoroughly verified before personal data is collected, data
breaches pose the next threat to privacy. This type of security breach, in which personal
data is unlawfully stolen, destroyed, altered, disclosed without authorization, or merely
accessed, is also a serious point of discussion in AI ethics debates. In the EU and
the USA, companies collecting, processing, and storing personal data are legally
obligated to disclose any data breaches and inform all affected users about them
(Voigt & von dem Bussche, 2017). However, in some cases, technical and organizational
shortcomings mean that such breaches take a long time to detect. According to the
Varonis Data Lab, in 2019 it took companies an average of 206 days to identify a breach
and an additional 73 days to contain it (2019, p. 5).

Another critical issue for data governance involves open data sources and re-
identification. To support the development of machine learning initiatives, institutions
may make a large amount of data available to the public. However, all personal
identifier parameters should be removed before doing so, and all the datasets must be
completely anonymized. Access to big data is crucial for AI research; if there is no
data to train an algorithm, researchers will not see the result of their work. A stark
example of this problem occurred in Australia in 2016 when the government decided
to make de-identified health data publicly available to further medical research and
policy development. To their surprise, researchers were able to re-identify individuals
from the data source by combining it with other publicly available information
(Dawson et al., 2019). This showed that significant risks remain even when handling
supposedly anonymized data.
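
The re-identification risk described here typically arises from linkage attacks, in which a supposedly anonymized dataset is joined with an auxiliary public dataset on quasi-identifiers such as postcode, birth year, and sex. The minimal sketch below uses entirely invented records and column names to illustrate the mechanism.

    import pandas as pd

    # Invented "de-identified" health release and invented public register.
    health = pd.DataFrame({
        "postcode": ["2000", "3052", "2000"],
        "birth_year": [1961, 1987, 1993],
        "sex": ["F", "M", "F"],
        "diagnosis": ["diabetes", "asthma", "hypertension"],
    })
    register = pd.DataFrame({
        "name": ["A. Smith", "B. Jones"],
        "postcode": ["3052", "2000"],
        "birth_year": [1987, 1961],
        "sex": ["M", "F"],
    })

    # Joining on quasi-identifiers re-attaches identities to "anonymous" rows.
    linked = health.merge(register, on=["postcode", "birth_year", "sex"])
    print(linked[["name", "diagnosis"]])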

2.2.4 Building block II: AI is ethically problematic


The second building block of the problem is that AI is an ethically problematic
technology, and a technological fix will not help to solve the socio-political problems
it creates. Even with the best intentions, an AI system can lead to catastrophic results.
Moreover, the efforts to produce solutions so far seem to be primarily technical.
Researchers have shown that the literature on explainable AI, one of the most promising
concepts for addressing ethical problems, rarely references social science or its derivatives
(Adadi & Berrada, 2018). Explanation is a form of social interaction, with psychological,
cognitive, and philosophical dimensions, yet ideas from social and behavioral science are
not sufficiently visible in this field. Human-in-the-loop (HITL) systems might only help
allocate responsibility and prevent immediate danger; they are another technical fix to a
much more complex problem. Universal solutions
will not apply to every current and future ethical matter, and each scenario requires a
thorough examination from multiple perspectives. Behavior prediction algorithms will
create differing ethical problems when used in the military field versus the retail sector.
I argue that the current solutions are mostly technological and static. Ethics in AI is a
dynamic subject and can only be addressed with a healthy mix of computer and social
sciences, addressing each case's specific environment.

2.3 Threats of AI from the Individual, National, and International Perspectives

AI systems are already being developed to be used for lethal purposes in war.
Deployment of such systems will dramatically change how wars are fought in the near
future. Thus it will beget severe problems for the stability of the international system
(Garcia, 2018). As AI systems become more capable and widely used, the use of AI is
expected to alter the landscape of threats facing the world order. Firstly, the deployment
of AI systems for malicious purposes will lead to the expansion of existing threats. It will
be cheaper to incorporate AI systems into organized attacks that otherwise require
extensive human labor, expertise, and intelligence. AI systems would multiply the number
of actors who can carry out attacks, increase the rate of attacks, and expand the set of
potential targets. Secondly, it will lead to the introduction of new threats. AI systems
would make possible novel attacks that would otherwise be impossible or impractical for
human agents. In addition, malevolent agents would seek to exploit potential weaknesses
in AI systems deployed as a defense. Thirdly, it will lead to a change in the typical
character of threats. It is argued that AI-enabled attacks would be especially effective,
difficult to attribute, finely targeted, and likely to exploit vulnerabilities in defensive AI
systems (Brundage et al., 2018).

Growing developments in lethal autonomous weapons systems (LAWS) and their
potential effects on the battlefield are attracting attention and military investment from
many countries (Geist, 2016, p. 14). This section will show how AI is projected to be
used in autonomous warfare, how it is leading to an AI arms race between key military
powers globally, and how it all threatens the international order in the digital, political,
and physical realms.

2.3.1 Malicious use of AI
The growing capabilities of AI systems are posing significant potential threats in the
security domain. AI capabilities that seem harmless at first, like systems excelling in
face detection, language processing, and vehicle navigation to produce convenient
products and services, can also be used for malicious intent. Specific properties of AI
make it necessary to analyze it from a security perspective. Firstly, AI is a dual-use
technology area: the same AI capabilities can be put toward civilian and military uses,
achieving commercial benefit or causing deadly harm. As there is no clear demarcation
between benign AI and malicious AI, except in some specific scenarios, AI researchers
cannot sterilize their research to prevent it from being directed toward harmful ends
(Brundage et al., 2018). Many decision points that are worth automating are open to
dual-use scenarios. Secondly, AI systems are scalable and efficient: upon deployment, AI
systems can perform specific tasks more quickly and cheaply than humans. Such systems
are also scalable; once they are trained to perform specific tasks efficiently, a simple
increase in processing power will allow them to deliver more results in shorter periods.
Thirdly, AI systems might increase psychological distance and anonymity: performing
many tasks depends on interacting with other people, making decisions that
respond to their behaviors, and being physically in the same environment. When actors
delegate decision-making to AI, it will lead to greater anonymity, and they would
experience greater psychological detachment from the lives they impact (Brundage et
al., 2018, p. 3). This issue is already relevant for remote military drone operators and
will be relevant for the deployment of LAWS as well. Lastly, most research
advancements in AI can be adopted by anybody: a high degree of open-source culture
in software development enables anyone with technical expertise to freely duplicate
and further develop software systems. Additionally, it is far more costly for attackers to
obtain the hardware associated with AI systems in a military scenario, but much easier
to access the software or the necessary scientific findings (Brundage et al.,
2018, pp. 3–5). These properties of AI shape the digital, physical, and political security
threat landscape.

2.3.1.1 Digital security

For digital security, it is commonly agreed that the cybersecurity field will face
accelerated deployment of AI technologies, both for increasing the number and the
complexity of attacks and for comprehensive defense. Recent years have shown
concerning signs of AI being utilized for malicious purposes in cyberspace. For example,
researchers showed that an AI spear-phishing system could write custom-tailored tweets
based on users’ interests and achieve substantial click-through rates on links that could
have been malicious (Seymour & Tully, 2016, p. 12). The strategic landscape of cybersecurity
might be shaped by the adaptability of AI systems as well. However, researchers
cannot reach a decisive conclusion on how adaptability will affect the offense and
defense balance (Brundage et al., 2018, p. 43). Attackers are undoubtedly likely to take
advantage of reinforcement learning's growing capabilities to learn from experience to
produce attacks that the current IT systems are the least prepared for (Surber, 2018).
These threats will grow as cybercriminals and state actors adopt capable AI systems
and discover novel applications of AI in malicious cyberattacks that have not yet been
researched.

2.3.1.2 Physical security

For physical security, combined advances in AI, robotics, and automation pose
significant threats. Advances in the field of ground and aerial robotics (drones), even
without autonomous capabilities, have proven useful on the battlefield and have been
used by both state and non-state actors (Geist, 2016, p. 13). Robots are easily customizable.
They can be loaded with dangerous payloads to attack a physical target from a long
distance: this ability was previously only available to countries with the resources
to develop or buy technologies like cruise missiles. The physical threats that robotics
introduces are present even without AI, yet they can be magnified when AI makes
such systems autonomous. Specific characteristics of robotics make it a highly
adaptable technology in commercial and military applications. Firstly, it is a global
sector: various robotics applications are being explored on every continent, and the
supply chain is not concentrated in one place but distributed across countries.
Secondly, robotics can be applied in a wide range of sectors: some applications may
be specific, but many are customizable for various purposes. Thirdly, autonomy and
robotics are already meshing together: although available robotics systems are
controlled by human operators to a varying degree, autonomous and semi-autonomous
systems are already being developed for commercial and military applications
(Brundage et al., 2018, p. 24). The concept of lethal autonomous weapon systems
(LAWS) is already provoking strong reactions from technical and policy experts.
It is stated that attributing greater degrees of autonomy to weapon systems will
inevitably result in greater damage being done by a single person, which in turn means
that smaller groups of people will be able to conduct such attacks. Compounding the
problem, most of the software components required for such systems are already
available as open source: navigation and planning algorithms, face detection algorithms,
and multi-agent swarming frameworks can be easily found and utilized for malicious
intent. Depending on battery or fuel capabilities, such systems can operate for longer
durations and pose threats to a greater number of places and people. I will
provide more details on LAWS in the following section (Csernatoni, 2019, p. 43).
Additionally, the intersection of cybersecurity and robotics will provide a more
complex set of threats to address. High adoption of AI-enabled robotics and Internet of
Things (IoT) devices in homes and offices provides greater efficiency but also an
opportunity for attackers to target and manipulate systems from afar. Subversion of such
cyber-physical devices would cause damage on a larger scale than would be possible
were those systems human-operated.

2.3.1.3 Political security

For political security, it is argued that AI’s impact on communication technologies will
change the nature of communication between individuals, corporations, and states; as
communication becomes increasingly mediated, more and more consumable content is
created by autonomous systems (Brundage et al., 2018, p. 8). Social media already has a
significant role in politics. It is the primary line of communication for politicians,
insurgents, leaders, and protesters. Social media platforms have decreased the cost of
communication and significantly increased the speed and reach of information
dissemination. Scholars who work on conflict and radicalization, for example, have
already turned to social media as a source of new data (Zeitzoff, 2017, p. 41).
Introducing AI capabilities to this equation will make the existing schemes more
extreme and create whole new political dynamics. It is argued that AI's scalability will
be effective in producing large-scale and persuasive untrue content to undermine
public discourse and strengthen authoritarian regimes' power grip. It is expected that
there will be an arms race for systems to produce and detect false information, and
states will want to utilize these systems to maintain their rule (Brundage et al., 2018,
p. 24). It is not entirely clear how these systems’ long-term effects will turn out, but
early examples already hint at what might come. As there is almost no entry barrier on
social media platforms apart from some rudimentary security protocols, AI systems can
camouflage themselves as humans with specific political views. These social media bots
are used to spread political messages and sow dissent (Weedon et al., 2017, p. 61). In
their current form, such bots are either controlled by humans or rely on elementary
automation.

Nevertheless, national intelligence agencies have leveraged these bots and have already
demonstrated their effectiveness in influencing political beliefs and mainstream media
coverage (Woolley & Howard, 2017, p. 135). Many cases in the Syrian Civil War and
the 2016 US Presidential Election have shown us the potential impact of bots on
swaying public opinion (Guilbeault & Woolley, 2016, p. 18). As previously
mentioned, current AI systems have already proved able to fool humans with autonomously
created human-like tweets; thus, it is highly probable that more sophisticated
autonomous software actors will take a prominent position in the political sphere.
Another level of fake information is the production of high-quality fake video and
sound content. The early results of deep-fake technologies show us that it is possible
to create high-fidelity video footage of politicians saying appalling (fake) things (Yang
et al., 2019, p. 12). It is argued that as AI systems become able to create high-quality
forgeries, they will challenge the “seeing is believing” status of audio-visual evidence.
Additionally, people will be able to easily deny any allegations, reasoning that such
evidence may easily be fabricated with the help of AI-enabled systems (Brundage et
al., 2018, pp. 14–15).

Increasing the quality and quantity of fake content with AI may not necessarily translate
into a direct increase in the effect of such content, as people will become more aware of
the possibility of encountering it online. Nevertheless, even that outcome, the decreasing
trust in online environments, will benefit authoritarian regimes: when objective truth is
no longer valued, they can present their version of the truth more firmly. Authoritarian
regimes will also gain more sophisticated control mechanisms through AI that may not
be readily available to democratic countries. Existing surveillance systems enable
authoritarian regimes to gather data on their citizens at a large scale, but processing this
data is still costly. It is
argued that AI will enhance these regimes' ability to prioritize attention by identifying
current and potential leaders of opposition groups and decrease the cost of mass
intelligent surveillance of a larger population (Brundage et al., 2018, p. 21).

2.3.2 Lethal autonomous weapon systems
Autonomous technology results from research in AI and robotics and draws on other
disciplines such as mathematics, psychology, and biology. Like AI, there is also no
universally accepted definition for the term autonomous or autonomous technology in
the context of AI and robotics, but there are good-enough attempts to do so (Atkinson,
2015, p. 188). Some researchers’ definitions of autonomy also include automation; others
reserve the term autonomous for more complex technological processes (Surber, 2018, pp.
37–38). For this thesis, I will pursue the latter definition.

The key benefit of autonomous technology (AT) is the ability of an autonomous system to “[…] explore the
possibilities for action and decide “what to do next” with little or no human
involvement and to do so in unstructured situations which may possess significant
uncertainty. This process is, in practice, indeterminate in that we cannot foresee all
possible relevant information […]. “What to do next” may include […] a step in
problem-solving, a change in attention, the creation or pursuit of a goal, and many
other activities […].” (Russell & Norvig, 2016, p. 11). In short, an AI system
is autonomous only if it can change its behavior during operation in response to
unanticipated events.
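
To illustrate this working definition, and with no claim about how any fielded system is built, the toy Python sketch below contrasts a scripted (automated) patrol, whose behavior is fixed in advance, with an adaptive one that revises its plan when it encounters an unanticipated event; the waypoints, obstacle, and helper functions are invented.

    def scripted_patrol(route):
        # Automation: the behavior is fixed in advance and never deviates.
        return [("move_to", w) for w in route]

    def adaptive_patrol(route, detect_obstacle, replan):
        # Autonomy (in the sense adopted here): the system may change its
        # behavior during operation in response to unanticipated events.
        actions, pending = [], list(route)
        while pending:
            waypoint = pending.pop(0)
            if detect_obstacle(waypoint):            # unanticipated event
                pending = replan(waypoint, pending)  # decide "what to do next"
            else:
                actions.append(("move_to", waypoint))
        return actions

    obstacle = lambda w: w == "B"                    # obstacle appears at "B"
    detour = lambda w, rest: ["B2"] + rest           # hypothetical detour point
    print(scripted_patrol(["A", "B", "C"]))
    print(adaptive_patrol(["A", "B", "C"], obstacle, detour))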

Autonomous technologies substitute human beings in decision-making in areas like
healthcare, potentially for the greater good. In addition to those promises, autonomous
technologies can be integrated into military technologies that can select their targets
and engage them without human supervision. These systems are mostly referred to in
the literature as Autonomous Weapons Systems (AWS), Lethal Autonomous Weapons
Systems (LAWS), or Robotic Autonomous Systems (RAS), and they also do not have
specific universal definitions (Surber, 2018, p. 15). Part of the reason is that AI is
already a challenging term to define, and once autonomy is added to the picture, it
becomes increasingly challenging to produce an encompassing and agreed-upon
definition. Nevertheless, the general premise of LAWS is that after their activation, they
can search for targets, identify them, select them, and engage them without further human
input, with the help of their sensors and governing algorithms. It is a debated issue
whether a system would still be called LAWS if a human operator could interfere with or
veto its decisions. For this thesis, I will assume a system is autonomous if it has the
ability to select and engage targets without human control, regardless of its accuracy and
whether it has a human operator. Even though a human who can interfere with its
decisions remains responsible for such a system, if the system displays some level of
autonomy when left on its own, it will be referred to as autonomous. Autonomous
weapons systems have various advantages on the
battlefield, such as reduced risk to military personnel, increased persistence on the
battlefield, force multiplication, increased capability over a wider area and deeper into
the adversaries’ territory, all at a potentially lower cost (Marchant et al., 2015, pp.
76–77).

2.3.2.1 Military advantages of AWS

One of the key drivers for AWS development is that they potentially decrease the
number of personnel needed to operate such weapons. This can be achieved by one
person controlling multiple AWS rather than one, while the analysis of data collected by
the systems is automated (US Department of Defense, 2007). Increased autonomy can
also substitute or expand ground forces, which would also add to the cost reduction.
Some argue that reducing a human operator's workload, like not needing to operate an
unmanned drone, would give the human operator more time and mental power to focus
on supervision and critical decisions, such as pulling the trigger. However, if one
human operator is put in charge of multiple systems, their attention would be divided
among those systems, and all the benefits of reduced workload would be lost (Ludovic
Righetti et al., 2014, p. 21). Another key driver is that AWS would reduce the
dependence on high-speed communication links required to operate robotic systems.
An AWS would continue its operations even if communication with the command center
is lost or non-existent, which would be useful in cases such as operating underwater or
in caves (US Department of Defense, 2007). Wireless communications are open to
malicious attacks that can interfere with an unmanned system if it requires continuous
remote human operation. Some scenarios include using fake GPS signals to redirect the
path of an unmanned aerial system or to take over its operations entirely (Krishnan,
2016, p. 16). Even with the most secure communication line, there are risks of limited
bandwidth for transmitting sensor data and video feeds back to the command base.
Autonomy provides an attractive way to overcome these problems. However, the absence
of communication also means that human intervention in critical decisions becomes
impossible. The third key driver for AWS is that it can come into play when human
capabilities of quick decision-making, quantitative information analysis, and reaction
time are limited. It is argued that as the speed, complexity, and information overload of
warfare increase, it will become difficult for humans to direct it (Surber, 2018, pp. 9–10).
If AWS can perform better, particularly in rapid decision-making and reaction to
situations, this can also yield a significant
advantage over adversaries. Nevertheless, there is still a challenging technical barrier
for AWS to outperform humans in various qualitative tasks that require sophisticated
human judgment and reasoning (Marchant et al., 2015, p. 123).

2.3.2.2 Technical issues with autonomous weapons

There are serious technical, legal, and ethical issues with autonomous weapons. The
degree of autonomy in the critical functions of a particular weapon system needs
thorough examination before activation. Automated target recognition systems are
already in use in existing weapon systems, including crewed aircraft and defensive
systems. However, the capabilities of existing systems are limited even when
distinguishing simple objects in simple, low-clutter environments (UK Ministry of
Defense, 2018). Even the autonomous navigation of an unmanned system would not
attract much ethical debate, as it is not causally related to weapons firing. A critical
aspect of any analysis will be the degree of autonomy in the targeting and firing
process, including acquiring, tracking, selecting, and attacking targets (Ludovic
Righetti et al., 2014, p. 43). The most significant technical challenge of AWS is
making sure that systems are working as intended. This challenge is relevant to any
weapon system where a malfunction in the system would result in dangerous
consequences (Krishnan, 2016, p. 57). As an AWS can adapt to the changes in its rigid
environment, it is accepted to be unpredictable. Due to the considerable number of
scenarios that an AWS could face, it is impossible to perform tests for every situation.
Current systems allow for verification and certification for the most straightforward
applications, as the comprehensive testing itself may cause further safety issues
(Saxon, 2016, p. 11). In addition to unpredictability, the problem of reliability is an
issue for any complex system. In the case of autonomous weapons, potential failures
might occur due to human error, cyber-attacks, software failures, breakdown in
human-machine interaction, among many others (US Department of Defense, 2007).
Predictability and reliability problems will grow in number and complexity as research
and development in AWS progresses. These problems will always be present when an
AWS interacts with its environment. One crucial question is whether and when
predictability will be sufficiently high and uncertainty sufficiently low to allow an
accurate legal review of an AWS (Ludovic Righetti et al., 2014, p. 49). Past accidents related
to autonomous elements of target selection and firing in existing weapon systems raise
concerns over potential risks with weapons that have increasing levels of autonomy
(Hawley, 2020, p. 5). If effective human oversight is the key to mitigating these types of
risks, a further problem arises: there may be insufficient time for a person to decide to
intervene and then override the system. There are two types of arguments regarding this
problem. The first argument favors greater autonomy in some weapons systems, holding
that advances in autonomous systems could ease human intervention by accurately
predicting decision points in advance and informing the human operator. This argument
relies on the assumption that reliability and predictability will be sufficiently high. The
other argument is that the focus needs to be on improving human-machine interaction
and supervision; modeling and developing human-machine interaction frameworks for
better coordination is a research field in its own right (Ludovic Righetti et al., 2014, p. 52;
Marchant et al., 2015, p. 13). In addition to malfunctions and accidents, further
concerns arise from the potential for intentional interference with autonomous weapon
systems. Software for autonomous weapons will always be subject to cyber-attacks
both during development and operations. If an AWS is interfered with and its objectives
and boundaries are altered, then the potential consequences could be disastrous
(Human Rights Watch, 2020, p. 2).

2.3.2.3 Legal issues with AWS

Any weapons systems, autonomous or not, must comply with existing international
law, like international humanitarian law (IHL) or the law of armed conflict. A weapon’s
compliance with IHL is determined by examining its foreseeable effects based on its
design and use in expected circumstances (Ludovic Righetti et al., 2014, p. 66). It is
expected that AWS should also comply with the core rules of IHL: the
rules of distinction, proportionality, and precautions in attack (Solis, 2016, p. 4). In the
fast pace of military engagement, respecting these rules will require a robust capacity to
exercise qualitative judgment. What we know of AWS so far shows that building such a
capacity would pose significant challenges. One question is whether IHL can be
programmed into AWS. It is still not an easy question to answer. Nevertheless, it is
argued that some AWS could comply with these rules in environments where there are
few or no civilians, for instance where they are meant for “machine-on-machine”
operations or where their functions pose little or no risk to civilians
(Anderson & Waxman, 2017, p. 53). Programming quantitative evaluation capabilities
into a system is possible; encoding qualitative judgments, however, remains
problematic.

The rule of distinction

IHL describes the rule of distinction under Article 48 as follows: “In order to ensure
respect for and protection of the civilian population and civilian objects, the Parties to
the conflict shall at all times distinguish between the civilian population and
combatants and between civilian objects and military objectives and accordingly shall
direct their operations only against military objectives.” (Solis, 2016, p. 3). The types of
autonomous target recognition systems in use today were developed to operate in highly
limited, non-complex environments in which they can only detect simple objects (UK
Ministry of Defense, 2018). Any potential weapon system that is to distinguish between
civilian and military objects, between combatants, non-combatants, and persons hors de
combat, and to predict behaviors in complex environments will require extensive
qualitative assessment, making it far more challenging for autonomous weapon systems
to be IHL-compliant in such environments (Ludovic Righetti et al., 2014, p. 72).

It will be incredibly challenging for AWS to distinguish civilians from combatants or
other fighters, as the line between civilians and fighters blurs in most of the world's
conflicts (Anderson & Waxman, 2017, p. 165). Under the rule of distinction, attacks
must only be directed at combatants. Civilians are protected from deliberate attack
unless and for such time as they directly participate in hostilities (International
Committee of the Red Cross, 1949). Incorporating these rules into LAWS will not be
straightforward, and programmers will have a tough challenge. In a classic armed
conflict scenario, it is expected that human military personnel can distinguish between
civilian and combatant. However, in more complex scenarios that require a qualitative
assessment of the situation, such as distinguishing a uniformed and armed civilian like a
police officer or hunter from an armed and uniformed soldier, it is argued that LAWS
will not be able to provide the required certainty (Human Rights Watch, 2018, pp. 5–8).
As mentioned, recent military conflicts occur near or within places where civilian
populations live; civilians may be involved in the conflict voluntarily or involuntarily and
do not wear distinctive formal uniforms. It is a significant challenge to separate lawful
targets from persons protected from attack (Marchant et al., 2015).

The first legal challenge for LAWS to be IHL-compliant is to accurately distinguish a
civilian directly taking part in hostilities from one who is not. According to IHL, direct
participation in the hostilities have three criteria: “1) the act must be likely to adversely
affect the military operations or military capacity of a party to an armed conflict or to
inflict death, injury or destruction on persons or objects protected against direct attack;
2) there must be a direct causal link between the act and the harm likely to result either
from that act or from a coordinated military operation of which that act constitutes an
integral part; and 3) the act must be specifically designed to directly cause the required
threshold of harm in support of a party to the conflict and the detriment of another.”
(Ludovic Righetti et al., 2014, p. 87). Programming these criteria into a machine is
extremely hard, as it involves interpreting an individual’s intentions. These criteria are
challenging even for human military personnel to apply, and expecting a machine to
perform at the same level as or better than humans is not a reliable forecast given the
limitations of current and expected technologies (Human Rights Watch, 2020, p. 11).

The rule of proportionality

IHL describes the rule of proportionality under Article 51(5) as follows: “an attack which
may be expected to cause incidental loss of civilian life, injury to civilians, damage to
civilian objects, or a combination thereof, which would be excessive concerning the
concrete and direct military advantage anticipated.” (Solis, 2016, p. 6). This rule
acknowledges that the civilian population and objects might be affected by the attacks
directed at a military objective. Nevertheless, these civilian casualties can only be
lawful under the treaty if they are not excessive considering the anticipated direct
military advantage (Thurnher, 2014, p. 216). In and of itself, it is said that this rule is
the most complicated rule to interpret: each case should be assessed in its specific
context, in a considerably short amount of time, to figure out whether the projected
military advantage would outweigh the civilian loss (Ludovic Righetti et al., 2014, pp.
54–55). To illustrate the difficulty of application, it is argued that civilian casualties
might be acceptable in the case of an attack on an enemy tank that is about to fire, from
the perspective of anticipated military advantage; however, the same number of civilian
losses would be excessive if the tank is standing still and posing no immediate threat
(Schmitt, 2012, p. 24).

Programming the rule of proportionality into an autonomous system can only be
achieved if values could be attributed to objects and humans to perform probability
calculations in the given context. Given the rule's complexity, it is a challenge to code
LAWS to respect the rule of proportionality in the fast-changing and complex nature
of today’s warfare. Nevertheless, the literature shows two opposing ideas on the
matter: one group argues that programming proportionality is possible and that available
systems are already performing such tasks in specific ways. Others argue that it is
impossible to achieve accuracy with AWS, given that these cases involve many non-
quantitative variables. From the first group, Schmitt points to the “collateral damage
estimate methodology” (CDEM) used by US and EU countries, which helps to assess
attack tactics, precision, blast effect, probability of civilian presence, and composition
of buildings (European Union Military Committee, 2016, p. 6). The methodology dictates
that the higher the probability of collateral damage, the higher the level of command
required for approval. This methodology is based on scientific algorithms and, as it is
argued, can be programmed into AWS to calculate the probability of civilian harm in
target areas (Schmitt, 2012, p. 27).
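
The escalation logic attributed to the CDEM can be sketched in a few lines of code. The thresholds and command levels below are invented for illustration and do not reflect the methodology's actual values; the sketch only shows the general idea that a higher estimated probability of civilian harm requires approval from a higher level of command.

    def required_approval_level(p_civilian_harm: float) -> str:
        # Map an estimated probability of civilian harm to the command level
        # whose approval would be required. Thresholds are purely illustrative.
        if p_civilian_harm < 0.01:
            return "tactical commander"
        if p_civilian_harm < 0.10:
            return "operational commander"
        if p_civilian_harm < 0.30:
            return "theater commander"
        return "national or political level"

    # Example: an option with an estimated 12% probability of civilian harm
    # would require approval above the operational level in this toy scheme.
    print(required_approval_level(0.12))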

It should be noted that the CDEM does not ensure compatibility with the rule of
proportionality; it is a policy-related instrument to figure out which level of command
is needed for approval of collateral damage (European Union Military Committee,
2016, p. 8). Thurnher argues that it is conceivable to build AWS that operate within the
law, running on a framework of pre-programmed values. He argues that an excessive
amount of collateral damage can be quantitatively described and coded into the system,
and the human operators should set the level of excessiveness at a highly conservative
level to comply with the rule (Thurnher, 2014, p. 217). Schmitt is optimistic about AI’s
potential ability to calculate probable human casualties; however, he accepts that
regardless of the impressive advances in the AI field, it is implausible in the near future
that AI would be able to perform an accurate assessment of an attack’s potential
military advantage (Schmitt, 2012, p. 28). The other side of the debate is more vocal
about banning LAWS altogether. Most prominently, in its report, Human Rights
Watch states that the psychological process of human decision-making is impossible
to replicate in computer code, thus rendering compliance with the rule of proportionality
impossible (Human Rights Watch, 2018, p. 14). Others argue that, considering the
present technical challenges, programming the systems to perform qualitative evaluations
to comply with IHL seems highly complicated. Thus, if LAWS are ever to be developed
and deployed on the battlefield, their use cases should be minimal and limited to
situations where potential civilian casualties are at a minimum level (Wagner, 2012).

Another critical question is: At what stage should a LAWS perform a proportionality
assessment to comply with it? Should it be during the planning and programming
phase, or should it be left to the system's on-the-spot decisions? Joshi argues that if,
somehow, the rule of proportionality were coded into the system and reliable in every use
case, it would meet the law's requirement (Joshi, 2018, p. 1176). As a system's
performance will vary depending on the environment's complexity, this argument
appears doubtful.

The rule of precautions in attack

Article 57(2) of AP I of IHL requires that all parties to a conflict take constant care to
spare civilians and civilian objects. It states that a party who decides on an attack must
do everything feasible to verify that their targets are neither civilians nor civilian
objects; take all feasible precautions in choosing the means and methods of attack to
avoid or minimize civilian loss; and refrain from launching an attack that has apparent
civilian impacts (Solis, 2016, p. 55). LAWS should comply with
these rules as well. Although it is not defined in the law, the feasibility of precaution
is an essential question for LAWS. It is worth noting that a precaution's feasibility
depends on the possibilities available to the party who plans, decides on, and executes
the attack; it does not depend on what is feasible for the machine (Ludovic Righetti et al.,
2014, p. 86). One argument is that LAWS would have greater freedom in taking
additional precautions because a human operator's life will not be at risk during the
operation. AI systems would learn and improve based on experience and, thus, be
better at taking precautions than humans on a battlefield (Rosert & Sauer, 2019, p.
372). Thurnher argues that LAWS would be able to comply with these rules if they are
tasked only with attacking pre-identified military equipment: tanks, anti-aircraft
batteries, artillery pieces. AI systems are already showing excellent results in identifying
objects, even from visual inputs that a human operator would find hard to judge, and
they would perform sufficiently well if the nature of their targets is limited to military
equipment (Thurnher, 2014, p. 25).

In order to take precautions in situations where LAWS are in active use, there are two
decision points: the first is for the personnel in charge of LAWS to decide whether or not
to deploy LAWS for a specific mission, and the second concerns the LAWS’ decision to
choose specific means to engage the identified target (Anderson & Waxman, 2017, p. 31).
For the first point, researchers argue that LAWS could be lawfully deployed only if the
personnel in charge decide that the military objective can be achieved with fewer civilian
casualties by LAWS than by the other available options. Additionally, they argue that
LAWS would have higher precision than human personnel, posing a lower threat of
collateral damage (Inkster, 2017;
Schmitt, 2012). For the second point, similar challenges mentioned above come into
play: it will be challenging to program LAWS to make qualitative assessments to
comply with the rules (Ludovic Righetti et al., 2014, p. 43).

Who is responsible for the action of LAWS?

Figuring out who is responsible when things go wrong is always a big challenge. There
are differing views in theory and practice regarding placing the responsibility for LAWS’
actions on humans. Currently, the UK’s approach is to place the legal responsibility for a
military activity on the last person who issued the authorization command (UK Ministry
of Defense, 2018). The US Department of Defense also
acknowledges human responsibility, stating that the person who authorizes the use of
or is in partial or complete control of an unmanned system should act with appropriate
care and in full compliance with the law of war (US Department of Defense, 2007).

In the specific case of LAWS, there are two prevailing lines of thought on whom to hold
responsible for the consequences of LAWS’ actions. The first argument is that the
operators or commanders should be responsible: if they deploy LAWS while being fully
aware that those systems are not capable of distinguishing civilians from combatants,
they would be guilty of deploying weapons in an unlawful manner (Thurnher,
2014, p. 1147). The military commander’s subjective decision could let the LAWS
engage unreliable targets and lead to deadly strikes with unexpected civilian casualties.
To avoid that, human operators must have a thorough understanding of the systems
they are operating. They should be able to understand the weapon’s programming
sufficiently for criminal responsibility to attach (Anderson & Waxman, 2017, p. 53). Others
argue that responsibility should be direct rather than command responsibility:
commanders do not have to know the systems’ programming; they have to understand
what the system is able and unable to do (Sassóli, 2014, p. 312). IHL has rules in place
to hold commanders responsible for the actions of their subordinates, and if they were,
to some extent, aware of the criminal actions and could take measures to stop or
prevent the actions but did not, then they are criminally responsible for the
consequences of such actions too (Solis, 2016, p. 317). It is essential to mention that
this rule only applies to human subordinates, not to the performance of military
equipment. However, this has not stopped some legal experts from suggesting that, by the
same logic, if a commander is responsible for the actions of autonomous human
personnel, they should be responsible for the actions of autonomous weapon systems as
well (Ludovic
Righetti et al., 2014, p. 72).

The second argument is to hold programmers and manufacturers accountable. If the
LAWS are general-purpose systems developed before any armed conflict, then, when the
systems act in a way that would amount to a war crime, it is not straightforward to hold
manufacturers responsible for that war crime, as it is specific
to the armed conflict (Marchant et al., 2015, p. 141). Law experts argue that if a
programmer intentionally programs the LAWS to commit war crimes, they would be
solely responsible. However, in more complex scenarios where it is assumed that
systems are programmed and altered for each specific armed conflict, some argue that
the programmer who misprograms the system should be held responsible as an indirect
perpetrator. The operator, too, should be held responsible as an accessory to the
ensuing war crime if they are aware of the system's limitations but decide to deploy
them regardless (Anderson & Waxman, 2017, p. 347).

2.3.2.4 Ethical issues with LAWS

The general moral objection held by robotics researchers, policymakers, military
professionals, and the public is that it is concerning to remove human involvement from
the decision to use force by LAWS (Arkin, 2011, p. 9). However, it is a challenging view
to engage with, as it rests on a moral principle that one either accepts or does not. It is
also a point of debate to figure out the level of impermissible autonomy,
given that the automation of weapons functions is likely to occur in incremental steps
(Anderson & Waxman, 2017, p. 349). Some law experts argue that the decision to take
life or engage with human targets can only come from humans, as most legal and moral
codes dictate (Sharkey, 2019, p. 77).

Another ethical question is whether it is inherently wrong to deploy LAWS to decide
who and when to kill. Previous assumptions were based on deploying LAWS for
specific conflicts with specific orders. Assuming that LAWS are deployed for
patrolling and surveillance for an indefinite duration, should they be allowed to
identify and engage with potential targets based on their judgments? (Ludovic Righetti
et al., 2014, p. 73). It is argued that LAWS have a greater tendency to be impartial and
ethical than human personnel, especially in the heat of an armed conflict: LAWS are not
emotive and cannot hate the enemy in a way that would lead to disproportionate action
(UK Ministry of Defense, 2018). Waxman argues that it does not matter whether a
machine or a human makes the decision as long as the system behaves in a certain way
with a specified performance level; the package it comes in is not a moral issue
(Anderson & Waxman, 2017, p. 128).

2.3.3 The AI arms race


The enormous potential military advantage of LAWS has pushed many governments
to invest heavily in research and development in that field. Concerned about a potential
arms race, a group of AI and robotics researchers published an open letter in 2015
calling for a complete ban on offensive autonomous weapons beyond meaningful
human control (Future of Life Institute, 2015). Since the letter, there have not been
any concrete efforts to regulate LAWS globally, but we have seen many additional
calls for a ban or strict regulation on LAWS use.

Advances in AWS have happened incrementally and have gone politically unnoticed for a
long time (Haner & Garcia, 2019, p. 338). Military spending on AWS globally is
expected to reach $16 billion, and spending on general military AI systems to reach
$18 billion by 2025 (Research and Markets, 2018, p. 37). This section will show how
some countries aim to gain an edge in LAWS technologies, showing their visions,
projected investment targets, and talent capacity. I will take the USA, China, Russia,
and the European Union as significant global AWS investment and development
players and compare their strategies. It is hard to analyze military spending, as most
of it is classified. Haner & Garcia’s analysis below uses available public data,
combining them with data from secondary sources to estimate countries' relative
standings. The analysis is based on three categories: governments’ intent to utilize
AWS based on their policy papers and action; their general financial capacity; and
their capacity to cultivate AI talent (Haner & Garcia, 2019, p. 339).

Table 2.3 : Lethal AI arms race by numbers.

Indicator                                               | United States | China   | Russia | European Union
Resolve to develop LAWS (intent)                        | High          | High    | High   | Mixed
Citizen trust in AI (intent)                            | 25%           | 70%     | 40%    | 29%
2018 defense spending, $ billions (capacity)            | 649           | 250     | 61     | 281
2017-2021 projected drone spending, $ billions (capacity) | 17.5        | 4.5     | 3.9    | 8
2018 GDP, $ trillions (capacity)                        | 20.5          | 13.6    | 1.6    | 18.8
Number of AI companies (expertise)                      | 2,028         | 1,011   | 17     | 859
1997-2017 AI-related publications (expertise)           | 369,588       | 327,034 | NA     | 425,166
AI-related patents and patent applications (expertise)  | 279,145       | 66,508  | NA     | 233,050
AI talent (expertise)                                   | 28,536        | 18,232  | NA     | 41,459

2.3.3.1 United States

The United States has higher military spending than China, Russia, and the European
Union combined. Thus it is not surprising that the US is also the leader in LAWS R&D
activities (SIPRI, 2019). Autonomy has been an official component of the
United States national security strategy since 2012 with the release of the Department
of Defense (DoD) Directive 3000.09 (Haner & Garcia, 2019, p. 140). The most
comprehensive initiative came from the Trump Administration when President Donald
Trump signed Executive Order 13859, announcing the American AI Initiative in 2019.
This order collected all the departmental efforts under one roof, creating an integrated
approach to utilize AI for military and economic purposes. Following the executive
order, the US Department of Defense (DoD) announced its AI strategy to enhance
national security. The DoD’s strategy includes commitments to accelerate the delivery
and adoption of AI capabilities, establish a common foundation for scaling AI’s impact,
synchronize DoD AI activities, and attract and cultivate a world-class AI team (Cronk,
2019).

Public trust in AI is at 25%, and experts raise many ethical concerns from various
fields. As a stark example, Google employees refused to work on a military AI contract
that the company had taken from the DoD in 2018 (Garcia, 2018, p. 13). Regardless of
the public mistrust in AI technologies, the US is leading the AI field in talent capacity,
the amount of ready-to-use equipment, and investment. Starting in 2010, the US
allocated $18 billion to be spent on developing autonomous military technology until the
end of 2020 (SIPRI, 2019). Additionally, the United States has the highest number of
private companies working on AI, the most AI-related publications, and the most AI
patent applications (Haner & Garcia, 2019, p. 139). The number of AI researchers in the
top 10% of their field working in US institutions surpasses that of any other country in
the world.

AI initiatives in the US took an exclusionary and nationalist turn during the Trump
administration. It is clear from the policy documents that the US’s intention is
to use its competitive edge in the AI field to expand military capabilities aggressively.
In addition to the arms race, the American AI Initiative also promises the responsible
development of AI that would respect “human rights, the rule of law, stability in
the institutions, rights to privacy, respect for intellectual property, and opportunities to
all to pursue their dreams” (Executive Office of the President, 2019). It promises to
address ethical challenges through R&D programs and engage in international
discussion on AI development. However, the report does not provide details about its
presumptions and understandings regarding AI’s legal and ethical problems, nor does it
explain how they will be addressed on a global scale.

2.3.3.2 European Union

The European Union collectively has more AI talent than the US, and EU countries focus
on developing industrial AI and robotics. This is not to say that EU countries do not
develop military AI technologies: France, Germany, Sweden, and Italy are developing
autonomous military robotic systems (Garcia, 2018, p. 15). It is hard to say that the
European Union collectively or its member countries are engaged in an AI arms race, as
there are mixed views on AI within the union. Austria, for example, has joined the calls for
banning LAWS globally (Human Rights Watch, 2020).

EU countries, the European Parliament, and the European Commission have released a
variety of memos and papers on AI in the EU since 2016, but the first comprehensive
step to utilize AI in the European Union came with the release of a report titled
“Trustworthy AI” by the High-Level Expert Group (HLEG) on Artificial Intelligence in
2019. This report
stands out more as a preventive governance approach: it states that any AI
development process or the end product should comply with all existing applicable
laws and regulations, ethical principles, and societal values (AI HLEG, 2019).
The European Commission appears to be taking a human-centered stance and building its
AI strategy on values, rather than seeing ethical and social concerns as something to be
addressed later. These values can be characterized as fairness, mitigating potential
harm, transparency, ensuring accountability and social wellbeing, privacy, and
fundamental human rights. In June 2019, the HLEG published another report with
recommendations on boosting the AI industry, putting forward a detailed vision. This
report points to the rise of AI investments in the US and China, stating that if the EU
fails to take action to expand investments and regulate markets accordingly, the EU
economy will face a €400 billion loss in cumulative added value to GDP by 2030 (AI
HLEG, 2019, p. 43). If appropriate actions are taken, the report shows that Europe’s
potential gain is as high as €2.7 trillion, or 19%, added to output by 2030. Between 2018
and 2020, the European Commission set aside €1.5 billion in funding for AI-related
initiatives, with an additional €20 billion from public and private investments
(Csernatoni, 2019).

In 2017, the European Commission proposed the foundation of a European Defense
Fund (EDF) to promote defense cooperation between EU countries and the private
sector, to accelerate innovation, and to reduce defense investment costs for the union. It
aims to promote European autonomy in defending itself against emerging threats. The
EC proposed allocating €13 billion to the fund for 2021-2027 and between 4% and 8% of
this budget to address disruptive defense technologies and high-risk innovation,
including but not limited to autonomous weapon systems (Csernatoni, 2019). Indeed,
the EU wants to utilize AI technologies for defensive and military purposes; however,
it also wants to set an example for the rest of the world with its humanitarian policy
perspective. The European framing of the AI race seeks to ensure that future
developments in this field happen on European terms, respecting European values,
fairness standards, and regulations. Basing all AI research and development activities on
ethics could be how the EU tries to avoid the “global AI arms race” rhetoric, and the EU
could
indeed become a leader in ethical AI, setting the stage for global standards.

2.3.3.3 China

In 2017, China published its extensive AI vision under the title of “Next Generation
Artificial Intelligence Development Plan,” which clearly states that China wants to be
the AI leader by 2030 (Webster et al., 2017, p. 45). The plan involves initiatives and
goals for R&D, industrialization, talent development, education and skills acquisition,
standard-setting and regulations, ethical norms, and security. The plan also outlines the
intention to utilize AI on the battlefield in association with LAWS, and with 70%
citizen trust in AI and the pressure the Chinese government can put on the private
sector to transfer technology to the state, it seems that China will not face any
significant domestic challenge to LAWS development (Haner & Garcia, 2019, p. 142).
Additionally, China has controversial intellectual property laws, which have allowed the
Chinese state to copy technologies developed outside China without legal repercussions
and to make technological leaps forward in a non-linear fashion.

Among all the countries that have published some sort of vision for AI, China places the
heaviest emphasis on utilizing these emerging technologies for military
purposes. In the Next Generation Artificial Intelligence Development Plan, the
Chinese government mentions “civil-military fusion” as a framework for further AI
investments. Through civil-military fusion, China aims to eliminate the barriers
between civilian research on AI and its defense industries. It is an aggressive national
strategy to transform the Chinese army into a world-class military by 2049 (Webster
et al., 2017, p. 47). China acknowledges that AI will drive the next revolution in the
military and wants to be the first country to develop intelligent warfare capabilities.
The Chinese annual budget for weapons development is estimated to be $250 billion, and
$4.5 billion of that budget is projected to be spent on drone technology by 2021 (SIPRI,
2019). With the new civil-military fusion investments, the Chinese State Council
estimates that the country’s AI industries will be worth $59 billion by 2025 and $150
billion by 2030 (Webster et al., 2017, p. 47).

China started focusing on AI later than the US, so it has fewer AI publications to date.
However, looking at publications from 2011 onward, Chinese scientists published almost
twice as many AI papers as the United States during the same period (Garcia, 2018, p. 8).
It is argued that China’s population advantage will be helpful in its AI development
strategies: while some countries will develop LAWS to augment their soldiers and fill
near-term gaps in security, China, already having the largest military in the world, will
be able to focus the majority of its resources on long-term strategic investments (Haner
& Garcia, 2019, p. 139). Chinese efforts also face some challenges. Firstly, its talent pool
is draining: the number of AI experts is already below that of the US and the EU, and top
AI talents are leaving the country for better offers at European or American institutions
(Geist, 2016, p. 42). Secondly, civil-
military fusion has already faced a global backlash: The US Department of State called
it a “[…] threat to the trust, transparency, reciprocity, and shared values that underpin
international science and technology collaboration and fair global business practices”
and “enormous risk to America’s national security” (Department of State, 2020).

2.3.3.4 Russia

Russia has been one of the most ardent supporters of developing unmanned weapon
systems. The Russian leadership has expressed interest in utilizing AI for economic and
military purposes in its rhetoric since 2017. However, the most concrete step came with
the release of Presidential Decree 490, titled “On the development of artificial
intelligence in the Russian Federation” (On the Development of Artificial Intelligence in
Russian Federation, 2019). According to the decree, Russia
aims to improve its position to catch up with the rest of the developed world by 2024
and attain global leadership in certain areas by 2030. Development priorities consist of
transforming manufacturing and agriculture into high-performance, export-oriented
sectors through emerging technologies and increasing by 50% the number of entities
involved in developing and investing in the AI field. The Russian strategy focuses
on supporting scientific research on AI by increased public and private investments.
The government also aims to redesign existing regulations and legal norms to lower
the barrier against developing systems such as autonomous vehicles and autonomous
weapons (Markotkin & Chernenko, 2020).

Additionally, the strategy includes the intention to build an AI talent base domestically
by introducing AI-related educational modules at all educational levels, creating
respective job development and professional retraining programs, improving STEM
education quality, and attracting foreign talents with incentives (On the Development
of Artificial Intelligence in Russian Federation, 2019). In terms of military
applications, the document does not provide any detail, only mentioning the necessity
of investing in AI to guarantee national security. In that regard, Russian ambitions are
primarily commercial when it comes to AI.

The Russian Ministry of Defense announced the program “Creation of Advanced Military Robotics up to 2025 with a forecast up to 2030” in 2015. The program
provides a concept for utilizing robotic systems in military activities and lays out
general technical requirements for ground robotic systems. According to the program,
30% of Russian military power should be partially or fully autonomous by 2030
(Kozyulin, 2019). It is also challenging to find any concrete numbers about Russian
military spending on autonomous technologies. Nevertheless, it is clear that Russia
wants to utilize LAWS for military purposes. According to Human Rights Watch, Russia has consistently opposed negotiating any legally binding instrument on LAWS, arguing that LAWS will not become a reality soon, that human control concepts involve subjective assessments, and that current international humanitarian law is adequate to address these issues (Human Rights Watch, 2020, p. 15).

2.3.4 Building block III: AI will dramatically impact global security


AI will reshape the nature of existing threats and introduce new global threats that will
affect politics, business, and everyday life. The diffusibility of AI will lead to the production and procurement of AI-enabled lethal and non-lethal military equipment, malicious systems for cyberattacks and the mass production of false information, and mass surveillance tools for oppression. A low entry barrier will
enable non-state actors to produce and utilize such systems to threaten peace and
stability further. Because AI systems require only attainable expertise to develop and can be directed at a wide range of goals, they can be hacked to pursue malicious ends. It is probable that we will see AI systems with a “just add your own goals” property: such systems may cause damage at an unprecedented scale, depending on the intentions of the person, group, or institution in charge (Brundage et al., 2018, p. 102). The military advantages of autonomous weapon systems are too significant for militaries to ignore, and major military powers have already started investing heavily in the development of AI-enabled systems. An AI arms race is already underway and hard to stop at this point.

The technical development of LAWS is already outpacing the discussions on the legal
and ethical problems surrounding it. The projected capabilities of LAWS are not fully
compatible with international humanitarian law, and there is still no consensus on the
legal position of such systems in the international order. AI technologies will
significantly enhance threats to personal freedoms, national sovereignties, regional
stability, and global peace. The increase in the number and anonymity of actors will make it challenging to identify and deal with the source of threats. Authoritarian regimes will have a tighter grip on the production and dissemination of information and more extensive capabilities for suppressing dissent and oppressing larger populations. These threats should be addressed on a global scale before they
become mainstream practices.

2.4 Existing Governance Frameworks on AI

The majority of the developed nations have announced their national and regional AI
strategies to utilize the latest technology's benefits to gain economic and military
competitiveness. As governments' interest in AI peaked, the potential risks of AI
presented above have led to an explosion of international debates on AI's ethics and
governance. There is no grand scale that can show us whether AI’s benefits outweigh
the harms, yet many institutions have put out normative and regulatory frameworks,
setting up governance networks and advocating structures for the ethical and human-centric utilization of AI (Jelinek et al., 2021, p. 143). A literature scan on the keyword “AI
ethics” results in more than one hundred different policy papers, regulatory
frameworks, and governance suggestions. Regardless of the differences in how each
document interprets the normative principles, the majority, if not all, point out that AI
should be safe, reliable, explainable, secure; it should serve society, respect the
environment, and critical decisions should be made under human supervision. The
sheer number of normative frameworks published shows a global governance gap between existing and future collaborative action schemes and an ethics gap between intentions on responsible AI use and existing internalized human action patterns (Dafoe, 2018). Some argue that these frameworks have been defined very rapidly without corresponding developments in the field: AI is still in its infancy, and there is no point in trying to limit its use and deployment based on hypothetical scenarios (Human Rights Watch, 2020, p. 12). Others argue that the rate at which AI advances already undermines current regulatory approaches, and that new regulatory frameworks cannot function alone without proper new governance structures (Jelinek et al., 2021, p. 148).

Half of the policy frameworks regarding AI are prepared by governments and intergovernmental bodies. For the reasons mentioned earlier, governments wanted to
utilize AI for economic and military superiority. Since 2015, countries like the USA, China, the European Union (led by Germany, France, and Italy), Denmark, Sweden, Japan, Finland, South Korea, India, Australia, Mexico, and many more, alongside organizations like the EC, OECD, G20, G7, and NATO, have produced extensive documents
on AI, addressing the impact of modern technology from multiple perspectives. Half
of the other documents are shared by global corporations that are already leading the
technological breakthroughs in AI. Some of these companies, like IBM, Microsoft,
Tencent, Facebook, Baidu, SAP, are not only shaping AI from a technical perspective;
they engage in global discussions on the future of the technology. Corporations' intentions are not necessarily aligned with the public interest: alongside suggestions on ethics, some focus on promoting self-regulation, cautioning against government intervention, and limiting antitrust efforts and data governance regulations (Schiff et al., 2020). Lastly, NGOs published the remaining quarter of documents to influence the policy
discussions from an outsider position. A considerable number of these NGOs are
newly formed, working solely on AI or responsible technologies. However, there is
skepticism as to what extent these organizations will successfully change the global
discourse.

This section will look at current suggestions on regulating and governing AI technologies with a particular focus on the military. It is impossible to summarize more
than a hundred different documents in this thesis; thus, I will divide them into three
categories based on their given intentions to identify common patterns and structures
within them. I will start with the ethical frameworks that are shared by all the actors
mentioned above. Some of these frameworks are shared as standalone documents;
some are embedded in strategy documents, especially in national AI strategies. These
frameworks address all aspects of AI, so they are general documents that aim to utilize
AI for competitive advantage while respecting human values. The second section will
focus on specific AI governance suggestions based on previous global governance
experiences on nuclear, cyber, and internet regulations. This section will explain how
some actors suggest using the existing governance structures to address this emerging
challenge. The last section will focus on the calls for complete or partial bans on AI
systems' military usage. This section will solely focus on the governance of LAWS
and military applications of AI, presenting the rationales behind the calls for banning
such systems in advance.

2.4.1 Ethical frameworks


Ethical frameworks and AI strategies consist of documents shared by public, private,
and non-governmental organizations that aim to achieve multiple goals on the policy
level. It is hard to analyze the sincerity of the principles mentioned in those documents
or figure out the exact motivations in place; thus, unless there is a piece of tangible
evidence proving otherwise, I will take the presented motivations at face value and
include them in the analysis. It has been shown that there are five overarching themes in AI strategy and ethical framework papers: social responsibility, competitive advantage, strategic planning, strategic interventions, and global leadership (Schiff et al., 2020, p. 154).

The social responsibility theme covers the intentions to advocate for socially beneficial
AI and reduce the possible social risks that would stem from a vast AI deployment in
government, the economy, and the military. Many, if not all, organizations mention the necessity of looking after society's well-being in the face of the threats posed by AI. One of the most extensive documents, published by the EC and titled “Trustworthy AI,” covers topics such as the right to privacy, human rights, transparency, accountability, and the prevention of misuse as the indispensable building blocks of future policies on AI (AI HLEG, 2019, p. 32). In a similar vein, the OECD states that any AI policy should ensure “inclusive
growth, sustainable development, and well-being” (OECD, 2019b, p. 25). It should be
assumed that all the concerns on social well-being and individual liberties are
motivated by respect for human rights and democracy. Nevertheless, such documents' social responsibility narratives may also serve as “ethics washing” attempts (Daly et al., 2020). These attempts may be made to brand oneself as a socially responsible entity, to minimize the risk of backlash against policies or products, or to cover up a lack of action or even harm that has already been caused (Schiff et al., 2020, p. 43).

The competitive advantage theme covers the intentions to utilize AI for economic and
political gains. The European Commission states that industrial AI investments are crucial for keeping the European economy globally competitive and avoiding falling behind the US and China (AI HLEG, 2019, p. 34), and China defines AI as “the new focus of international competition” in its national AI plan (Webster et al., 2017, p. 156). The driving motivation behind many national and corporate AI strategies is to achieve a
competitive edge. That motivation is presented in the documents with a specific
number on the projected investments in AI, priority fields for the given organization,
and expected economic return (Daly et al., 2020, p. 87). Aims to achieve competitive
advantage are not necessarily exclusive; they can be part of a broader pro-social
strategy to secure economic growth and prosperity (Schiff et al., 2020, p. 46).

The strategic planning theme covers the intentions to organize the required structural
change inside the organization. Some organizations include generalized strategic
planning in their documents as a roadmap to embed ethical principles in their research
and development, talent strategies, and public engagement. These can be general governance ideas, corporate ethics principles, or calls to devise a strategic implementation plan. For example, the US's “American AI Initiative” calls on the DoD and DARPA to develop R&D implementation frameworks to enhance national security capacity (Trump, 2017).

The strategic intervention theme covers the intention of intervening in policymaking


on AI from legal, political, social, and economic perspectives. Documents with this
theme aim to define one or multiple subdomains of AI in a specific way to shape the
public debate around their narrative (Schiff et al., 2020). One example is the calls for
a complete ban on LAWS initiated by the Future of Life Institute as early as 2015.
This document, which is signed by many notable leaders in the AI field, calls for a
complete unconditional ban on autonomous weapons, from development to
deployment in the field (Future of Life Institute, 2015). Other examples show that
some corporations come together to adopt industry-specific ethical principles to avoid
being subject to restrictive laws passed by governments (Hagendorff, 2020, p. 11). An organization's ethical perception shapes its decisions about strategic interventions.

The last theme, global leadership, covers the intention, especially among countries, to become global AI leaders by 2050 at the latest. There is no single national AI strategy document that I have reviewed that does not include the given country's aim to be a global AI leader. The US reiterates its motivation to maintain its global
leadership, while China, the EU, and Russia state that they want to utilize AI-driven
growth to dethrone the US in this race. The EU primarily focuses on becoming the
global leader in ethical AI, whereas China aims for aggressive holistic strategies to
gain economic and military leadership in AI. Mexico's document emphasizes that it
was “one of the first ten countries in the world to deliver a National Strategy for AI”
and expresses pride in being the “first nation in Latin America to join this elite club” (Schiff et al., 2020, p. 22). Smaller countries like New Zealand and Finland aim to
utilize AI to gain a globally prominent place among the countries that drive AI.

Analyzing twenty-eight such documents in depth has shown that they contain circular references to one another. Scanning the references or sources sections of these documents reveals comparable articles, reports, and papers. This shows that governmental and non-governmental organizations are aware of the work others have done. Additionally, it shows that NGO work on AI ethics is already influencing governments' policymaking efforts. Being aware of what others have done has enabled authors to differentiate their work and build further on the AI ethics literature. I have seen that the more recent the document is, the more comprehensive a framework it draws. However, these documents do not provide much detail for policymaking and governance. It can be argued that their primary aim is to inspire policy work and shape the discourse around the regulation of the AI field, but the content and the methodology of those documents may not allow a smooth translation into the research and development of AI.

As discussed previously, what each body means by AI can differ vastly, and this
difference in perception of the technology alters the frameworks designed around it.
No single document provides more in-depth technical explanations of AI or tailors its principles to AI's specific subdomains. It is also not easy to extract meaningful technical instructions out of highly abstract ethical principles. This shows that ethicists, policy experts, and social science experts in general should have a grasp of the technical details within their intellectual frameworks. That includes understanding how data is obtained, stored, edited, processed, used, and shared, how the algorithms are designed and coded, and how the training data sets are picked. Otherwise, it is argued, ethical work on AI cannot move into micro-ethics, that is, reducing the abstraction in ethical discussions to design machine ethics, AWS ethics, information ethics, and data ethics that have a meaningful impact on the technical practices of AI (Hagendorff, 2020, p. 33).

Ethical principles do not have binding power either. It is argued that engineering ethics guidelines do not necessarily factor into individual decision making; the larger social context plays a prevailing role in final decisions (McNamara et al., 2018, p. 3). With an enormous volume of investment pouring into AI development, self-proclaimed ethics principles may not always be followed by public and private organizations. I consistently saw exact numbers on expected economic gains from AI throughout national AI strategy papers, whereas ethical principles are discussed in abstract terms. In the fast pace of the tech business world, checking every box in ethical guidelines may be overlooked. Research, development, and deployment of AI systems have less to do with ethical norms than with expected economic and political gains.

2.4.2 Arms control agreements


Some researchers and governments argue that current international laws concerning arms control and nuclear non-proliferation can also apply to AI; therefore, there is no need to develop further international laws specifically for the use of AI. Arms control and non-proliferation have a long history with extensive examples of technology governance frameworks from the unilateral to the transnational level. This history may provide rich insights into controlling and containing the weaponization of disruptive technologies (Scharre, 2018, p. 9).

This section will look at how similar AI is to nuclear and ICT technologies, as summarized in Table 2.4, and at the existing laws regarding these technologies that are argued to be relevant for AI. All the documents analyzed here focus on AI in the international security context and, unless otherwise stated, concern the military usage of AI. I will use the five dimensions of military technology management presented by the Belfer Center to compare these technologies. These dimensions are: destructive potential, how much destruction the weapons using this technology can cause; cost, the volume of financial resources required to develop such weapons and the marginal cost at scale; complexity, the level of technical expertise required to develop and use such weapons; dual-use, whether experience with commercial versions of the technology implies an easy transition to the military version; and difficulty of monitoring, how easy it is for adversaries to track each other's military development progress (G. Allen & Chan, 2017, p. 42). The quick overview below shows how the technologies compare according to these dimensions. The following sections will provide in-depth explanations and discussions on how relevant this comparison is.

Table 2.4 : Key military technology profiles.

Technology   Destructive Potential   Cost   Complexity   Dual-Use Option   Difficulty of Monitoring
Nuclear      High                    High   High         High              Moderate
ICT          Moderate                Low    Moderate     High              High
AI           High                    Low    Low          High              High

Nuclear technology and AI are highly different in the technical domain; however, both
technologies possess strategic political characteristics relevant to governance and arms
control. It is argued that nuclear weapons present crucial insights as a case study on
the control of disruptive technologies like LAWS (M. M. Maas, 2019, p. 15). Initial
developments of both AI and nuclear technologies require sophisticated technical
talent capabilities and high-level scientific expertise. As talent seems to be
concentrated in a small number of countries, and the development of both technologies
happens behind closed doors, public debate is limited. Both technologies come with
severe ethical and legal concerns and potentially disrupt society and institutions
(Payne, 2018, p. 45). Additionally, both technologies offer an enormous asymmetric
advantage to the parties that own them; therefore, they will have strong incentives to
develop the technology unilaterally. Both technologies have high dual-use
possibilities: nuclear technologies in the energy and medical sectors, AI in
manufacturing and service industries, to name a few, which means that an outright
global ban on using such technologies will face strong resistance across public and
private actors.

Similarities between AI and nuclear technologies have their limits as well. Firstly,
where only state actors possess nuclear weaponry, military AI technologies can also
be developed and deployed by non-state actors. Secondly, the two technologies have
different use cases in the military field. Nuclear weapons have not been used since World War II; however, AI systems will be used more frequently and in different scenarios, not as a single weapon but as additional functionality in various military systems (M. M. Maas, 2019, p. 25). Thirdly, AI has a significantly lower barrier of entry for
production. Nuclear technology requires access to rare materials like uranium, massive enrichment infrastructures that are hard to conceal, and conspicuous weapons tests. These resources and infrastructures can be controlled and detected. AI development can be done on commercially available systems regardless of time and place, and even testing can be simulated. These aspects of AI render void any non-proliferation regime that is based on controlling essential resources. As it will be effortless to conceal military AI systems, verifying compliance with arms control agreements will be problematic (Borghard & Lonergan, 2018, p. 32).

Although AI has a lower entry barrier, this does not mean that every actor developing it achieves the same level of sophistication. As mentioned before, most of the software components required for AI development are already available as open source, like navigation and planning algorithms, face detection algorithms, and multi-agent swarming frameworks. Nevertheless, producing the most advanced and capable AI weapon systems will require intensive processing capabilities and competent talent sources: resources that are available to only a few state actors (Payne, 2018, p. 16). Consequently, it is argued that the number of state actors required for an international debate on controlling military AI is not much greater than the number required for the debate on nuclear weapons (M. M. Maas, 2019, p. 19). There are indeed many similar
and distinctive characteristics that AI and nuclear technologies share in the
international security context. Nevertheless, a direct implementation of existing arms
control rules will not cover all issues regarding AI. However, global nuclear
governance history can be a rich source of insights and inspiration to design the
required arms control treaties on military AI.

In the case of ICT, ICT and AI indeed share many aspects, and they are not mutually exclusive technologies. AI will heighten cybersecurity concerns on multiple levels. It will amplify existing threats and introduce a whole new level of threats that current legal systems are not equipped to address. The diffusibility of AI will lead to the production and procurement of malicious systems for cyberattacks, the mass production of false information, and mass surveillance tools for oppression (Brundage et al., 2018, p. 73). ICT-enabled attacks require skilled operators to develop
and initiate. AI-enabled attacks require skilled developers, but not skilled operators, as
the system itself makes the decision and alters its course of action. This enables more actors to use AI with malicious intent. Most importantly, AI's destructive potential is much higher and more complex than that of ICT. It will build on cyber threats and introduce a
variety of lethal threats across the national security spectrum. This destructive potential
made some actors call for a complete ban on weaponized AI, which will be the
following section's theme.

2.4.3 Banning lethal autonomous weapon systems


The earliest document I identified that argues against LAWS' use in armed conflicts
came out of an expert meeting organized by the International Committee of the Red Cross, titled “Autonomous Weapon Systems: Technical, Military, Legal and Humanitarian Aspects” (Ludovic Righetti et al., 2014, p. 43). In this document, experts
present the likely effect of AI on military systems and management, potential risks that
civilians would face, and how LAWS would be incompatible with IHL. IHL, or the law of armed conflict, concerns the protection of those who are not involved in the conflict but who, through exceptional circumstances, find themselves impacted by it (Sharkey, 2019, p. 44). As discussed before, IHL requires any military personnel or
equipment to comply with the rule of distinction, proportionality, and precautions.
Experts argue that LAWS will not be able to comply with these rules, as each rule
requires thorough human judgment and understanding specific to a given scenario
(Ludovic Righetti et al., 2014, p. 54). Experts also note the potential benefit that LAWS could reduce civilian deaths, as autonomous systems would not act out of rage or compassion. The deployment of LAWS also reduces the barrier to instigating war, as it does not risk human personnel.

In 2015, the Future of Life Institute published an open letter, signed by many high-profile AI and robotics researchers and business leaders, calling for a ban on offensive autonomous weapons beyond meaningful human control (Future of Life Institute, 2015). The letter's main argument is formed around preventing an AI arms race before it is too late. It draws a parallel between AI and the effects Kalashnikovs had on armed conflict. It reiterates the concerns mentioned before, such as the lower barrier to entering war, the difficulty of monitoring AI weapon development, non-state actors gaining state-level military capabilities, and authoritarian regimes suppressing people at mass scale. The letter is not against the use of AI in the military; it argues that non-lethal AI systems can be used to prevent civilian casualties
in armed conflicts. It aims to uphold the integrity of the AI field, stating that “just as
most chemists and biologists have no interest in building chemical or biological
weapons, most AI researchers have no interest in building AI weapons — and do not
want others to tarnish their field by doing so, potentially creating a major public
backlash against AI that curtails its future societal benefits” (Future of Life Institute,
2015).

In 2018, Human Rights Watch (HRW) published its analysis of the legal issues of LAWS, calling them “the most alarming military technologies under development today” (Human Rights Watch, 2018, pp. 31–33). HRW argues that the Martens Clause should be a
central element of the discussions to examine these weapons in the humanitarian
context thoroughly. The Martens Clause, born out of a negotiation stalemate at the 1899 Hague Peace Conference, states that in cases not covered by IHL treaties, civilians affected by armed conflicts will not be left without any protection. The actions of
aggressors are bound by “the principles of the law of nations, the laws of humanity,
and from the dictates of public conscience” (Human Rights Watch, 2018, p. 34). Since
its inception, the Martens Clause has also been included in Geneva Conventions and
disarmament treaties. The ICRC argues that it is critical to include the Martens Clause in protocols because it prevents the assumption that anything not explicitly covered by a protocol is permitted. It also serves as a dynamic factor for addressing future developments of technology and use cases (ICRC, 1987). Based on this premise, HRW argues that LAWS are a well-fitting subject for the Martens Clause. They argue that existing IHL remains a general
framework for LAWS: it requires the system to comply with core principles such as
proportionality and distinction but does not provide specific rules to deal with them.
Martens Clause turns moral considerations into legally relevant principles.

The Martens Clause was utilized in discussions around the prohibition of blinding lasers. Governments, international organizations, and non-governmental organizations have invoked or referenced the Martens Clause on more than one occasion to address an emerging technology (ICRC, 2018). HRW argues that this shows that stakeholders
have a history of coming together and preemptively banning a controversial weapon
that they think would act against humanity's principles. As autonomous weapons' impact on armed conflict would be significantly higher, they should be preemptively banned by international law (Human Rights Watch, 2018). HRW, alongside many other NGOs, is calling for a legally binding instrument that could come in the form of
a new protocol to the Convention on Conventional Weapons (CCW).

The issue of banning lethal autonomous weapon systems and retaining meaningful
human control is being addressed in CCW meetings. HRW tracked country
participation in eight CCW meetings on lethal autonomous weapons systems held at
the UN in Geneva in 2014-2019 (Human Rights Watch, 2020, p. 21). Their report
shows that ninety-seven countries have publicly shared their opinions on LAWS in a
multilateral forum. Sixty-five of them are among the 125 states parties to the
Convention on Conventional Weapons. Thirty states have already adopted the calls for a preemptive ban on LAWS. China, meanwhile, even though it agrees to banning the use of LAWS in armed conflicts, argues that the development of LAWS should be permitted.
The report shows that the remaining countries are reluctant to introduce a ban on
LAWS, arguing that “such weapon systems do not exist, and we are working on an
issue that is still hypothetical” (Human Rights Watch, 2020, p. 23). In the last meeting,
governments agreed to organize CCW meetings spanning 2020 to 2021 to develop “a
normative and operational framework” for lethal autonomous weapons systems.
However, the onset of the Covid-19 pandemic has postponed the 2020 CCW meetings
on killer robots.

Some researchers challenge the strong emphasis on human dignity in the arguments
against LAWS (Birnbacher, 2016; Sharkey, 2019). It is argued that LAWS should not
be deemed undignified solely based on their autonomy. To be killed by LAWS rather
than by a human is not necessarily against human dignity. According to this view, the
use of LAWS alone does not compromise human dignity; it is compromised when
humans are sacrificed to achieve military objectives (Sharkey, 2019, p. 165).
Birnbacher argues that the term human dignity is overused in arguments against
LAWS (2016). According to him, human dignity only applies to the individual and
implies a set of fundamental human rights. He presents a set of tentative rules for human dignity as follows: “(1) the right not to be severely humiliated and made the
object of public contempt; (2) the right to a minimum of freedom of action and
decision; (3) the right to receive support in situations of severe need; (4) the right to a
minimum quality of life and relief of suffering; (5) the right not to be treated merely
as a means to other people’s ends” (Birnbacher, 2016, p. 169). Based on this list, he
agrees that the most likely candidate to be affected by LAWS' deployment is civilians'
dignity (Birnbacher, 2016, p. 170). Heyns argues that giving a machine the power to
decide on life and death is in itself demeaning; hence LAWS should be considered to
violate the right to dignity (2016, p. 215). Birnbacher rejects this argument, except for
the cases where LAWS cause mental pain and subjective suffering to individuals.
Sharkey argues that using human dignity as an argument against LAWS is not the optimal way to oppose them, as there are many different perceptions of dignity and a lack of clarity around it. She instead proposes to focus on three main categories
of argument: “arguments based on technology and the current and likely near-future
abilities of AWS to conform to IHL; arguments based on the need for human judgment
and meaningful human control of lethal decisions; and arguments about the expected
effects of AWS on the likelihood of going to war and on global instability” (Sharkey,
2019, p. 165).

2.4.4 Building block IV: there is a significant gap in the global governance of AI
The last and main building block of the problem is that there is a significant gap in AI's global governance. All the frameworks, regulatory suggestions, and calls for a complete ban lack the comprehensive approach required to address the challenges posed by AI. We are still left with many unanswered questions about implementing such suggestions to utilize AI for humanity's greater good.

Firstly, abstract terms of ethical frameworks are challenging to translate into technical
terms: what does it mean that AI should be “human-centered”? How can AI
researchers make sure that AI is accountable? Is it engineers’ problem or ethicists’
problem to define trustworthy AI? These guidelines use deontological ethics to assert
overly broad rules to govern the highly diversified technical, economic, and military
practices. There is a significant distance between ethics and what it needs to govern.
Some argue that the deontological approach itself is too restrictive. Instead, ethical frameworks should employ a virtue ethics approach to focus on hidden structures and address scientists' and engineers' character and behavioral dispositions (Hagendorff, 2020). In any case, these frameworks do not provide an enforcement mechanism: noncompliance with various ethics codes has no consequences, and the codes themselves lack technical depth.

Current arms control treaties are not exactly compatible with the discussion on LAWS
either. Comparing nuclear technologies with AI shows that even though both have high
destructive potential, AI deployment in armed conflict will be much cheaper, more
frequent, and will not require a high level of skilled operators (G. Allen & Chan, 2017,
p. 56). An arms control agreement's success depends on its thorough list of technical
specifications and its success in establishing effective verification measures (Geist, 2016, p. 31). LAWS do not depend on a controllable substance that could serve as the basis for global monitoring, as uranium does for nuclear weapons, and LAWS development can happen anywhere with commercially available hardware. ICT governance is also insufficient for AI: it is argued that even
the treaties on state use of ICTs for military purposes will be challenged by
cyberthreats enhanced by AI (Brundage et al., 2018, p. 101). The impossibility of
monitoring LAWS' development drives some to conclude that efforts should be
directed to prevent or limit LAWS' deployment in armed conflicts. This argument does
not have staunch support in the international arena, partly because of hypothetical
scenarios and states’ unwillingness to preemptively ban a technology with enormous
potential political and military benefits.

2.5 Putting It All Together

I built my definition of the problem on four pillars: definitional ambiguity, ethical issues, threats, and governance. Starting from a broad view of AI as a whole and
funneling towards the governance of military use of AI has enabled me to see some
critical points on what causes these problems. AI is highly opaque, and AI systems
operate in interactive complexity. It is often hard even for the developers of a system to provide a rationale for the decisions it makes. AI is also tightly coupled: resources are bound to specific purposes and functions (M. M. Maas, 2019, pp. 7–9), and AI-enabled weapon systems have high destructive potential. These characteristics make AI a suitable candidate for normal accident theory (NAT), a theory coined by Perrow in 1984 in the wake of major technical catastrophes, which argues that technological failures should be viewed as a product of complex interacting systems amplified by organizational and management factors (Perrow, 2011, p. 13). He states that such accidents are inevitable in extraordinarily complex systems. An accident cannot simply be labeled as a technical glitch, operator error, or an “act of God” (Pidgeon, 2011, p.
15). LAWS are a perfect candidate for NAT. From the initial design phase to the training of ML models, from hardware assembly to personnel training, and from the formation of a command chain to deployment in armed conflicts, LAWS will be among the most complex socio-technical systems. A small error in one of these phases may snowball into catastrophic consequences. Indeed, the projections of military use of AI and worrying signs from the failures of already-deployed civilian systems attract reactions from politicians, AI researchers, social scientists, and the public.

I argue that the core issue behind AI's problems is how distinct groups address the issue, rather than what they say on the matter. First, technical, social, and political experts or expert groups propose solutions to AI problems from a narrow perspective that looks at the issue only within their respective fields. What each group perceives of AI differs greatly; as I have shown in the definitional ambiguity part, these perceptions yield contrasting outcomes regarding the proposed solutions. For politicians, less technical knowledge might translate into a more abstract or even sentimental understanding of AI's capabilities, making them more inclined toward calls for a total preemptive ban on military AI. Technical experts might frame all these
complicated matters merely as a technical problem and try to provide better, more
reliable systems as a result. Social scientists would draw ethical lines for AI systems
not to cross without knowing any technical details. Works on explainable AI (XAI)
showed us this divide in action. Explainability is often defined as a technical challenge, yet it is a topic with psychological, cognitive, and philosophical aspects (Adadi & Berrada, 2018; Barredo Arrieta et al., 2020).

Second, among the experts and leaders in the AI field, there is an evident lack of
diversity and inclusion, resulting in a cascade of crises. Surveys show that around 80%
of researchers in notable universities such as Oxford, Stanford, and ETH are male, and
in the US, 70% of the applications to the listed AI jobs are made by men (Shoham et
al., 2018). Racial and gender-based discrimination in the AI field has caused many
severe ethical problems, as discussed before. The AI Now Institute calls it a “diversity
crisis” and argues that ensuring diversity and representation in the dataset is not
enough: a complete organizational rethinking is needed to ensure people of all
backgrounds are part of the decision-making processes on AI (West et al., 2019, p.
23).

Third, there is a lack of international platforms for AI experts to interact and lay the
groundwork for AI's global governance. State-level international discussions are
already taking place in a top-down fashion. It is argued that epistemic communities,
defined as “ […] a network of professionals with recognized expertise and competence
in a particular domain and an authoritative claim to policy-relevant knowledge within that domain or issue-area” (Haas, 1992, p. 152), can serve to shape norms from the bottom up (M. M. Maas, 2019, p. 49).

I have shown in the problem definition “why” AI should be governed at the international level. All the governance suggestions I have analyzed focus only on “what” the outcomes of global governance should be. This thesis does not take a stance on whether LAWS should be banned, does not list the ethical principles required for AI, and will not argue for or against using existing arms control agreements to govern AI. This thesis's central question is “how” AI can be governed at an international level to address all these issues. What kind of structures are needed to build concrete bridges between policy, technology, and ethics to create a meaningful, tangible international set of rules ensuring that the AI field's progress does not constitute an existential threat to
humanity? Taking a divergent approach, I will look at the success of the global governance of climate change and analyze some of the critical organizational structures that helped facilitate the co-working of science and policy to produce tangible policies. The
following section will focus on the boundary organizations, which gained quite a
reputation after the impact of the Intergovernmental Panel on Climate Change (IPCC),
discussing the history, theoretical elaborations, and empirical application of the
concept.

3. SCIENCE AND POLICY INTERFACE

Artificial Intelligence possesses, for the reasons listed above, a global potential to
create new multi-dimensional security threats and amplify existing threats that current
technical and political systems are not able to cope with (G. Allen & Chan, 2017). AI is being explored, developed, and used on every continent, in sectors from the commercial and industrial to the military, by state and non-state actors, and, given its diffusibility and dual-use characteristics, the very same technologies can apply to multiple sectors. When looking at ways to mitigate the threats of AI and create a framework to support AI development for social and economic benefits, it makes sense to look at other globally significant technologies like nuclear and ICT to get a sense of what can be done on a policy level. However, for this strategy to work correctly, we would need an exact match between these technologies' profiles. As shown in Table 2.4, we do not have an exact match
between AI and nuclear or between AI and ICT. The destructive potential of AI and
nuclear technologies is high, but nuclear has a significantly higher barrier of entry for
non-state actors to get a hold of the materials and the expertise to develop nuclear
weaponry. Cybersecurity, although of moderate complexity, requires skilled operators to manage and maintain. AI-enabled systems require expertise only to develop, and commercially available hardware can be equipped with open-source algorithms to create weapons of varying sophistication by a wide range of actors (Brundage et al., 2018, p. 155). Artificial Intelligence is nothing like the previous technologies that have come into the spotlight in international security discussions. Although there
are many valuable lessons to extract from nuclear and ICT technologies' governance,
they do not provide ready-to-implement frameworks for AI's global governance (M.
M. Maas, 2019).

Instead of looking to similar technologies for direct reference on what to do, I ask whether we can find another global governance matter that aims to mitigate significantly high threats, involves diverse stakeholders, and requires the co-production of science, technology, and policy, as AI governance does. Furthermore, instead of re-implementing the resulting policies in another context, what can we learn from the methodologies, organizational structures, and global governance processes of such a matter to design the global governance framework for AI?

This section will analyze the boundary organization concept, providing a definition, a scan of the recent literature on the concept, and examples of its usage in the field. The boundary organization concept is primarily studied from a climate governance perspective, and the majority of existing boundary organizations work on climate governance at varying scales. Looking at climate governance solely from a governance perspective, this section will seek to address the purposes for which the boundary organization concept has been used in an international context and the problems it has solved. The section will end with an illustrative case study of the IPCC, showing a concrete example of a boundary organization successfully providing policy-relevant yet policy-neutral, commonly accepted scientific knowledge to inform decision-making on climate change, and will argue that such an organization is necessary to bring scientific unity to knowledge on AI in order to build a functioning global AI governance structure.

3.1 Boundary Organization Concept: Definition and New Concepts

One of the significant obstacles to effective AI governance is the lack of interaction between AI experts, ethicists, and policymakers. This has shown itself in the form of divergent understandings of AI, ambiguous policy work, and a lack of technical specificity in frameworks. Effective decision-making requires credible, salient, and
legitimate information (Cash et al., 2003, p. 11). It is valid for both climate governance
and AI governance. This section will look at how science and policy interface came
into play in building an effective relationship for policymaking on climate change.

Debates on climate change, like those on many other public issues, require expertise and expert knowledge. These requirements have led to a surge of policy-relevant science, new assessment systems, and expert panels. This has made the generation of knowledge and policy-making increasingly interdependent: experts occupy essential roles in policy-making today (Gustafsson & Lidskog, 2018, p. 67). It is noted, however, that even though more experts get involved in public decision-making processes, this might not yield more effective and relevant policies (Weiland et al., 2013, p. 4).

In the light of this complex nature of the interaction, it is argued that improving
knowledge utilization, interaction, and integration between science, policy, and other
stakeholders is vital for effective policy-making (Lidskog & Sundqvist, 2015, pp. 32–
33). The boundary organization concept has emerged as an organizational framework
to mediate the interactions between policy and science. Although it initially emerged
as an attempt to draw the line between science and policy in order to allow them to
achieve their goals within their boundaries, it has gained substantial interest as the
concept is found to be useful for the facilitation of dynamic and collaborative processes
between science and policy (Guston, 2001, p. 401). Aside from a wide range of other empirical studies, the boundary organization concept has been used in environmental studies on the issues of knowledge transfer and the relation between science and policy (Gustafsson & Lidskog, 2018, p. 3). Since the concept was born in the late 1990s, there has been much work on its theoretical elaboration and many field applications. As this type of organization occupies an increasingly predominant place in science-policy interfaces, it will be beneficial for this thesis to analyze the current literature on boundary organizations. This section's
remainder will explain the boundary organization concept in detail, showing some of
the new conceptual development around it. Then, it will move on to explaining how
the two boundary organizations, the Intergovernmental Panel on Climate Change
(IPCC) and Subsidiary Body of Scientific and Technological Advice (SBSTA), were
organized to provide policy-relevant scientific knowledge, and how their experience
can be utilized for the governance of AI.

In an attempt to institutionalize scientific norms, philosophers of science worked on designating the boundaries of scientific activities and non-scientific activities in the
1960s (Kirchhoff et al., 2015, p. 43). Gieryn further elaborated this thinking by shifting
the focus from the institutionalization of scientific norms to the concept of boundary
work in the 1980s. Gieryn describes boundary work as ‘‘the way scientists set their
work apart from non-scientific activities’’ (Gieryn, 1983, p. 17). He distinguishes
science from non-science and argues that boundary work would help create a social
boundary for science by isolating scientific activities from politics (Kirchhoff et al.,
2015). A decade after Gieryn’s work, the necessity of developing knowledge for policy
work required a more dynamic formation of science-society interactions (Gustafsson
& Lidskog, 2018). Boundary organizations have emerged as a result of these new
necessities to manage science-society interaction “by creating a neutral setting where
science producers and users interact while maintaining both accountability (to science
or policy) and their own separate identities” (Guston, 2001).

Three distinct characteristics describe boundary organizations: “they involve
information producers, users, and mediators; they create and sustain a legitimate space
for interaction and stimulate the creation of products and strategies that encourage
dialogue and engagement between scientists and decision-makers; and, they reside
between the worlds of producers and users with lines of responsibility and
accountability to each” (Guston, 2001). The effectiveness of boundary organizations,
especially in the case of producing useful climate information, showcases that co-
production of research improves the usability of the knowledge (Kirchhoff et al.,
2015). First, as experts and policymakers effectively interact, it facilitates eliminating
the cultural (personal and organizational), behavioral, and structural differences
(Moser, 2009). Second, as the interaction happens within a boundary organization
context, it leads to forming trust and legitimacy, thus encouraging the use and
dissemination of climate information (Kirchhoff et al., 2015, p. 46). Third, as this
relationship and interactions go on sustainably, this convergence shapes the nature of
information production as experts gain a more profound understanding of policy-
makers’ decision-making processes (Lemos & Morehouse, 2005, p. 21). Fourth,
interaction also helps to connect the knowledge with other types of knowledge that
policymakers already utilize. This knowledge interplay can help decision-makers
integrate better information in their decision-making (Kirchhoff et al., 2015, p. 52).

Guston is credited for the conceptualization of the boundary organization concept used
in increasing numbers both in research and in the field. However, he is also criticized
for oversimplifying a broad range of processes within this type of science-policy
interface. It is argued that Guston's theoretical assumptions are too static: he describes a permanent state of interactions instead of an evolving and ongoing process; he presents a clear-cut boundary between science and policy without acknowledging differences within those domains or ambiguous boundaries; he envisions symmetrical accountability between stakeholders instead of analyzing power relations and their effect on the relations and accountability of stakeholders; and he disregards the reality of multiple stakeholders, and therefore of multiple boundaries between those stakeholders, and the question of how to manage interactions at multiple scales (Kirchhoff et al., 2015; Klerkx & Leeuwis, 2008; Wehrens et al., 2014). This does not mean that the boundary
organization concept has lost its significance for policy making. Some researchers
have developed new concepts to build upon the original concept of boundary
organizations. These concepts are boundary chains (Kirchhoff et al., 2015, p. 51), the
landscape of tensions (Parker & Crona, 2012, p. 263), and hybrid management (Miller,
2001; Wehrens et al., 2014). These new concepts originate from Guston’s theory and
present novel approaches to boundary work.

3.1.1 Hybrid management


Miller’s hybrid management concept proposes a more elaborate version of designing
science-policy relations (2001). Hybrid management recognizes that the elements to be managed within a boundary organization do not have clear-cut boundaries between one another. The institutions, people, and artifacts involved have complex and varied relationships, shaped by norms, facts, and values (Gustafsson &
Lidskog, 2018, p. 5). It focuses on how the organization should perform, what the
processes should be, how the goals and expectations are placed, and how to form
dynamic and productive relationships between actors (Miller, 2001, p. 491). Hybrid
management formulates the boundary organization as a platform where scientific and
political actors come together, boundaries between them are recognized, and activities
in multiple domains are coordinated (Miller, 2001, p. 493). It presents four strategies
required to have dual accountability and strong relations with various actors to
facilitate a working environment for the production of standardized packages and
boundary objects (Gustafsson & Lidskog, 2018, p. 5). These strategies are
hybridization, deconstruction, boundary work, and cross-domain orchestration (Miller,
2001, p. 495).

Wehrens et al. argue that these four strategies are not straightforward to implement in
every scenario where boundary work is needed (2014, p. 15). They argue that the
strategies mentioned above are both case-sensitive and context-sensitive. Employing
one of the strategies in varying scenarios will present different expectations and
outcomes which are also subject to change. Context sensitivity refers to the possibility
that employing the same type of strategy could produce different outcomes, be both
valuable and problematic, and could even be used to meet divergent goals (Wehrens
et al., 2014, p. 17). Guston bestows more static roles on the actors within boundary organizations. In this context, both Miller and Wehrens et al. argue that boundary
organizations operate in hybrid spaces with specific characteristics that need to be
considered by the organization when developing management strategies (Gustafsson
& Lidskog, 2018, p. 9).

3.1.2 Landscape of tensions
Parker and Crona’s landscape of tensions concept proposes further refinements to the
boundary organization concept as they operate in hybrid structures (2012, p. 278).
Simply put, they show that actors' non-complementary and incommensurable demands create a landscape of tensions in an organization, and this represents a management challenge for a boundary organization in satisfying all the actors involved in it. Different actors have different expectations of boundary organizations, and these create four
distinct tensions: “expectations that the organization is disciplinary and
interdisciplinary, has a long-term and a short-term focus, provides basic and applied
research, and aims for autonomy and consultancy” (Gustafsson & Lidskog, 2018, p.
6). A boundary organization's success depends on its ability to build a resilient
negotiation platform to create a balance between different actors in the landscape of
tension (Parker & Crona, 2012, p. 287). Achieving this balance is not straightforward, as there is an inherent power imbalance between actors; thus, a constant asymmetry and motion in the balance are present (Kirchhoff et al., 2015, p. 39). This requires that the boundaries between actors be renegotiated continuously, leading to changing power dynamics and priorities. In that regard, it is argued that a boundary
organization's mission is to manage social tensions and navigate these dynamic
tensions over time (Parker & Crona, 2012, p. 291). The adaptability of a boundary
organization is crucial for creating necessary changes in organizational structures and
activities.

3.1.3 Boundary chains


Primarily seen in the climate change context, facilitating a sustainable interaction
between science producers and decision-makers enables the co-production of
actionable knowledge for adaptation decisions. However, this one-on-one interaction between knowledge producers and decision-makers, although particularly important for increasing knowledge usability, requires substantial time and resources. Lemos et al. argue that, to reduce costs and increase efficiency, organizations working in complementary domains can be linked together (2014, p. 61). The time and staff
constraints hamper the boundary organization's ability to produce usable information
that requires intensive science-policy interaction. Also, as demand for scientific information increases, expanding science-policy interactions will be critical for boundary organizations' success, yet this expansion risks overwhelming the limited group of information producers trying to meet the demand (Kirchhoff et al., 2015, p. 42). Boundary
chains are presented as a strategy to overcome these challenges by linking a minimum
of two boundary organizations to share the burden of costs, collaborate, and pool the
resources (Lemos et al., 2014, p. 64). Theoretically, boundary chains can alleviate the individual workloads and risks of boundary organizations. They represent a promising concept for building sustainable boundary organizations; however, their actual impact on the efficiency of a partner boundary organization has not yet been examined to the full extent.

3.2 Boundary organizations in practice

The boundary organization concept is already widely used to describe organizations that work in multiple fields with various stakeholders and on different scales.
Gustafsson & Lidskog's (2018, p. 8) analysis shows that these boundary organizations have a wide variety of institutional structures and objectives, on local, regional, and international scales. On the local and regional scale, a few boundary organizations work to govern expertise and facilitate policymaking. The majority, however, are formed to facilitate interaction between the involved actors (Gustafsson & Lidskog, 2018, p. 9).
On the international scale, primary examples of boundary organizations work within
the UN system, namely Intergovernmental Panel on Climate Change (IPCC), the
Intergovernmental Science-Policy Platform for Biodiversity and Ecosystem Services
(IPBES), and Subsidiary Body of Scientific and Technological Advice (SBSTA) (Lee
et al., 2014, p. 78). The structures of existing international boundary organizations
show the flexibility and adaptability of the concept. These organizations can be part of
an existing international organization, have triadic role structures, be formed as an
NGO, be a project within the UN, or a project within research institutions and
university-based organizations (Gustafsson & Lidskog, 2018, p. 5). International
boundary organizations can be both deliberately founded as boundary organizations or
later identified as one by researchers.

Empirical studies show that the boundary organization concept is used in three main ways:
contextualization, recommendation, and description (Gustafsson & Lidskog, 2018;
Kirchhoff et al., 2015; Lee et al., 2014). Contextualization refers to research that
uses the boundary organization concept to set the scene without further elaboration.
Recommendation refers to works that propose the boundary organization concept
as a solution to a given problem; they recommend founding a boundary organization
to increase interaction between relevant stakeholders and produce credible and usable
knowledge (Kirchhoff et al., 2015, p. 44). Description refers to the large body of
research that only uses the boundary organization concept to describe and label a
given organization. Especially for international boundary organizations, the concept
is used solely as an empirical category (Lee et al., 2014, p. 82). The descriptive use of
the boundary organization concept helps to position the examined organization between
the social worlds of science and policy.

Deriving from the original concept of boundary work, the discussion on what makes a
boundary organization has been refined in an operational context under three
categories: governance, membership, and boundary objects (Gustafsson & Lidskog,
2018, p. 7). It is argued that these three pillars are critical aspects of a successful
boundary organization, and they are used in the literature to analyze existing organizations.
First, the governance pillar refers to the challenges of building structures that involve
multiple stakeholders from different social worlds and of designing an efficient
knowledge production process. It is stated that these challenges can be overcome by
employing strategies such as multiple advisory bodies, forums to hold debates over social
boundaries, and decision-making by consensus (Boezeman et al., 2013; Gustafsson &
Lidskog, 2018). It is argued that governance suggestions for boundary organizations
aim to achieve high credibility, relevance, and legitimacy (CRELE) (Gustafsson &
Lidskog, 2018, p. 9). However, internal CRELE and external CRELE should be
distinguished, as the governance structures that produce these two may not be the same.
Researchers argue that a given boundary organization should adopt a dynamic structure
that changes over time to manage the differing requirements of achieving internal and
external CRELE (Parker & Crona, 2012, p. 246).

Second, the membership pillar covers the challenges of organizing and involving
actors on all sides of the boundary (Kirchhoff et al., 2015, p. 44). Issues of
impartiality and inclusivity are highly critical for any organization that aims to achieve
high CRELE. Researchers state that achieving unbiased relations with financial
supporters, participants, stakeholders, external beneficiaries, and the public is crucial
for success (Boezeman et al., 2013, p. 22). This works in line with decision-making by
consensus, as the inclusion of all stakeholders can be utilized to create a jointly shared
identity and organizational culture (Gustafsson & Lidskog, 2018, p. 14).

Third, the boundary objects pillar refers to the challenges of creating opportunities and
incentives to shape and use boundary objects. Although directly related to how
governance and membership are formed, this pillar focuses on how an organization can
perform boundary work that produces a boundary object usable in social settings
outside the organization (Boezeman et al., 2013, p. 25). It is argued that the main points
of analysis for the creation of boundary objects are how the boundaries blur between
actors, how decisions are reached to manage differing interests, who is part of the group
that creates a CRELE boundary object, and how different framing strategies are used to
introduce the boundary object into different social settings outside the organization
(Gustafsson & Lidskog, 2018, p. 17).

3.3 IPCC: an illustrative case study

The inception of climate governance can be linked, first, to the climate diplomacy between
inter-state actors and, second, to the creation of transnational networks and non-state
actors. Although the point of formation is difficult to pinpoint precisely, there are two
critical points in its history: the formation of the Intergovernmental Panel on Climate
Change (IPCC) in 1988 and the 1992 United Nations Framework Convention on
Climate Change (UNFCCC) in Rio. The UNFCCC in particular was regarded as “the first
major milestone in the history of climate diplomacy” (Bulkeley & Newell, 2015, p.
326). The conference addressed nations worldwide and tried to replicate the Montreal
Protocol's diplomatic accomplishment in eliminating chemicals that cause ozone
depletion. Although the international policy-making mechanism plays a crucial role in
combating anthropogenic climate change, it also operates as part of a more extensive
range of multi-scale private and public climate governance efforts. The growth and
significance of climate governance on the international level have motivated various
transnational public and public-private actor networks to create micro-frameworks to
meet internationally developed targets at local levels (Bernstein et al., 2010, p. 32).

There are specific characteristics of climate governance that are relevant for AI
governance. The first characteristic is the north-south divide. From the climate
governance perspective, this divide lies between developed countries, with significantly
higher emission rates, and developing countries. It leads to increased vulnerability for
developing countries in the face of climate change, in the form of natural disasters and
underprepared infrastructures (Bulkeley & Newell, 2015, p. 328). The issues of social
justice and fairness have risen to the top of the agenda in climate governance discussions
and remain current today. From the AI governance perspective, the issue is two-fold:
industrial automation and weaponization. Advanced automation of industrial production
is expected to tip the balance in favor of relocating light manufacturing facilities closer
to markets rather than outsourcing them to countries with cheaper labor. It is stated that
industrialization has helped many under-developed countries to realize rapid economic
growth, usually starting with low-end textile manufacturing (Norton, 2017, p. 41). It
led workers to move from less productive manual agricultural work to high-productivity
industrial export manufacturing. Improved communications have helped businesses to
coordinate complex activities at a distance and to create vertically integrated supply
chains. This process of industrialization created accelerated convergence between rich
and developing countries: eighty-three developing countries have seen growth rates twice
the OECD average (Baldwin, 2016, p. 134). Automation is expected to reverse this trend.
It is argued that as cheap labor loses its competitive advantage in the face of automation,
rich countries may see increasing nativist and nationalist pressure to bring manufacturing
jobs back (Norton, 2017, p. 45).

Disruption of multilateral trade systems via protectionist policies would weaken
developing countries' growth opportunities and their access to developed-country markets.
In the weaponization case, it is clear that countries with high technological capacity,
talent, and investment opportunities are already pushing heavily to develop LAWS
to advance their military capacities (Geist, 2016, p. 11). Historically, technological
advancements in weaponry by developed countries have led to disproportionate suffering
in underdeveloped countries: the use of drones, anti-personnel mines, and cluster
munitions has created tragic losses in the Global South (Zafonte, 2018). The AI arms
race, as discussed before, is fueled at an unprecedented rate by military superpowers such
as the USA, China, and Russia, which causes significant concern among Global South
states. In the discussions regarding a ban on LAWS, with a few exceptions, most of the
countries calling for a preemptive ban are Global South states (Human Rights Watch,
2020, p. 37). It is argued that a one percent reduction in violence would accumulate to
an amount equal to the total global investment in development (Zafonte, 2018). Thus,
developing countries have a dual interest in limiting the development and use of LAWS:
facing less violence and having more funds available for international development.

The second characteristic of climate governance is that it is multilevel governance.
Climate governance occurs at various levels and spaces, including the national,
international, regional, and local levels. The complex web of relations between these
levels begets critical questions about where the governing power and authority for
climate change lie (Bulkeley & Newell, 2015, p. 312). Climate governance provides a
landscape too complex to fit top-down authority structures: local initiatives form
horizontal networks, and national interests shape international agreements (Bulkeley,
2010, p. 12). In AI governance, data governance issues in particular have shown how
governance needs to occur at multiple levels. AI governance likewise requires multilevel
governance at the international, regional, and institutional levels. Failures of data
governance in one country or corporation may lead to international consequences, and a
regional regulation may alter the nature of services provided in other regions or around
the globe (Weedon et al., 2017, p. 67). Top-down governance alone is also not sufficient
for the global governance of AI. It is argued that the ethical development of AI systems
requires various forms of platforms at differing levels to ensure that the developed
systems follow the relevant rules and regulations. Discussions on forming epistemic
communities need to cover the four arms control rationales (ethics, legality, stability,
safety), which require multi-level governance schemes (M. M. Maas, 2019, p. 42).

The third characteristic of climate governance is that it involves multiple actors. The roles
that state and non-state actors play do not have clear boundaries, creating various
uncertainties and complexities for climate governance (Bulkeley, 2010, p. 15). Non-state
actors have played increasing roles in influencing and shaping nations’ positions in
international climate agreements like the Kyoto Protocol and the UNFCCC. These actors
can take the shape of scientists, business leaders, lobbyists, and community leaders. As in
climate governance, non-state actors in AI governance have a lot to gain or lose in the
face of any limiting or constraining regulation, so the multi-actor characteristic fits AI
governance exactly as well. As I have shown in the discussion of the governance of AI,
multinational corporations, NGOs, think tanks, states, and human-rights advocates have
all produced governance suggestions for AI to shape international agreements, especially
on LAWS. The actors range from individuals to states: in both climate governance and
AI governance, actors at every level contribute to the problem (generating data vs.
generating emissions) and are affected in return.

The fourth characteristic of climate governance is that the issue of climate change is
deeply embedded in our social and physical infrastructure (Bulkeley & Newell, 2015,
p. 317). Our socio-economic structures are built upon technical systems that lead to
the generation of emissions. This makes addressing climate change highly complex, as it
is usually overpowered by other domains like economic growth, international trade,
energy security, and job creation (Bulkeley, 2010, p. 16). AI-enabled systems are
likewise becoming an integral part of socio-economic life. The development of those
systems is happening with minimal, non-binding ethical guidance. The majority of
systems built for civilian use can easily be reconfigured for military use. As the
socio-technical structure of AI systems becomes increasingly complex, it will become
harder to address for regulatory purposes. Luckily, unlike with GHGs, we are at an early
stage of military AI deployment, which provides an opportunity to address these concerns
before they become more embedded. Although not a precondition, the affinity between
the characteristics of climate governance and AI governance further validates the idea of
building a case study on an international boundary organization like the IPCC to draw
lessons for AI governance.

The boundary organization concept has been critical to the effectiveness of climate
governance. Having shown the affinity between climate governance and military AI
governance, it is beneficial for the argument to illustrate how a successful boundary
organization operates on an international level. This illustrative case study of the IPCC
provides a practical framework for designing a governance structure for the military use
of AI. I chose to study the IPCC because it is the international organization most
frequently used to define and exemplify boundary organizations (Gustafsson & Lidskog,
2018, p. 18). Researchers describe the IPCC as a successful boundary organization
because it produces credible guidelines and assessments for decision-makers while
seeking a balance between science and policy (Hoppe et al., 2013, p. 168).

The IPCC is an intergovernmental body within the United Nations system that provides
scientific information on human-induced climate change, its larger effects on society, the
economy, and politics, and recommendations on how to act in response to it (IPCC
Secretariat, 2013c). The IPCC was founded in 1988 by the United Nations Environment
Programme (UNEP) and the World Meteorological Organization (WMO), and the UN
General Assembly then endorsed it as a separate organization. It produces a range of
reports, such as technical papers, methodology reports, and assessment reports, in
response to requests for information on specific scientific and technical matters from
governments, international organizations, and the United Nations Framework Convention
on Climate Change.

3.3.1 Structure of IPCC


The IPCC has a complex structure that maintains balances on multiple levels.
Representatives from member governments meet once a year in Plenary Sessions to elect
the bureau of scientists for the given assessment cycle. This bureau then selects experts
from the pool of nominees put forward by governments and observer organizations to
prepare the IPCC reports.

Each of the 195 governmental members of the IPCC has a National Focal Point that acts
as its representative and point of correspondence. These representatives meet in annual
plenary sessions alongside experts from research institutions, relevant ministries, and
observer organizations. Plenary sessions work by consensus to decide on the
organization’s budget and work program; the scope and outline of its reports; issues
related to the IPCC's principles and procedures; and the structure and mandate of the
IPCC Working Groups and Task Forces (IPCC Secretariat, 2013c). The plenary sessions
are also where reports are voted on and where the chair of the IPCC and the other Bureau
members are elected. National Focal Points represent their governments' interests. They
provide an updated list of national experts to the IPCC and comment on whether the draft
reports are scientifically and technically accurate and balanced (IPCC Secretariat, 2018).
A focal point is designated by the Ministry of Foreign Affairs or another relevant
governmental body and submitted to the IPCC by an official letter. Observer
organizations are diverse in scope and level. Any NGO working on IPCC issues can be
admitted as an observer organization, whether it is a national, international, or
intergovernmental organization. Even other UN bodies can be admitted as observer
organizations, with the approval of the Panel. Apart from the 195 government members,
there are 30 UN bodies and 131 NGOs participating in the IPCC as observer
organizations. These organizations can attend plenary sessions and working groups. The
observer organizations' representatives are also invited to comment on the draft versions
of reports in their personal capacity as experts (IPCC Secretariat, 2018). Figure 3.1
illustrates the structure of the IPCC under the WMO.

Figure 3.1 : Organizational structure of IPCC.

The Panel elects the IPCC Bureau to provide guidance on the scientific and technical
aspects of the IPCC’s work and to give managerial and strategic advice. The Bureau has
34 members, consisting of the IPCC Chair, the IPCC Vice-Chairs, the Co-Chairs and
Vice-Chairs of the three Working Groups, and the Co-Chairs of the Task Force on
National Greenhouse Gas Inventories (IPCC Secretariat, 2018). It is essential to point out
that the IPCC does not pay these members; they remain affiliated with their National
Focal Points. The Executive Committee (ExCom) is formed by the members of the
Bureau, except for the Vice-Chairs of the Working Groups, to “strengthen and facilitate
the timely and effective implementation of the IPCC work program per the IPCC’s
Principles and Procedures, the decisions of the Panel, and the advice of the Bureau”
(IPCC Secretariat, 2013c). ExCom meets regularly to oversee issues regarding IPCC
activities, such as coordination between working groups, panel sessions, and other IPCC
products. Public relations and communication activities, as well as handling possible
mistakes in the IPCC’s published assessments, are also part of ExCom’s responsibilities
(IPCC Secretariat, 2018). The IPCC Secretariat is responsible for administrative
assistance and the coordination of the IPCC’s work. It is the body responsible for
organizing the IPCC Plenary, ExCom and Bureau meetings, and Working Groups, and
for preparing the required documents and reports. It is also in charge of the IPCC Trust
Fund, budgeting, contributions to the Fund, and the management of expenditures and
reporting. The IPCC’s primary work is carried out under three Working Groups (The
Physical Science Basis of Climate Change; Climate Change Impacts, Adaptation and
Vulnerability; and Mitigation of Climate Change) and a Task Force on National
Greenhouse Gas Inventories (IPCC Secretariat, 2018). Each Working Group has a
Technical Support Unit (TSU). TSUs are responsible for providing organizational
support to the Working Groups and Co-Chairs in preparing and producing IPCC products.

3.3.2 How does IPCC produce reports?


The most significant aspect of the IPCC’s work is that it does not conduct original
research as an organization. The IPCC does not build its own climate models or make
climate and weather measurements (IPCC Secretariat, 2015). The role of the IPCC in
climate governance is to collect, assess, and analyze the scientific, technical, and social
literature on climate change in order to build a comprehensive understanding of future
risks and of adaptation and mitigation strategies. Therefore, it is essential to examine how
the IPCC assesses the literature, how it reviews drafts, and how it endorses the final
reports.

In the assessment phase, every Working Group chooses a cut-off date by which relevant
works must have been accepted for publication by scientific journals. Limiting the time
interval for inclusion allows the team to focus on the latest findings and have enough time
to go through all the literature. Based on previous experience, this date is usually set two
to three months before the completion of the final draft (IPCC Secretariat, 2013d).
Although the main body of literature consists of peer-reviewed and internationally
available publications, the IPCC also considers reports from governments, NGOs,
industry, and research institutions. The evaluation of every publication includes an
evaluation of all its sources as well. The works cited in the final IPCC reports usually
number more than a thousand (IPCC Secretariat, 2013d).

Based on the assessments, authors write drafts of the reports. These drafts are then
shared with experts and governments for comments on the overall balance of the drafts
and the accuracy of the literature assessment. This review phase casts a wide net,
including hundreds of reviewers in the process to ensure that the drafts are scientifically
accurate and complete (IPCC Secretariat, 2015). Experts are accepted through a process
of self-declared expertise and can include members of invited observer organizations and
governmental representatives. In this way, the IPCC aims to ensure that its reports contain
a wide range of views, expertise, and geographical representation. The number of
comments submitted on one Working Group draft can be as high as thirty thousand, and
review editors are required to ensure that all comments are received and given appropriate
consideration by the author teams. After reviewing all the comments, the authors prepare
final drafts, and these drafts are submitted to governments only, for review before the
Plenary Session (IPCC Secretariat, 2015). Figure 3.2 shows a detailed flow chart of the
IPCC’s report production process.

The process consists of ten steps: (I) Scoping: the outline is drafted and developed by
experts nominated by governments and observer organizations. (II) Approval of outline:
the Panel approves the outline. (III) Nomination of authors: governments and observer
organizations nominate experts as authors. (IV) Selection of authors: the Bureau selects
the authors. (V) Expert review (first-order draft): authors prepare a first draft, which is
reviewed by experts. (VI) Government and expert review (second-order draft): the second
draft of the report and the first draft of the Summary for Policymakers (SPM) are
reviewed by governments and experts. (VII) Final draft report and SPM: authors prepare
final drafts of the report and SPM, which are sent to governments. (VIII) Government
review of the final draft SPM: governments review the final draft SPM in preparation for
its approval. (IX) Approval and acceptance of report: the Working Group or Panel
approves the SPM and accepts the report. (X) Publication of the report.

Figure 3.2 : Report production process of IPCC.

The review and approval process for reports is extensive and based on consensus.
Figure 3.2 above shows how each step is designed in detail for maximum participation
and acceptance. The last phase is the endorsement phase, a point of dialogue between the
scientists who produce the reports and the governments who use them (IPCC Secretariat,
2013a). When governments endorse a report, it means that the report carries authority and
is a definitive statement. There are three levels of endorsement: approval, adoption, and
acceptance (IPCC Secretariat, 2013a). Approval is used for Summaries for Policymakers,
and it requires the material to be discussed line by line in detail so that IPCC member
states reach an agreement in conjunction with the scientists who drafted the report. The
approval process aims to ensure that the report's language is as clear, direct, and
consistent as possible in summarizing the material. Adoption is used for synthesis reports
and requires section-by-section agreement on the report by the governments and authors.
Synthesis reports integrate material from Special Reports and Working Group Assessment
Reports, so consensus on each section ensures the effectiveness of the report (IPCC
Secretariat, 2013a). Acceptance is used for the full underlying report in a Working Group
Assessment Report or a Special Report. Acceptance is not done by analyzing the report
line by line; it ensures that the technical summary and the report’s chapters are objective,
balanced, and comprehensive. Once a report is accepted, significant changes cannot be
made other than minor editorial or grammatical corrections, in order to keep the report
consistent with its previously approved Summary for Policymakers. Each Working Group
holds a plenary session with government representatives present to approve and accept its
Special Reports and Assessment Reports. If a Working Group report is also accepted at a
session of the Panel, it becomes an IPCC report. If government representatives accept the
report at the Working Group's plenary session, it should also be accepted at the session of
the Panel, and the Panel cannot change the report. Nevertheless, the report is open for
discussion at the Panel session, and the relevant bodies of the IPCC note all
disagreements. Regardless, these disagreements and objections do not prevent the report
from being accepted (IPCC Secretariat, 2013a).

3.3.3 Who writes IPCC reports?


The IPCC Bureau, which is elected at the annual Plenary Session, sends a call for
nominations of experts to the member governments and observer organizations. The
Bureau collects all the resumes from candidates and selects the report authors based on
expertise. The selection process aims to create balance on multiple scales: scientific,
technical, and socio-economic backgrounds, country and region of origin, and, more
recently, gender and age (IPCC Secretariat, 2013b). Each chapter has Coordinating Lead
Authors, Lead Authors, Review Editors, and Chapter Scientists. The Bureau fills these
positions from the pool of candidates and encourages experts who do not make the author
list to serve as Expert Reviewers (IPCC Secretariat, 2013b).

Looking at the latest Special Report on Climate Change and Land, as Table 3.1 shows,
the IPCC still has a long way to go to be as representative as possible. The majority of the
authors are from Europe and North America, one hundred and four in total, whereas there
are only 20 authors from Africa (Shukla et al., 2019, p. 253). Additionally, when I look at
gender parity among authors, I see that more than seventy percent of the authors are male.

Table 3.1 : Number of authors by responsibility and region in the special report on
climate change and land.

Regions               Coordinating    Lead       Contributing   Review    Chapter
                      Lead Authors    Authors    Authors        Editors   Scientists
Europe                      4            22          61            6          2
Asia & Pacific              3            14          23            6          3
North America               4             7          22            3          1
South/Latin America         1            14          10            4          -
Africa                      3             9           2            1          5
Middle East                 -             5           5            1          -
Author Total               15            71         123           21         11

Chapter design and content are the collective responsibility of the Coordinating Lead
Authors and Lead Authors. These authors enlist Contributing Authors from the chosen
authors for assistance in their work (IPCC Secretariat, 2013b). Contributing Authors make
up most of the author pool, which helps to make the reports as representative as possible.
Review Editors are responsible for handling comments on the drafts of the reports.

3.3.4 Impact of IPCC’s work


The IPCC produces reports that are policy-relevant and neutral, but not policy-prescriptive.
Assessments aim to provide a comprehensive picture of the state of knowledge on climate
change. These reports become a key input in the international negotiations on how to deal
with climate change (IPCC Secretariat, 2018). The IPCC’s work has made an impact on
multiple levels: it has informed scientifically driven international governance of climate
change, created robust public awareness and advocacy, and encouraged more scientific
research on climate change (Hoppe et al., 2013, p. 27).

It is argued that the IPCC has drawn together and consolidated an epistemic community
on global climate change, which has led it to become one of the pioneering organizations
informing campaigning agendas. It is perceived in the public eye as the authority on
climate change knowledge (Hulme & Mahony, 2010, p. 326). The IPCC’s boundary work
has helped to legitimize the scientific narratives that NGOs have been using in public
spaces (Fogel, 2005, p. 43). This visibility peaked with the Nobel Peace Prize awarded
to the organization in 2007 “for their efforts to build up and disseminate greater
knowledge about human-induced climate change, and to lay the foundations for the
measures that are needed to counteract such change” (IPCC Secretariat, 2013c).

The policy impact of the IPCC, even though debated by researchers, is still significant.
The Fifth Assessment Report, published in 2014, showed that keeping temperature change
below 2 °C relative to pre-industrial levels is achievable; this was a critical scientific input
into the 2015 United Nations Climate Change Conference, which led to the Paris
Agreement (Schleussner et al., 2016, p. 5). The Special Report on Global Warming of
1.5 °C, published in 2018, found that hitting the 1.5 °C target is possible under conditions
of deep emission reductions and "rapid, far-reaching, and unprecedented changes in all
aspects of society” (M. Allen et al., 2018, p. 93). The special report served as a supporting
scientific document for the Paris Agreement, and the 1.5 °C target has become a key point
of policy and public debate, grounding climate governance more firmly in politics and
society. Since the report, there has been growing debate and research on climate change
adaptation and mitigation, with many NGOs and youth movements running campaigns to
raise public awareness and press for political action. It can be argued that the impact of
the IPCC’s boundary work has reached across domains, sectors, and demographics.

3.3.5 Criticisms of IPCC


Like any other large international organization, the IPCC is subject to criticism regarding
its work and procedures. First, it is argued that the lead author selection process is opaque.
The Bureau is free to choose Coordinating Lead Authors, Lead Authors, and Contributing
Authors for the reports without further approval. Some researchers state that this selection
process is dominated by political considerations and opens up the possibility that the
Bureau's author choices can predetermine the outcome of reports (Mckitrick, 2011, p. 49).
Second, the IPCC is accused of overstating some scientific claims to create a sense of
urgency regarding climate change. The Fourth Assessment Report, published in 2007,
claimed that the Himalayan glaciers could disappear by 2035, which later turned out to be
a scientifically inaccurate projection (Carrington, 2010). Third, the extensive assessments
take a long time, which leads to the exclusion of dissenting or minority positions and to
reports becoming outdated (Doubleday et al., 2013). Fourth, it is argued that the experts
who participate in writing the reports go through labor-intensive processes, take time
away from their own research programs, and work voluntarily. These demanding
conditions raise concerns that many qualified researchers might be discouraged from
taking part in the IPCC’s work (Hulme & Mahony, 2010).

4. CONCLUDING DISCUSSION

In 1917, the Society of Independent Artists’ salon in New York received an artwork
submitted by an artist named R. Mutt. The bizarre artwork was seemingly a urinal turned
upside down, signed “R. Mutt, 1917” and named Fountain (Mann, 2017). Marcel
Duchamp, the creator of Fountain, wanted to see whether a jury that claimed to be open to
any type of art would accept it. The Board of the Society regarded the submission as a
practical joke from an unknown artist and rejected Fountain, calling it not an actual work
of art (Mundy, 2015). Duchamp, who was a member of the Board, resigned in protest.
Many artists and intellectuals afterward took sides on the matter, most notably Beatrice
Wood, who argued that “[Mr. Mutt] took an ordinary article of life, placed it so that its
useful significance disappeared under the new title and point of view—created a new
thought for that object” (Mann, 2017). This work provoked many questions: what makes
art, art? Who decides that an object is a piece of art, the artist or the critic? The core
understanding of art itself was challenged by that provocative yet straightforward piece of
work by Duchamp. In 2004, a survey among leading figures of the art world showed that
Duchamp’s Fountain is the most influential modern art piece (Jury, 2004, p. 24). It is
beyond any doubt that Fountain is a piece of art today. Or is it?

A hundred years after its debut, two researchers put Fountain through Google Cloud
Vision, a powerful computer vision system “using algorithms, machine learning, and a lot
of data to train “smart” machines to see and understand the world around us” (Pereira &
Moreschi, 2020, p. 3). Their research showed that the AI system identified the artwork as
a ceramic, a plumbing fixture, a urinal: anything but art. One can object that these systems
are simply not trained to detect artworks, but the result nevertheless shows the limitations
of AI systems. Against claims that it is possible to teach machines to see just like we do,
such experiments clearly show that an AI system cannot grasp context, subtext, and
emotion (Pereira & Moreschi, 2020, p. 4). The first part of my discussion is shaped around
the limitations of AI systems. Intelligence is constrained by the environment it exists in
and the set of goals it is required to achieve. I see the discussions and speculations on
artificial general intelligence or superintelligence as futile and counterproductive. The
literature review showed that those who emphasize superintelligence are non-experts:
people who have no technical or social understanding of AI technologies. Therefore, it is
critical to have a clear definition and understanding of AI for effective discussions on AI
governance.
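
To make the nature of such an experiment concrete, the sketch below shows what a basic
label-detection query looks like. It is a minimal illustration only, assuming the
google-cloud-vision Python client library, authenticated Google Cloud credentials, and a
hypothetical local image file named fountain.jpg; it is not the exact pipeline used by
Pereira and Moreschi.

# Minimal sketch of a label-detection request (assumes the google-cloud-vision
# Python client and valid Google Cloud credentials; illustrative only).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Hypothetical local photograph of the artwork.
with open("fountain.jpg", "rb") as image_file:
    image = vision.Image(content=image_file.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # The API returns visual categories such as "Urinal" or "Plumbing fixture"
    # with confidence scores; it has no notion of artistic or historical context.
    print(f"{label.description}: {label.score:.2f}")

The point of the illustration is that the output is a flat list of visual labels with
probabilities: whatever "seeing" happens here never leaves the space of categories the
model was trained on.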

Before drafting this thesis, I wanted to avoid this pitfall and gain an overall knowledge
of the inner workings of AI and machine learning. I took online courses on Coursera
on machine learning and deep learning. After completing every assignment successfully
and finishing the two courses, it felt like a Latour-esque moment: I had been in the
laboratory and seen how technical knowledge is made. Granted, those courses are only
introductory and merely scratch the surface of the complexity of the technologies
mentioned in this thesis. Nevertheless, they helped me understand the core components of
AI development to the extent that when data, an algorithm, or a model is mentioned in a
paper, I have a more concrete sense of the subject. For instance, in the literature on
explainable AI (XAI), I realized that AI's interpretability and transparency are discussed
purely as a technical challenge, with no meaningful multi-disciplinary work on the matter.
There is clearly a divide, a boundary, between ethicists, STS researchers, and AI
researchers, each party observing the others from outside. I argue that it is essential for
STS researchers to know how the technologies they analyze work and what life is like in
their (mostly physically non-existent) laboratories. A similar case holds for AI researchers:
studying the ethics literature should be a core part of their research work, providing a
general level of guidance and a line of communication with ethicists. As the resources
required for this interaction are readily available and easy to access, there is no reason for
the discourse to remain shaped in this fashion; yet the interaction does not happen
regardless.

Issues with artificial intelligence are so vast and numerous that every position regarding
AI has a valid point. When a government vision paper, a report by an international
organization or think-tank, or an academic paper mentions AI, it is essential to analyze
what the authors understand by AI and which levels of its social-political effects they
consider relevant. In the section Identifying the Problem, although the main problem this
thesis aimed to cover is the gap in the global governance of AI, especially in the
international security context, I also wanted to point out other specific problems that fuel
the differences in policy proposals and the literature. Problems start with the collection of
data to train the algorithms. Data collection, user consent, and privacy issues have
populated headlines, mostly in malevolent contexts. Lack of clarity in privacy policies has
created multiple scandals for big tech companies, with severe social and political
consequences. Even if the data is collected in lawful ways, a lack of diversity and
representation in datasets perpetuates and augments existing racial and gender-based
discrimination. Automated decision-making systems in complex scenarios, such as
judging human capabilities and character in human resources practices, have shown that
automating the social aspects of business is not a straightforward process. The behavior
prediction algorithms already in use for dynamic pricing in commercial sectors and for
preventive measures in law enforcement have produced many unfair and discriminatory
outcomes. These issues, explained in detail in the relevant sections, allow researchers to
conclude that AI applications can be ethically problematic, with socio-economic
repercussions. There are varying suggestions for minimizing these problems by making
AI systems explainable or by putting humans in the loop in automated decision-making
systems. The XAI and HITL concepts are undoubtedly promising, but they are not without
criticism either. Beyond the framing of XAI as a technical challenge, the definition of
explainability is also debated in the literature: humans cannot fully explain their own
decision-making processes, which makes meaningful explainability for machine decisions
even harder to achieve.

Additionally, it is argued that a universal solution for explainability cannot be compliant
with every existing use case of AI (Barredo Arrieta et al., 2020, p. 176). Given such severe
problems with AI applications, it seems irresponsible, to say the least, to use these systems
in a military capacity. Prediction and detection systems are already causing social and
economic harm to minority groups, disadvantaged communities, and certain genders
(West et al., 2019, p. 24). Embedding such systems in lethal weaponry would create dire
consequences and leave policymakers with a large, entangled web of problems that are
impossible to resolve fully. When an AI system malfunctions in a social context, its
effects can be reverted, for instance when it is found to be discriminating against female
candidates in the hiring process. However, when a LAWS commits a war crime by
misidentifying civilians as combatants and decides to engage preemptively, the lost lives
will not be brought back, and it will not be easy to place the responsibility on the correct
agent either. Indeed, when talking about AI, civilian and military applications should not
be separated from one another: the dual-use aspect of AI not only allows civilian benefits
to be applied in a military context, but also allows civilian problems to become lethal in a
military context. In this thesis, I did not exclude the civilian aspect of AI; I wanted to
show that even before the military-specific problems, there are serious ethical issues to
consider before using such systems.

The weaponization of AI is not limited to lethal autonomous weapon systems or killer
robots. Apart from unintended consequences, the malicious use of AI can threaten digital,
political, and physical security. AI will be deployed more frequently in malicious
contexts, from extending cyberattack capabilities to producing large-scale fake content for
political purposes. Nevertheless, LAWS remain the hot topic of political and academic
debate, as they present a concrete example of the threats of AI: robots that can
independently decide and act to take human lives. Legal arguments on LAWS center on
the fact that these systems cannot fully comply with IHL, as it is already challenging to
fully implement IHL on human decisions in military conflicts. The rules of
proportionality, distinction, and precaution in attack are human-centered, context-specific
rules that are hard to turn into algorithms. Apart from legal matters, the ethical questions
of allowing autonomous systems to decide whom and when to kill, and of what level of
autonomy should be permissible in military conflict, require more research and debate.

As the debates on the ethical and legal aspects of LAWS continue, public and private
investments in military applications of AI continue to grow. Military superpowers like
the US, Russia, China, and EU countries have already announced their visions on AI,
with substantial financial investment targets for varying years. What is striking is that
governments are not particularly concerned with the ethical and legal challenges that lie
ahead for military AI applications. Granted, all the country AI visions analyzed in the
corresponding sections make reference to the notion that AI systems should be ethical
and respect human values. On one end of the spectrum, the EU offers a comprehensive
list of values that an AI system should have; on the other, China is implementing a
military-civil fusion strategy to weaponize any development in the AI field and is not
open to ethical debate. Both the Chinese and EU vision papers mention that AI should be
ethical, but ethics certainly has different meanings in the two regions. Additionally, when
discussing technical progress, all countries present a concrete timeline with an allocated
budget for capacity and talent development, yet the terms become abstract and nonbinding
when it comes to ethical and legal matters. For governments, AI's military advantages
outweigh its threats, which is why many researchers acknowledge that there is an ongoing
AI arms race.

Responses to the global AI arms race are scattered and layered. Some call for a complete
and preemptive ban on LAWS to end the arms race in its early phase (Future of Life
Institute, 2015; Human Rights Watch, 2018); others argue that it is already too late to stop
it and that it must be managed instead (Geist, 2016, p. 198). There are also specific
technical suggestions to limit the use of LAWS to machine-on-machine scenarios, where
LAWS are only allowed to engage enemy military equipment (Anderson & Waxman,
2017, p. 22), and general moral arguments stating that LAWS would reduce civilian
casualties because they are not emotive agents acting in disproportionate ways (Ludovic
Righetti et al., 2014, p. 45). Some argue that existing international law already prohibits
the use of LAWS (ICRC, 2018); others argue that there are existing governance structures
(e.g., nuclear, ICT) through which the AI arms race can be managed (G. Allen & Chan,
2017; M. M. Maas, 2019). My literature review included more than a hundred documents
with some form of governance or regulatory proposal on the military use of AI. Although
most of them provide valuable insight, they do not address the institutional gap that must
be filled to manage this matter.

Given the current landscape of the AI field, I argue that banning the use and development
of LAWS is not a straightforward solution, as the boundary between military weapons
and civilian products is blurred in AI. Additionally, I find it optimistic and utopian to
argue that LAWS would behave more rationally in armed conflicts. I therefore assume
that military AI systems will continue to be an essential part of international security
issues; the best we can do is focus on how to govern them to minimize risks while taking
advantage of the benefits. The nature of AI prevents it from being a matter of national
governance alone: AI systems transcend national and sectoral borders and create
consequences at the international level, so self-imposed national ethical norms are not
enough to govern them; there must be internationally agreed-upon definitions, rules, and
values. There is already an extensive literature on ethical frameworks for AI that can be a
valuable source for international debates. LAWS are already being addressed in
intergovernmental discussions, namely in the Convention on Certain Conventional
Weapons (CCW) meetings, and more than ninety countries have expressed their official
views on LAWS in those meetings so far. I argue, however, that making military AI
solely a subject of the CCW limits the debate and prevents a more inclusive and
meaningful discussion of AI.

Framing the military application of AI as an arms control challenge understandably
recalls previous arms control agreements as a frame of reference. The most notable one is
the Treaty on the Non-Proliferation of Nuclear Weapons, which prohibits countries
without nuclear weapons from acquiring them while committing nuclear-armed states to
eventual disarmament. The main argument is that the world came together in the face of
the highly destructive potential of nuclear weapons and decided to ban their further
development for the greater good of the world, and that this sentiment and vision can be
applied to LAWS as well (M. M. Maas, 2019, p. 34). However, as I have shown in the
corresponding section, the characteristics of nuclear and AI technologies do not match
closely enough to re-apply the nuclear arms control structure to AI. Both technologies
present an asymmetric military advantage to the actors that own them, both come with
severe ethical and legal problems, both have dual-use possibilities, and both require
sophisticated technical talent and high-level scientific expertise for initial development.
However, AI technologies have significantly more potential than nuclear technologies to
diffuse into a variety of sectors and to be used in various civilian and military
applications. Both require high expertise to develop, but AI systems are easier to
repurpose to achieve specific goals. It is challenging to monitor the development of
LAWS: unlike nuclear weapons, it does not require large facilities that can be spotted via
satellite. Also, LAWS do not depend on a rare, hard-to-acquire raw material like uranium,
whereas tracking the production and use of uranium is a critical part of the
non-proliferation of nuclear weapons. The stakeholders in AI span diverse levels and
spaces, from citizens to state and non-state actors, whereas nuclear actors are limited to
governments and high-level industry. Looking at the militarization of AI as an existential
threat to human rights, although valid and critical, misleads governance efforts into trying
to replicate arms control agreements designed for other technologies that pose an
existential threat to humanity. I argue that the governance of AI should not be limited to
its end products, such as LAWS, and should not be framed as an arms control issue. We
need to look at the characteristics, stakeholders, and actors of AI to create an international
governance framework for it. This will help us seek different and more helpful
governance examples that may or may not have anything to do with arms control.

Why AI needs to be governed is clear from the problems it presents to international peace
and security, and there are a variety of suggestions for what to do about it. I argue that
without seriously considering how to do it, these efforts will yield neither adequate nor
widely accepted results in AI's global governance. The current CCW meetings on LAWS
show that countries express their positions based on their interests in and understanding
of AI, not on a commonly accepted body of literature. This is one of the reasons there are
two visible camps: one group of countries with lower potential to develop LAWS calling
for a complete preemptive ban, and another group with high capacity to develop LAWS
arguing that it is too early to decide on a complete ban without knowing the full details of
the weaponry in question. International NGOs’ campaigns also build on the premise that
more than eighty countries are calling for a ban (Human Rights Watch, 2020), yet the
primary motivation of those countries seems to be to limit other countries' acquisition of
such weapons and to avoid becoming targets of those weapons. Thus, I argue that the
main challenge is to design a governance structure that is sustainable, research-based, and
inclusive enough to address the challenges of AI. Focusing on how to govern AI will open
many more horizons for policymakers and AI researchers to explore and to co-produce
meaningful policies and guidelines for the experts in the field.

As I shifted my perspective from an arms control challenge to a multi-scale, multi-actor
global governance challenge, climate governance became the most interesting example to
explore. The history of climate governance, which has had to navigate complex social,
political, and economic challenges, manage multiple boundaries, and engage with various
stakeholders, is rich with sustainable international governance experience that can inspire
the building of a global governance structure for AI. I showed in detail how the
characteristics of climate governance are similar to those of AI governance, so that it
makes sense to analyze its structure to draw conclusions for AI governance. I see AI, like
climate change, as a global challenge that we need to adapt to, mitigate, and be resilient
against. Like climate change, AI makes the global north-south divide visible; AI
development, usage, and regulation occur on multiple scales; and multiple levels of actors
are actively involved in shaping its nature. Climate change requires average citizens to be
more conscious about daily choices, businesses to adapt to changing energy-source
dynamics, and governments to ensure energy, water, and food security. Ethical AI ideally
involves average citizens being aware of their daily technology use, since they are the
data producers; it requires businesses to use AI responsibly and governments to ensure the
security of their citizens in the face of physical and socio-economic threats. I would go as
far as to argue that we can liken “unethical AI” to fossil fuels and “ethical AI” to clean
energy: it is cheaper and more convenient to burn fossil fuels, just as an AI system
becomes more precise if we do not scrutinize how large amounts of data are acquired, yet
fossil fuels pollute the air we breathe. Likewise, unethical AI systems usually produce
results that serve not the public's interest but that of those who use them. Conversely,
clean energy is more expensive to produce (with variance) and less convenient to use
(e.g., charging EVs, fluctuations in supply), just as it is harder to build ethical AI systems
(e.g., getting user consent, testing in every relevant environment). However, clean energy
does not destroy nature, just as the wide use of ethical AI systems will arguably not
threaten the fabric of our societies.

AI governance requires cooperation between ethicists, computer scientists, and
policymakers to fully cover the impact areas of AI technologies. It requires crossing
boundaries between science and policy, as well as between social science and
engineering. In the literature review, I have shown that researchers have pointed out this
lack of interaction between disciplines and presented some suggestions to overcome it
(Adadi & Berrada, 2018; Geist, 2016; M. M. Maas, 2019). Proposals such as researchers
creating Track-II diplomacy channels or epistemic communities sound optimistic;
however, who should take responsibility for them, how such activities should be
organized, and what duties and accountability mechanisms they should uphold are not
mentioned. Climate governance provides a framework for that. The co-production of
knowledge has been an important challenge for climate governance, and it has resulted in
extensive research on how to achieve it. One of the most prominent concepts that came
out of this research is the boundary organization concept. Although initially conceived to
demarcate science from non-science, boundary work has evolved into a cooperation tool
for scientists and policymakers. It has been shown that improving knowledge utilization,
interaction, and integration between science, policy, and other stakeholders is vital for
effective policymaking. However, it should be noted that boundary organizations are not
a magic concept that solves every science-policy challenge. There is no unified boundary
organization structure that applies in every scenario; every boundary organization is
formed based on the context, the challenge, and the stakeholders involved. The boundary
organization literature is also growing to address the novel challenges of managing
multiple boundaries, sustaining resources, and navigating tensions between actors.

There are limited examples of boundary organizations to examine on an international
level. One of the most prominent is the IPCC. I chose to build an illustrative case study on
the IPCC to show how this promising concept can materialize in a challenging
international context and produce tangible results. Coming from a management
engineering background, I was drawn most to the managerial and organizational side of
the IPCC. Rather than what it does, why it does what it does and how it organizes the
co-production of knowledge seemed more critical to me, as these questions are seldom
addressed in the literature. As mentioned before, the governance of AI requires managing
multiple boundaries in an agile and effective way. There is a need for policy-relevant but
policy-neutral knowledge on AI to inform its global governance. I argue that an
IPCC-style boundary organization working on AI is necessary for the effective
governance of AI, in order to ensure social, political, and physical security and to
encourage the development of the AI field. We already have an extensive literature on the
technical, social, political, economic, and ethical sides of AI, yet the world lacks a
commonly agreed, systematic analysis of that literature to create policy-relevant
knowledge on AI. As mentioned before, current AI governance suggestions are not
commonly agreed upon, as they are produced either by a single corporation, institution, or
government, or through exclusive participation schemes. The structure of the IPCC offers
an example of democratizing the production of knowledge.

One of the most essential aspects of the IPCC is that it does not conduct original research.
This provides two advantages: first, it helps the organization remain more neutral, and
second, it encourages more research on climate change. Participation is open to all
member governments, which can nominate their experts to take part in working groups,
and the review phase of report production is open to experts of any kind. This enables the
creation of epistemic communities which, although not perfect, can address the issue from
every possible perspective. Current AI research, technical or non-technical, is being done
in silos with minimal cross-departmental and cross-organizational interaction. This
fragmentation forces debates on AI into position-based exchanges that usually end up
inconsequential. The formation of a boundary organization in which every country is
represented, technical and non-technical experts are invited to collaborate and contribute,
and policy-relevant knowledge is produced by consensus seems necessary for the global
governance of AI. Filling this institutional gap with an appropriate boundary organization
will help address the full range of problems AI presents. A possible boundary
organization on AI would produce boundary objects that address specific challenges and
shape research and development on AI. Issues such as data governance, prediction and
detection, explainability, and lethality could be addressed by working groups producing
reports that would eventually inform expertise-based policymaking.

Designing the boundary organization itself is beyond the scope of this thesis. My purpose
is to show why it is necessary and how it can succeed, based on a case study. I should
also take the criticisms of the IPCC into consideration. I agree that the selection of lead
authors, and their authority to pick experts from the pool nominated by countries, does
not seem inclusive and transparent enough to put minds at ease. Additionally, it takes a
long time to produce reports: by the time a report is released, the literature has already
aged. The AI literature is arguably more dynamic, and new research is added more
quickly than in climate change research. A boundary organization on AI needs to be more
agile and inclusive, and consensus is even more critical for AI than for climate change, as
it is much harder to monitor non-compliance with possible rules on the development and
use of AI, let alone enforce them. Therefore, suggestions such as open, wiki-style
knowledge production, in which the opinions, ideas, and contributions of every expert are
sought and welcomed at all stages of production and are open to the public, are important
to consider. The shape, structure, and place of a boundary organization on AI are topics
for further research. Should it be one large organization addressing every aspect of AI
under one entity, or would it be more effective to create boundary chains in which each
link focuses on a specific aspect? Which international organization should lead the way in
creating such an organization? Can Track-II diplomacy help create such an organization
without governmental involvement? These and many other questions are important to
consider and address in further research.

REFERENCES

115th Congress. (2018). US National Defense Authorization Act (H.R. 5515). https://www.congress.gov/115/bills/hr5515/BILLS-115hr5515enr.pdf
Aayog, N. (2018). National strategy for artificial intelligence: #AIforAll. New Delhi:
Government of India/NITI Aayog.
Adadi, A., & Berrada, M. (2018). Peeking Inside the Black-Box: A Survey on
Explainable Artificial Intelligence (XAI). IEEE Access, 6, 52138–
52160. https://doi.org/10.1109/ACCESS.2018.2870052
Agre, P., Bowker, G., Gasser, L., Star, L., & Turner, B. (1997). Toward a Critical
Technical Practice: Lessons Learned in Trying to Reform AI. In
Bridging the Great Divide: Social Science, Technical Systems, and
Cooperative Work.
AI HLEG. (2019). Policy and investment recommendations for trustworthy AI. In
Brussels: Independent High-Level Expert Group on Artificial
Intelligence (AI HLEG), report published by the European
Commission. https://ec.europa.eu/digital-single-
market/en/news/policy-and-investment-recommendations-trustworthy-
artificial-intelligence
Allen, G., & Chan, T. (2017). Artificial Intelligence and National Security. Belfer
Center for Science and International Affairs Cambridge, MA.
Allen, M., Babiker, M., Chen, Y., de Coninck, H., Connors, S., van Diemen, R.,
Dube, O. P., Ebi, K. L., Engelbrecht, F., & Ferrat, M. (2018).
Summary for Policymakers. In Global Warming of 1.5°C: An IPCC Special Report on the Impacts of Global Warming of 1.5°C Above Pre-Industrial Levels and Related Global Greenhouse Gas Emission Pathways, in the Context of Strengthening the Global Response to the Threat of Climate Change. World Meteorological Organization, 1–24.
Amershi, S., Cakmak, M., Knox, W. B., & Kulesza, T. (2014). Power to the people:
The role of humans in interactive machine learning. AI Magazine,
35(4), 105–120.
Anastasi, A. (1992). What Counselors Should Know About the Use and Interpretation
of Psychological Tests. Journal of Counseling & Development.
https://doi.org/10.1002/j.1556-6676.1992.tb01670.x
Anderson, K., & Waxman, M. C. (2017). Debating Autonomous Weapon Systems,
Their Ethics, and Their Regulation Under International Law. The
Oxford Handbook of Law, Regulation, and Technology.
Angwin, J., Mattu, S., & Larson, J. (2015). The Tiger Mom Tax: Asians Are Nearly
Twice as Likely to Get a Higher Price from Princeton Review —
ProPublica. https://www.propublica.org/article/asians-nearly-twice-as-
likely-to-get-higher-price-from-princeton-review

Arkin, R. (2011). Governing Lethal Behavior in Robots. IEEE Technology and
Society Magazine, 30(4), 7–11.
https://doi.org/10.1109/MTS.2011.943307
Atkinson, D. J. (2015). Emerging cyber-security issues of autonomy and the
psychopathology of intelligent machines. 2015 AAAI Spring
Symposium Series.
Australian Government. (2017). Australia 2030: prosperity through innovation
(Issue Innovation and Science Australia).
Baldwin, R. (2016). The Great Convergence: Information Technology and the New
Globalization. Belknap Press of Harvard University Press.
Barnes, S. B. (2006). A privacy paradox: Social networking in the United States. First
Monday, 20, 333–365. https://doi.org/10.5210/fm.v11i9.1394
Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S.,
Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R.,
Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence
(XAI): Concepts, taxonomies, opportunities and challenges toward
responsible AI. Information Fusion, 58, 82–115.
https://doi.org/10.1016/j.inffus.2019.12.012
Barrett, D. G. T., Hill, F., Santoro, A., Morcos, A. S., & Lillicrap, T. (2018).
Measuring abstract reasoning in neural networks. 35th International
Conference on Machine Learning, ICML 2018, 10, 7118–7127.
http://arxiv.org/abs/1807.04225
Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K.,
Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S.,
Ramamurthy, K. N., Richards, J., Saha, D., Sattigeri, P., Singh, M.,
Varshney, K. R., & Zhang, Y. (2018). AI Fairness 360: An Extensible
Toolkit for Detecting, Understanding, and Mitigating Unwanted
Algorithmic Bias. https://github.com/ibm/aif360
Bernstein, S., Betsill, M., Hoffmann, M., & Paterson, M. (2010). A Tale of Two
Copenhagens: Carbon Markets and Climate Governance. Millennium,
39(1), 161–173. https://doi.org/10.1177/0305829810372480
Bijker, W. E. (2008). Technology, Social Construction of. In The International
Encyclopedia of Communication. John Wiley & Sons, Ltd.
https://doi.org/10.1002/9781405186407.wbiect025
Birnbacher, D. (2016). Are Autonomous Weapons Systems a Threat to Human
Dignity. Autonomous Weapons Systems: Law, Ethics, Policy, 105–
121.
Boezeman, D., Vink, M., & Leroy, P. (2013). The Dutch Delta Committee as a
boundary organisation. Environmental Science & Policy, 27, 162–171.
https://doi.org/10.1016/j.envsci.2012.12.016
Borghard, E. D., & Lonergan, S. W. (2018). Why Are There No Cyber Arms Control
Agreements? Defense One. January, 18.
Bostrom, N. (2006). How Long Before Superintelligence? International Journal of
Futures Studies.

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., Dafoe,
A., Scharre, P., Zeitzoff, T., Filar, B., Anderson, H., Roff, H., Allen,
G. C., Steinhardt, J., Flynn, C., hÉigeartaigh, S. Ó., Beard, S.,
Belfield, H., Farquhar, S., … Amodei, D. (2018). The Malicious Use
of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
ArXiv Preprint ArXiv:1802.07228, February 2018, 101.
http://arxiv.org/abs/1802.07228
Bulkeley, H. (2010). Climate policy and governance: An editorial essay. Wiley
Interdisciplinary Reviews: Climate Change, 1(3), 311–313.
https://doi.org/10.1002/wcc.1
Bulkeley, H., & Newell, P. (2015). Governing Climate Change. Routledge.
https://doi.org/10.4324/9781315758237
Campolo, A., & Crawford, K. (2020). Enchanted Determinism: Power without
Responsibility in Artificial Intelligence. Engaging Science,
Technology, and Society, 6, 1–19.
https://doi.org/10.17351/ests2020.277
Carrington, D. (2010). IPCC officials admit mistake over melting Himalayan
glaciers. The Guardian.
Cash, D., Clark, W. C., Alcock, F., Dickson, N., Eckley, N., & Jager, J. (2003).
Salience, Credibility, Legitimacy and Boundaries: Linking Research,
Assessment and Decision Making. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.372280
Clark, J. (2015, December 8). Why 2015 Was a Breakthrough Year in Artificial Intelligence. Bloomberg Technology. https://www.bloomberg.com/news/articles/2015-12-08/why-2015-was-a-breakthrough-year-in-artificial-intelligence
Craglia, M. (Ed.), Annoni, A., Benczur, P., Bertoldi, P., Delipetrev, P., De Prato, G., Feijoo, C., Fernandez-Macias, E., Gomez, E., Iglesias, M., Junklewitz, H., López-Cobo, M., Martens, B., Nascimento, S., Nativi, S., Polvora, A., Sanchez, I., Tolan, S., & Tuomi, I. (2018). Artificial Intelligence - A European perspective.
https://ec.europa.eu/jrc/en/publication/eur-scientific-and-technical-
research-reports/artificial-intelligence-european-perspective
Cronk, T. M. (2019, February 12). DOD Unveils Its Artificial Intelligence Strategy.
DoD.
https://www.defense.gov/Explore/News/Article/Article/1755942/dod-
unveils-its-artificial-intelligence-strategy/
Csernatoni, R. (2019). Beyond the Hype: The EU and the AI Global “Arms Race”. European Leadership Network.
https://carnegieeurope.eu/2019/08/21/beyond-hype-eu-and-ai-global-
arms-race-pub-79734
Cussins, J. (2017, November 14). AI Researchers Create Video to Call for Autonomous Weapons Ban at UN. The Future of Life Institute. https://futureoflife.org/2017/11/14/ai-researchers-create-video-call-autonomous-weapons-ban-un/

Dafoe, A. (2018). AI governance: a research agenda. Governance of AI Program,
Future of Humanity Institute, University of Oxford: Oxford, UK, July
2017, 1–53.
Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., & Wang, W.
W. (2020). AI, Governance and Ethics: Global Perspectives. SSRN
Electronic Journal, June, 20–21. https://doi.org/10.2139/ssrn.3684406
Dawson, D., Schleiger, E., Horton, J., McLaughlin, J., Robinson, C., Quezada, G.,
Scowcroft, J., & Hajkowicz, S. (2019). Artificial intelligence:
Australia’s ethics framework. Data61 CSIRO, Australia.
Department of State. (2020, June). The Chinese Communist Party’s Military-Civil
Fusion Policy - United States Department of State.
https://www.state.gov/military-civil-fusion/
Doubleday, R., Wilsdon, J., & Beddington, J. (2013). Future directions for scientific
advice in Whitehall. Alliance for Useful Evidence and Cambridge
Centre for Science and Policy, London, UK, April, 158.
http://www.csap.cam.ac.uk/media/uploads/files/1/fdsaw.pdf
EC. (2018). EC Communication from the Commission to the European Parliament,
the European Council, the Council, the European Economic and Social
Committee and the Committee of the Regions. Artificial Intelligence
for Europe.
ETSI. (2018). Experiential Networked Intelligence (ENI): Terminology for Main Concepts in ENI (Vol. 1).
European Union Military Committee. (2016). Avoiding and Minimizing Collateral
Damage in EU-led Military Operations Concept (5785/16).
Executive Office of the President. (2019). Maintaining American Leadership in
Artificial Intelligence: Executive Order 13859. 84(31), 3967–3972.
https://www.govinfo.gov/content/pkg/FR-2019-02-14/pdf/2019-
02544.pdf
Executive Office of the President, & NSTC CT. (2016). Preparing for the future of
Artificial Intelligence.
Fogel, C. (2005). Biotic carbon sequestration and the Kyoto protocol: The construction
of global knowledge by the intergovernmental panel on climate change.
International Environmental Agreements: Politics, Law and
Economics, 5(2), 191–210. https://doi.org/10.1007/s10784-005-1749-7
Future of Life Institute. (2015). Autonomous weapons: an open letter from AI &
robotics researchers. Future of Life Institute.
https://futureoflife.org/open-letter-autonomous-weapons
Garcia, D. (2018). Lethal Artificial Intelligence and Change: The Future of
International Peace and Security. International Studies Review, 20(2),
334–341. https://doi.org/10.1093/isr/viy029
Geist, E. M. (2016). It’s already too late to stop the AI arms race—We must manage
it instead. Bulletin of the Atomic Scientists, 72(5), 318–321.
https://doi.org/10.1080/00963402.2016.1216672

Gieryn, T. F. (1983). Boundary-Work and the Demarcation of Science from Non-
Science: Strains and Interests in Professional Ideologies of Scientists.
American Sociological Review, 48(6), 781.
https://doi.org/10.2307/2095325
Goertzel, B. (2015). Artificial General Intelligence. Scholarpedia, 10(11), 31847.
https://doi.org/10.4249/scholarpedia.31847
Guilbeault, D., & Woolley, S. (2016). How Twitter bots are shaping the election. The
Atlantic.
https://www.theatlantic.com/technology/archive/2016/11/election-
bots/506072/
Gunning, D. (2017). Explainable Artificial Intelligence (XAI). Defense Advanced
Research Projects Agency (DARPA), 2(2).
Gustafsson, K. M., & Lidskog, R. (2018). Boundary organizations and
environmental governance: Performance, institutional design, and
conceptual development. Climate Risk Management, 19(January
2017), 1–11. https://doi.org/10.1016/j.crm.2017.11.001
Guston, D. H. (2001). Boundary Organizations in Environmental Policy and Science:
An Introduction. Science, Technology, & Human Values, 26(4), 399–
408. https://doi.org/10.1177/016224390102600401
Haas, P. M. (1992). Introduction: epistemic communities and international policy
coordination. International Organization, 46(1), 1–35.
https://doi.org/10.1017/S0020818300001442
Hagendorff, T. (2020). The Ethics of AI Ethics: An Evaluation of Guidelines. Minds
and Machines, 30(1), 99–120. https://doi.org/10.1007/s11023-020-
09517-8
Hallström, J. (2020). Embodying the past, designing the future: technological
determinism reconsidered in technology education. International
Journal of Technology and Design Education, 0123456789.
https://doi.org/10.1007/s10798-020-09600-2
Haner, J., & Garcia, D. (2019). The Artificial Intelligence Arms Race: Trends and
World Leaders in Autonomous Weapons Development. Global Policy,
10(3), 331–337. https://doi.org/10.1111/1758-5899.12713
Hawley, J. (2020). Not by widgets alone. Armed Forces Journal. http://armedforcesjournal.com/not-by-widgets-alone/
Heyns, C. (2016). Autonomous weapons systems: living a dignified life and dying a
dignified death. In Autonomous Weapons Systems (pp. 3–20).
Cambridge University Press.
https://doi.org/10.1017/CBO9781316597873.001
High Level Expert Group on Artificial Intelligence. (2019). Policy and Investment
Recommendations for Trustworthy AI. 52. https://ec.europa.eu/digital-
single-market/en/news/policy-and-investment-recommendations-
trustworthy-artificial-intelligence
Hoppe, R., Wesselink, A., & Cairns, R. (2013). Lost in the problem: The role of
boundary organisations in the governance of climate change. Wiley Interdisciplinary Reviews: Climate Change, 4(4), 283–300.
https://doi.org/10.1002/wcc.225
Howard, A., Zhang, C., & Horvitz, E. (2017). Addressing bias in machine learning
algorithms: A pilot study on emotion recognition for intelligent
systems. 2017 IEEE Workshop on Advanced Robotics and Its Social
Impacts (ARSO), 1–7. https://doi.org/10.1109/ARSO.2017.8025197
Hulme, M., & Mahony, M. (2010). Climate change: What do we know about the
IPCC? Progress in Physical Geography: Earth and Environment, 34(5),
705–718. https://doi.org/10.1177/0309133310373719
Human Rights Watch. (2018). Heed the Call: A Moral and Legal Imperative to Ban
Killer Robots. https://www.hrw.org/report/2018/08/21/heed-
call/moral-and-legal-imperative-ban-killer-robots
Human Rights Watch. (2020). STOPPING KILLER ROBOTS: Country Positions
on Banning Fully Autonomous Weapons and Retaining Human
Control. http://www.hrw.org
Hunt, E., & Jaeggi, S. M. (2013). Challenges for research on intelligence. Journal of
Intelligence, 1(1), 36–54. https://doi.org/10.3390/jintelligence1010036
ICRC. (1987). Treaties, States parties, and Commentaries - Additional Protocol (I) to
the Geneva Conventions, 1977 - 1 - General principles and scope of
application - Commentary of 1987. https://ihl-
databases.icrc.org/applic/ihl/ihl.nsf/Comment.xsp?action=openDocum
ent&documentId=7125D4CBD57A70DDC12563CD0042F793
ICRC. (2018). Autonomous Weapon Systems: An Ethical Basis for Human Control?
In Humanitarian Law & Policy Blog of the ICRC (Issue April).
http://blogs.icrc.org/law-and-policy/2018/04/03/autonomous-weapon-
systems-ethical-basis-human-control/
Inkster, N. (2017). Measuring Military Cyber Power. Survival, 59(4), 27–34.
https://doi.org/10.1080/00396338.2017.1349770
International Committee of the Red Cross. (1949). Geneva Convention Relative to
the Protection of Civilian Persons in Time of War (Fourth Geneva
Convention). United Nations Treaty Series.
IPCC Secretariat. (2013a). How does the IPCC approve reports?
http://www.ipcc.ch/news_and_events/docs/factsheets/FS_ipcc_approv
e.pdf
IPCC Secretariat. (2013b). How does the IPCC select its authors?
http://www.ipcc.ch/news_and_events/docs/factsheets/FS_select_autho
rs.pdf
IPCC Secretariat. (2013c). IPCC Factsheet: What is the IPCC? Geneva: IPCC Secretariat. http://www.ipcc.ch/news_and_events/docs/factsheets/FS_what_ipcc.pdf
IPCC Secretariat. (2013d). IPCC Factsheet: What literature does the IPCC assess?
IPCC Secretariat. (2015). IPCC Factsheet: How does the IPCC review process work?
http://www.ipcc.ch/pdf/ipcc-principles/ipcc-principles-appendix-a-final.pdf

IPCC Secretariat. (2018). Structure — IPCC. https://www.ipcc.ch/about/structure/
Jelinek, T., Wallach, W., & Kerimi, D. (2021). Policy brief: the creation of a G20
coordinating committee for the governance of artificial intelligence. AI
and Ethics, 1(2), 141–150. https://doi.org/10.1007/s43681-020-00019-
y
Joshi, S. (2018). Army of none: autonomous weapons and the future of war.
International Affairs, 94(5), 1176–1177.
https://doi.org/10.1093/ia/iiy153
Jury, L. (2004, December 2). “Fountain” most influential piece of modern art. The Independent. https://www.independent.co.uk/news/uk/this-britain/fountain-most-influential-piece-of-modern-art-673625.html
Kaplan, A., & Haenlein, M. (2019). Siri, Siri, in my hand: Who’s the fairest in the
land? On the interpretations, illustrations, and implications of artificial
intelligence. Business Horizons, 62(1), 15–25.
https://doi.org/10.1016/j.bushor.2018.08.004
Kirchhoff, C. J., Esselman, R., & Brown, D. (2015). Boundary organizations to
boundary chains: Prospects for advancing climate science application.
Climate Risk Management, 9, 20–29.
https://doi.org/10.1016/j.crm.2015.04.001
Klerkx, L., & Leeuwis, C. (2008). Delegation of authority in research funding to
networks: Experiences with a multiple goal boundary organization.
Science and Public Policy, 35(3), 183–196.
https://doi.org/10.3152/030234208X299053
Kozyulin, V. (2019). Militarization of AI from a Russian Perspective.
Krafft, P. M., Young, M., Katell, M., Huang, K., & Bugingo, G. (2020). Defining
AI in Policy versus Practice. Proceedings of the AAAI/ACM
Conference on AI, Ethics, and Society, 72–78.
https://doi.org/10.1145/3375627.3375835
Krauth, O. (2018, January 4). Artificial ignorance: The 10 biggest AI failures of 2017. TechRepublic. https://www.techrepublic.com/article/the-10-biggest-ai-failures-of-2017/
Krishnan, A. (2016). Killer robots: Legality and ethicality of autonomous weapons.
In Killer Robots: Legality and Ethicality of Autonomous Weapons.
Routledge. https://doi.org/10.4324/9781315591070
Kurzweil, R. (2014). The Singularity is Near. In Ethics and Emerging Technologies.
https://doi.org/10.1057/9781137349088_26
Lee, E., Su Jung, C., & Lee, M. K. (2014). The potential role of boundary
organizations in the climate regime. Environmental Science and Policy,
36, 24–36. https://doi.org/10.1016/j.envsci.2013.07.008
Legg, S., & Hutter, M. (2007). A Collection of Definitions of Intelligence. 1–12.
http://arxiv.org/abs/0706.3639

Lemos, M. C., Kirchhoff, C. J., Kalafatis, S. E., Scavia, D., & Rood, R. B. (2014).
Moving Climate Information off the Shelf: Boundary Chains and the
Role of RISAs as Adaptive Organizations. Weather, Climate, and
Society, 6(2), 273–285. https://doi.org/10.1175/WCAS-D-13-00044.1
Lemos, M. C., & Morehouse, B. J. (2005). The co-production of science and policy
in integrated climate assessments. Global Environmental Change,
15(1), 57–68.
https://doi.org/10.1016/j.gloenvcha.2004.09.004
Lidskog, R., & Sundqvist, G. (2015). When Does Science Matter? International
Relations Meets Science and Technology Studies. Global
Environmental Politics, 15(1), 1–20.
https://doi.org/10.1162/GLEP_a_00269
Righetti, L., Sharkey, N., Arkin, R., Ansell, D., Sassòli, M., Heyns, C., Asaro,
P., & Lee, P. (2014). Autonomous Weapon Systems: Technical,
Military, Legal and Humanitarian Aspects. In Autonomous Weapon
Systems: Technical, Military, Legal and Humanitarian Aspects: Expert
Meeting (Issue March).
Maas, M. M. (2019). How viable is international arms control for military artificial
intelligence? Three lessons from nuclear weapons. Contemporary
Security Policy, 40(3), 285–311.
https://doi.org/10.1080/13523260.2019.1576464
Mann, J. (2017, May 9). How Duchamp’s Urinal Changed Art Forever. Artsy. https://www.artsy.net/article/artsy-editorial-duchamps-urinal-changed-art-forever
Marchant, G. E., Allenby, B., Arkin, R. C., Borenstein, J., Gaudet, L. M., Kittrie,
O., Lin, P., Lucas, G. R., O’Meara, R., & Silberman, J. (2015).
International governance of autonomous military robots. In Handbook
of Unmanned Aerial Vehicles. https://doi.org/10.1007/978-90-481-
9707-1_102
Markotkin, N., & Chernenko, E. (2020, August 5). Developing Artificial Intelligence in Russia: Objectives and Reality. Carnegie Endowment for International Peace. https://carnegie.ru/commentary/82422
McCarthy, J. (1989). Artificial Intelligence, Logic and Formalizing Common Sense.
In Philosophical Logic and Artificial Intelligence.
https://doi.org/10.1007/978-94-009-2448-2_6
McKitrick, R. (2011). What is Wrong With the IPCC? Proposals for a Radical Reform. The Global Warming Policy Foundation.
http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/mckitrick-
ipcc_reforms.pdf
McNamara, A., Smith, J., & Murphy-Hill, E. (2018). Does ACM’s code of ethics
change ethical decision making in software development? Proceedings
of the 2018 26th ACM Joint Meeting on European Software
Engineering Conference and Symposium on the Foundations of
Software Engineering, 729–733.
https://doi.org/10.1145/3236024.3264833

Miller, C. (2001). Hybrid management: Boundary organizations, science policy, and
environmental governance in the climate regime. Science Technology
and Human Values, 26(4), 478–500.
https://doi.org/10.1177/016224390102600405
Minsky, M. (1982). Semantic information processing.
Minsky, M. (1991). Society of mind. Artificial Intelligence.
https://doi.org/10.1016/0004-3702(91)90036-j
Moser, S. (2009). Making a difference on the ground: the challenge of demonstrating
the effectiveness of decision support. Climatic Change, 95(1–2), 11–
21. https://doi.org/10.1007/s10584-008-9539-1
Mundy, J. (2015). Marcel Duchamp: Fountain 1917, replica 1964. Tate Modern.
https://www.tate.org.uk/art/artworks/duchamp-fountain-t07573
Narayanan, A. (2018). Tutorial: 21 Fairness Definitions and their Politics.
Conference on Fairness, Accountability, and Transparency.
Norton, A. (2017). Automation and inequality: the changing world of work in the
global South. International Institute for Environment and Development.
O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality
and threatens democracy. Broadway Books.
OECD. (2019a). National Strategy for Artificial Intelligence (AI) Development.
OECD AI Policy Observatory. https://oecd.ai/dashboards/policy-
initiatives/2019-data-policyInitiatives-24901
OECD. (2019b). Recommendation of the Council on Artificial Intelligence (OECD)
(Issue OECD/LEGAL/0449). https://doi.org/10.1017/ilm.2020.5
Parker, J., & Crona, B. (2012). On being all things to all people: Boundary
organizations and the contemporary research university. Social Studies
of Science, 42(2), 262–289.
https://doi.org/10.1177/0306312711435833
Payne, K. (2018). Strategy, evolution, and war: From apes to artificial intelligence.
Georgetown University Press.
Pereira, G., & Moreschi, B. (2020). Artificial intelligence and institutional critique
2.0: unexpected ways of seeing with computer vision. AI and Society,
0123456789. https://doi.org/10.1007/s00146-020-01059-y
Perrow, C. (2011). Normal Accidents. Princeton University Press.
https://doi.org/10.2307/j.ctt7srgf
Pidgeon, N. (2011). In retrospect: Normal Accidents. Nature, 477(7365), 404–405.
https://doi.org/10.1038/477404a
Poole, D. L., & Mackworth, A. K. (2017). Artificial Intelligence: Foundations of Computational Agents. Cambridge University Press. https://doi.org/10.1017/9781108164085
Pothier, J. (2014). Abstract Thinking - Electrical and Computer Engineering Design
Handbook. Electrical and Computer Engineering Design Handbook.
http://sites.tufts.edu/eeseniordesignhandbook/2014/abstract-thinking/

On the Development of Artificial Intelligence in the Russian Federation, Decree of the President of the Russian Federation (2019). http://publication.pravo.gov.ru/Document/Text/0001201910110003
Rahwan, I. (2018). Society-in-the-loop: programming the algorithmic social contract.
Ethics and Information Technology, 20(1), 5–14.
https://doi.org/10.1007/s10676-017-9430-8
Research and Markets. (2018, January). Artificial Intelligence in Military Market by
Offering (Software, Hardware, Services), Technology (Learning &
Intelligence, Advanced Computing, AI Systems), Application,
Platform, Region - Global Forecast to 2025. Research and Markets.
https://www.researchandmarkets.com/research/z8tfh7/18_8_billion
Rosert, E., & Sauer, F. (2019). Prohibiting Autonomous Weapons: Put Human
Dignity First. Global Policy, 10(3), 370–375.
https://doi.org/10.1111/1758-5899.12691
Russell, S. J., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach, Global Edition.
Samoili, S., López Cobo, M., Gómez, E., De Prato, G., Martínez-Plumed, F., &
Delipetrev, B. (2020). AI Watch - Defining Artificial Intelligence.
Towards an operational definition and taxonomy of artificial
intelligence. In Joint Research Centre (European Commission).
https://doi.org/10.2760/382730
Sassóli, M. (2014). Autonomous Weapons and International Humanitarian Law:
Advantages, Open Technical Questions and Legal Issues to be
Clarified. International Law Studies, 90(308).
Saxon, D. (2016). A human touch: Autonomous weapons, DoD Directive 3000.09 and
the interpretation of ‘appropriate levels of human judgment over the use
of force.’ In Autonomous Weapons Systems: Law, Ethics, Policy (pp.
185–208). Cambridge University Press.
https://doi.org/10.1017/CBO9781316597873.009
Schank, R. C. (1991). Where’s the AI? AI Magazine, 12(4), 38–49.
Scharre, P. (2017, December 22). Why You Shouldn’t Fear “Slaughterbots”. IEEE
Spectrum. https://spectrum.ieee.org/automaton/robotics/military-
robots/why-you-shouldnt-fear-slaughterbots
Scharre, P. (2018). Army of none: Autonomous weapons and the future of war. WW
Norton & Company.
Schiff, D., Biddle, J., Borenstein, J., & Laas, K. (2020). What’s next for AI ethics,
policy, and governance? A global overview. AIES 2020 - Proceedings
of the AAAI/ACM Conference on AI, Ethics, and Society, 153–158.
https://doi.org/10.1145/3375627.3375804
Schleussner, C.-F., Rogelj, J., Schaeffer, M., Lissner, T., Licker, R., Fischer, E.
M., Knutti, R., Levermann, A., Frieler, K., & Hare, W. (2016).
Science and policy characteristics of the Paris Agreement temperature
goal. Nature Climate Change, 6(9), 827–835.
https://doi.org/10.1038/nclimate3096

Schmitt, M. N. (2012). Autonomous Weapon Systems and International Humanitarian
Law: A Reply to the Critics. SSRN Electronic Journal.
https://doi.org/10.2139/ssrn.2184826
Schwab, K., & Davis, N. (2018). Shaping the future of the fourth industrial revolution.
Currency.
Seymour, J., & Tully, P. (2016). Weaponizing data science for social engineering:
Automated E2E spear phishing on Twitter. Black Hat USA, 37, 1–39.
Sharkey, A. (2019). Autonomous weapons systems, killer robots and human dignity.
Ethics and Information Technology, 21(2), 75–87.
https://doi.org/10.1007/s10676-018-9494-0
Shoham, Y., Perrault, R., Brynjolfsson, E., Clark, J., Manyika, J., Niebles, J. C.,
Lyons, T., Etchemendy, J., Grosz, B., & Bauer, Z. (2018). The AI
Index 2018 annual report. AI Index Steering Committee, Human-
Centered AI Initiative.
Shukla, P. R., Skea, J., Calvo Buendia, E., Masson-Delmotte, V., Pörtner, H. O.,
Roberts, D. C., Zhai, P., Slade, R., Connors, S., & Van Diemen, R.
(2019). IPCC, 2019: Climate Change and Land: an IPCC special report
on climate change, desertification, land degradation, sustainable land
management, food security, and greenhouse gas fluxes in terrestrial
ecosystems.
SIPRI. (2019). SIPRI Military Expenditure Database. Military Expenditure Database.
https://www.sipri.org/databases/milex
Solis, G. D. (2016). The Law of Armed Conflict. Cambridge University Press.
https://doi.org/10.1017/CBO9781316471760
Stobbs, N., Bagaric, M., & Hunter, D. (2017). Can sentencing be enhanced by the
use of artificial intelligence? Criminal Law Journal, 41(5), 261–277.
Surber, R. (2018). Artificial Intelligence: Autonomous Technology (AT), Lethal
Autonomous Weapons Systems (LAWS) and Peace Time Threats.
ICT4Peace Foundation and the Zurich Hub for Ethics and Technology
(ZHET) P, 1, 21. https://ict4peace.org/wp-
content/uploads/2018/06/2018_RSurber_AI-AT-LAWS-Peace-Time-
Threats_final.pdf
The Cambridge Handbook of Intelligence (R. J. Sternberg & S. B. Kaufman, Eds.). (2011). Cambridge University Press. https://doi.org/10.1017/CBO9780511977244
The Federal Government. (2018). Artificial Intelligence Strategy. Nationale Strategie für Künstliche Intelligenz: AI Made in Germany (Issue November).
The Presidential Executive Office. (2018). The President signed Executive Order On
National Goals and Strategic Objectives of the Russian Federation
through to 2024 • President of Russia.
http://en.kremlin.ru/events/president/news/57425
The Varonis Data Lab. (2019). Data Gets Personal: 2019 Global Data Risk Report.

Thurnher, J. S. (2014). Examining Autonomous Weapon Systems from a Law of
Armed Conflict Perspective. In New Technologies and the Law of
Armed Conflict (pp. 213–228). T.M.C. Asser Press.
https://doi.org/10.1007/978-90-6704-933-7_13
Trepte, S., Dienlin, T., & Reinecke, L. (2015). Reforming European Data Protection
Law. In S. Gutwirth, R. Leenes, & P. de Hert (Eds.), Von Der
Gutenberg-Galaxis Zur Google-Galaxis. From the Gutenberg Galaxy to
the Google Galaxy (Vol. 20). Springer Netherlands.
https://doi.org/10.1007/978-94-017-9385-8
Trump, D. J. (2017). National security strategy of the United States of America.
Executive Office of The President Washington DC Washington United
States. https://www.whitehouse.gov/wp-
content/uploads/2017/12/NSS-Final-12-18-2017-0905.pdf
UK Ministry of Defense. (2018). Strategic Trends Programme: Global Strategic
Trends. Out to 2045. Development, Concepts, Doctrine Centre.
US Department of Defense. (2007). Unmanned systems roadmap 2007-2032.
Defence Technical Information Centre.
van der Maas, H., Kan, K.-J., & Borsboom, D. (2014). Intelligence Is What the
Intelligence Test Measures. Seriously. Journal of Intelligence, 2(1), 12–
15. https://doi.org/10.3390/jintelligence2010012
Vardi, M. Y. (2012). Artificial intelligence. Communications of the ACM, 55(1), 5–
5. https://doi.org/10.1145/2063176.2063177
Villani, C., Schoenauer, M., Bonnet, Y., Berthet, C., Cornut, A.-C., Levin, F., &
Rondepierre, B. (2018). For a meaningful Artificial Intelligence:
Towards a French and European strategy. 152.
https://www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-
VF.pdf
Vogel, K. M., Balmer, B., Evans, S. W., Kroener, I., Matsumoto, M., & Rappert,
B. (2017). Knowledge and Security. In U. Felt, R. Fouché, C. A. Miller,
& L. Smith-Doerr (Eds.), The Handbook of Science and Technology
Studies (pp. 973–1002). The MIT Press.
Voigt, P., & von dem Bussche, A. (2017). Rights of Data Subjects. In The EU General
Data Protection Regulation (GDPR): A Practical Guide (pp. 141–187).
Springer International Publishing. https://doi.org/10.1007/978-3-319-
57959-7_5
Wagner, M. (2012). Beyond the Drone Debate: Autonomy in Tomorrow’s
Battlespace. Proceedings of the ASIL Annual Meeting.
https://doi.org/10.5305/procannmeetasil.106.0080
Webster, G., Creemers, R., Triolo, P., & Kania, E. (2017). Full Translation: China’s
‘New Generation Artificial Intelligence Development Plan’(2017).
DigiChina, 1.
Weedon, J., Nuland, W., & Stamos, A. (2017). Information operations and Facebook. Facebook. https://fbnewsroomus.files.wordpress.com/2017/04/facebook-and-information-operations-v1.pdf

Wehrens, R., Bekker, M., & Bal, R. (2014). Hybrid Management Configurations in
Joint Research. Science, Technology, & Human Values, 39(1), 6–41.
https://doi.org/10.1177/0162243913497807
Weiland, S., Weiss, V., & Turnpenny, J. (2013). Science in Policy Making. Nature
and Culture, 8(1), 1–7. https://doi.org/10.3167/nc.2013.080101
West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems.
Woolley, S. C., & Howard, P. (2017). Computational propaganda worldwide:
Executive summary.
Yang, X., Li, Y., & Lyu, S. (2019). Exposing deep fakes using inconsistent head
poses. ICASSP 2019-2019 IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP), 8261–8265.
Yunhe, P., Hequan, W., Lan, X., Zheng, L., Yixin, D., Xinghua, D., Daitian, L., Zhen, Y., & Fangjuan, Y. (2019). China AI Development Report. China Institute for Science and Technology Policy at Tsinghua University. http://www.sppm.tsinghua.edu.cn/eWebEditor/UploadFile/China_AI_development_report_2018.pdf
Zafonte, A. (2018). Weaponized Artificial Intelligence & Stagnation in the CCW: A
North-South Divide. E-International Relations. https://www.e-
ir.info/2018/11/01/weaponized-artificial-intelligence-stagnation-in-
the-ccw-a-north-south-divide/
Zeitzoff, T. (2017). How Social Media Is Changing Conflict. Journal of Conflict
Resolution, 61(9), 1970–1991.
https://doi.org/10.1177/0022002717721392

CURRICULUM VITAE

Name Surname : Onur TÜRK

EDUCATION

• B.Sc.: 2014, Istanbul Technical University, Management Faculty, Management Engineering

PROFESSIONAL EXPERIENCE AND REWARDS

• 2014-2016 Turkish Airlines, Sales & Marketing Expert


• 2016-2017 ROOT, Chief Marketing Officer
• 2017-2019 Bavulier, Co-founder

