1 Introduction
Ongoing debates around artificial intelligence (AI) typically revolve around whether and
how to regulate it. AI brings about significant benefits, but it also poses serious
risks, especially related to privacy, safety, and security. To date, there is no specific
(political) consensus on how to best regulate this technology so that potential risks
of AI are offset by the innovations that it brings about. In this light, while all eyes
are on the legislator, this article pays attention to existing and emerging private rules
that work as a governance framework pending legislative initiatives. The focus is
on technical standardisation. There are a number of reasons that justify this choice.
First, technical standardisation stands as a meaningful type of private regulation.
Second, technical standardisation is highly prolific in the development of AI
technologies. Third, technical standardisation is not politically neutral. Fourth,
technical standardisation has the capacity for regulatory and policy diffusion. More
specifically, the article explores standardisation as an account of private regulation
in artificial intelligence (PRAI) and asks what views are being implemented in the
development of these non-state rules for AI. In so doing, it considers that private
regulation is being turned into a tool for power politics.
The pervasive use of AI has spurred debates about the ethics of AI more
broadly. Most worryingly, a pervasive, uncontrolled and illegitimate use of AI for
criminal justice and surveillance purposes sparks fears of serious human rights
violations (Feldstein 2019). As a result, public opinion is becoming increasingly
1. For example: https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/.
2. Financial Times (December 1, 2019), Chinese tech groups shaping UN facial recognition standards.
3. WIRED (March 13, 2019), China Is Catching Up to the US in AI Research—Fast.
In this article I follow the view that regards regulation as conceptually narrower
than governance (Black 2002). Thus, while regulation is about steering behaviour,
a broader understanding of governance, as opposed to regulation, involves
distributional outcomes (Braithwaite et al. 2007).
Despite their level of technicality, technical standards are not politically neutral
(Büthe & Mattli 2011). Histories of technology, infrastructure, networks and
4. Available at https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.
5. European Commission, White Paper on Artificial Intelligence - A European approach to excellence and trust. COM(2020) 65 final.
6. China Electronics Standardization Institute (CESI), AI Standardization White Paper.
7. See Artificial Intelligence and future directions for ETSI. ETSI White Paper No. 34, 1st edition, June 2020.
In theory, private regulation remains ill-suited for the production of public ‘goods’
(Bergstrom et al. 1986). In practice, however, technical standardisation performed
by established and constitutionalised SDOs is not unambiguously private
regulation. Some SDOs are not even bona fide private bodies, for they are also
composed of governmental actors, with whom they divide the regulatory space
(Cafaggi 2015). The institutional and procedural structure of these bodies has been
crafted following, more generally, (north-)western accounts of democracy and
accountability as a response to long-existing multidimensional accountability and
democratic deficit claims (cf. Cantero Gamito 2020 in telecoms standardisation).
The result is a greater institutional complementarity – that is, the more a
transnational private body reproduces the procedures of publicly-made law, the
more likely it is that the State uses that body to pursue its interests (Büthe & Mattli
2011).
While the increasing use of private regulation has been widely questioned,
international standardisation by institutionalised SDOs, such as the International
Telecommunication Union (ITU) or the International Organization for
Standardization (ISO), has been largely uncontested – this is particularly true where
these organisations have already undergone a process of self-constitutionalisation.
The question that arises is then: who is using SDOs as a governance tool?
Institutional stability is also critical in standardisation. The value of the
standard depends on the platform where it has been produced – ‘if the standard-
setting platform is not stable, then the standard is not stable and therefore the
predictability of the business based on the standard is not secured'.8 Institutional
stability requires collaboration, alignment and synchronisation of a large set of
business actors and technologies (Ali-Vehmas & Casey 2012). At the same time,
the presence of (large) network effects can create lock-in results, pushing for clarity
and structure in standardisation, especially in a context where standardisation
develops simultaneously in multiple parallel processes.
A standard can secure wider adoption not only because of its technical
quality but especially through political support (Kwak et al. 2012).
Standardisation within intergovernmental organisations is more likely to be
successfully diffused. Latecomers to the international standardisation arena, such
as China, will seek participation in already structured SSOs. In addition to increased
participation, newcomers will establish strategic alliances through corporations
for knowledge transfer and political cooperation (ibid.).
Major SDOs provide not only the necessary expertise and institutional
capacity for international cooperation but also the critical infrastructure for
standards diffusion (Büthe & Mattli 2011). Aware of this, the EU has
8. Interview with a former member of a team working on compatibility and industry collaboration, regular participant in SDO meetings. Helsinki, November 2016.
9. Made in China 2025 (see http://english.www.gov.cn/2016special/madeinchina2025/) and China Standards 2035.
10. Several media outlets have recently reported on power dynamics within SDOs. This article collects a set of recent pieces on the topic.
11. 'Standardisation exists because it is valuable to those taking part in it, private actors or the governments'. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021. TSAG acts as an advisory body to the study groups, membership and staff of ITU-T and is responsible for the ITU-T working procedures.
12. See the publicly available draft at https://share.ansi.org/Shared%20Documents/News%20and%20Publications/Links%20Within%20Stories/First_Draft_USSS-2020_For_Comment.pdf.
13. See, for example, the 2019 China Standardisation Development annual report, available at http://www.cnstandards.net/index.php/china-standardization-annual-report-2019/. See also China Electronics Standardization Institute (CESI), "AI Standardization White Paper," 2018, translation by J. Ding.
14. J. Palfrey and U. Gasser, Interop: The Promise and Perils of Highly Interconnected Systems (New York: Basic Books, 2012).
15. China Electronics Standardization Institute (CESI), "AI Standardization White Paper," 2018, translation by J. Ding.
The ITU is a United Nations specialised agency. Created in 1865, the ITU is
responsible for regulating and coordinating telecommunications internationally.
The international coordination of telecommunications required the creation of
widely recognised standards to make interconnection and interoperability possible,
turning the ITU into the oldest international standardisation body. Over time, as
technology has developed, the ITU's scope has expanded to cover ICT more broadly.
This includes the development of standards relevant for AI.
16. The entire list of ongoing projects under ISO/IEC JTC 1/SC 42 can be found at https://www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0.
17. European Commission, Communication - Artificial Intelligence for Europe. COM(2018) 237 final.
18. CEN-CENELEC response to the EC White Paper on AI (version 2020-06); available at https://www.cencenelec.eu/news/policy_opinions/PolicyOpinions/CEN-CLC%20Response%20to%20EC%20White%20Paper%20on%20AI.pdf.
19. See the Problem Statement (ETSI GR SAI 004 V1.1.1 (2020-12)) at https://www.etsi.org/deliver/etsi_gr/SAI/001_099/004/01.01.01_60/gr_SAI004v010101p.pdf.
20. Available at https://standards.ieee.org/industry-connections/ec/autonomous-systems.html.
21. A contribution is defined as the membership input into a SG proposing new work areas, drafting Recommendations or introducing changes to existing Recommendations.
22. Section 2.3.1, Recommendation ITU-T A.1 - Working methods for study groups of the ITU Telecommunication Standardization Sector (09/2019), hereinafter 'Rec. ITU-T A.1 (09/2019)', available at https://www.itu.int/rec/T-REC-A.1-201909-I/en.
23. Ibid., section 2.3.3.3.
24. Ibid., section 2.3.3.4.
25. Ibid., section 2.3.3.3.
26. The different SGs can be found at https://www.itu.int/en/ITU-T/studygroups/2017-2020/Pages/default.aspx. Due to the pandemic, the next World Telecommunication Standardization Assembly (WTSA-20), originally scheduled for November 2020, has been postponed until 2022, so the work of the current SGs continues based on the ITU-T work continuity plan (adopted October 31st, 2020), available at https://www.itu.int/md/S20-CLVC2-C-0003/en.
Work item | Reference | Status | Target | Editors
Deep learning software framework evaluation methodology | F.AI-DLFE | Under study | 2021 | Shuo Liu (NGAI), Yanjun Ma (Baidu), Yuntao Wang (CAICT)
Technical framework for deep neural network model partition and collaborative execution | F.AI-DMPC | Under study | 2022 | Min Liu (ICT), Wei Meng (ZTE), Yuntao Wang (CAICT), Yuwei Wang (ICT)
Framework for audio structuralizing based on deep neural network | F.AI-FASD | Under study | 2022 | Xiaofei Dong (CAICT), Sun Li (CAICT), Qing Liu (China Telecom), Ranran Zeng (China Telecom)
Technical requirements and evaluation methods of intelligent levels of intelligent customer service system | F.AI-ILICSS | Under study | 2021 | Xiaofei Dong (CAICT), Jiaxuan Hu (Tencent), Pin Wang (Tencent), Xueqiang Zhang (NGAI)
Requirements for the construction of multimedia knowledge graph database structure based on artificial intelligence | F.AI-MKGDS | Under study | 2021 | Lin Shi (RITT), Mingjun Sun (CATR), Yuntao Wang (CAICT), Mingzhi Zheng (NGAI)
Technical framework for shared machine learning system | F.AI-MLTF | Under study | 2021 | Xiongwei Jia (China Unicom), Kepeng Li (Alibaba), Xinyang Piao (Alibaba)
27. Information available at http://www.itu.int/net4/ITU-T/lists/mgmt.aspx?Group=16.
28. Rec. ITU-T A.1 (09/2019), section 1.3.1.
29. Ibid., section 2.1.1.
30. Information available at http://www.itu.int/net4/ITU-T/lists/loqr.aspx?Group=16&Period=16.
Requirements for smart factory based on artificial intelligence | F.AI-SF | Under study | 2022 | Jie Li (China Telecom), Zhen Yang (QQ)
Requirements for smart speaker based intelligent multimedia communication system | F.IMCS | Under study | 2021 | Baoping Cheng (China Mobile), Jun Lei (China Mobile)
Requirements and evaluation methods for AI-based optical character recognition service | F.REAIOCR | Under study | 2021 | Xiaofei Dong (CAICT), Shuo Liu (CAICT), Shuhuan Mei (NGAI)
Requirements for smart broadband network gateway in multimedia content transmission | F.SBNG | Under study | 2022 | Guanyi Jia (China Telecom), Li Jiacong (China Telecom), Bo Lei (China Telecom)
Requirements for smart class based on artificial intelligence | F.SCAI | Under study | 2021 | Tengfei Liu (China Unicom), Yongsheng Liu (China Unicom), Liang Wang (ZTE), Yuntao Wang (CAICT)
As observed in the table above, 100% of the editors are representatives of Chinese
organisations. Although leading positions such as SG chairman, rapporteur or editor
do not in themselves evidence any particular influence on the development of the
standard,32 the significant participation of Chinese delegations in AI standardisation
is consistent with China's strategy and growing engagement in international
standardisation. While in the study periods 1997-2000 and 2001-2004 China did not
have any representative serving as chair, rapporteur or editor in SG16, in the current
study period (2017-2020) there are Chinese representatives serving as rapporteur (5),
editor (more than 150)33 and chair of the said SG.34
Following this trend, it is also worth noting that Q5/16 explicitly
mentions, as other bodies with a relationship to this Question, the work of ISO
(international), IEC (international), ETSI (Europe), the Artificial Intelligence Industry
Alliance (China), and the China Communications Standards Association.35
Moreover, in preparation for the upcoming WTSA-20, China has submitted the
largest number of candidates (12) for the positions of Chairman and Vice-Chairman
of those Groups that will have reached their term limit by WTSA-20, including SG16. This
31. Data retrieved from https://www.itu.int/ITU-T/workprog/wp_search.aspx?sg=16&q=5.
32. Rec. ITU-T A.1 (09/2019), see section 2.3.1.
33. It is important to note that editors do not serve during the entire study period.
34. Information on the composition of the different study periods is available at https://www.itu.int/en/ITU-T/studygroups/2017-2020/Pages/default.aspx.
35. See https://www.itu.int/en/ITU-T/studygroups/2017-2020/16/Pages/q5.aspx.
36. The full list of candidates can be found at https://www.itu.int/en/ITU-T/wtsa20/candidates/Pages/ms.aspx.
37. See section 2.1, Recommendation ITU-T A.7 - Focus groups: Establishment and working procedures, available at https://www.itu.int/rec/T-REC-A.7-201610-I/en.
38. IMT stands for International Mobile Telecommunications.
39. ITU-T Y.3172 (06/2019) - Architectural framework for machine learning in future networks including IMT-2020.
40. ITU-T Y.3174 (02/2020) - Framework for data handling to enable machine learning in future networks including IMT-2020.
41. A representative of China Mobile serves as Chairman of WG2: Data formats & ML technologies, and a representative of ZTE as Chairman of WG3: ML-aware network architecture. One of the Vice-chairs of FG ML5G is from CAICT.
42. See https://www.itu.int/en/ITU-T/focusgroups/ai4ee/Pages/default.aspx.
43. Ibid.
4.3 'Why not to have the standard written the way you like it?'50
44. Interview with one of the FG-AI4H members on December 1st, 2020 (videoconference).
45. See https://www.itu.int/en/ITU-T/focusgroups/ai4ad/Pages/default.aspx.
46. Ibid.
47. ITU/WHO Focus Group on Artificial Intelligence for Health, White Paper. Available at https://www.itu.int/en/ITU-T/focusgroups/ai4h/Documents/FG-AI4H_Whitepaper.pdf.
48. Ibid.
49. See https://www.itu.int/en/ITU-T/focusgroups/an/Pages/default.aspx.
50. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021.
51. Section 2.3, Rec. ITU-T A.1 (09/2019).
52. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021.
53. Ibid.
54. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
55. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
56. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
57. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
58. Interview with a former member of a team working on compatibility and industry collaboration, regular participant in SDO meetings. Helsinki, November 2016.
59. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
60. Data available at https://www.itu.int/online/mm/scripts/gensel11.
61. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
62. See, for example, the 2019 China Standardisation Development annual report, available at http://www.cnstandards.net/index.php/china-standardization-annual-report-2019/. See also China Electronics Standardization Institute (CESI), "AI Standardization White Paper," 2018, translation by J. Ding, https://docs.google.com/document/d/1VqzyN2KINmKmY7mGke_KR77o1XQriwKGsuj9dO4MTDo/.
63. Financial Times (March 23, 2020), Inside China's controversial mission to reinvent the internet.
This article is motivated by the question of how standardisation is used as a tool for
power politics. In particular, assuming that standardisation works as a governance
framework for AI, what are the lessons to be drawn from China’s increased
participation in AI standardisation? With the help of the descriptive data displayed
above, this section draws some theoretical conclusions. From an institutional
analysis perspective, the results are revealing: standardisation is responding to
contemporary events on the geopolitical chessboard. The following observations are
worth considering in the definition of a governance model for AI.
64. See the different initiatives at https://futureoflife.org/ai-policy-china/.
65. Plan for the Development of New Generation Artificial Intelligence (Guo Fa [2017] No. 35).
66. The New York Times (April 14, 2019), One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority.
67. In addition to its increasing representation within the ITU, China has also secured managing positions at the joint initiative ISO/IEC SC 42. The current chairperson of ISO/IEC SC 42 is Mr Wael William Diab, a Senior Director at Huawei.
68. For further information see https://www.beltroad-initiative.com/belt-and-road/.
69. Available at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/.
The use of artificial intelligence is pervasive, and its possibilities remain unknown.
Yet the way in which AI develops and is incorporated into society will depend
on its regulatory treatment. To date, the legislator has adopted a cautious approach
to the regulation of AI: only very specific rules have been put in place, while
specific use cases are being considered in order to put forward more concrete
regulatory initiatives. This reflects a narrative of AI exceptionalism, where the gap
left by the legislator is filled with technical and ethical overlays produced by non-state actors.
This article has shown how instances of political and economic power are
taking positions in shaping the direction of AI governance, particularly by using
traditional instances of non-state and private regulation such as international
standardisation. SDOs function as extensive governance structures beyond the state.
However, the absence of the State from the private regulation narrative is a misleading
perspective (Michaels 2010). As the article has shown, existing initiatives for AI
technical and ethical standardisation are strongly influenced by (geo)politics.
This story presents parallels to internet governance. The purpose of early
internet governance was to preserve a global, open and unregulated internet.
Nowadays, that project is progressively failing in favour of a fragmented
regulatory space. The eventual balkanization of the internet would have
implications for AI governance since the protocols underpinning the internet are
critical in the definition of the capabilities of AI. Western countries are not
sufficiently following the development of internet standards within the ITU, which
might ultimately lead to the end of an open and global internet (Hoffmann et al.
2020).
While this wait-and-see approach offers significant parallels to internet
governance (Black & Murray 2019), the inability to govern current technologies
such as AI-powered surveillance adequately and in a timely manner will pose significant liabilities
70. Available at https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.
71. Personal Information Security Specification (2018).
72. See Next Generation Artificial Intelligence Development Plan (AIDP), issued by the State Council (2017), http://fi.china-embassy.org/eng/kxjs/P020171025789108009001.pdf.
Power dynamics are also visible in the development of ethical standards. To date,
the majority of documents containing ethical standards for AI come from non-State
actors such as civil society, industry and inter-governmental institutions.
Besides these seemingly nonpartisan initiatives, ethical principles are also being
produced by big tech companies, such as Microsoft's Responsible AI,73
Google's controversial AI Principles,74 and Tencent's Ethical Framework for AI.75
73. Available at https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6.
74. Available at https://ai.google/principles.
75. Available at https://www.tisi.org/13747.
76. Available at https://futureoflife.org/ai-principles/?cn-reloaded=1.
77. Available at https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf.
78. Available at https://www.oecd.org/going-digital/ai/principles/.
79. Although these initiatives are government-driven, contributors to their content include civil society, industry players and stakeholders; e.g. the China national AI standardization group and a national AI expert advisory group.
80. Available at https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF.
81. Available at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
82. See https://www.loc.gov/law/foreign-news/article/china-ai-governance-principles-released/.
83. Available at https://www.baai.ac.cn/news/beijing-ai-principles-en.html.
84. MIT Technology Review, Why does Beijing suddenly care about AI ethics? May 31, 2019.
85. Project Syndicate, The False Promise of "Ethical AI". April 24, 2020.
86. Der Tagesspiegel, Ethics washing made in Europe. April 8, 2019.
6 Conclusion
References
Abbott, Kenneth W., & Benjamin Faude (2020) “Choosing Low-Cost Institutions
in Global Governance,” International Theory 1–30.
AI Now (2019) AI Now 2019 Report.
Ali-Vehmas, Timo, & Thomas R. Casey (2012) “Evolution of Wireless Access
Provisioning: A Systems Thinking Approach,” 13 Competition and
Regulation in Network Industries 333–61.
Arenal, Alberto, et al. (2020) “Innovation Ecosystems Theory Revisited: The Case
of Artificial Intelligence in China,” 44 Telecommunications Policy 101960.
Baron, David P. (2003) “Private Politics,” Journal of Economics and Management
Strategy.
Bartley, Tim (2011) “Certification as a Mode of Social Regulation,” Handbook on
the Politics of Regulation. Edward Elgar Publishing Ltd.
Benkler, Yochai (2019) "Don't Let Industry Write the Rules for AI," 569 Nature
161.
Bennett, Andrew, & Jeffrey T Checkel (2014) Process Tracing From Philosophical