
From private regulation to power politics: the rise of China in AI private governance through standardisation

Marta Cantero Gamito

This article explores different accounts of private regulation in Artificial Intelligence (PRAI) and asks whose views are being implemented in the development of non-state rules for AI. This question is explored through a normative analysis of the political economy of technology and ethical standardisation. The research characterises the distributional outcomes of private regulation, showing how private regulation is currently shaping AI governance. The article argues that the current AI governance framework is built not only on its technical and ethical layers but also, and perhaps most importantly, on the institutional and procedural architecture of international standardisation. Drawing on empirical research, the article finds an increasing role of China in private governance and suggests that the incorporation of ethical discussions into standard-setting would be a first building block in the formation of forthcoming AI governance in an imminently hyperconnected world.

Keywords AI governance · standardisation · private governance · tech policy · geopolitics · China · ITU

1 Introduction

Ongoing debates around artificial intelligence (AI) typically revolve around whether and how to regulate it. AI brings about significant benefits, but it also poses serious risks, especially related to privacy, safety, and security. To date, there is no (political) consensus on how best to regulate this technology so that the potential risks of AI are offset by the innovations it brings about. In this light, while all eyes
are on the legislator, this article pays attention to existing and emerging private rules
that work as a governance framework pending legislative initiatives. The focus is
on technical standardisation. There are a number of reasons that justify this choice.
First, technical standardisation stands as a meaningful type of private regulation.
Second, technical standardisation is highly prolific in the development of AI
technologies. Third, technical standardisation is not politically neutral. Fourth,
technical standardisation has the capacity for regulatory and policy diffusion. More
specifically, the article explores standardisation as an account of private regulation
in artificial intelligence (PRAI) and asks what views are being implemented in the
development of these non-state rules for AI. In so doing, it considers that private regulation is being turned into a tool for power politics.
The pervasive use of AI has spurred debates about the ethics of AI more
broadly. Most worryingly, a pervasive, uncontrolled and illegitimate use of AI for
criminal justice and surveillance purposes sparks fears of serious human rights
violations (Feldstein 2019). As a result, public opinion is becoming increasingly sensitive to the use and effects of AI (Zhang & Dafoe 2019). Industry is also calling
for public regulation.1 In the meantime, an expanding catalogue of non-legislative
and private initiatives supporting the development and use of AI technologies is
found across sectors (manufacturing, healthcare, financial services, etc.) and actors.
Existing regulatory efforts include the participation of a multifaceted set of actors, from national and supranational governments, to private companies like Alphabet or Microsoft, to the European Group on Ethics set up by the European Commission, to NGO initiatives for AI governance. These initiatives to ‘regulate’ AI grow in parallel to the practices of multinational companies and their business models. In this regard, and with public opinion divided about AI, its governance is certainly a challenge for public regulators, who often do not have the capacity to regulate highly technical fields.
Private regulation is often the regulatory response to the lack of expertise of
the traditional lawmaker (Haufler 2001). However, the delegation of rulemaking
power to private actors is controversial and contested, especially if private
regulators acquire regulatory authority. The problem lies precisely in the level of
state capacity for monitoring and overseeing delegated powers. It has been argued
that the required level of governmental control over rulemaking delegation can only
be assumed by advanced industrialised democracies (Büthe 2010a). Such control is
often missing in developing economies, accentuating global power imbalances.
Furthermore, a wider political debate is taking place concerning national
security concerns surrounding the involvement of China and Chinese companies in
the development of standards for facial recognition technologies.2 Worryingly,
while more regional and market-driven standardisation has displaced the interest of
more global and government-driven standard-setters such as the International
Telecommunications Union (ITU), China is increasing efforts to strengthen its
position within the latter (Hoffmann et al. 2020; Kwak et al. 2012). Most notably,
standards developed within the ITU are commonly adopted as policy by developing
nations in Africa, the Middle East and Asia. As a result, the capacity of international
standardisation for regulatory and policy diffusion requires a re-examination of its
existing procedural principles.
The purpose of this article is to isolate existing market power dynamics in
the private regulation of AI. Based on empirical research consisting of interviews
and socio-legal research in international standardisation within the ITU, the
research finds that the rise of Eastern economies, predominantly China, is impacting
the politics of private regulation, particularly technical standardisation.3 The article
poses questions of geopolitical governance in the context of private regulation: what
views are being implemented in the development of international standards for AI?
In its most ambitious aim, this paper outlines an agenda for the study of the politics
of private regulation. The article finds that existing ethical frameworks are detached
from what is happening in standardisation and argues that the incorporation of
ethical discussions into standard-setting can provide a workable interim framework,
while constituting the necessary foundation for the future development of AI
governance.

1 For example: https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-public-regulation-and-corporate-responsibility/.
2 Financial Times (December 1, 2019), Chinese tech groups shaping UN facial recognition standards.
3 WIRED (March 13, 2019), China Is Catching Up to the US in AI Research—Fast.



2 Private regulation as a governance framework for AI (literature review)

In this article I follow the view that regards regulation as conceptually narrower than governance (Black 2002). Thus, while regulation is about steering behaviour, governance, more broadly understood, also involves distributional outcomes (Braithwaite et al. 2007).

2.1 Private regulation as governance

Accounts of private regulation abound in the current international political economy. Private regulation features the mechanisms through which private parties produce a social order that enables their own functioning. As such, instances of private regulation are often characterised by their capacity to establish a framework regime based on mechanisms of self-governance, self-regulation and private enforcement, organising entire regulatory regimes across multiple sectors (Bernstein 1992, 2001).
At the same time, the concept of private regulation is closely related to that
of private politics, i.e. the capacity of actors to influence the market without relying
on institutions such as the lawmaker or the courts (Baron 2003). There is no
separation of the political (public) and non-political (private). Private standards can
also allow or prohibit certain practices that put public interests at risk, performing
(public) regulatory functions where the power of the state is limited (Cafaggi 2015).
There are different reasons why the development of private regulation by non-state bodies is preferred over publicly made law, especially at the transnational level, where the limitations of public regulation are more noticeable.
Private regulation usually operates either to fill the gap in areas left unattended by
public regulation or upon delegation from the legislator. This is particularly evident
in the ICT sector, characterised by the borderless nature of its markets, fast-
changing dynamics and the level of expertise required to regulate highly technical
areas. Another distinctive feature of private regulation is its capacity to provide
incentives for private parties to create private norms or standards. For example, one
of the most appealing incentives of technical standardisation is arguably the capacity of standards to create and control markets.
Particularly in highly technical fields, private regulation is often the
regulatory response to the lack of expertise of the traditional lawmaker. However,
the involvement of experts in public decision-making and the delegation of
rulemaking power to private and non-state actors is often controversial and
contested. The legitimacy of private regulation as a regulatory technique has been
traditionally considered problematic for two reasons. First, reliance on private
players raises classic representation and democratic accountability difficulties. The
reluctance towards privately created rules comes in the form of an asserted lack of
political discourse and of democratic credentials, where the resulting rules better
respond to the market rather than to societal needs. Seen this way, the public
political debate, which is expected in the making of public legislation, is
marginalised, and in many instances the scrutiny of public regulators over private
standard-setters is missing. Some have signalled the insufficiencies of proceduralisation for overcoming legitimacy concerns (Curtin & Senden 2011; Scott et al. 2011), even where attempts were made to offset ‘relative’ input legitimacy deficits through the introduction of broader social interests (Werle & Eversen 2006). Secondly,
private regulation carries significant contributions to the realisation of regulatory
and policy goals, standing as a manifestation of ‘regulatory intermediation’ (Abbott et al. 2017). The question arises as to whether private actors are suitable to
legitimately achieve regulatory goals. Delegation to non-State actors is considered
ill-suited in areas where private cooperation is not sufficient to minimize
externalities (cf. Abbott & Faude 2020).
As a response to these perceived problems, elements of public law making
(constitutionalisation, politics and institutions) have been incorporated into private
regulation. This proceduralisation of private regulation is often referred to as (self-)constitutionalisation, under which the Global Administrative Law project is also grouped here. The constitutionalisation of private regulation provides a meta-regime for the observation of procedural norms (Cafaggi 2016). Here, proceduralisation also refers to the incorporation of political debate and institutions into private regulation with a view to overcoming the alleged inadequacy of private regulation for achieving regulatory purposes.
Private regulation uses a system built on self-governance (procedures), self-regulation (communities) and private enforcement (markets) that incorporates elements of constitutionalisation, politics and institutions that are more representative of a national legal system, constituted by ‘societies of individuals’, than of global regimes, devised for a (fragmented) ‘society of organisations’ and ‘society of networks’, in Ladeur’s terms (2011). In fact, some of the instruments and incentives used, such as reputation, peer pressure and contracts, are not traditional instruments of public regulation. However, as shown, the incorporation of certain regulatory goals into the private regulatory enterprise is built on a political foundation. Such a regulatory function changes the character of private regulation from a self-regulatory, and to some extent informal, scheme to a procedurally and substantively legitimated structure (Cafaggi 2015). As a consequence, the resulting constitutionalisation has been significantly useful for validating sensitive public choices made by private regulators (Vallejo 2020).
The rise of proceduralisation is problematic for the purposes of scrutinising instances suffering from a legitimacy deficit. The shift from internal and informal private regulation to proceduralised standardisation is no longer restricted to the epistemic community concerned; it also spills over onto societal interests beyond the professional field. However, state-centred conceptions of constitutionalism are not suitable for private regulation, as they cannot provide sufficient normative compatibility between public- and private-made law (De Burca 2008; Scott et al. 2011). It is within this systemic crack that AI governance is occurring.
The claim to be made here is that not only are technical standards and ethical codes relevant for AI governance; it is especially the juridification and constitutionalisation of private rulemaking that reinforces the normativity and legitimacy of private rules, increasing their capacity as a governance tool. Private rules amplify their normativity where they have been produced or endorsed by institutionalised organisations. Thus, the proceduralisation of private rulemaking becomes key in the evolution of the governance of AI.

2.2 The politics of PRAI

2.2.1 Political in the shape of technical

Despite their level of technicality, technical standards are not politically neutral
(Büthe & Mattli 2011). Histories of technology, infrastructure, networks and devices contain many examples of technical arrangements that embody explicit or implicit economic, social, legal and political purposes (Winner 1980).
The development of technology is contingent on the compatibility of
standards. From a technical point of view, technical standardisation facilitates systems and network compatibility, enabling interoperability. Economically, wider interoperability translates into economies of scale and market competition, providing incentives for international cooperation and coordination in standards development. Commercially, the adoption of recognised standards implies access to certain markets and presupposes an additional and certified commitment to certain procedural guarantees, namely those of the specific body where a particular standard has been produced. Consequently, standardisation requires dialogue among the interested parties. Such dialogue is facilitated within structured standard-developing organisations (SDOs), which function as a proxy for channelling different views and reaching consensus on publicly and politically sensitive issues (Mattli 2001). As a result, the interactions of private actors can also be highly political (Büthe 2010a).
Certainly, the process of technical standardisation is critical in the evolution
of AI governance (Cihon 2019). Aware of the role of standards in shaping global
markets, national governments are increasingly engaging in international
standardisation. The significance of international fora of technical standardisation
in AI governance is underlined in different standardisation strategies by major
economies, which make standardisation a priority; for instance, the US National Artificial Intelligence Research and Development Strategic Plan,4 the EU Commission White Paper on AI,5 or the Chinese AI Standardisation White Paper.6
The relevance of technical standards in AI governance is reflected in their use as political means. Politically, the development of leading global standards can be used to support particular regulatory and normative understandings, with significant impacts, for instance, on human rights and certain policies (DeNardis 2014).
The contribution of technical standardisation to AI governance can be
perceived in the following: First, technical standards define the capacities and
possibilities of AI development in terms of technical feasibility. The idea that architecture has the capacity to constrain behaviour is central to Lessig’s seminal account of Code as Law (Lessig 1999). Under this perspective, technical
standards could incorporate regulatory decisions by restricting or steering a
particular behaviour. Seen this way, technical standardisation has the capacity to
provide a framework for public order. In the development of 5G, for instance, AI
can optimise 5G capabilities by using different functionalities made possible or
enhanced because of AI.7 Secondly, technical standards also define the acceptable
possibilities for a particular controversial use of AI, such as automated decision-
making (ADM), due to the risk posed by inherently discriminatory bias (Friedman
& Nissenbaum 1996). And third, the institutionalisation and proceduralisation of
standard setting can play a constitutive role in shaping the governance of the sector
concerned. Existing efforts to translate private rules into widely actionable practices
do not end with establishing the necessary technical and ethical standards on which

4 Available at https://www.nitrd.gov/pubs/National-AI-RD-Strategy-2019.pdf.
5 European Commission, White Paper on Artificial Intelligence - A European approach to excellence and trust. COM(2020) 65 final.
6 China Electronics Standardization Institute (CESI). AI Standardization White Paper.
7 See Artificial Intelligence and future directions for ETSI. ETSI White Paper No. 34. 1st edition – June 2020.



AI ought to operate. As a result, together with standardisation efforts, institutionalised private regulators, such as SDOs, enjoy the capacity to channel market and regulatory power. While coordinated purchasing decisions, seemingly “technical”, can restrict market access, self-regulatory mechanisms such as certification and monitoring function as a proxy for amplifying the governance capacity of non-State actors (Ronit & Schneider 1999).

2.2.2 Political in the shape of procedural

In theory, private regulation remains ill-suited for the production of public ‘goods’
(Bergstrom et al. 1986). In practice, however, technical standardisation performed
by established and constitutionalised SDOs is not unambiguously private
regulation. Some SDOs are not even bona fide private bodies, for they are also
composed of governmental actors, with whom they divide the regulatory space (Cafaggi 2015). The institutional and procedural structure of these bodies has been crafted following, more generally, (north-)western accounts of democracy and accountability, as a response to long-standing multidimensional accountability and democratic deficit claims (cf. Cantero Gamito 2020 on telecoms standardisation).
The result is a greater institutional complementarity – that is, the more a
transnational private body reproduces the procedures of publicly-made law, the
more likely it is that the State uses that body to pursue its interests (Büthe & Mattli
2011).
While the increasing use of private regulation has been widely questioned,
international standardisation by institutionalised SDOs, such as the International Telecommunication Union (ITU) or the International Organization for Standardization (ISO), has been largely uncontested – this is particularly true where
these organisations have already undergone a process of self-constitutionalisation.
The question that arises is then: who is using SDOs as a governance tool?
Institutional stability is also critical in standardisation. The value of the
standard depends on the platform where it has been produced – ‘if the standard-setting platform is not stable, then the standard is not stable and therefore the predictability of the business based on the standard is not secured’.8 Institutional
stability requires collaboration, alignment and synchronisation of a large set of
business actors and technologies (Ali-Vehmas & Casey 2012). At the same time, the presence of (large) network effects can create lock-in, pushing for clarity and structure in standardisation, especially in a context where standardisation develops simultaneously in multiple parallel processes.
A standard can secure wider adoption not only because of its technical quality but especially through political support (Kwak et al. 2012). Standardisation within intergovernmental organisations is more likely to be successfully diffused. Latecomers to the international standardisation arena, such as China, will seek participation in already structured SDOs. In addition to increased participation, newcomers will establish strategic alliances with corporations for knowledge transfer and political cooperation (ibid.).
Major SDOs provide not only the necessary expertise and institutional
capacity for international cooperation but also the critical infrastructure for
standards diffusion (Büthe & Mattli 2011). Aware of this, the EU has
8 Interview with a former member of a team working on compatibility and industry collaboration, a regular participant in SDO meetings. Helsinki, November 2016.



instrumentalised its position in international SDOs with a view to exporting EU values beyond the internal market (Cantero Gamito 2018; Cantero Gamito & Micklitz 2020a). In this light, China is also taking positions to lead the race, with clear goals to be achieved by 2025.9 Influencing standardisation processes is one of the steps towards meeting such objectives (Li 2018; Wübbeke et al. 2016). From this perspective,
there are studies raising important questions about how private regulation, and
particularly international standardisation, can work as a tool for AI governance (e.g.
Cihon 2019). However, Cihon does not consider the role of the SDOs’ structures
and procedures in shaping not only AI standards but also AI policy and regulation
in a way that restructures power relations. Others have, however, examined the role of the ITU in shaping future internet policy (Hoffmann et al. 2020). This paper considers the political and regulatory impact and influence of AI standards developed within SDOs. The purpose here is to engage with critical questions about the political economy of private regulation in general and international standardisation in particular. In doing so, it is argued that the institutionalisation and constitutionalisation of private regulation carry significant consequences for the distribution of regulatory and political power. As a result, what matters is not only the standards themselves, but particularly the social, economic and political system in which they are embedded; in other words, who is setting the standards for AI? Identifying the key actors in private regulation better describes the effects of private regulation on the global economy.

3 Method: Tracing the process of standardisation

This article is built on a hypothesised causal mechanism: participation in standardisation is instrumentalised to produce a particular governance framework for AI. Isolating the power dynamics behind international AI standardisation
requires the exploration of the context and the institutional structure in which
decision-making takes place. Consequently, the article primarily applies a process
tracing method to standardisation. Process tracing is a well-suited method for
testing hypothesised causal mechanisms (Bennett & Checkel 2014) not only related
to individuals but also to make inferences on structural explanations (George &
Bennett 2005 at 142ff). More specifically, process tracing is a useful tool for
inferring how a process produced the outcome of interest (Bennett & Checkel 2014
at 6). This article is an attempt to explain the relevant steps that contribute to causing
the outcome (ibid. at 8). Accordingly, testing this article’s hypothesis requires answering the following questions. First, who controls existing instances of international standardisation? Second, is there a perceived trajectory of change in the historical configuration and power positions? And third, is there causation between current power/control and recent and present outcomes?
It is important to note that the dynamics inside the mechanisms are not always observable, contrary to what Hedström & Ylikoski (2010) argue. Yet it is still possible to make inferences from the observable implications emerging from the mechanism under examination (Bennett & Checkel 2014 at 12). As a result, in conducting this research, the intervening variables have been consciously selected in light of the hypothesis formulated and the questions to be answered. The question of who currently controls standardisation is answered through the observation of the composition of SDOs’ structures as a variable. The same is true

9 Made in China 2025 (see http://english.www.gov.cn/2016special/madeinchina2025/) and China Standards 2035.



for the second question above. The third question, however, concerning the causation between observations and outcomes, relies rather on diagnostic evidence, in order to avoid a potentially infinite regress in searching for the explanations of the event. This evidence is further supported by recent studies on the topic and contemporary journalistic analyses.10
Reliance on diagnostic evidence could reflect a shortcoming in the methodological design of this research. This potential deficiency has been addressed by complementing process tracing with further empirical research based on interviews with experts from SDOs and regular participants in international standardisation. Some of these interviews were conducted in 2016 in preparation for previous research, and they have been complemented by more recent conversations conducted between November 2020 and January 2021. These were open-ended and semi-structured interviews. The purpose of conducting open-ended interviews was to let the interviews unfold so as to obtain a comprehensive understanding of the analysed institutions, their nature, their functioning and their interactions with each other.
This article examines the internal structure and composition of the different groups and committees where standards are proposed, discussed and agreed. Such examination simply yields descriptive data. However, an explanatory analysis is sought to explain why, although standards are developed by consensus, there is a recent trend towards increasing presence in the governing bodies of standardisation committees.
The article is built on the assumption that standardisation contributes not
only to regulatory export, but also to the dissemination of certain values and
political views by embedding these into the standards (Cantero Gamito 2018;
Cantero Gamito & Micklitz 2020b). Accordingly, this research assumes the legal
and normative acceptability of public policy delivered through non-legal
instruments.
Although there are hundreds of organisations working on technical standard-setting (ref.), the research focuses primarily on the ITU due to its inter-governmental nature. It is important to note that, while the participation of western countries in ITU’s delegations is decreasing (Hoffmann et al. 2020; Lazanski 2019) in favour of more market-based SDOs such as, for instance, ETSI (Ali-Vehmas & Casey 2012; Cantero Gamito 2018), the ITU is currently being advertised as the ‘technology agency’ (Hoffmann et al. 2020).

4 Standardisation as a governance framework for AI (data & results)

Participation in international standardisation is appealing not only for the capacity of standards to reduce barriers to trade and provide access to new markets, but also for the possibility of influencing the development of a preferred technology (Büthe 2010b).11 In this light, strategic participation in standardisation activities plays an important role in defining national priorities and standardisation
strategies. For example, the American National Standards Institute (ANSI) has
recently updated the United States Standards Strategy (USSS) where it underlines
the importance of strengthening representation at international SDOs such as ISO,

10 Several media outlets have lately been reporting on power dynamics within SDOs. This article collects a set of recent pieces on the topic.
11 ‘Standardisation exists because it is valuable to those taking part in it, private actors or the governments’. Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021. TSAG acts as an advisory body to the study groups, membership and staff of ITU-T and it is responsible for the ITU-T working procedures.



IEC and ITU while acknowledging the relevance of standards for enhancing the
competitiveness of the US industry.12 On the other hand, China Standards 2035
promotes a blueprint for the development of technology standards for critical
infrastructure, including 5G, big data, cloud computing, internet-of-things, and
AI.13 Whether and how these plans materialise can be observed by examining salient changes in the trajectory and composition of international standardisation.

4.1 AI standardisation’s landscape – an overview

Technical standards codify the requirements of technical systems to enable interoperability and interconnection of networks and systems.14 The foundational work of AI standardisation requires a conceptual layer for problem definition and identification (performed in the form of ontologies) in the development and use of AI. This work is carried out by international SDOs. Technical standards are thus critical for providing the technology upon which AI is built. In a simplified manner, a set of technical standards is required for AI to operate: foundation standards are needed to provide the architecture upon which AI is built, including reference architecture, terminology, data, etc.; platform standards provide the specification for underlying platforms and systems, covering big data, cloud computing, interconnection, and anything else related to AI platforms; key technology standards concern specific AI technologies such as machine learning, biometric identification, natural language processing, etc.; product and services standards refer to standards for intelligent robots and other smart devices; and application standards include those standards incorporated in different sectors, such as smart homes, smart finance, medical care, etc.15
The current structure for the international standardisation of AI is composed of the work of different SDOs as well as industry consortia, forums and even individual companies. Relevant SDOs working on ICT-related topics, including AI, are the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), the Institute of Electrical and Electronics Engineers (IEEE), the International Telecommunication Union (ITU), the Internet Engineering Task Force (IETF), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI). These organisations, which ordinarily develop ICT standards, are currently dedicating specific work areas to the standardisation of AI-related topics. The ISO and IEC Joint Technical Committee (JTC 1) has created a special Subcommittee 42 on Artificial Intelligence (referred to as ‘SC 42’). This dedicated committee was created to provide guidance to ISO and IEC committees developing AI applications. ISO/IEC SC 42 acts as the proponent for the standardisation programme on AI within ISO and IEC. SC 42 is currently working on developing conceptual standards (ISO/IEC CD 22989 - Artificial intelligence: Concepts and
12 See the publicly available draft at https://share.ansi.org/Shared%20Documents/News%20and%20Publications/Links%20Within%20Stories/First_Draft_USSS-2020_For_Comment.pdf.
13 See, for example, the 2019 China Standardisation Development annual report; available at http://www.cnstandards.net/index.php/china-standardization-annual-report-2019/. See also China Electronics Standardization Institute (CESI). “AI Standardization White Paper,” 2018, Translation by J. Ding.
14 J Palfrey and U Gasser, Interop: The promise and perils of highly interconnected systems. (New York: Basic Books, 2012).
15 China Electronics Standardization Institute (CESI). “AI Standardization White Paper,” 2018, Translation by J. Ding.



terminology, and ISO/IEC CD 23053 - Framework for AI and Machine Learning), risk management in AI (ISO/IEC CD 23894), identifying bias in AI systems and AI-aided decision making (ISO/IEC AWI TR 24027) and collecting and referencing AI use cases (ISO/IEC CD TR 24030), among others.16
The European Committee for Standardization (CEN) and CENELEC have created a Focus Group on Artificial Intelligence following the need for standardisation to address the problems identified by the European Commission related to the deployment, interoperability, scalability, safety and liability of AI.17 One of the main objectives of this joint group is to identify which technical committees within CEN and CENELEC will be impacted by AI and Big Data. At the same time, the group will serve as a focal point for them and for the European Commission in the identification of specific standardisation needs for AI. The group will also strengthen European participation in the ISO and IEC’s technical committees working on AI. This regional participation within more international accounts of standardisation is critical to the incorporation of particular views and values into the development of technical standards, especially where these are foundational to the technology concerned. For instance, the CEN-CENELEC Focus Group has recommended to the European Commission the use of the conceptual and framework standards developed by ISO/IEC JTC 1/SC 42 as a reference for the definition of AI in upcoming legislation.18 ETSI has also recently launched the Industry Specification Group on Securing Artificial Intelligence (ISG SAI) with the aim of developing standards for preserving security in AI environments.19
Not only the conceptual ground of AI is shaped by technical standards, but also the ethical one. For example, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (IEEE A-IS) has the mission to ‘ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritise ethical considerations so that these technologies are advanced for the benefit of humanity’.20 The IEEE P7000 series of standards projects addresses specific issues at the intersection of technological and ethical considerations for AI. The CEN-CENELEC Focus Group on Artificial Intelligence will also be responsible for liaising with the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG).

4.2 AI standardisation work within the ITU

The ITU is a United Nations specialised agency. Created in 1865, the ITU is responsible for regulating and coordinating telecommunications internationally. The international coordination of telecommunications required the creation of widely recognised standards to make interconnection and interoperability possible, turning the ITU into the oldest international standardisation body. Over time, as technology has developed, the ITU’s scope has expanded to cover ICT more broadly. This includes the development of standards relevant to AI.

16 The entire list of ongoing projects under ISO/IEC JTC 1/SC 42 can be found at https://www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0.
17 European Commission, Communication - Artificial Intelligence for Europe. COM(2018)237 final.
18 CEN-CENELEC response to the EC White Paper on AI (version 2020-06); available at https://www.cencenelec.eu/news/policy_opinions/PolicyOpinions/CEN-CLC%20Response%20to%20EC%20White%20Paper%20on%20AI.pdf.
19 See the Problem Statement (ETSI GR SAI 004 V1.1.1 (2020-12)) at https://www.etsi.org/deliver/etsi_gr/SAI/001_099/004/01.01.01_60/gr_SAI004v010101p.pdf.
20 Available at https://standards.ieee.org/industry-connections/ec/autonomous-systems.html.



Standardisation at the ITU is carried out within the ITU
Telecommunications Standardization Sector (ITU-T). ITU-T produces standards
that define how telecommunication networks operate and interconnect. These
standards are formally known as ITU-T Recommendations (ITU-T Recs). The
overall direction and structure of ITU-T is set out by the World Telecommunication
Standardization Assembly (WTSA), which is held every 4 years and defines the
study period for ITU-T.
Standardisation is carried out by ITU Study Groups (SGs), which produce
the ITU-T Recommendations. SGs can be structured into working parties (WP) to
assist in the organisation of the SG’s work. Each Study Group organises its work
primarily in the form of study Questions. Questions address technical studies in a
particular area of standardisation. A Question is the basic project unit within ITU-
T and it is driven by contributions.21 The area of study of the project is defined by
the text of the Question, and this is generally approved by the SG. The study of
Questions is coordinated by the WP. The chairmen of SGs and WPs are encouraged
to delegate responsibility to rapporteurs for the detailed study of individual
Questions or small groups of related Questions, parts of Questions, terminology, or
amendment of existing Recommendations.22 Pursuant to Rec. ITU-T A.1
(09/2019), ‘[s]pecific persons should be appointed as rapporteurs to be responsible
for progressing the study of those Questions, or specific study topics, that are felt
to be likely to benefit from such appointments (…)’. A rapporteur may also propose
the appointment of one or more associate rapporteurs, liaison rapporteurs or
editors.23 Rapporteurs, and their associate and liaison rapporteurs as well as the
editors, play an indispensable role in coordinating increasingly detailed, and often
highly technical, studies.24 The editor assists the rapporteur in the preparation of the
text of draft Recommendations or other Publications and is also responsible for
coordinating development of a work item.25
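
Schematically, the working structure just described nests as follows. The sketch below is purely illustrative and not drawn from ITU documentation; the type and field names are assumptions, and the example values anticipate the Q5/16 details discussed below.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str                # defines the area of study; approved by the SG
    rapporteur: str          # appointed to progress the study of the Question
    editors: list[str] = field(default_factory=list)  # assist the rapporteur

@dataclass
class WorkingParty:
    questions: list[Question]  # a WP coordinates the study of its Questions

@dataclass
class StudyGroup:
    chairman: str            # appointed by the WTSA, with vice-chairmen
    working_parties: list[WorkingParty]

# Example instance: SG16 and its Question on AI-enabled multimedia applications.
sg16 = StudyGroup(
    chairman="Noah Luo",
    working_parties=[
        WorkingParty(questions=[
            Question(
                text="AI-enabled multimedia applications (Q5/16)",
                rapporteur="Yuntao Wang (CAICT)",
            ),
        ]),
    ],
)
```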
For the study period 2017-2020 there are 11 SGs covering different topics.26 Each SG has a Chairman and a number of vice-chairmen appointed by the WTSA.
In the field of AI, Study Group 16 (ITU-T SG16) is currently in charge of a Question on AI-enabled multimedia applications (Q5/16). ITU-T SG16 is responsible for the standardisation of multimedia coding, systems and applications. In particular, this group is in charge of developing standards related to ubiquitous applications and multimedia capabilities for services and applications for existing and future networks. The focus of Question Q5/16 is on the challenges facing the deployment of AI-enabled multimedia applications and the impact of AI technologies on standards for multimedia applications.

21 A contribution is defined as the membership input into an SG proposing new work areas, drafting Recommendations or introducing changes to existing Recommendations.
22 Section 2.3.1, Recommendation ITU-T A.1 - Working methods for study groups of the ITU Telecommunication Standardization Sector (09/2019), hereinafter ‘Rec. ITU-T A.1 (09/2019)’, available at https://www.itu.int/rec/T-REC-A.1-201909-I/en.
23 Ibid. section 2.3.3.3.
24 Ibid. section 2.3.3.4.
25 Ibid. section 2.3.3.3.
26 The different SGs can be found at https://www.itu.int/en/ITU-T/studygroups/2017-2020/Pages/default.aspx. Due to the pandemic, the next World Telecommunication Standardization Assembly (WTSA20), originally scheduled for November 2020, has been postponed until 2022, so the work of the current SGs continues based on the ITU-T work continuity plan (adopted October 31st, 2020), available at https://www.itu.int/md/S20-CLVC2-C-0003/en.



SG16 is currently chaired by Mr Noah Luo, who leads Huawei’s standards work in the field of multimedia and network functions virtualization (NFV).27 SG chairmen shall prepare an organization proposal and an action plan for the relevant study period taking into account the priorities defined by the Telecommunication Standardization Advisory Group (TSAG) or decided at the WTSA.28 SG chairmen are also responsible for the selection of an appropriate working party team.29 The SG chairman and the rapporteurs work closely together. The rapporteur for Question Q5/16 on AI-enabled multimedia applications is Mr Yuntao Wang, from the China Academy of Information and Communications Technology (CAICT).30

The following table contains the current Q5/16 work plan, which includes ‘approved’ and ‘under study’ work items within Q5/16 that, once approved, will become ITU-T Recommendations, i.e. ITU standards:

Subject / Title | Work item | Status | Timing | Editor(s)
Metrics and evaluation methods for deep neural network processor benchmark | F.748.11 (ex F.AI-DLPB) | Approved 2020-08-13 | 2021 | Weimin Zhang (CAICT); Zheyu Zhang (CAICT)
Deep learning software framework evaluation methodology | F.AI-DLFE | Under study | 2021 | Shuo Liu (NGAI); Yanjun Ma (Baidu); Yuntao Wang (CAICT)
Technical framework for deep neural network model partition and collaborative execution | F.AI-DMPC | Under study | 2022 | Min Liu (ICT); Wei Meng (ZTE); Yuntao Wang (CAICT); Yuwei Wang (ICT)
Framework for audio structuralizing based on deep neural network | F.AI-FASD | Under study | 2022 | Xiaofei Dong (CAICT); Sun Li (CAICT); Qing Liu (China Telecom); Ranran Zeng (China Telecom)
Technical requirements and evaluation methods of intelligent levels of intelligent customer service system | F.AI-ILICSS | Under study | 2021 | Xiaofei Dong (CAICT); Jiaxuan Hu (Tencent); Pin Wang (Tencent); Xueqiang Zhang (NGAI)
Requirements for the construction of multimedia knowledge graph database structure based on artificial intelligence | F.AI-MKGDS | Under study | 2021 | Lin Shi (RITT); Mingjun Sun (CATR); Yuntao Wang (CAICT); Mingzhi Zheng (NGAI)
Technical framework for shared machine learning system | F.AI-MLTF | Under study | 2021 | Xiongwei Jia (China Unicom); Kepeng Li (Alibaba); Xinyang Piao (Alibaba)
Requirements of multimedia composite data preprocessing | F.AI-RMCDP | Under study | 2021 | Xiaofei Dong (CAICT); Yueming Meng (NGAI); Mingjun Sun (CATR); Dan Zhang (NGAI)
Use cases and requirements for speech interaction of intelligent customer service | F.AI-SCS | Under study | 2021 | Jie Li (China Telecom); Menghua Tao (China Unicom); Zhen Yang (QQ)
Requirements for smart factory based on artificial intelligence | F.AI-SF | Under study | 2022 | Jie Li (China Telecom); Zhen Yang (QQ)
Requirements for smart speaker based intelligent multimedia communication system | F.IMCS | Under study | 2021 | Baoping Cheng (China Mobile); Jun Lei (China Mobile)
Requirements and evaluation methods for AI-based optical character recognition service | F.REAIOCR | Under study | 2021 | Xiaofei Dong (CAICT); Shuo Liu (CAICT); Shuhuan Mei (NGAI)
Requirements for smart broadband network gateway in multimedia content transmission | F.SBNG | Under study | 2022 | Guanyi Jia (China Telecom); Li Jiacong (China Telecom); Bo Lei (China Telecom)
Requirements for smart class based on artificial intelligence | F.SCAI | Under study | 2021 | Tengfei Liu (China Unicom); Yongsheng Liu (China Unicom); Liang Wang (ZTE); Yuntao Wang (CAICT)
Overview of convergence of artificial intelligence and blockchain | F.Supp-OCAIB | Under study | 2021 | Zheng Huang (ZTE); Xiongwei Jia (China Unicom); Keng Li (Fiberhome); Xiaojun Mu (China Unicom)

Table: SG16 Q5/16 ITU Work Programme (source: ITU31)

27 Information available at http://www.itu.int/net4/ITU-T/lists/mgmt.aspx?Group=16.
28 Rec. ITU-T A.1 (09/2019), section 1.3.1.
29 Ibid. section 2.1.1.
30 Information available at http://www.itu.int/net4/ITU-T/lists/loqr.aspx?Group=16&Period=16.

As observed in the table above, 100% of the editors are representatives of Chinese organisations. Although leading positions such as SG chairman, rapporteur or editor do not in themselves evidence any particular influence on the development of a standard,32 the significant participation of Chinese delegations in AI standardisation is consistent with China’s strategy and growing engagement in international standardisation. While in the study periods 1997-2000 and 2001-2004 China did not have any representative as chair, rapporteur or editor in SG16, in the current study period (2017-2020) there are Chinese representatives as rapporteur (5), editor (more than 150)33 and chair of the said SG.34
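
The headline figure can be reproduced directly from the table. The following sketch is a minimal illustration rather than part of the article’s formal method: the affiliation counts are transcribed from the Editor(s) column above, and the classification of every credited body as China-based is an assumption that mirrors the observation just made.

```python
from collections import Counter

# Editor credits per affiliation, transcribed from the Q5/16 work programme
# table above (one credit per editor listing; individuals recur across items).
editor_credits = Counter({
    "CAICT": 12, "China Telecom": 7, "NGAI": 6, "China Unicom": 6,
    "ZTE": 3, "ICT": 2, "Tencent": 2, "CATR": 2, "Alibaba": 2, "QQ": 2,
    "China Mobile": 2, "Baidu": 1, "RITT": 1, "Fiberhome": 1,
})

# Assumption for illustration: every body credited in this table is China-based.
china_based = set(editor_credits)

total = sum(editor_credits.values())
chinese = sum(n for org, n in editor_credits.items() if org in china_based)
print(f"{total} editor credits, {chinese / total:.0%} from China-based bodies")
# Expected output given the table above: 49 editor credits, 100% China-based.
```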
Following this trend, it is also important to signal that Q5/16 explicitly mentions, as other bodies with a relationship to this Question, the work of ISO (international), IEC (international), ETSI (Europe), the Artificial Intelligence Industry Alliance (China), and the China Communications Standards Association.35 Moreover, in preparation for the upcoming WTSA-20, China has submitted the largest number of candidates (12) for the positions of Chairman and Vice-Chairman for those Groups that will have reached their term limit by WTSA-20, including SG16. This

31 Data retrieved from https://www.itu.int/ITU-T/workprog/wp_search.aspx?sg=16&q=5.
32 Rec. ITU-T A.1 (09/2019), see section 2.3.1.
33 It is important to note that editors do not serve during the entire study period.
34 Information on the composition of the different study periods is available at https://www.itu.int/en/ITU-T/studygroups/2017-2020/Pages/default.aspx.
35 See https://www.itu.int/en/ITU-T/studygroups/2017-2020/16/Pages/q5.aspx.



contrasts with the proposals submitted by the USA, with only 5 candidates, Italy (2), France (1) and Germany (0).36
In addition to the work performed under Questions, SGs may be assisted by Focus Groups. Focus Groups (FGs) open the door to the participation of non-ITU members, such as experts or members from other standard-setting organisations and academia. FGs provide relevant material to be considered in the development of ITU-T Recommendations. One of the reasons for establishing a Focus Group is the existence of significant interest in the subject matter of the FG and a need to support the standardisation work of the SG.37
In 2017, the ITU created a dedicated Focus Group on Machine Learning for Future Networks including 5G (FG ML5G). This group was active until July 2020. FG ML5G was created to support the standardisation work of SG13 on Future networks, with a focus on IMT-2020,38 cloud computing and trusted network infrastructures. During its lifetime, FG ML5G supported the work on 5 ITU Recommendations that provide an architectural framework for the integration of machine learning into 5G and future networks.39 This FG also developed a Recommendation containing a framework for data handling to enable machine learning in future networks.40 FG ML5G’s chairman represents the Fraunhofer Institute for Telecommunications (Germany), but 2 out of the 3 working groups inside this FG are chaired by representatives from Chinese organisations.41
In 2019, the ITU created the Focus Group on Environmental Efficiency for Artificial Intelligence and other Emerging Technologies (FG-AI4EE) with a view to providing guidance on the environmentally efficient operation of emerging technologies. This group is mandated to draft technical reports and technical specifications that highlight the environmental performance of AI and other emerging technologies to meet the 2030 Agenda for Sustainable Development and its goals. Similarly to FG ML5G, China is significantly represented in the management structure of FG-AI4EE, with representation at the level of FG Co-Chairman (Huawei) and 2 out of 3 co-chairs within the FG’s working groups (WG2 - Assessment and Measurement of the Environmental Efficiency of AI and Emerging Technologies and WG3 - Implementation Guidelines of AI and Emerging Technologies for Environmental Efficiency), from Huawei and China Telecom respectively.42 This FG supports the standardisation work of SG5, which is in charge of developing standards on the environment and the circular economy. SG5 is chaired by a representative of the CAICT.43
There is also a Focus Group on AI for autonomous and assisted driving (FG-AI4AD), created in 2019, to support standardization activities for services and applications enabled by AI systems in autonomous and assisted driving. This FG assists the work of SG16. Issues addressed at this FG include questions such as the
36 The full list of candidates can be found at https://www.itu.int/en/ITU-T/wtsa20/candidates/Pages/ms.aspx.
37 See section 2.1, Recommendation ITU-T A.7 - Focus groups: Establishment and working procedures, available at https://www.itu.int/rec/T-REC-A.7-201610-I/en.
38 IMT stands for International Mobile Telecommunications.
39 ITU-T Y.3172 (06/2019) - Architectural framework for machine learning in future networks including IMT-2020.
40 ITU-T Y.3174 (02/2020) - Framework for data handling to enable machine learning in future networks including IMT-2020.
41 A representative of China Mobile serves as Chairman of WG2: Data formats & ML technologies, and a representative of ZTE as Chairman of WG3: ML-aware network architecture. One of the Vice-chairs of FG ML5G is from CAICT.
42 See https://www.itu.int/en/ITU-T/focusgroups/ai4ee/Pages/default.aspx.
43 Ibid.



extent to which data is recorded from an autonomous vehicle in case of accident for the purposes of determining liability, as well as relevant privacy questions related to automated driving.44 This FG was proposed and created by its chair, Founder of the Autonomous Drivers Alliance (ADA) in the UK, and the Vice-chair is a representative of China Telecom.45 The output of the FG takes the form of a Technical Report (TR) that provides information supporting the work of the parent SG. Decision-making is based on consensus and, at least in this FG, no major problems in reaching consensus, nor demands from particular groups in the drafting of the Technical Reports, have been reported to date.46
In the field of AI for health, a special cooperation between the ITU and the World Health Organisation has been established under the ITU/WHO Focus Group (FG-AI4H), which is currently working on establishing a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage or treatment decisions.47 Such foundational work is expected to be decisive in the further development of AI technologies, or at least to result in a more coordinated approach to the development and use of AI.48 This FG also supports SG16. As to representation, although FG-AI4H is likewise chaired by a representative of the Fraunhofer Institute for Telecommunications, China is taking part in the management of the FG with a representative from CAICT as one of the Vice-chairmen.
More recently, two additional FGs have been created in the field of AI: the ITU-T Focus Group on Autonomous Networks (FG-AN) and the FG on AI for Natural Disaster Management (FG-AI4NDM), both established in December 2020. The full composition of these FGs is not yet available. FG-AN is chaired by a representative from Rakuten (Japan),49 FG-AI4NDM’s chairman represents the Fraunhofer Institute for Telecommunications (Germany), and the vice-chairmanship is shared among the World Meteorological Organization (WMO), a representative from the Government of India, and a representative from China Telecom.
In sum, the ITU is playing a considerable role in AI standardisation, as it is no longer merely a specialised telecoms agency. As mentioned above, the ITU has an interest beyond telecommunications in contributing to the shaping of ICT policy more broadly. Despite being a standardisation body, it is becoming ever more visible that the nature of the ITU is not only technical but increasingly political, especially over the last decades (Headrick 1991; Savage 1989). The growing interest of China in occupying managerial roles in international standardisation is consistent with its ambition to become the leading standard-setter by 2035 (Ding 2018). As shown, China is significantly represented at the ITU, including in the position of ITU Secretary General, held by Mr Houlin Zhao since 2014, who was re-elected in 2018.

4.3 ‘Why not to have the standard written the way you like it?’50

44 Interview with one of the FG-AI4H members on December 1st, 2020 (videoconference).
45 See https://www.itu.int/en/ITU-T/focusgroups/ai4ad/Pages/default.aspx.
46 Ibid.
47 ITU/WHO Focus Group on Artificial Intelligence for Health, White Paper. Available at https://www.itu.int/en/ITU-T/focusgroups/ai4h/Documents/FG-AI4H_Whitepaper.pdf.
48 Ibid.
49 See https://www.itu.int/en/ITU-T/focusgroups/an/Pages/default.aspx.
50 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021.



This research observes an overrepresentation of China within the ITU. Management and lead roles are increasingly occupied by representatives of the Chinese government and Chinese corporations; typically, the government role and the corporate role coincide due to the involvement and influence of the government in large Chinese companies.
In theory, an asymmetrical representation with a larger presence of one particular country or region in managerial positions does not necessarily signal greater powers and influence over the direction of the produced standards. The same is true with regard to other roles, such as ITU rapporteurs and editors. ITU’s rules of procedure make clear that rapporteurs and editors have to be neutral:51 ‘rapporteurs and editors do not have special decision-making powers’; ‘The role of editors is to edit. Full stop’.52
In practice, however, procedures can be abused. Formal and structured decision-making procedures can be instrumentalised to achieve certain political interests (Büthe & Mattli 2011; Lazanski 2019). Patent policies and changes in the configuration of intellectual property rights within SDOs also contribute to strategic behaviour in standardisation (Delimatsis et al. 2019). Therefore, it is not only the standard itself but its institutional and procedural setting that can arguably be captured to fit political agendas. ‘Some countries have always sought overrepresentation by submitting numerous candidates to managerial positions of SGs at ITU-T’.53 This is an epiphenomenon in standardisation. Due to the voluntary nature of standards, there is a long tradition of forum shopping in standard-setting – ‘if you are not successful in one place, you find another one. You’ll pick up the group where you have the biggest chance of success’.54 ‘Standardisation professionals know how procedures work. China has learned very well how to work with the process’.55
(Political) contestation plays a crucial role here. There is evidence of contributions for new work items not accepted by an ITU-T SG due to lack of consensus and shortly afterwards submitted to another SG (Lazanski 2019 at 372). It is also possible to seek influence through the creation of Focus Groups, which are open to non-ITU members. FGs were created to answer specific questions supporting the work of the SG. While their introduction has been incredibly positive for the work of, and technical input to, the SG, some FGs have gone beyond their Terms of Reference and sought to influence standardisation in a particular direction by submitting draft Recommendations to the SG.56 Any corruption of the procedure must be avoided. ‘The group needs to pay attention that the procedures are not abused’.57

51 Section 2.3, Rec. ITU-T A.1 (09/2019), available at https://www.itu.int/rec/T-REC-A.1-201909-I/en.
52 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021.
53 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021.
54 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
55 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
56 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
57 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).



SDOs are dynamic bodies; their composition and participation change over time (Delimatsis 2015): ‘standardisation is a moving animal’.58 The composition of the ITU shows a historical trajectory of change. Participation in ITU-T has decreased as standards are being developed elsewhere. ‘In the ITU there has been a great shift to regional groups in the last years’.59 In addition to ITU’s member states, the number of ITU-T sector members (non-government actors) currently stands at 275,60 whereas in 2004 it was 650 (Lazanski 2019). This decrease in ITU participation has an impact on the control of standardisation. ‘What is important in standard-developing is to be there, to be present. If you are not attending the meetings, do not blame the organisation. You cannot blame others for being active’.61

4.4 Distributional outcomes: Digital power China

Similarly to the notion of Normative Power Europe (Manners 2002), it is important to reconceptualise the scale and impact of China’s increasing power over digital technologies and its effects on the rest of the world. Although China’s ambitions over technical standardisation, as examined here, are just one example, they clearly illustrate the country’s capacity to influence implicit value choices. A shift in the current geopolitical picture might involve a reconfiguration of digital governance towards greater centralised control and empowered authoritarianism (Zuboff 2019).
China is now a critical actor in the discussions about AI governance (Arenal
et al. 2020; Roberts et al. 2020). The kind of power China is, what it says and, most
importantly, what it does as a power will shape the direction of AI governance in
the years to come.
What kind of power. China’s cyber capabilities follow very closely behind those of the US (Voo et al. 2020). While in capability the US still outpaces China, in intent China ranks first (ibid.). As shown, promoting Chinese technology standards is one of the country’s priorities also with regard to AI.62 The Chinese government sees standards as playing a significant role in the country’s aspirations for AI leadership. To date, differences in national technical standards designed to favour domestic industry have not triggered WTO retaliation for breaching the TBT Agreement (Cihon 2019). However, building a specific technical infrastructure in critical technologies becomes a tool for achieving political ambitions. China is taking the necessary steps to reinvent the internet infrastructure, which may eventually lead to the emergence of an alternative internet (Hoffmann et al. 2020), more centralised and with a higher level of governmental control.63 The next step could be shaping the technological and normative framework for AI (Ding 2018).

58 Interview with a former member of a team working on compatibility and industry collaboration, regular participant in SDOs meetings. Helsinki, November 2016.
59 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
60 Data available at https://www.itu.int/online/mm/scripts/gensel11.
61 Interview with a former member of the ITU Telecommunication Standardization Advisory Group (TSAG), January 2021 (videoconference).
62 See, for example, 2019 China Standardisation Development annual report, available at http://www.cnstandards.net/index.php/china-standardization-annual-report-2019/. See also China Electronics Standardization Institute (CESI), ‘AI Standardization White Paper’ (2018), translation by J. Ding, https://docs.google.com/document/d/1VqzyN2KINmKmY7mGke_KR77o1XQriwKGsuj9dO4MTDo/.
63 Financial Times (March 23, 2020), Inside China’s controversial mission to reinvent the internet.



What it says. With a straightforward AI policy, China is also politically set to become the leading AI power.64 Internally, this strategy can serve techno-nationalist purposes, in Kwak et al.’s (2012) terms, aimed at preserving national sovereignty. Externally, an instrumentalised use of international standardisation can contribute to techno-globalist aspirations, such as exporting China’s normative views on AI to the rest of the world; some countries are already following its lead (Ding 2018; Hoffmann et al. 2020). This is expected to have implications for human rights.
What it does. One of China’s ambitions appears to be influencing and expanding Chinese facial recognition technologies around the world.65 This is taking place through standardisation. In this manner, the operation of standards functions as an extension of politics beyond territories. For example, the newly increased research and economic capacity of the Chinese government allows for the diffusion of Chinese accounts of privacy and security to other countries that do not have the economic, regulatory, technical and political capacity to develop their own standards (The Atlantic 2020). This raises concerns about the risks posed to the expected privacy and security protections of biometric information, with impacts on human rights, such as China’s control over the Uighurs,66 and on market access (e.g. social credit systems).
In sum, standardisation work at the ITU is contributing to the materialisation of the Chinese AI governance model. The scant participation of European and American representatives at the UN body, in favour of their own regional bodies, has paved the way for Chinese technology companies to develop their own standards on AI. ITU standards are commonly adopted as policy by developing nations in Africa, the Middle East and Asia, which do not have the capacity to lead or influence standardisation.67 Moreover, backed by the credibility of ITU standards, China exports its technology through the Belt and Road Initiative (BRI),68 which includes the supply of infrastructure and surveillance technology to the participant countries (Feldstein 2019), paving the way for a digital silk road (Huadong 2018).

5 Defining a governance model for AI (Discussion)

This article is motivated by the question of how standardisation is used as a tool for power politics. In particular, assuming that standardisation works as a governance framework for AI, what lessons are to be drawn from China’s increased participation in AI standardisation? With the help of the descriptive data displayed above, this section draws some theoretical conclusions. From an institutional analysis perspective, the results are revealing: standardisation is responding to contemporary events on the geopolitical chessboard. The following observations are worth considering in the definition of a governance model for AI.

64 See the different initiatives at https://futureoflife.org/ai-policy-china/.
65 Plan for the Development of New Generation Artificial Intelligence (Guo Fa [2017] No. 35).
66 The New York Times (April 14, 2019), One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority.
67 In addition to its increasing representation within the ITU, China has also secured managing positions at the joint initiative ISO/IEC SC 42. The current chairperson of ISO/IEC SC 42 is Mr Wael William Diab, Senior Director at Huawei.
68 For further information see https://www.beltroad-initiative.com/belt-and-road/.



5.1 Three governance models for AI: EU, US, China

In the discussions about the future governance architecture for AI, it is necessary to look at the technological (and underpinning political) forces that are shaping society and political interactions and that will continue doing so in the years to come. To date, there is no clear orientation of the regulatory landscape for AI: it is unclear whether we are heading towards a globalist or a nationalist regulatory model. In contrast, the world of standards might be signalling the way. Generally, while a globalist regime does not consider national particularities, a nationalist one draws on national needs and favours national-centric perspectives to avoid dependencies on non-national actors and systems, usually for security reasons (Kwak et al. 2012). Drawing on the experiences of technology regulation, a nationalist system seems the preferred alternative for a future regulatory framework for AI. Three different governance models can already be identified in the regulation of technological innovations. It is expected that these models are roughly reflected in current instances of AI governance.
When it comes to tech regulation, the EU has generally adopted a paternalist approach. The adoption of the GDPR, preceded by historic fines for violations of EU competition law, entailed a significant change in the way the EU approached the regulation of Big Tech. This approach has now turned the EU into a regulatory benchmark (Bradford 2020). The EU aspires to set a regulatory standard also with regard to AI (Von Der Leyen n.d.). While a legislative proposal is expected during the first quarter of 2021, the EU Commission Communication (2018) and the White Paper on AI (European Commission 2020) suggest a risk-based approach, in which only high-risk AI would be subject to mandatory requirements (ibid. at 17). A risk-based approach is consistent with the need to strike a balance between innovation and proportionate regulation. Existing rules already cover some of the risks posed by AI systems and, in principle, regulatory intervention would be needed only where gaps are to be filled. However, an inadequate exercise of problem definition in a context of empirical uncertainty can result in irreparable harms.
Policy discussions on how to regulate AI in the US include the specific Guidance for the Regulation of Artificial Intelligence Applications (2019) and the Artificial Intelligence Initiative Act (S.1558). The current approach is, nonetheless, one of wait-and-see that allows space for the technology to develop. Although this seems to be currently moving in the opposite direction (Fukuyama et al. 2021), self-regulation and soft-law approaches have been the preferred option when it comes to the regulation of tech. S.1558 proposes the acceleration of R&D in the field of AI for the economic and national security of the country. In its 2016 report ‘Preparing for the Future of Artificial Intelligence’, the White House’s Office of Science and Technology Policy (OSTP) emphasised the value of AI development and its benefits for the economy and society (see Cath et al. 2017). Accordingly, it called for a ‘light-touch’ regulatory approach that allows innovation while taking into consideration public goals and the societal impacts of AI, e.g. substantial transformations in the job market. More recently, the 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence69 underlines the importance of maintaining US leadership in the development of AI technologies. The document explicitly mentions leadership in the technical standardisation of AI. Following this executive order, the OSTP’s ‘Guidance for

69 Available at https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/.



Regulation of Artificial Intelligence Applications’70 speaks of ‘forbearing from new regulations’ and provides ten regulatory principles that should guide agencies in deciding whether Federal regulation is necessary.
China’s policy and governance model on AI is largely based on control over data to support a model of social governance (Liu 2014). This is reflected in its privacy standard,71 which, although containing stringent measures, is of voluntary application (Roberts et al. 2020). The non-binding nature of privacy rules, together with existing interference of the legislature in the judiciary, renders privacy protections largely impracticable (ibid.). As to artificial intelligence, China is decisively set to become the global AI innovation centre by 2030.72 The Chinese governance approach to AI is focused on international competitiveness, economic growth, and social governance (Roberts et al. 2020). China’s AIDP envisages the incorporation of AI ethical standards into legislation by 2025. It remains to be seen whether such rules will move away from the current governance model based on surveillance and digital authoritarianism that characterises the Chinese approach to tech and the internet.

5.2 An old mistake: From regulatory determinism to AI exceptionalism

The use of artificial intelligence is pervasive, and its possibilities remain unknown. Yet, the way in which AI develops and is incorporated into society will depend on its regulatory treatment. To date, the legislator has adopted a cautious approach to the regulation of AI: only very specific rules have been put in place, while particular use cases are being examined with a view to putting forward more concrete regulatory initiatives. This reflects a narrative of AI exceptionalism where the gap left by the legislator is filled with technical and ethical overlays produced by non-state actors.
This article has shown how instances of political and economic power are taking positions in shaping the direction of AI governance, particularly by using traditional instances of non-state and private regulation such as international standardisation. SDOs function as extensive governance structures beyond the state. However, the absence of the State from the private regulation narrative is the wrong perspective (Michaels 2010). As the paper has shown, existing initiatives for AI technical and ethical standardisation are strongly influenced by (geo)politics.
This story presents parallels to internet governance. The purpose of early internet governance was to preserve a global, open and unregulated internet. Nowadays, that project is progressively failing in favour of a fragmented regulatory space. An eventual balkanisation of the internet would have implications for AI governance, since the protocols underpinning the internet are critical in defining the capabilities of AI. Western countries are not sufficiently following the development of internet standards within the ITU, which might ultimately lead to the end of an open and global internet (Hoffmann et al. 2020).
While such a wait-and-see approach offers significant parallels to internet governance (Black & Murray 2019), the inability to govern current technologies such as AI-powered surveillance adequately and in a timely manner will pose significant liabilities

70 Available at https://www.whitehouse.gov/wp-content/uploads/2020/01/Draft-OMB-Memo-on-Regulation-of-AI-1-7-19.pdf.
71 Personal Information Security Specification (2018).
72 See Next Generation Artificial Intelligence Development Plan (AIDP), issued by the State Council (2017), http://fi.china-embassy.org/eng/kxjs/P020171025789108009001.pdf.



(Lessig 2019), pressing for the establishment of clearly defined policymaking guidelines. A clear definition of the normative boundaries for acceptable uses of AI is required. Regulatory action cannot be postponed or, worse, replaced by a cacophony of AI ethics. Existing use cases of AI for facial recognition or automated decision-making purposes are a matter for sound constitutional checks and balances rather than for not fully workable ethical frameworks.

5.3 Private regulation does not solve collective action problems

The development and widespread use of AI involves a collective action problem. However, the current configuration of PRAI and its distributional outcomes evidence a lack of incentives for cooperation. Private regulation will not take place where no incentives for it exist (Büthe 2010b). Therefore, the actors who find incentives to participate in private regulation are the ones who can benefit from it. From this perspective, private regulation in AI is seemingly designed by big corporations to serve their own needs.
Meaningful voices signal the risks posed by letting industry unilaterally design the governing rules for AI (Benkler 2019). Indeed, there is a political economy behind standardisation and, as such, some benefit more than others (Lessig 1999). On the one hand, the scale and impact of AI within and across various sectors and the concentration of large AI providers amplify the presence of network effects, incentivising their presence and voice within SDOs. On the other hand, key actors within SDOs will be the ones supplying legislation; small producers will be regulated by large producers through the incorporation of standards into global supply chains (Cafaggi 2013). This reveals two main problems. First, the regulative power found in private contractual arrangements is often less openly regulatory and typically free from political accountability. Second, the proceduralisation of private regulation prevents a much-needed normative appraisal of value choices, since regulatory delegation bypasses national political contestation in the face of the seeming legitimacy of institutionalised SDOs.
Ethical frameworks alone are not enough to overcome these problems. Rather, the creation or endorsement of ethical principles can be part of the policies of commercially motivated actors. With the creation of these frameworks, companies use their power to define the red lines in AI ethics (AI Now 2019). The result is that, regrettably, the industry’s influence over the direction of AI ethics has stigmatised non-state initiatives as instrumental, bearing the risk of dismissing the intrinsic value of these documents (Bietti 2020).

5.4 Isolated ethical frameworks are not enough

Power dynamics are also visible in the development of ethical standards. To date, the majority of documents containing ethical standards for AI come from non-State actors such as civil society, industry and inter-governmental institutions. Besides these seemingly nonpartisan initiatives, ethical principles are also being produced by big tech companies, such as Microsoft’s Responsible AI,73 Google’s controversial AI Principles,74 or Tencent’s Ethical Framework for AI.75
73 Available at https://www.microsoft.com/en-us/ai/responsible-ai?activetab=pivot1%3aprimaryr6.
74 Available at https://ai.google/principles.
75 Available at https://www.tisi.org/13747.



Most of the ethical standards are, however, drafted by collective bodies rather than individual firms. Examples include the Future of Life Institute’s Asilomar AI Principles,76 Access Now’s Human Rights in the Age of AI,77 and the OECD Principles on AI,78 among many others. In addition to forthcoming legislative initiatives and existing proposals for dedicated legislation, national governments are drafting their own ethical standards too, usually accompanying domestic AI strategies and policies. Examples of government-driven79 initiatives include the U.S. Department of Defense Ethical Principles for the Use of AI,80 the EU Ethics Guidelines for Trustworthy AI drafted by the High-Level Expert Group on AI,81 and the Chinese Governance Principles for a New Generation of Artificial Intelligence.82
The plurality of initiatives for an ethical AI has not yielded a diverse set of standards. Instead, the identified principles reflect a “universal” understanding of how to govern AI technologies (Cussins Newman 2020) and a considerable degree of convergence in ethical standards (Floridi & Cowls 2019; Jobin et al. 2019). These standards revolve around similar main themes: in most instances human rights, safety and security, privacy, transparency, explainability, fairness, inclusiveness and responsibility (see Cussins Newman 2020; Fjeld et al. 2020; Jobin et al. 2019).
A closer look at the initiatives in AI ethics reveals that the majority of the standards for ethical AI are drafted by western players; more specifically, different bodies from the United States, several EU countries and the UK have altogether released 53 out of 84 analysed documents (Jobin et al. 2019). By contrast, China’s endorsement of certain international ethical principles and the development of the Beijing AI Principles83 by the Beijing Academy of Artificial Intelligence (BAAI) are, understandably, regarded with suspicion. Therefore, although the latter principles substantially mirror the standards laid out by western-origin documents (Floridi & Cowls 2019), they are seen as a disingenuous initiative coming from a country with a record of threatening civil liberties.84
In theory, the main limitation of AI ethical standards, western or not, is their lack of formal validity. Yet, ethical standards enjoy substantial capacity for regulatory and policy diffusion despite their customary soft-law nature. Economic, legal, reputational and even socio-political incentives provide sufficient motivation to comply with these non-binding standards (Büthe 2010b, at p. 19ff). First, the development and perceived convergence of ethical standards is significantly valuable for the early identification and definition of critical problems that require political attention, pushing political discussions in specific directions. This is particularly attractive for commercial players, as they can anticipate public regulation (Büthe 2010b). The added value for non-commercial parties is to incentivise private parties to correct externalities where public regulation is missing (ibid.). Secondly, the development of ethical standards

76 Available at https://futureoflife.org/ai-principles/?cn-reloaded=1.
77 Available at https://www.accessnow.org/cms/assets/uploads/2018/11/AI-and-Human-Rights.pdf.
78 Available at https://www.oecd.org/going-digital/ai/principles/.
79 Although these initiatives are government-driven, contributors to their content include civil society, industry players and other stakeholders; e.g. China’s national AI standardisation group and a national AI expert advisory group.
80 Available at https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF.
81 Available at https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
82 See at https://www.loc.gov/law/foreign-news/article/china-ai-governance-principles-released/.
83 Available at https://www.baai.ac.cn/news/beijing-ai-principles-en.html.
84 MIT Technology Review, Why does Beijing suddenly care about AI ethics? May 31, 2019.



by some stakeholders creates incentives for private regulatory competition, which would explain the volume of documents containing ethical guidelines that have proliferated since 2016 (see Jobin et al. 2019), although this can also result in inefficient outcomes, such as a race to the bottom through the creation of more favourable competing rules (Büthe 2010b, p. 14ff). And third, even where there are no public rules requiring compliance, the targets of these ethical standards often observe them with a view to increasing the perceived legitimacy of private rulemaking in the eyes of those who demand regulation, such as consumers (Büthe 2010b, at p. 16).
Moreover, the growing interest, including from governments, in drafting AI ethical guidelines signals an increasing preference for principle-based approaches. This regulatory choice, ultimately political, has attracted considerable criticism. First, it has been argued that establishing voluntary schemes for ethical AI could be an impractical response to the demands for an adequate regulatory framework for AI, amounting to a ‘euphemism for inaction’ by public policymakers.85 Second, there is a reaction against committees drafting ethical standards for being exceedingly populated by the industry; critical voices come even from inside these committees, resulting in accusations of “ethics washing”.86 Third, there is an increasing suspicion about ethical convergence in a market which is expected to be soon dominated by technologies coming from China, a country with political and economic interests in limiting personal freedoms (Chen 2013).
In sum, the exploration of existing ethical frameworks for AI signals a
geographic imbalance, leaving the global south and less economically developed
countries underrepresented in the international discussion about AI ethics (Jobin et
al. 2019). Nonetheless, ethical convergence will be contingent on the interpretation
and normative integration of these ethical standards into domestic legal
frameworks.

5.5 Ethical discussions should inform technical choices

The current discussion on AI governance is a conversation about competing views on society, the role of the state, and the economy. Recognising the role of standardisation in the governance of AI offers the possibility of making sensible choices about the technical (and, arguably, also political and normative) base on which this revolutionary technology is to be built, with significant consequences for society and individuals.
Overall, the decision over the future governance model for AI is a choice between prioritising liberal values, such as privacy and freedom, or state sovereignty and security in an imminent scenario of hyperconnectivity. This poses the question as to which of the existing normative models should become a benchmark in the design of AI governance. Yet this is the wrong perspective: a strategy of unilateral action in a hyperconnected world is inadequate. We should come to terms with the idea that we are currently at the epicentre of a major transformation in the international political economy. Thus, forthcoming regulatory solutions would have to be flexible enough to apply differently to different contexts. In the meantime, discussions about an ideal governance framework for AI (Dafoe 2018) need to consider that future AI developments will profoundly challenge existing legal concepts in a context severely marked by the unknown societal impact

85 Project Syndicate, The False Promise of “Ethical AI”. April 24, 2020.
86 Der Tagesspiegel, Ethics washing made in Europe. April 8, 2019.



of AI, a lack of terminological and problematisation clarity, and an evolving
technology that is affected by a variety of regulatory frameworks (Gasser et al.
2018).
Decisions on the formation and operation of a successful governance structure for AI are not simple or easy ones. The scale, complexity and novelty of AI technologies demand a tiered approach to the governance of AI (Gasser & Almeida 2017), composed of a technical layer, an ethical layer, and a social/normative one. The technical layer, which includes standards, is critical for providing the technology upon which AI is built. The ethical dimension integrates ethics and human rights into the picture. Finally, the social or normative layer would incorporate normativity into the different norms that emerge from the technical and ethical dimensions (ibid.). As shown in this article, the technical and ethical layers are evolving towards providing their own governance frameworks for AI. However, while these processes are taking place in parallel, they are disconnected from each other. This is the result of three identified ill-conceived solutions. First, while China is heavily investing in developing technical standards consistent with its ambitions, western regulators are engaged in impracticable, and non-enforceable, ethical frameworks. Second, the belief that the institutionalisation and proceduralisation of standardisation prevent fundamental criticism of value choices; as shown, technical standardisation is opening the regulatory space to non-democracies. And third, the assumption that the alternative to counterbalance the increasing role of China in shaping the future governance of tech must necessarily come from dominant economies such as the US. This is a misconception, as evidence shows that certain uses of AI technologies by the US might be mirroring Chinese practices, posing risks of human rights violations (AI Now 2019).
Despite the difficulties of designing an appropriate governance, let alone regulatory, framework, there is a pressing need to define the limits for the acceptable development and use of AI. Based on the identified political and normative implications of technical standards, this article proposes the imperative incorporation of ethical discussions into standardisation and the technical architecture of AI.
First, technical standardisation plays an important role by appropriately translating and integrating these principles into standards (Cihon 2019). By embracing ethical issues within the technical standardisation activity of SDOs, standards become more credible commitments towards the materialisation of ethical principles into tangible and measurable practices, such as the IEEE ECPAIS. In this regard, it is important to note that the certification of ethics is a highly political activity, as it implies deciding what meets, and what does not meet, a particular ethical threshold, as well as defining the content of that threshold itself. Second, monitoring and ethical certification schemes by neutral third parties, such as SDOs, contribute to ensuring credibility (Büthe 2010a) and function as modes of social regulation (Bartley 2011). For instance, the dedicated IEEE Ethics Certification Program for Autonomous and Intelligent Systems (IEEE ECPAIS) seeks to reinforce requirements for transparency, accountability and privacy for the prevention of algorithmic bias. And third, standardisation has already set the conditions for international cooperation, opening opportunities for a US/EU-China dialogue.
To conclude, to avoid potential and inherent biases, and given the evolving character of AI, convergence towards substantive common ethical technical standards would be considered a mark of efficacy only where private regulation has an informal convergence or governance function, while all other potential scenarios may permit substantive local divergence; i.e. a combination of private and public regulation.

6 Conclusion

This article provided an understanding of the (geo)politics of private regulation for artificial intelligence. China is relentlessly advancing in the technical standardisation of AI. The research has investigated how China is committing significant resources and efforts to drive standardisation in specific directions, especially via strategic participation and behaviour within the ITU. The article has also explored the political dimension of existing ethical frameworks and their role within the AI governance architecture.
The prevailing notion is that the numerous developments in AI and its uses are outpacing the capacity for making wise and far-sighted decisions on their policy and regulatory treatment. Consequently, investigating the different functions of the private regulation of AI (PRAI) is a first building block for providing a framework for future evaluation and normative design. This article has raised questions about the potential of PRAI as an interim governance framework, one that is nonetheless foundational to the development of AI governance. The article has concluded that one way to materialise the ethical promises contained in voluntary frameworks is to populate technical standardisation with ethical considerations. Such an approach will have an impact not only on government and industry practices but also on the definition of theoretical concepts.

References

Abbott, Kenneth W., & Benjamin Faude (2020) “Choosing Low-Cost Institutions
in Global Governance,” International Theory 1–30.
AI Now (2019) AI Now 2019 Report.
Ali-Vehmas, Timo, & Thomas R. Casey (2012) “Evolution of Wireless Access
Provisioning: A Systems Thinking Approach,” 13 Competition and
Regulation in Network Industries 333–61.
Arenal, Alberto, et al. (2020) “Innovation Ecosystems Theory Revisited: The Case
of Artificial Intelligence in China,” 44 Telecommunications Policy 101960.
Baron, David P. (2003) “Private Politics,” Journal of Economics and Management
Strategy.
Bartley, Tim (2011) “Certification as a Mode of Social Regulation,” Handbook on
the Politics of Regulation. Edward Elgar Publishing Ltd.
Benkler, Yochai (2019) “Don’t Let Industry Write the Rules for AI,” 569 Nature. Nature Publishing Group.
Bennett, Andrew, & Jeffrey T. Checkel (2014) Process Tracing: From Philosophical Roots to Best Practices. (A. Bennett and J. T. Checkel, eds.) Cambridge University Press.
Bergstrom, Theodore, et al. (1986) “On the Private Provision of Public Goods,”
Journal of Public Economics.
Bernstein, Lisa (1992) “Opting out of the Legal System: Extralegal Contractual
Relations in the Diamond Industry,” The Journal of Legal Studies.
——— (2001) Private Commercial Law in the Cotton Industry: Creating
Cooperation through Rules, Norms, and Institutions. Michigan Law Review.
Bietti, Elettra (2020) “From Ethics Washing to Ethics Bashing: A View on Tech
Ethics from within Moral Philosophy,” FAT* 2020 - Proceedings of the 2020
Conference on Fairness, Accountability, and Transparency. Association for
Computing Machinery, Inc.
Black, Julia, & Andrew Murray (2019) “Regulating AI and Machine Learning:
Setting the Regulatory Agenda,” European Journal of Law and Technology.
Bradford, Anu (2020) The Brussels Effect. Oxford University Press.
De Burca, Grainne (2008) “Developing Democracy Beyond the State,” 46
Columbia Journal of Transnational Law 221–78.
Büthe, Tim (2010a) “Private Regulation in the Global Economy: A (P)Review,” 12
Business and Politics 1–38.
——— (2010b) “Global Private Politics: A Research Agenda,” 12 Business and
Politics.
Büthe, Tim, & Walter Mattli (2011) The New Global Rulers: The Privatization of Regulation in the World Economy. Princeton University Press.
Cafaggi, Fabrizio (2013) “The Regulatory Functions of Transnational Commercial
Contracts New Architectures,” 36 Fordham International Law Journal 1557–
1618.
——— (2015) “The Many Features of Transnational Private Rule-Making:
Unexplored Relationships between Custom, Jura Mercatorum and Global
Private Regulation,” 36 U. Pa. J. Int’l L. 875–938.
——— (2016) “Transnational Private Regulation: Regulating Global Private
Regulators,” in S. Cassese, ed., Research Handbook on Global Administrative
Law. Edward Elgar.
Cantero Gamito, Marta (2018) “Europeanization through Standardization: ICT and
Telecommunications,” 37 Yearbook of European Law 395–423.
——— (2020) “The Legitimacy of Standardisation as a Regulatory Technique in
Telecommunications,” in M. Eliantonio and C. Cauffman, eds., The
Legitimacy of Standardisation as a Regulatory Technique. Edward Elgar
Publishing Ltd.
Cantero Gamito, Marta, & Hans Micklitz (2020a) The Role of the EU in Transnational Legal Ordering.
——— (2020b) “The Role of the EU in the Transnational Governance of Standards,
Contracts and Codes,” The Role of the EU in Transnational Legal Ordering.
Cath, Corinne, et al. (2017) “Artificial Intelligence and the ‘Good Society’: The
US, EU, and UK Approach,” 24 Science and Engineering Ethics 505–28.
Chen, J (2013) A Middle Class without Democracy: Economic Growth and the Prospects for Democratization in China.
Cihon, Peter (2019) Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development. Future of Humanity Institute, University of Oxford.
Curtin, Deirdre, & Linda Senden (2011) “Public Accountability of Transnational
Private Regulation: Chimera or Reality,” 38 Journal of Law and Society 163–
88.
Cussins Newman, Jessica (2020) Decision Points in AI Governance. CLTC White
paper series.
Dafoe, Allan (2018) AI Governance: A Research Agenda.
Delimatsis, Panagiotis (2015) “Introduction: Continuity and Change in
International Standardisation,” The Law, Economics and Politics of
International Standardisation.
——— (2019) Exit, Voice and Loyalty: Strategic Behavior in Standards Development Organizations. TILEC Discussion Paper, Tilburg University, Tilburg Law and Economics Center.
DeNardis, Laura (2014) The Global War for Internet Governance. Yale University
Press.
Ding, Jeffrey (2018) Deciphering China’s AI Dream The Context, Components,
Capabilities, and Consequences of China’s Strategy to Lead the World in AI.
European Commission (2018) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe.
——— (2020) White Paper on Artificial Intelligence: A European Approach to Excellence and Trust.
Feldstein, Steven (2019) The Global Expansion of AI Surveillance.
Floridi, Luciano, & Josh Cowls (2019) “A Unified Framework of Five Principles
for AI in Society,” 1 Harvard Data Science Review.
Friedman, Batya, & Helen Nissenbaum (1996) “Bias in Computer Systems,” ACM
Transactions on Information Systems.
Fukuyama, Francis, et al. (2021) “How to Save Democracy From Technology,” 100
Foreign Affairs 98.
Gasser, Urs, et al. (2018) Artificial Intelligence (AI) for Development Series Module
on Setting the Stage for AI Governance: Interfaces, Infrastructures, and
Institutions for Policymakers and Regulators.
Gasser, Urs, & Virgilio A.F. Almeida (2017) “A Layered Model for AI
Governance,” 21 IEEE Internet Computing 58–62.
George, Alexander L., & Andrew Bennett (2005) Case Studies and Theory Development in the Social Sciences. MIT Press.
Haufler, Virginia (2001) A Public Role for the Private Sector: Industry Self-Regulation in a Global Economy. Carnegie Endowment for International Peace.
Headrick, Daniel R. (1991) The Invisible Weapon: Telecommunications and International Politics, 1851-1945. Oxford University Press.
Hedström, Peter, & Petri Ylikoski (2010) “Causal Mechanisms in the Social
Sciences,” 36 Annual Review of Sociology 49–67.
Hoffmann, Stacie, et al. (2020) “Standardising the Splinternet: How China’s
Technical Standards Could Fragment the Internet,” Journal of Cyber Policy
1–26.
Huadong, Guo (2018) “Steps to the Digital Silk Road,” 554 Nature. Nature Publishing Group.
Jobin, Anna, et al. (2019) “The Global Landscape of AI Ethics Guidelines,” 1 Nature Machine Intelligence 389–99.
Kwak, Joo Young, et al. (2012) “The Evolution of Alliance Structure in China’s
Mobile Telecommunication Industry and Implications for International
Standardization,” 36 Telecommunications Policy 966–76.
Lazanski, Dominique (2019) “Governance in International Technical Standards-
Making: A Tripartite Model,” 4 Journal of Cyber Policy 362–79.
Lessig, Lawrence (1999) “The Law of the Horse: What Cyberlaw Might Teach,”
113 Harvard Law Review 501–46.
——— (2019) “The Law of the Horse at 20: Phases of the Net,” The World Wide
Web Conference on - WWW ’19. New York, New York, USA: Association
for Computing Machinery (ACM).
Von Der Leyen, Ursula (n.d.) A Union That Strives for More: My Agenda for Europe.
Li, Ling (2018) “China’s Manufacturing Locus in 2025: With a Comparison of
‘Made-in-China 2025’ and ‘Industry 4.0,’” 135 Technological Forecasting
and Social Change 66–74.
Liu, Jinfa (2014) “From Social Management to Social Governance: Social Conflict
Mediation in China,” 14 Journal of Public Affairs 93–104.
Manners, Ian (2002) “Normative Power Europe: A Contradiction in Terms?,”
Journal of Common Market Studies.
Mattli, Walter (2001) “The Politics and Economics of International Institutional
Standards Setting: An Introduction,” 8 Journal of European Public Policy
328–44.
Michaels, Ralf (2010) “The Mirage of Non-State Governance,” 2010 Utah Law
Review.
Roberts, Huw, et al. (2020) “The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation,” AI and Society (June 2020).
Ronit, Karsten, & Volker Schneider (1999) “Global Governance through Private
Organizations,” 12 Governance 243–66.
Savage, James G (1989) The Politics of International Telecommunications Regulation. Routledge.
Scott, Colin, et al. (2011) “The Conceptual and Constitutional Challenge of
Transnational Private Regulation,” 38 Journal of Law and Society 1–19.
The Atlantic (2020) China’s Artificial Intelligence Surveillance State Goes Global.
Vallejo, Rodrigo (2020) “After Governance? The Idea of a Private Administrative
Law,” in P. F. Kjaer, ed., The Law of Political Economy. Cambridge
University Press.
Voo, Julia, et al. (2020) National Cyber Power Index 2020.
Werle, Raymund, & Eric J Eversen (2006) “Promoting Legitimacy in Technical
Standardization,” 2 Science, Technology & Innovation Studies 19–39.
Wübbeke, Jost, et al. (2016) The Making of a High-Tech Superpower and
Consequences for Industrial Countries. MERICS PAPERS.
Zhang, Baobao, & Allan Dafoe (2019) “Artificial Intelligence: American Attitudes
and Trends,” SSRN Electronic Journal.
Zuboff, Shoshana (2019) The Age of Surveillance Capitalism. PublicAffairs.
