
NUS Law Working Paper No 2022/001

A Disclosure-Based Approach to Regulating AI in Corporate Governance
Akshaya Kamalnath
Umakanth Varottil

Akshaya.Kamalnath@anu.edu.au
v.umakanth@nus.edu.sg

January 2022

© Copyright is held by the author or authors of each working paper. No part of this paper may be republished,
reprinted, or reproduced in any format without the permission of the paper’s author or authors.

Note: The views expressed in each paper are those of the author or authors of the paper. They do not necessarily
represent or reflect the views of the National University of Singapore.



A Disclosure-Based Approach to Regulating AI in Corporate Governance

Akshaya Kamalnath * & Umakanth Varottil **

1. Introduction

The use of technology, including AI, in corporate governance has been expanding. Two
headline-grabbing developments in this regard are the appointment of an AI system, VITAL,
as an observer on the board of a Hong Kong company, and the installation of another AI
system, Alicia T, as a member of the management team in a European company. 1 Other
corporations have also begun to use AI systems for various corporate governance functions.

A survey of directors of corporations in the UK revealed that some key applications in the use
of AI systems and data analytics include information sharing between management and the
board of directors, and better ascertainment of the needs of employees and customers. 2
Another study that did not restrict itself to data from the UK similarly referred to AI
applications like board portals and risk management systems being widely used by corporate
boards. 3

*
Senior Lecturer, Australian National University College of Law.
**
Associate Professor, Faculty of Law, National University of Singapore.
This paper is a draft version of a chapter forthcoming in Indrajit Dube (ed.), Corporate Governance and
Artificial Intelligence – a Conflicting or Complementary Approach (Edward Elgar, UK).
1
Rob Wile, ‘A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors - Here's What It Actually
Does’ (Business Insider, 13 May 2014) < https://www.businessinsider.in/finance/a-venture-capital-firm-just-
named-an-algorithm-to-its-board-of-directors-heres-what-it-actually-does/articleshow/35075291.cms>;
‘Tieto the First Nordic Company to Appoint Artificial Intelligence to the Leadership Team of the New Data-
Driven Businesses Unit’ (Businesswire, 17 October 2016)
<https://www.businesswire.com/news/home/20161016005092/en/Tieto-the-First-Nordic-Company-to-
Appoint-Artificial-Intelligence-to-the-Leadership-Team-of-the-New-Data-Driven-Businesses-Unit> accessed
26 November 2021.
2
Fabio Oliveira, Nada Kakabadse, and Nadeem Khan ‘Board engagement with digital technologies: A resource
dependence framework’ (2022) 139 Journal of Business Research 804.
3
Natalia Locke and Helen Bird, ‘Perspectives on the current and imagined role of artificial intelligence and
technology in corporate governance practice and regulation’ (2020) 35 Aust Jnl of Corp Law 4.



While board portals help share the relevant information (board packs) with board members,
they sometimes include functions like data review, risk management systems, and automated
audit systems that pick up red flags from the large volume of data (emails, social media posts
and chat transcripts) fed into them. 4 In financial service firms, AI based risk-management
systems have also been used to perform legal compliance functions like detecting credit card
fraud and money laundering. 5 Since the main task of the corporate board is monitoring
management, both the information flow to the board as well as risk management are crucial
aspects of corporate governance. Thus, AI clearly holds promise if it can help with these
important tasks.
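To make this concrete, the following minimal sketch, with invented messages and hypothetical keyword patterns of our own devising, illustrates the simplest form of the red-flag scanning described above; production systems would instead apply trained models to far richer features.

```python
# Illustrative sketch only: a toy "automated audit" pass that scans
# communications for red-flag language. Patterns and messages are invented
# for illustration; real systems use trained models, not keyword lists.
import re

RED_FLAG_PATTERNS = [
    r"\boff[- ]the[- ]books\b",            # hints at unrecorded transactions
    r"\bdelete\b.*\b(email|record)s?\b",   # hints at destruction of records
    r"\bbackdate\b",                       # hints at document falsification
]

def flag_messages(messages):
    """Return (index, message) pairs matching any red-flag pattern."""
    return [
        (i, text)
        for i, text in enumerate(messages)
        if any(re.search(p, text, re.IGNORECASE) for p in RED_FLAG_PATTERNS)
    ]

sample = [
    "Q3 board pack attached for review.",
    "Let's keep this payment off the books for now.",
    "Please delete the emails about the vendor rebate.",
]
for i, msg in flag_messages(sample):
    print(f"FLAGGED message {i}: {msg}")
```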

Existing research also identifies AI’s potential uses in corporate governance. One study
argues that board appointments can be more effective when AI systems are used. 6 This is
premised on the fact that human managers and directors are biased and tend to appoint
directors who are friendly and are, therefore, unlikely to oppose them. An AI system that is
free of bias can appoint board members based on merit alone. Similarly, it has been argued
that the quality of board discussions could be enhanced when an AI system is involved in the
“discussion”. 7 For example, the AI system might offer a frank assessment of a proposal, which the other directors can then discuss without hesitation, since it came from the AI system rather than a colleague. In other words, the AI system may not only highlight issues that human directors may have omitted to consider, but may also put forth a view that the human directors were hesitant to voice, because they prefer not to rock the boat. 8

Due to its ability to process large amounts of data and pick up aberrations, AI could also help
blow the whistle on certain wrongdoing within the corporation where human employees

4
Ibid, 9-14.
5
Ibid, 17. See also, Iris H-Y Chiu and Ernest WK Lim, ‘Technology vs Ideology: How Far will Artificial Intelligence
and Distributed Ledger Technology Transform Corporate Governance and Business?’ (2021) 18 Berkeley
Business Law Journal 1, 13.
6
Isil Erel et al., ‘Research: Could Machine Learning Help Companies Select Better Board Directors’ (Harvard
Business Review, 9 April 2018) <https://hbr.org/2018/04/research-could-machine-learning-help-companies-select-better-board-directors> accessed 26 November 2021.
7
Akshaya Kamalnath, 'The Perennial Quest for Board Independence: Artificial Intelligence to the Rescue?'
(2019) 83 Alb L Rev 43.
8
Ibid, 52.



may not do so for fear of retaliation. 9 This use-case fits into the broader idea of AI systems
helping with risk-assessment and monitoring, and more effectively addressing the agency
problems present in modern corporations. 10 With all these uses and possible future
applications of AI in corporate governance, it is clear that AI holds great promise. In fact,
Siebecker argues that the use of AI in corporate governance will free up directors and officers,
allowing them to be more attentive to the core goals and values of the corporation. 11

On the other hand, the use of AI in corporate governance also presents significant risks.
Privacy and security issues become important since large amounts of data are fed to the AI
system. 12 Even if secure AI systems are developed, relying on judgements made by an AI
system has other risks. The “black box” problem, or the lack of transparency in AI decision-making, has been flagged in various contexts in which the use of AI has gained prominence. 13
This is likely to be an issue for corporate governance as well.

Further, human bias may get embedded within an AI system when it is fed biased data. For
example, if an AI system has been “trained” to make board appointments based on previous
appointments made by humans, the biases prevalent in human decisions are likely to have
been fed into the AI’s decision-making as well. 14 From a corporate governance perspective,
this is likely to generate conflicts of interest, and confer undue power on those who control
the decision-making regarding the deployment of specific AI technologies (such as managers

9
Vivienne Brand, ‘Corporate Whistleblowing, Smart Regulation, and Regtech’ (2020) 43(3) UNSW L J 801.
10
Chiu & Lim, above n 5, 31.
11
Michael R. Siebecker, 'Making Corporations More Humane through Artificial Intelligence' (2019) 45 J Corp L
95.
12
Chiara Picciau, 'The (Un)Predictable Impact of Technology on Corporate Governance' (2021) 17 Hastings Bus
LJ 67, 131.
13
Alan Dignam, ‘Artificial Intelligence: The very human dangers of dysfunctional design and autocratic
corporate governance’, Queen Mary University of London Legal Studies Research Paper No314/2019, 18;
Christopher M. Bruner, ‘Distributed Ledgers, Artificial Intelligence and the Purpose of the Corporation’ (2020)
79 Cambridge Law Journal 431, 439.
14
Martin Petrin, ‘Corporate Management in the Age of AI’ [2019] Columbia Business Law Review 965, 1005.



or controlling shareholders) to the detriment of other constituencies (such as outside
shareholders or other stakeholders). 15

There is also the more basic problem of AI making mistakes, with black-box decision-making
standing in the way of humans obtaining a full appreciation of the reasons for those mistakes. 16
In the corporate governance context, these decisions may have serious consequences which
shareholders and other stakeholders of the company may have to suffer. 17 Concomitantly,
directors could very well bear the brunt under the current directors’ liability framework for
manifestations of AI risk. 18 These concerns call for a risk-based assessment of the utility of AI
in corporate governance. 19

Due to the challenges posed by AI, international and national bodies have developed guiding
principles for the development and use of AI across sectors. 20 However, these are merely
principles rather than specific legal measures to regulate the use of AI in corporate
governance. 21 Hence, it falls upon directors to navigate AI issues, in order to steer clear of
liability and reputational fallout. Brand notes that the existing directors’ duties regime already

15
Luca Enriques and Dirk A. Zetzsche, 'Corporate Technologies and the Tech Nirvana Fallacy' (2020) 72
Hastings LJ 55, 61.
16
Dignam, above n 13, 19 – 22.
17
See John Armour and Horst Eidenmüller, ‘Self-Driving Corporations’ (2020) 10 Harvard Business Law
Review 87, 92; Assaf Hamdani, et al, ‘Technological progress and the future of the corporation’ (2018) 6
Journal of the British Academy 215, 241.
18
Enriques and Zetzsche, above n 15, 71 – 73.
19
See Kristin N. Johnson, ‘Automating the Risk of Bias’ (2019) 87 Geo Wash L Rev 1214, 1216; Sergio Alberto
Gramitto Ricci, ‘Artificial Agents in Corporate Boardrooms’ (2020) 105 Cornell L Rev 869, 873.
20
See eg, OECD, ‘Artificial Intelligence, Machine Learning and Big Data in Finance: Opportunities, Challenges and Implications for Policy Makers’ (2021) accessed 30 December 2021;
IOSCO, ‘The use of artificial intelligence and machine learning by market intermediaries and asset
managers: Final report’ (September 2021) < https://www.iosco.org/library/pubdocs/pdf/IOSCOPD684.pdf>
accessed 30 December 2021; Info-communications Media Development Authority (IMDA) and Personal
Data Protection Commission, Singapore (PDPC), ‘Model Artificial Intelligence Governance Framework’,
Second Edition (2020) < https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-
organisation/ai/sgmodelaigovframework2.pdf> accessed 30 December 2021.
21
Eleanor Hickman and Martin Petrin, ‘Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines
for Trustworthy Artificial Intelligence from a Company Law Perspective’ (Online, 2021) Eur. Bus. Organ. Law
Rev.



offers an ethical framework within which AI ethics can be navigated. 22 However, she rightly
notes a possible counter-argument to this – it may not be advisable to designate directors as
the gatekeepers of AI ethics because wider societal interests are at stake. 23 Indeed, scholars
have suggested a few regulatory approaches, including licensing requirements, sandbox regimes, and a disclosure-based approach, for the deployment of AI in corporate governance.

Against this background, we argue that, given the current stage of development of AI technologies in the corporate sector and the fact that the implications of their deployment in governance matters are yet to be fully comprehended, the disclosure-based approach is the most suitable regulatory mechanism. While the absence of any regulation would leave serious risks unaddressed, licensing regimes and sandbox mechanisms would likely be overinclusive, costly, and counterproductive.

To be sure, existing literature has raised the possibility of a disclosure-based regime to address risks in the context of AI more generally. However, that literature relates to the deployment of
AI in the business of a company. 24 For example, there is considerable focus on disclosure and
transparency in the use of AI in the financial sector and investment industry. 25 At the same
time, the discourse in relation to matters of corporate governance is limited. 26 We seek to
build upon this literature. We do so in conjunction with the companion chapter in this volume
by Möslein and Tayfun İnce. 27

22
Vivienne Brand, ‘Artificial Intelligence and Corporate Boards: Some Ethical Implications’, in Andrew Godwin,
Pey Woan Lee and Rosemary Langford (eds) Innovation, Technology and Corporate Law (Edward Elgar
Publishing, 2021).
23
Ibid, 25.
24
See e.g., Sylvia Lu, ‘Algorithmic Opacity, Private Accountability, and Corporate Social Disclosure in the Age
of Artificial Intelligence’ (2020) Vand J Ent & Tech L 99.
25
ASIFMA, ‘Enabling an Efficient Regulatory Environment for AI’ (June 2021) <
https://www.asifma.org/research/enabling-an-efficient-regulatory-environment-for-ai/> accessed 30
December 2021, at 19-21; Financial Stability Institute, ‘Humans keeping AI in check – emerging regulatory
expectations in the financial sector’ (August 2021) < https://www.bis.org/fsi/publ/insights35.pdf> accessed
30 December 2021, at 8, 14.
26
Enriques & Zetzsche, above n 15, 96-97; Picciau, above n 12, 134-135.
27
See chapter IX in this volume.



From a methodological perspective, our comparative effort rests on three key Asian common
law jurisdictions of Singapore, Hong Kong, and India. On the one hand, Singapore and Hong
Kong, as leading regional financial centres, have made headway in the use of AI in several
sectors, such as the finance industry, including through proactive efforts of their market
regulators. 28 On the other, the size of the Indian listed market (in terms of number of listed
companies) makes its study immensely relevant. We also draw parallels from the US securities
law regime to the extent necessary. Our chapter also complements that of Möslein and Tayfun İnce,
as their focus rests on the civil law jurisdictions of Germany and Turkey.

Part 2 of this chapter briefly examines the available regulatory options to address the risks
emanating from AI and corporate governance. Part 3 explores the broad structure and design
of a disclosure framework, including its objectives, the current disclosure regimes in identified
jurisdictions, the need for specific disclosures, the content and levels of disclosure, and the
audience that benefits from transparency. Part 4 concludes.

2. Regulatory options

The typical regulatory options for any issue range from hard law (enforceable rules) to soft
law (voluntary guidelines or principles to be followed) to simply doing nothing. In the context
of new technologies, sandboxes are becoming a regulatory tool of choice. Each of these
approaches has been addressed in the context of regulating AI in corporate governance.

On the one hand, we could argue that the “do nothing” or “wait and see” approach could
apply to the use of AI because its development is still nascent. The proliferation of principles
and guidelines for AI, rather than laws, seems to echo this logic. On the other hand, the risks posed by AI applications in various sectors, including corporate governance, in the near and

28
Hong Kong Monetary Authority, ‘Reshaping Banking with Artificial Intelligence’ <
https://www.hkma.gov.hk/media/eng/doc/key-functions/finanical-infrastructure/Whitepaper_on_AI.pdf>
accessed 30 December 2021; Monetary Authority of Singapore, ‘Principles to Promote Fairness, Ethics,
Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in
Singapore’s Financial Sector’ <
https://www.mas.gov.sg/~/media/MAS/News%20and%20Publications/Monographs%20and%20Informatio
n%20Papers/FEAT%20Principles%20Final.pdf> accessed 30 December 2021.



long term have prompted arguments against the “do nothing” approach. For instance, Casey
argues that we should go beyond guidelines and provide clear laws to incentivise “profit-
maximising” firms to develop AI that is compliant with the law. 29 Of course, the question then
becomes what type of regulatory route is most effective.

Dignam is of the opinion that AI manufacturers should be treated like the pharmaceutical
industry, i.e., products would be subjected to testing by a specialised body before being
licensed. 30 The specialised body in the AI context would have “a mixture of technical, ethical,
legal and economic expertise”. 31 The body would then aim to ascertain a few criteria including
whether there is embedded bias, whether developers of the relevant AI technology were a
diverse group, whether public interest is served by the product, and whether the decisions
made by the product are explainable. 32 Another criterion that the body should ascertain,
according to Dignam, is whether the law has been inserted within the AI at the design phase,
and whether compliance with the law is continuously evaluated. 33

While such an approach might have benefits, there are some pitfalls as well. For instance, the
licensing regime under such a body may stifle innovation because the licensing process cannot
be quick if a wide array of factors is to be ascertained. Further, the fast pace of innovation in
this field means that appropriate laws (for each sector) cannot be enacted in advance of the
licensing process. In other words, the model runs into the Hayekian knowledge problem, i.e.,
where the planners (government or even regulatory bodies) may not have the required
information about the activity. 34

Regulatory sandboxes solve the knowledge problem by allowing a test phase where the
regulator can learn from those engaged in the relevant sector and, during that time, ensure

29
Bryan Casey, 'Amoral Machines, or: How Roboticists Can Learn to Stop Worrying and Love the Law' (2016-
2017) 111 Nw U L Rev Online 231.
30
Dignam, above n 13, 46.
31
Ibid.
32
Ibid, 47.
33
Ibid.
34
Friedrich Hayek, ‘The Use of Knowledge in Society’ (1945) 35 American Economic Review 519.



that innovation is not stymied by uninformed regulation. 35 However, companies should
satisfy some pre-determined criteria set by the regulators before being allowed to participate
in the sandbox. 36 This type of approach has been widely used in the fintech sector, and in
some other areas like legal tech.

Drawing from the fintech experience, Bruner has suggested that developments in corporate
governance, including the use of AI, might be most effectively regulated by the sandbox
approach. 37 On somewhat similar lines, Fenwick and Vermeulen argue that the current
environment of fast-paced technological changes is not conducive to the traditional “model
in which regulatory decision-making is fact-based and the task of regulatory design can be
delegated to politicians and the traditional experts”. 38 Instead, they suggest an approach that
is “built on more flexible and inclusive processes that involve start-ups and established
companies, regulators, experts and the public”. 39 Corporate law already possesses a tool for regulating publicly traded companies that allows for flexibility and firm-specific approaches in other areas of corporate governance – disclosures on specific matters. 40

Although the disclosure regime has traditionally been justified as a measure to inform
investors about important issues and thus protect their interests, it has been argued that
various stakeholders have also relied on corporate disclosures, especially on issues of public interest like sustainability. 41 The disclosure regime may thus be suitable, with some tweaks, for the regulation of AI applications in corporate governance in the current phase, where such applications are still being developed and companies are at various stages of adopting them. It

35
Akshaya Kamalnath and Hitoishi Sarkar, ‘Regulation of Corporate Activity in the Space Sector’ (2022) Santa
Clara Law Review (forthcoming).
36
Dirk A. Zetzsche, Ross P. Buckley, Janos N. Barberis & Douglas W. Arner, 'Regulating a Revolution: From
Regulatory Sandboxes to Smart Regulation' (2017) 23 Fordham J Corp & Fin L 31.
37
Bruner, above n 13, 456.
38
Mark Fenwick & Erik P. M. Vermeulen, 'Technology and Corporate Governance: Blockchain, Crypto, and
Artificial Intelligence' (2019) 48 Tex J Bus L 1, 14.
39
Ibid.
40
Merritt B. Fox, 'Required Disclosure and Corporate Governance' (1999) 62 Law and Contemp Probs 113.
41
Jill E Fisch, 'Making Sustainability Disclosure Sustainable' (2019) 107 Geo LJ 923; Ann M. Lipton, 'Not
Everything Is about Investors: The Case for Mandatory Stakeholder Disclosure' (2020) 37 Yale J on Reg 499,
527.



will allow investors to assess risks, regulators to gain information about AI applications in
corporate governance, and the general public to learn and engage in the discussion that can
further feed into future regulation.

Such a regime will also require fewer regulatory resources than a regulatory sandbox would. Further, a sandbox approach at this point would mean that only those
corporations that meet the eligibility criteria of a particular sandbox will be allowed to use AI
systems in corporate governance. This will mean that firms outside the sandbox will not be
able to experiment, however incrementally, with AI in corporate governance. Even for firms
that do gain entry into the sandbox, as Enriques and Zetzsche rightly point out, since AI
systems will not stop “learning” once they are out of the sandbox, the assessment made
during the sandbox period will quickly become out-dated. 42 Finally, while the sandbox
approach is able to facilitate regulatory learning, it is not designed to allow stakeholders and
the general public to assess how AI is being deployed in corporate governance.

For these reasons, we argue that a disclosure-based approach is most suitable for the
incorporation of AI into corporate governance. This would provide the necessary oversight by
reducing the information asymmetry between corporate insiders who deploy AI technology
and outsiders whose interests may be impacted by the use of such technology. At the same
time, the costs emanating from a disclosure-based regulation are unlikely to exceed its
benefits, as compared with other regulatory mechanisms such as licensing or sandboxes.
Furthermore, appropriately designed disclosure requirements would not impede innovation or the deployment of novel technologies that enhance the benefits of AI in corporate governance. 43 The next section delves into the objectives of a disclosure-based approach and the existing disclosure requirements under securities laws in three Asian common law jurisdictions, and proposes a roadmap towards a more tailored approach.

42
Enriques & Zetzsche, above n 15, 96.
43
Peter Cihon, ‘Corporate Governance of Artificial Intelligence in the Public Interest’ (2021) 12 Information
275 < https://doi.org/10.3390/info12070275> accessed 30 December 2021, at 19 (stressing that any policy
instrument relating to AI must not stifle innovation).



3. Contours of a Disclosure Regime

a. Objectives of a disclosure regime

A suitable disclosure regime is considered a useful way to mitigate the risks surrounding the
deployment of AI in business as well as in the governance of companies. 44 This is especially
so given the general opacity surrounding AI, which is characterised by the “black box”
problem. 45 Existing frameworks and guidelines recommending disclosure for the use of AI in business suggest that such disclosures would ensure that “the decision-making process is explainable, transparent and fair”. 46

Such a disclosure-based regulatory strategy generates at least two benefits. First, information
regarding the use of AI in corporate governance is made available to shareholders and other
stakeholders of a company, so that they can appreciate the benefits and risks emanating from
AI tools. 47 This would enable the recipients of disclosures to “understand the material impact
of algorithms on firms’ economic profits and sustainability”. 48 It also facilitates comparability
between firms adopting AI in governance (including variations among the different systems
being deployed) and those that do not, and the share price of companies would also
incorporate the impact of such disclosures. 49

Second, a disclosure obligation imposed on companies would also have the effect of altering
the decision-making processes within companies in relation to AI. The need to go public with
the AI strategy and its implementation would incentivise the directors of companies to
develop a more robust framework for the effective conceptualisation, development, and

44
IOSCO, above n 20, 16.
45
Lu, above n 24, 99.
46
IMDA & PDPC, above n 20, 15.
47
See OECD, above n 20, 44.
48
Lu, above n 24, 128.
49
Enriques & Zetzsche, above n 15, 96.



utilisation of AI in corporate governance. 50 Management would, therefore, be required to
identify various risks and devise strategies for mitigating them. Moreover, directors, as
fiduciaries, will have to come clean on any possible conflicts of interest. Disclosures are
interwoven with the duties of trust and confidence imposed on company directors and, as
Siebecker notes, “encapsulated trust would cause corporate officers and directors to use AI
to engage more authentically with the corporate constituencies they serve rather than to use
AI to manipulate maliciously the preferences of consumers, investors, and corporate
stakeholders.” 51

The above regulatory objectives help shape the design of an optimal disclosure regime for AI
in corporate governance. With this background, we now explore whether (and to what
extent) existing legal regimes provide for such disclosure obligations.

b. Current disclosure regimes

At the outset, given the early stages of deployment of AI in corporate governance, none of the jurisdictions studied in this chapter specifically requires disclosures on matters of AI. 52 That leads to the question of whether such disclosures are required under more general disclosure principles.

Under securities law, companies are generally obligated to publicly disclose information that
is considered “material”. The concept of materiality originated in the US, where the
Supreme Court held in TSC Industries, Inc v. Northway Inc 53 that information is considered
material if there is “a substantial likelihood that the disclosure of [an] omitted fact would have
been viewed by the reasonable investor as having significantly altered the ‘total mix’ of
information made available.” 54 There is limited guidance on whether the use of AI in the

50
Siebecker, above n 11, 100.
51
Ibid, 127.
52
For the US, see Lu, above n 24, 106; Lynn M. LoPucki, ‘Algorithmic Entities’ (2018) 95 Wash U L Rev 887,
918.
53
426 U.S. 438 (1976).
54
Ibid, 449.



governance of a company can be considered material from the perspective of a “reasonable
investor” in that company. Arguably, the continuous disclosure requirements contained in Regulation S-K promulgated by the US Securities and Exchange Commission (SEC) could require disclosure under heads such as “description of business”, “risk factors”, and “management discussion and analysis (MD&A)”. But not only are there challenges in using these generic disclosure requirements to capture the impact of AI on governance; existing disclosure trends by companies on AI implications also do not inspire much confidence. 55

In view of the absence of specific disclosure requirements, the SEC has, in one instance,
utilised its general powers to rein in an investment firm that failed to disclose to its investors
its deployment of an algorithm in making investment decisions that then led to investors
suffering losses. In its action against BlueCrest Capital Management Limited, the SEC alleged
that the investment firm allocated substantial amounts of capital to “a replication algorithm
– called Rates Management Trading (‘RMT’)”, which BlueCrest failed to adequately disclose to
its investors, thereby resulting in a violation of the Securities Act of 1933 and the Investment
Advisers Act of 1940. 56 The SEC entered into a settlement by which BlueCrest was to pay
disgorgement, prejudgment interest, and a civil penalty, together totalling USD 170 million. 57
Although this case relates to the use of AI in investment business (an activity regulated by the
SEC) and not corporate governance more generally, the BlueCrest settlement demonstrates
the SEC’s willingness to stretch the limits of general disclosure norms to reach algorithmic activity
even in the absence of specific disclosure requirements. 58

In consonance with the general principle of “materiality” discussed above, the Asian common
law jurisdictions of Singapore, Hong Kong, and India prescribe detailed disclosure
requirements for publicly listed companies. In Singapore, rule 703 of the Listing Manual of the

55
Lu, above n 24, 128-153.
56
Securities and Exchange Commission, In the Matter of BlueCrest Capital Management Limited (8 December
2020) < https://www.sec.gov/litigation/admin/2020/33-10896.pdf> accessed 3 January 2022.
57
Ibid, 14.
58
Debevoise & Plimpton, ‘Regulatory Risks for Not Disclosing Trading Algorithms - Five Takeaways from the
SEC’s $170 million Settlement with BlueCrest Capital’, Debevoise Update (12 January 2021) <
https://www.debevoise.com/insights/publications/2021/01/regulatory-risks-for-not-disclosing-trading>
accessed 3 January 2022, at 4.



Singapore Exchange (SGX) imposes a continuous disclosure requirement on issuers to announce information that is (i) necessary to avoid the establishment of a false market in the securities of the issuer, or (ii) likely to materially affect the price or value of the securities concerned. 59 As rule
703 does not specifically relate to AI, the deployment of digital technologies in corporate
governance becomes a disclosable item only if it satisfies either of the conditions in the rule.
Hence, much depends upon whether the information regarding AI is likely to have the
required impact on the market. Such a generic approach not only lacks clarity but also leaves the need for disclosure to a case-by-case analysis.

Similarly, in Hong Kong, chapter 13 of the Consolidated Mainboard Listing Rules of the Hong
Kong Stock Exchange (HKEX) sets out the circumstances in which companies are required to
make public disclosures of information. As a general matter, the HKEX stipulates that the
“continuing obligations … are primarily to ensure the maintenance of a fair and orderly market
in securities and that all users of the market have simultaneous access to the same
information.” 60 Although chapter 13 contains a wide array of items that necessitate disclosure
by companies, there are no specific transparency obligations regarding AI, thereby leaving its
disclosure to interpretation as a general matter. This is similar to the situation in Singapore.

Moving to India, the SEBI (Listing Obligations and Disclosure Requirements) Regulations, 2015
(the “LODR Regulations”) require listed companies to disclose events or information that are,
in the opinion of the board of directors, material in nature. Consistent with the other Asian
common law jurisdictions, the “materiality” test becomes crucial. Specifically, in a generic
catch-all provision, the LODR Regulations require the board of an Indian listed company to
make a disclosure regarding the “emergence of new technologies” and “brief details thereof”, if
this becomes “necessary to enable the holders of securities of the listed entity to appraise its
position and to avoid the establishment of a false market”. 61 Even here, it is unclear whether
the reference to “technologies” refers to those that affect the business of the company or its

59
See also, Singapore Exchange, Appendix 7.1 Corporate Disclosure Policy; Singapore Exchange, Practice Note
7.1 Continuing Disclosure.
60
Hong Kong Stock Exchange, Consolidated Mainboard Listing Rules, r 13.03.
61
SEBI LODR Regulations, Schedule III, Part C.



governance. Hence, despite a broad interpretation rendered by courts and tribunals in India
to the concept of “materiality” under securities regulation, 62 the prevailing ambivalence
regarding disclosure of AI in governance gives rise to a great deal of uncertainty.

In all, in the absence of specific disclosure requirements regarding AI, securities regulators in
Singapore, Hong Kong, and India seem hard-pressed to require companies to be transparent
regarding the use of AI in their governance. At most, they can exercise their general powers
under securities law, relying on the “materiality” parameter, to pursue actions, 63 but those will have to be determined on a case-by-case basis, thereby compounding the uncertainty.

In addition to materiality-related reporting that focuses on the financial performance of a company, jurisdictions have also introduced sustainability reporting based on environmental,
social, and governance (ESG) factors. In Singapore, the SGX has issued a Sustainability
Reporting Guide that requires companies to report on key ESG matters. 64 This does not
specifically cover the implications of the use of digital technology in the governance of a
company, and even the broader requirements under the Guide are insufficient to capture the
use of AI in the required detail.

In Hong Kong too, the Listing Rules of the HKEX require companies to report on ESG matters
at two levels: (i) certain mandatory disclosure requirements; and (ii) other ‘comply-or-explain’
provisions. 65 Under the mandatory disclosures, companies’ statements must describe the
board’s oversight of ESG issues, the board’s management approach and strategy, and the
board’s review of progress made against ESG-related issues. 66 While the Listing Rules contain
various specific disclosure requirements, they are largely focused on environmental and social
matters, and not on governance matters, let alone the use of technology to aid the
governance of a company.

62
See, DLF Limited v. Securities and Exchange Board of India, 2015 SCC OnLine SAT 54; Electrosteel Steels
Limited v. Securities and Exchange Board of India, 2019 SCC OnLine SAT.
63
This would be similar to US SEC’s efforts in the BlueCrest case. See nn 56-58 above and accompanying text.
64
SGX Listing Rules, Practice Note 7.6.
65
HKEX Mainboard Listing Rules, LR 13.91.
66
HKEX Mainboard Listing Rules, Appendix 27, Part B, para. 13.



Finally, India too has taken giant strides in enhancing Business Responsibility and Sustainability Reporting (BRSR), which is now mandatory for the largest 1,000 listed companies by market
capitalisation. The LODR Regulations prescribe the BRSR requirements along the lines of the
National Guidelines for Responsible Business Conduct (NGRBCs) set out by the Government
of India. While fairly prescriptive in nature, the NGRBCs are focused largely on matters
relevant to outside stakeholders, such as environmental issues, labour matters and human
rights concerns, with no coverage pertaining to internal governance through the use of AI.

In all, while there has been a significant push in all three Asian common law jurisdictions
studied herein to enhance sustainability reporting requirements, in addition to financial
reporting, the implications of technological advancements, including the use of AI, in the
governance of a company do not find adequate coverage. Hence, looking at Singapore, Hong Kong and India, as well as parallel developments in the US, neither financial (materiality-based) reporting nor sustainability reporting sufficiently caters to the transparency requirements outlined earlier. This leads us to consider whether specific disclosure mandates ought to be put in place in light of the increasing wave of digitisation that is likely to subsume the field of corporate governance.

c. A case for specific disclosures relating to AI

A brief examination of the securities law regimes in the three Asian jurisdictions of Singapore,
Hong Kong, and India, along with a comparative analysis of US securities law, indicates that
current regulation has failed to address AI through a disclosure-based approach despite the
deployment (actual and potential) of AI in corporate governance and the associated risks.
While general principles such as “materiality” and ESG reporting can be extended to capture AI-related risks within the current framework, that task can be rather daunting for
the securities regulator. Much depends upon matters of interpretation and the individual
facts and circumstances of each case, thereby causing tremendous uncertainty to both
companies as well as their shareholders and other stakeholders. Hence, given the increasing
use of AI in corporate governance, there is a strong case for the introduction of specific
disclosure norms pertaining to AI. A disclosure-based approach is particularly important at



this relatively early stage in the deployment of AI in corporate governance, and before it
becomes mature enough for other forms of regulatory monitoring to take hold.

Such a trajectory is not without precedent. Given the rapid developments surrounding
the impact of corporations on climate change, it has taken less than a decade for securities
disclosures to transition from generic norms to very specific climate-related disclosures. 67
Such specific climate disclosures are required not only by national governments (and their
securities regulators), but also international bodies seeking harmonisation of disclosure
practices across the world. 68 Singapore, 69 Hong Kong, 70 and India 71 too have adopted specific climate-related disclosure requirements, not only as part of materiality reporting (from the perspective of financial performance and risk) but also as an integral component of sustainability reporting. Similar to such risk-based reporting on climate issues, the
introduction of disclosures relating to the use of AI, with its attendant financial and social
risks, is apt. 72

d. Content and levels of disclosure

Companies should be encouraged to disclose the rationale for their decision to adopt AI
technology as part of their governance process. In addition, they must disclose which specific technology they use, the specific applications for which AI is deployed, and the general trends and experiences regarding its use. 73 The essential idea is to apprise shareholders and other

67
See Fisch, above 41, 937; Sarah E Light, 'The Law of the Corporation as Environmental Law' (2019) 71 Stan L
Rev 137, 165-171
68
One notable effort in this regard is the release in 2017 of the recommendations by the Taskforce on
Climate-Related Financial Disclosures (TCFD) on financial risk disclosure of climate-related matters.
69
Ernest Lim, ‘Directors’ Liability and Climate Risk: White Paper on Singapore’, Commonwealth Climate and
Law Initiative (April 2021), 17-18.
70
Ernest Lim, ‘Directors’ Liability and Climate Risk: White Paper on Hong Kong’, Commonwealth Climate and
Law Initiative (December 2021), 18-19.
71
Umakanth Varottil, ‘Directors’ Liability and Climate Risk: White Paper on India’, Commonwealth Climate
and Law Initiative (October 2021), 35-41.
72
Lu, above n 24, 106.
73
See, IOSCO, above n 20, 20.



stakeholders of the “role and extent that AI plays in the decision-making process” of a
company. 74 Given that disclosures form an integral part of the risk strategy of companies,
they must include a detailed treatment of the possible risks emanating from the use of AI, the
company’s strategies to mitigate such risks and the plan by which the board proposes to
implement the strategy. 75 As Enriques and Zetzsche articulate:

Existing periodic disclosures on corporate governance arrangements could thus be supplemented with additional explanations on, for instance, whether the issuer has a
tech committee (or whether one of the other existing committees have CorpTech
oversight functions), whether any of the board members are tech experts, how
compensation for the coders is determined, how the board oversees code design,
development, and upgrading, whether the board regularly engages in the review of
existing IT structure, and so on. 76
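To give a concrete flavour of what such supplemented disclosures might look like, the sketch below renders the items listed above as a hypothetical machine-readable record; every field name and value is our own illustration, not a prescribed schema or regulatory format.

```python
# Hypothetical shape of a periodic AI-in-governance disclosure record.
# Field names and values are illustrative inventions, not a mandated schema.
ai_governance_disclosure = {
    "rationale_for_adoption": "Improve board information flow and risk monitoring.",
    "systems_in_use": [
        {"function": "board portal analytics", "deployed_since": "2021-06"},
        {"function": "automated audit red-flagging", "deployed_since": "2021-09"},
    ],
    "board_oversight": {
        "tech_committee": True,
        "board_members_with_tech_expertise": 2,
        "it_structure_review_frequency": "annual",
    },
    "identified_risks": ["embedded bias in training data", "black-box opacity"],
    "mitigation_strategies": [
        "human review of AI recommendations",
        "periodic third-party model audit",
    ],
}
```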

The presentation of information should be such that it is accessible to a diverse range of investors and other stakeholders, irrespective of their levels of sophistication. 77 This will
enable them to appreciate the consequences of the use of AI in governance, and make
investment, governance, or other decisions accordingly. 78

As to the levels of disclosure, it is necessary to bear in mind the target audience. For
instance, there could be differences in the disclosure levels for retail investors as compared
to institutional investors. 79 Moreover, regulators may require significantly more detailed
information than investors or customers. 80

74
IMDA & PDPC, above n 20, 53.
75
In a generic sense, such disclosures may already be encompassed under “risk factors” under current
disclosure norms. See Iris H-Y Chiu and Ernest WK Lim, ‘Managing Corporations’ Risk in Adopting Artificial
Intelligence: A Corporate Responsibility Paradigm’ (2021) 20 Wash U Global Stud L Rev 347, 387.
76
Enriques & Zetzsche, above n 15, 97.
77
See, IOSCO, above n 20, 20.
78
Ibid.
79
See OECD, above n 20, 45.
80
ASIFMA, above n 25, 21.



Finally, the use of a proportionality criterion would help in moderating the extent of
information to be disseminated, in terms of both quality and quantity. 81 For instance,
disclosure of sensitive or competitive information regarding the use of AI would likely be
counterproductive. Accordingly, disclosure norms would do well to carve out appropriate
exceptions where the situation so warrants in light of the proportionality standard. As one
study notes: “Any regulatory guidance should allow firms to make a risk assessment and
determine the appropriate level, rather than mandating a single standard.” 82

e. Audience for AI disclosures

Related closely to the content and level of disclosures is the question of who precisely
constitutes the audience for disclosure regarding the application of AI in the governance of
corporations. At the outset, since the use of AI could trigger the “materiality” threshold for
continuing disclosures as well as risk-based assessments, shareholders of a company would
constitute an important constituency that benefits from disclosure. This is particularly so because such information may be relevant in the determination of the stock price of the
company. 83 Other internal stakeholders such as directors of the company as well as
employees (who may be invested in the stock or hold options) would also benefit from
such AI disclosures.

Given that AI raises issues of sustainability and broader social responsibility of corporations,
it is also necessary to consider the interests of external stakeholders such as consumers,
creditors, the community, the environment, and public interest generally. 84 Including AI
information under sustainability reporting significantly expands the audience for such
disclosures. As one scholar notes, a “sustainability standard would require firms to mitigate
problems surrounding algorithmic opacity that implies far-reaching consequences for the
larger public”, which therefore requires the disclosure norms to motivate “corporate long-

81
See, IOSCO, above n 20, 16.
82
ASIFMA, above n 25, 21.
83
Above n 49 and accompanying text.
84
Lu, above n 24, 107.



term thinking and encouragement of performance in the wider context of sustainability”. 85
Hence, any disclosure regime relating to AI in corporate governance must cater to the
diversity of the audience who would benefit from the relevant information.

Last but not least, the relevant market regulator in each jurisdiction will also be one of the
intended audiences of the disclosure, so that the disclosure requirements can also allow for
regulatory learning.

f. Voluntary vs. mandatory disclosures

There are several regulatory options when it comes to encouraging or requiring companies to
disclose how they have incorporated AI into their governance practices. They range from
voluntary disclosures to mandatory disclosures, with the intermediate option of disclosure on
a “comply-or-explain” basis. The simplest option would be to induce companies to voluntarily
make disclosures, and to let the market determine the acceptability of each company’s
conduct. 86 However, we do not subscribe to this approach because managements may not
be sufficiently motivated to make accurate and complete disclosures on a voluntary basis, as
current trends regarding AI disclosures in the US have indicated. 87 At the other end of the
spectrum, scholars have called for mandatory disclosures on the use of AI in corporate
governance arrangements of companies. 88 However, mandating AI disclosures in governance
at this stage could be costly and counterproductive. This is because the deployment of AI in
governance is still at an evolutionary stage, with technologies developing rapidly. Moreover,
not all companies use AI in governance, and even those that do incorporate it to varying extents in their governance arrangements.

Hence, at this stage, it would be prudent to introduce disclosure norms on a “comply-or-explain” basis, with sufficient flexibility for companies to determine the content and extent of

85
Ibid, 133.
86
Hickman & Petrin, above n 21, 7-8.
87
Lu, above n 24, 128-153.
88
Enriques & Zetzsche, above n 15, 61; Lu, above n 24, 128.



disclosures. As Picciau notes, the “obligations need not be burdensome for disclosing
companies, but could be crafted under a comply-or-explain approach whereby only
companies that have made a significant investment in technology for governance purposes
would need to disclose”. 89 This would ensure that the regulatory response is proportionate
to the risk from AI, and that it would not curb technological innovations. However, we view
the “comply-or-explain” approach only as a transitional arrangement, with the intention that
the disclosure requirements eventually be made mandatory and specific once the use of AI in
governance becomes more widespread, and more details regarding the benefits and risks
become known. The comply-or-explain approach will also help regulators learn more about
the specific use cases and risks involved. This learning can then feed into policy discussions
that guide the framing of more specific mandatory disclosures in future.
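The decision logic of the transitional regime proposed here can be sketched in stylised form as follows; the function, the required fields, and the compliance labels are hypothetical illustrations rather than a proposed legal test.

```python
# Stylised sketch of a comply-or-explain rule for AI in governance.
# The required fields and labels are hypothetical, for illustration only.
from typing import Optional

REQUIRED_FIELDS = {"systems_in_use", "identified_risks", "mitigation_strategies"}

def comply_or_explain(uses_ai_in_governance: bool,
                      disclosure: Optional[dict],
                      explanation: Optional[str]) -> str:
    """Classify a company's filing under the transitional regime."""
    if uses_ai_in_governance:
        # Companies that use AI in governance must file a disclosure
        # covering all required fields.
        if disclosure and REQUIRED_FIELDS <= disclosure.keys():
            return "compliant: disclosure filed"
        return "non-compliant: AI in use but disclosure incomplete"
    # Companies that do not use AI simply explain that fact.
    if explanation:
        return "compliant: explanation filed"
    return "non-compliant: neither disclosure nor explanation filed"

print(comply_or_explain(
    True,
    {"systems_in_use": ["board portal analytics"],
     "identified_risks": ["embedded bias"],
     "mitigation_strategies": ["human review"]},
    None,
))  # compliant: disclosure filed
print(comply_or_explain(False, None, "No AI is currently used in governance."))
# compliant: explanation filed
```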

4. Conclusion

Corporate governance is ripe for disruption by AI, and it is crucial that the regulatory
landscape strikes the right balance to allow for this disruption, with minimal costs. This
chapter, after considering various regulatory tools, has proposed that a disclosure-based
approach is most suitable for the regulation of AI in corporate governance. The approach
provides oversight by reducing the information asymmetry between corporate insiders who
deploy AI technology and outsiders whose interests may be impacted by the use of such
technology. At the same time, disclosure requirements are not so onerous as to stymie
innovative uses of AI in corporate governance. To lay out the contours of an appropriate
disclosure regime, we have considered the existing disclosure regimes in three Asian
jurisdictions – Singapore, Hong Kong, and India – alongside that of the US, to determine
whether the use of AI in corporate governance is already captured under these regimes. Finding that the current disclosure regimes in these jurisdictions are too vague a basis for regulating AI in corporate governance, we instead propose that a
transitional comply-or-explain approach should be introduced and that this should later be

89
Picciau, above n 12, 134.



followed by a more specific and mandatory disclosure regime. The transitional comply-or-
explain period will also allow for regulatory learning.

*****

