AI Governance: A Consolidated Reference
EU AI Act — European Commission Draft
SEPTEMBER 2023
Table of Contents
NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) . . . 189
Inquiries
info@onetrust.com
Support
support@onetrust.com
Web
www.onetrust.com
DISCLAIMER:
No part of this document may be reproduced in any form without the written permission of the copyright owner.
The contents of this document are subject to revision without notice due to continued progress in methodology,
design, and manufacturing. OneTrust LLC shall have no liability for any error or damage of any kind resulting from
the use of this document. OneTrust products, content and materials are for informational purposes only and not
for the purpose of providing legal advice. You should contact your attorney to obtain advice with respect to any
particular issue. OneTrust materials do not guarantee compliance with applicable laws and regulations.
Copyright © 2023 OneTrust LLC. All rights reserved. Proprietary & Confidential AI GOVERNANCE: A CONSOLIDATED REFERENCE | 3
Trust Intelligence Platform: Visibility. Action. Automation.
(Privacy & Data Governance, GRC & Security Assurance, Ethics & Compliance, ESG & Sustainability)

Establish a unified US privacy program: protect privacy and ensure US compliance across the business.

Protect consumer rights: collect consent, preferences, and first-party data and activate data across the MarTech stack based on individual choice.

Respond to employee privacy requests: fully automate employee rights requests like access, deletion, and broader do-not-sell requests.

Conduct privacy risk assessments: embed privacy by design into your business data strategy to manage risk at scale.
EUROPEAN COMMISSION
Brussels, 21.4.2021
COM(2021) 206 final
2021/0106 (COD)
1. CONTEXT OF THE PROPOSAL

1.1. Reasons for and objectives of the proposal

… and agriculture. However, the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or the society. In light of the speed of …

… policy options on how to achieve the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology. This proposal aims to implement the second …

… who were largely supportive of the regulatory intervention to address the challenges and concerns raised by the increasing use of AI.

The European Parliament has also undertaken a considerable amount of work in the area of AI. In October 2020, it adopted a number of …

The proposal also responds to … European Parliament in full respect of proportionality, subsidiarity and better law making principles.

… approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or …

The proposal sets harmonised rules for the development, placement on the market and use of AI …
… quality data, documentation and traceability, transparency, human oversight, accuracy and robustness, are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI and that are not covered by other existing legal frameworks. Harmonised standards and supporting guidance and compliance tools will assist providers and users in complying with the requirements laid down by the proposal and minimise their costs. The costs incurred by operators are proportionate to the objectives achieved and the economic and reputational benefits that operators can expect from this proposal.

2.4. Choice of the instrument

The choice of a regulation as a legal instrument is justified by the need for a uniform application of the new rules, such as the definition of AI, the prohibition of certain …

… Article 288 TFEU, will reduce legal fragmentation and facilitate the development of a single market for lawful, safe and trustworthy AI systems. It will do so, in particular, by introducing a harmonised set of core requirements with regard to AI systems classified as high-risk and obligations for providers and users of those systems, improving the protection of fundamental rights and providing legal certainty for operators and consumers alike.

At the same time, the provisions of the regulation are not overly prescriptive and leave room for different levels of Member State action for elements that do not undermine the objectives of the initiative, in particular the internal organisation of the market surveillance system and the uptake of measures to foster innovation.

3. RESULTS OF EX-POST EVALUATIONS, STAKEHOLDER CONSULTATIONS AND IMPACT ASSESSMENTS

3.1. Stakeholder consultation

This proposal is the result of extensive consultation with all major stakeholders, in which the general principles and minimum standards for consultation of interested parties by the Commission were applied. An online public consultation was launched on 19 February 2020 along with the publication of the White Paper on Artificial Intelligence and ran until 14 June 2020. The objective of that consultation was to collect views and opinions on the White Paper. It targeted all interested stakeholders from the public and private sectors, including governments, local authorities, commercial and non-commercial organisations, social partners, experts, academics and citizens. After analysing all the responses received, the Commission published a …

… received, of which 352 were from companies or business organisations/associations, 406 from individuals (92% individuals from the EU), 152 on behalf of academic/research institutions, and 73 from public authorities. Civil society's voices were represented by 160 respondents (among which 9 consumers' organisations, 129 non-governmental organisations and 22 trade unions); 72 respondents contributed as 'others'. Of the 352 business and industry representatives, 222 were companies and business representatives, 41.5% of which were micro, small and medium-sized enterprises. The rest were business associations. Overall, 84% of business and industry replies came from the EU-27. Depending on the question, between 81 and 598 of the respondents used the free text option to insert comments. Over 450 position papers were submitted through the EU Survey website, either in addition to questionnaire answers (over 400) or as stand-alone …
- Option 3+: Horizontal EU legislative instrument following a proportionate risk-based approach …

The preferred option was considered suitable to address in the most effective way the objectives of this …

… the single market. As a result of higher demand due to higher trust, more available offers due to legal …

… at approximately EUR 5 000 to EUR 8 000 per year. Verification costs could amount to another EUR …
… field.

… specific provisions of the proposal

5.2.1. SCOPE AND DEFINITIONS (TITLE I)

Title I defines the subject matter of the regulation and the scope of application of the new rules that cover the placing on the market, putting into service and use of AI systems. It also sets out the definitions used throughout the instrument. The definition of AI system in the legal framework aims to be as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI. In order to provide the needed legal certainty, Title I is complemented by Annex I, which contains a detailed list of approaches and techniques for the development of AI to be adapted by the Commission in line with new …

5.2.2. PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES (TITLE II)

Title II establishes a list of prohibited AI. The regulation follows a risk-based approach, differentiating between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk. The list of prohibited practices in Title II comprises all those AI systems whose use is considered unacceptable as contravening Union values, for instance by violating fundamental rights. The prohibitions cover practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children …

… adults that might be facilitated by AI systems could be covered by the existing data protection, consumer protection and digital service legislation that guarantee that natural persons are properly informed and have free choice not to be subject to profiling or other practices that might affect their behaviour. The proposal also prohibits AI-based social scoring for general purposes done by public authorities. Finally, the use of 'real time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply.

5.2.3. HIGH-RISK AI SYSTEMS (TITLE III)

Title III contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons. In line …

The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.

Chapter 1 of Title III sets the classification rules and identifies two main categories of high-risk AI systems:

- AI systems intended to be used as safety component of products that are subject to third party ex-ante conformity assessment;

- other stand-alone AI systems with mainly fundamental rights implications that are explicitly listed in Annex III.
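The three-tier, risk-based structure described above can be summarised as a simple decision procedure. The sketch below is illustrative only and is not text from the proposal: the enum values and boolean parameters are invented placeholders for the substantive legal tests that Titles II and III themselves define.

```python
from enum import Enum

class RiskTier(Enum):
    """Toy mirror of the proposal's risk tiers (illustrative labels)."""
    UNACCEPTABLE = "prohibited under Title II"
    HIGH = "subject to Title III requirements"
    LOW_OR_MINIMAL = "largely outside the mandatory regime"

def classify(is_prohibited_practice: bool,
             is_safety_component_with_ex_ante_assessment: bool,
             is_listed_in_annex_iii: bool) -> RiskTier:
    """Hypothetical classifier; the inputs stand in for legal tests
    (e.g. subliminal manipulation, social scoring) defined in the Act."""
    if is_prohibited_practice:
        # Title II: uses contravening Union values are banned outright.
        return RiskTier.UNACCEPTABLE
    # Title III, Chapter 1: high-risk either as a safety component of a
    # product subject to third-party ex-ante conformity assessment, or
    # as a stand-alone system explicitly listed in Annex III.
    if is_safety_component_with_ex_ante_assessment or is_listed_in_annex_iii:
        return RiskTier.HIGH
    return RiskTier.LOW_OR_MINIMAL
```

For example, a hypothetical Annex III-listed system that is not a prohibited practice would land in the HIGH tier.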
… purpose, they pose a high risk of harm to the health and safety or the …

… operation of critical infrastructure, it is appropriate to classify as high- …

(36) AI systems used in employment, workers management and access to …

… sexual orientation. AI systems used to monitor the performance and …
… or put into service shall be developed …

… USERS OF HIGH-RISK …

… obligations referred to in Article 51; …

… be documented in a systematic and …

… be adopted in accordance with the examination procedure referred to in Article 74(2).

… concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation.

Article 37

Challenge to the competence of notified bodies

Article 38

CHAPTER 5

… referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific …

1. Taking into account their intended purpose, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).

2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council and the references of which have been published in the Official Journal of the European Union shall …

… in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:

(a) the conformity assessment procedure based on internal control referred to in Annex VI;

(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.

… follow the conformity assessment procedure set out in Annex VII.

For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.

2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control …

3. For high-risk AI systems, to which legal acts listed in Annex II, section A, apply, the provider shall follow the relevant conformity assessment as required under those legal acts. The requirements set out in Chapter 2 of this Title shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.

For the purpose of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Chapter 2 of this Title, provided that the compliance of those notified bodies …
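The choice of conformity assessment route can be condensed into a small decision function. A hedged sketch, not legal advice: the function name and parameters are invented, and the legal nuances (Annex II section A acts, the notified-body substitution for law enforcement uses) are reduced to simple inputs.

```python
def conformity_route(annex_iii_point: int,
                     applied_harmonised_standards: bool) -> str:
    """Illustrative mapping of a stand-alone high-risk AI system to its
    assessment route under the procedure sketched in the excerpt.

    annex_iii_point: which point of Annex III the system falls under
    (the excerpt distinguishes point 1 from points 2-8).
    applied_harmonised_standards: whether the provider applied the
    harmonised standards (Art. 40) or common specifications (Art. 41).
    """
    if annex_iii_point == 1:
        if applied_harmonised_standards:
            # Provider may choose: internal control (Annex VI), or
            # quality-management-system plus technical-documentation
            # assessment with a notified body (Annex VII).
            return "Annex VI or Annex VII (provider's choice)"
        # Standards not applied (or unavailable): Annex VII route.
        return "Annex VII"
    if 2 <= annex_iii_point <= 8:
        # Points 2 to 8 of Annex III: internal control procedure.
        return "Annex VI (internal control)"
    raise ValueError("not a stand-alone high-risk system under Annex III")
```

This captures only the branching logic visible in the excerpt; products under Annex II section A instead follow the sectoral legislation's own assessment, with Chapter 2 requirements folded in.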
TITLE IV

TRANSPARENCY …

… biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.

… MEASURES IN SUPPORT OF …

… under the supervisory remit of other national authorities or competent authorities providing or supporting …

… raising activities about the application of this Regulation tailored to the needs of the small-scale providers and users;

(c) where appropriate, establish a …

Article 56

Establishment of the European Artificial Intelligence Board

Article 57

Structure of the Board

1. The Board shall be composed of the national supervisory authorities, who …

… provide administrative and analytical support for the activities of the Board pursuant to this Regulation.

4. The Board may invite external experts and observers to attend its …

… best practices among Member States;

(b) contribute to uniform administrative practices in the Member States, including for the functioning of regulatory sandboxes referred to in Article 53;

(c) issue opinions, recommendations or written contributions on matters related to the implementation of this Regulation, in particular

(i) on technical specifications or existing standards regarding the …

… AUTHORITIES

Article 59

Designation of national competent authorities

1. National competent authorities shall be established or designated by each Member State for the purpose of ensuring the application and implementation of this Regulation. National competent authorities shall be organised so as to safeguard the objectivity and impartiality of their activities and tasks.

… national competent authorities are provided with adequate financial and human resources to fulfil their tasks under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements.

… including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators.

8. When Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data …
… STAND-ALONE HIGH-RISK AI SYSTEMS

Article 60

EU database for stand-alone high-risk AI systems

1. The Commission shall, in collaboration with the Member States, set up and maintain a EU database containing information referred to in paragraph 2 concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51.

2. The data listed in Annex VIII shall be entered into the EU database by the providers. The Commission shall provide them with technical and administrative support.

3. Information contained in the EU database shall be accessible to the public.

… responsible for registering the system and have the legal authority to represent the provider.

5. The Commission shall be the controller of the EU database. It shall also ensure to providers adequate technical and administrative support.

TITLE VIII

POST-MARKET MONITORING, INFORMATION SHARING, MARKET SURVEILLANCE

CHAPTER 1

POST-MARKET MONITORING

Article 61

Post-market monitoring by …

… proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.

2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.

3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market …

… under that legislation, the elements described in paragraphs 1, 2 and 3 shall be integrated into that system and plan as appropriate.

The first subparagraph shall also apply to high-risk AI systems referred to in point 5(b) of Annex III placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU.

CHAPTER 2

SHARING OF INFORMATION ON INCIDENTS AND MALFUNCTIONING

Article 62

Reporting of serious incidents and of malfunctioning

1. Providers of high-risk AI systems …
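The post-market monitoring duty in Article 61 (collect, document, and analyse performance data across the system's lifetime, then evaluate continuous compliance) can be pictured as a small logging structure. This is an invented sketch: the Regulation prescribes no metric, so the "performance score" and "declared baseline" notions here are purely illustrative.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class PostMarketMonitor:
    """Toy post-market monitoring log in the spirit of Article 61.

    declared_baseline: a hypothetical performance level the provider
    documented at conformity assessment time.
    """
    declared_baseline: float
    observations: list = field(default_factory=list)

    def record(self, performance_score: float, source: str) -> None:
        # "Actively and systematically collect, document and analyse
        # relevant data provided by users or collected through other
        # sources" - each entry keeps its provenance.
        self.observations.append((source, performance_score))

    def still_compliant(self) -> bool:
        # Evaluate "continuous compliance": here, naively, whether mean
        # observed performance has stayed at or above the baseline.
        if not self.observations:
            return True
        return mean(score for _, score in self.observations) >= self.declared_baseline
```

A drift below the documented baseline would, in this sketch, be the trigger for the provider to revisit the risk management and corrective-action obligations elsewhere in the Regulation.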
… that might be relevant for the high-risk AI systems referred to in Annex III.

Article 64

Access to data and documentation

1. Access to data and documentation in the context of their activities, the market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider, including through application programming interfaces ('API') or other appropriate technical means and tools enabling remote access.

2. Where necessary to assess the …

… shall have the power to request and access any documentation created or maintained under this Regulation when access to that documentation is necessary for the fulfilment of the competences under their mandate within the limits of their jurisdiction.

4. By 3 months after the entering into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available on the website of the national supervisory authority. Member States …

… testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within reasonable time following the request. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request.

6. Any information and documentation obtained by the national public authorities or bodies referred to in paragraph 3 pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.

Article 65

… authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).

Compliant AI systems which …

… surveillance authority of the Member State referred to in paragraph 1.

… authority of a Member State makes one of the following findings, it shall …

CODES OF CONDUCT
… the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Article 85(2)], unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned.

The requirements laid down in this Regulation shall be taken into account, where applicable, in the evaluation …

Evaluation and review

1. The Commission shall assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation.

2. By [three years after the date of application of this Regulation referred to in Article 85(2)] and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made …

4. Within [three years after the date of application of this Regulation referred to in Article 85(2)] and every four years thereafter, the Commission shall evaluate the impact and effectiveness of codes of conduct to foster the application of the requirements set out in Title III, Chapter 2 and possibly other additional requirements for AI systems other than high-risk AI systems.

5. For the purpose of paragraphs 1 to 4 the Board, the Member States and national competent authorities …

Article 85

Entry into force and application

1. This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.

2. This Regulation shall apply from [24 months following the entering into force of the Regulation].

3. By way of derogation from …
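The two dates in Article 85 (entry into force on the twentieth day after publication, application [24] months later) are straightforward date arithmetic. A sketch under stated assumptions: the publication date below is hypothetical, the bracketed 24-month figure was still a placeholder in the draft, and the month addition is a naive same-day calculation.

```python
from datetime import date, timedelta

def entry_into_force(publication: date) -> date:
    """Twentieth day following publication in the Official Journal
    (Article 85(1))."""
    return publication + timedelta(days=20)

def date_of_application(publication: date, months: int = 24) -> date:
    """Application date: [24 months] following entry into force
    (Article 85(2)). Naive month arithmetic: same day-of-month,
    months added, so end-of-month edge cases are not handled."""
    eif = entry_into_force(publication)
    years_over, month_index = divmod(eif.month - 1 + months, 12)
    return eif.replace(year=eif.year + years_over, month=month_index + 1)

# With a hypothetical publication on 12 July 2024:
# entry_into_force(date(2024, 7, 12)) -> date(2024, 8, 1)
# date_of_application(date(2024, 7, 12)) -> date(2026, 8, 1)
```

Note the interaction with Article 83: systems placed on the market before [12 months after the date of application] may fall outside the new rules, so the application date anchors that grandfathering window too.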
For the European Parliament
The President

LEGISLATIVE FINANCIAL STATEMENT

1. FRAMEWORK OF THE PROPOSAL/INITIATIVE

1.1. Title of the proposal/initiative

1.5.3. Lessons learned from similar …

… gains, legal certainty, greater effectiveness or complementarities). For the purposes of this point 'added value of Union involvement' is the value resulting from Union intervention which is additional to the value that would have been otherwise created by Member States alone …

… the short or long term including a detailed timeline for roll-out of the implementation of the initiative

2.1. Monitoring and reporting rules

… MEASURES … mechanism(s), the payment modalities and the control strategy proposed

2.2.2. Information concerning the risks identified and the internal control system(s) set up to mitigate them

2.2.3. Estimation and justification of the cost-effectiveness of the controls (ratio of "control costs ÷ value of …

3.2. Estimated financial impact of the proposal on appropriations

… operational appropriations

3.2.3. Summary of estimated impact on administrative appropriations

3.2.4. Compatibility with the current multiannual financial framework

3.2.5. Third-party contributions

3.3. Estimated impact on revenue
… referred to in Article 3, point 1:

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, …

1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repealed by the Machinery Regulation];

2. Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1);

3. Directive 2013/53/EU of the …

… Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);

6. Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62);

… Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);

10. Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);

11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/ …
4.1. In addition to the application referred to in point 3, an application with a notified body of their choice shall be lodged by the provider for the assessment of the technical documentation relating to the AI system which the provider intends to place on the market or put into service and which is covered by the quality management system referred to under point 3.

4.2. The application shall include: …

4.4. In examining the technical documentation, the notified body may require that the provider supplies further evidence or carries out further tests so as to enable a proper assessment of conformity of the AI system with the requirements set out in Title III, Chapter 2. Whenever the notified body is not satisfied with the tests carried out by the provider, the notified body shall directly carry out adequate tests, as appropriate.

… documentation assessment certificate shall be issued by the notified body. The certificate shall indicate the name and address of the provider, the conclusions of the examination, the conditions (if any) for its validity and the data necessary for the identification of the AI system. The certificate and its annexes shall contain all relevant information to allow the conformity of the AI system to be evaluated, and to allow for …

… body refusing to issue the EU technical documentation assessment certificate shall contain specific considerations on the quality data used to train the AI system, notably on the reasons for non-compliance.

4.7. Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended purpose shall be approved by the notified body which issued the EU technical documentation …
7. Interoperability
This document is provided free of charge. It may be reproduced and distributed free of charge without requiring
any further permissions, as long as it is not altered in any way. It may not be sold.
This document is available in the two OECD official languages (English and French). It may be translated into other
languages, as long as the translation is labelled "unofficial translation" and includes the following disclaimer: "This
translation has been prepared by [NAME OF TRANSLATION AUTHOR] for informational purpose only and its
accuracy cannot be guaranteed by the OECD. The only official versions are the English and French texts available
on the OECD website http://legalinstruments.oecd.org"
Background Information

The Recommendation on Artificial Intelligence (AI) – the first intergovernmental standard on AI – was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy (CDEP). The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. Complementing existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct, the Recommendation focuses on AI-specific issues and sets a standard …

… stewardship of trustworthy AI and calls on AI actors to promote and implement them:

- inclusive growth, sustainable development and well-being;
- human-centred values and fairness;
- transparency and explainability;
- robustness, security and safety;
- and accountability.

In addition to and consistent with these value-based principles, the Recommendation also provides five recommendations to policy-makers pertaining to national policies and international co-operation for trustworthy AI, namely: …

- and international co-operation for trustworthy AI.

The Recommendation also includes a provision for the development of metrics to measure AI research, development and deployment, and for building an evidence base to assess progress in its implementation.

The OECD's work on Artificial Intelligence and rationale for developing the OECD Recommendation on Artificial Intelligence

Artificial Intelligence (AI) is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and …

… competition, transitions in the labour market, and implications for democracy and human rights.

The OECD has undertaken empirical and policy activities on AI in support of the policy debate over the past two years, starting with a Technology Foresight Forum on AI in 2016 and an international conference on AI: Intelligent Machines, Smart Policies in 2017. The Organisation also conducted analytical and measurement work that provides an overview of the AI technical landscape, maps economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.
… mandate-international-panel-artificial-intelligence and https://www.gouvernement.fr/en/france-and-canada-create-new-expert-international-panel-on-artificial-intelligence).

In order to support implementation of the Recommendation, the Council instructed the CDEP to develop practical guidance for implementation, to provide a forum for exchanging information on AI policy and activities, and to foster multi-stakeholder and interdisciplinary dialogue. This will be achieved largely through the OECD AI Policy Observatory, an inclusive hub for public policy on AI that aims to …

… can share and update, enabling the comparison of their key elements in an interactive manner. It will also be continuously updated with AI metrics, measurements, policies and good practices that could lead to further updates in the practical guidance for implementation.

The Recommendation is open to non-OECD Member adherence, underscoring the global relevance of OECD AI policy work as well as the Recommendation's call for international co-operation.

Artificial Intelligence (AI) tools and systems can support countries in their response to the COVID-19 crisis. For …

… recovery – for example, via satellite, social networking and other data (e.g. Google's Community Mobility Reports) – and can help learn from the crisis and build early warning systems for future outbreaks. However, in order to make the most of these innovative solutions, AI systems need to be designed, developed and deployed in a trustworthy manner, consistent with the Recommendation: they should respect human rights and privacy; be transparent, explainable, robust, secure and safe; and actors involved in their development and use should remain accountable.

For more information, see: …

… the Convention on the Organisation for Economic Co-operation and Development of 14 December 1960;

HAVING REGARD to the OECD Guidelines for Multinational Enterprises [OECD/LEGAL/0144]; Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data [OECD/LEGAL/0188]; Recommendation of the Council concerning Guidelines for Cryptography Policy [OECD/LEGAL/0289]; Recommendation of the Council for Enhanced Access and More Effective Use of Public Sector Information [OECD/LEGAL/0362]; Recommendation of …
… international co-operation for trustworthy AI

V. RECOMMENDS that Adherents implement the following …

… and development that is free of inappropriate bias and to improve interoperability and use of standards.

2.2. Fostering a digital ecosystem for AI

b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.

… to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.

… that Adherents will do their best to implement them.

- Substantive Outcome Documents are adopted by the individual listed Adherents rather than by an OECD body, as the outcome of a ministerial, high-level or other meeting within the framework of the Organisation. They usually set general principles or long-term goals and have a solemn character.

Artificial Intelligence Risk Management Framework (AI RMF 1.0)
… technologies, however, also pose … can change over time, sometimes … components … poses either a high or low risk. Some risk measurement challenges include:

Risks related to third-party software, hardware, and data: Third-party data or systems can accelerate research and development and facilitate technology transition. They also may complicate risk measurement. Risk can emerge both from third-party data, software or hardware itself and how it is used. Risk metrics or methodologies used by the organization developing the AI system may not align with the risk metrics or methodologies used by the …

Tracking emergent risks: Organizations' risk management efforts will be enhanced by identifying and tracking emergent risks and considering techniques for measuring them. AI system impact assessment approaches can help AI actors understand potential impacts or harms within specific contexts.

Availability of reliable metrics: The current lack of consensus on robust and verifiable measurement …

Fig. 1. Examples of potential harms related to AI systems. Trustworthy AI systems and their responsible use can mitigate negative risks and contribute to benefits for people, organizations, and ecosystems.

Harm to People
- Individual: Harm to a person's civil liberties, rights, physical or psychological safety, or economic opportunity.
- Group/Community: Harm to a group, such as discrimination against a population sub-group.
- Societal: Harm to democratic participation or educational access.

Harm to an Organization
- Harm to an organization's business operations.
- Harm to an organization from security breaches or monetary loss.
- Harm to an organization's reputation.

Harm to an Ecosystem
- Harm to interconnected and interdependent elements and resources.
- Harm to the global financial system, supply chain, or interrelated systems.
- Harm to natural resources, the environment, and the planet.
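The three harm categories in Fig. 1 can be carried directly into governance tooling, for example as tags in a risk register. The sketch below is purely illustrative and not part of the AI RMF: the class names, fields, and the likelihood-times-impact scoring heuristic are our own assumptions, not anything the framework prescribes.

```python
from dataclasses import dataclass, field
from enum import Enum

# The three top-level harm categories from Fig. 1 of the NIST AI RMF.
class HarmCategory(Enum):
    PEOPLE = "harm to people"              # individual, group/community, societal
    ORGANIZATION = "harm to an organization"
    ECOSYSTEM = "harm to an ecosystem"

@dataclass
class RiskEntry:
    """One row of a hypothetical AI risk register."""
    description: str
    category: HarmCategory
    likelihood: float  # 0.0-1.0, estimated probability of occurrence
    impact: float      # 0.0-1.0, estimated magnitude of consequences

    def score(self) -> float:
        # A common but simplistic heuristic: likelihood x impact.
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def top_risks(self, n: int = 5) -> list:
        # Highest-scoring risks first, for triage and reporting.
        return sorted(self.entries, key=RiskEntry.score, reverse=True)[:n]

register = RiskRegister()
register.add(RiskEntry("Discriminatory loan scoring", HarmCategory.PEOPLE, 0.3, 0.9))
register.add(RiskEntry("Model IP leak via API", HarmCategory.ORGANIZATION, 0.1, 0.6))
print([e.description for e in register.top_risks(1)])  # highest-scoring risk first
```

A register shaped this way makes the figure's taxonomy queryable, so reports can be filtered by harm category rather than by team or product line.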
… emergent risks. As a regular process within an AI lifecycle, TEVV allows for both mid-course remediation and post-hoc risk management.

Key dimensions (Fig. 2): application context; data & input; AI model; task & output; people & planet.

Lifecycle stages and AI actors (Fig. 3):

- Plan & Design (application context). TEVV includes audit & impact assessment. Activities: articulate and document the system's concept and assumptions, and context in light of legal and regulatory requirements and ethical considerations. Representative actors: system operators; end users; domain experts; AI designers; impact assessors; TEVV experts; product managers; compliance experts; auditors; governance experts; organizational management; C-suite executives; impacted individuals/communities; evaluators.

- Collect & Process Data (data & input). TEVV includes internal & external validation. Activities: gather, validate, and clean data and document the metadata and characteristics of the dataset, in light of objectives, legal and ethical considerations. Representative actors: data scientists; data engineers; data providers; domain experts; socio-cultural analysts; human factors experts; TEVV experts.

- Build & Use Model (AI model). TEVV includes model testing. Activities: create or select algorithms; train models.

- Verify & Validate (AI model). TEVV includes model testing. Activities: verify & validate, calibrate, and interpret model output. Representative actors (for both model stages): modelers; model engineers; data scientists; developers; domain experts; with consultation of socio-cultural analysts familiar with the application context and TEVV experts.

- Deploy & Use (task & output). TEVV includes integration, compliance testing & validation. Activities: pilot, check compatibility with legacy systems, verify regulatory compliance, manage organizational change, and evaluate user experience. Representative actors: system integrators; developers; systems engineers; software engineers; domain experts; AI designers; procurement experts; third-party suppliers; C-suite executives; with consultation of human factors experts, socio-cultural analysts, governance experts, TEVV experts.

- Operate & Monitor (application context). TEVV includes audit & impact assessment. Activities: operate the AI system and continuously assess its impacts (both intended and unintended) in light of objectives, legal and regulatory requirements, and ethical considerations. Representative actors: system operators, end users, and practitioners; impact assessors; TEVV experts; system funders; product managers; compliance experts; auditors; governance experts; organizational management; impacted individuals/communities; evaluators.

- Use or Impacted By (people & planet). TEVV includes audit & impact assessment. Activities: use system/technology; monitor & assess impacts; advocate for rights. Representative actors: end users, operators, and practitioners; impacted individuals/communities; general public; policy makers; standards organizations; trade associations; advocacy groups; environmental groups; civil society organizations; researchers.

The People & Planet dimension at the center of Figure 2 represents human rights and the broader well-being of society and the planet. The AI actors in this dimension comprise a separate AI RMF audience who informs the … include trade associations, standards developing organizations, researchers, advocacy groups, environmental groups, civil society organizations, end users, and potentially impacted individuals and communities. These actors can:

- assist in providing context and understanding potential and actual impacts;
- be a source of formal or quasi-formal norms and guidance for AI risk management;
- … tradeoffs needed to balance societal values and priorities related to civil liberties and rights, equity, the environment and the planet, and the economy.

Successful risk management depends upon a sense of collective responsibility among AI actors shown in Figure 3. The AI RMF functions, described in Section 5, require diverse … and experiences. Diverse teams contribute to more open sharing of ideas and assumptions about the purposes and functions of technology – making these implicit aspects more explicit. This broader collective perspective creates opportunities for surfacing problems and identifying existing and emergent risks.

3. AI Risks and Trustworthiness

For AI systems to be trustworthy, they often need to be responsive to a multiplicity of criteria that
… the technology being designed … AI RMF, the Playbook is voluntary … Framework users may apply these …

… depth to be useful to practitioners. The AI RMF should allow for communication of AI risks across an organization, between organizations, with customers, and to the public at large.

4. Provide common language and understanding to manage AI risks. The AI RMF should offer taxonomy, terminology, definitions, metrics, and characterizations for AI risk.

5. Be easily usable and fit well with other aspects of risk management. Use of the Framework should be intuitive and readily adaptable as part of an organization's broader risk management strategy and processes. It should be consistent or aligned with other approaches to managing AI risks.

6. Be useful to a wide range of perspectives, sectors, and technology domains. The AI RMF should be universally applicable to any AI technology and to context-specific …

… provide a catalog of outcomes and approaches rather than prescribe one-size-fits-all requirements.

8. Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks – as well as illustrate the need for additional, improved resources.

9. Be law- and regulation-agnostic. The Framework should support organizations' abilities to operate under applicable domestic and international legal or regulatory regimes.

10. Be a living document. The AI RMF should be readily updated as AI technology, understanding, and approaches to AI trustworthiness and uses of AI change and as stakeholders learn from implementing AI risk management generally and this framework in particular.