
AI Governance:

A consolidated
reference
EU AI Act — European Commission Draft

OECD Recommendation of the Council on Artificial Intelligence

NIST AI Risk Management Framework

SEPTEMBER 2023
Table of Contents

EU AI Act - European Commission Draft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 07

OECD Recommendation of the Council on Artificial Intelligence. . . . . . . . . . 173

NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0) . . . 189

Inquiries
info@onetrust.com

Support
support@onetrust.com

Web
www.onetrust.com

DISCLAIMER:

No part of this document may be reproduced in any form without the written permission of the copyright owner.
The contents of this document are subject to revision without notice due to continued progress in methodology,
design, and manufacturing. OneTrust LLC shall have no liability for any error or damage of any kind resulting from
the use of this document. OneTrust products, content and materials are for informational purposes only and not
for the purpose of providing legal advice. You should contact your attorney to obtain advice with respect to any
particular issue. OneTrust materials do not guarantee compliance with applicable laws and regulations.
Copyright © 2023 OneTrust LLC. All rights reserved.


EU AI Act - European
Commission Draft

EUROPEAN COMMISSION

Brussels, 21.4.2021
COM(2021) 206 final

2021/0106 (COD)

PROPOSAL FOR A REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL

LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS

{SEC(2021) 167 final} - {SWD(2021) 84 final} - {SWD(2021) 85 final}


EXPLANATORY MEMORANDUM

1. CONTEXT OF THE PROPOSAL

1.1. Reasons for and objectives of the proposal

This explanatory memorandum accompanies the proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Intelligence (AI) is a fast evolving family of technologies that can bring a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising service delivery, the use of artificial intelligence can support socially and environmentally beneficial outcomes and provide key competitive advantages to companies and the European economy. Such action is especially needed in high-impact sectors, including climate change, environment and health, the public sector, finance, mobility, home affairs and agriculture. However, the same elements and techniques that power the socio-economic benefits of AI can also bring about new risks or negative consequences for individuals or the society. In light of the speed of technological change and possible challenges, the EU is committed to strive for a balanced approach. It is in the Union interest to preserve the EU's technological leadership and to ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles.

This proposal delivers on the political commitment by President von der Leyen, who announced in her political guidelines for the 2019-2024 Commission "A Union that strives for more", that the Commission would put forward legislation for a coordinated European approach on the human and ethical implications of AI. Following on that announcement, on 19 February 2020 the Commission published the White Paper on AI - A European approach to excellence and trust. The White Paper sets out policy options on how to achieve the twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology. This proposal aims to implement the second objective for the development of an ecosystem of trust by proposing a legal framework for trustworthy AI. The proposal is based on EU values and fundamental rights and aims to give people and other users the confidence to embrace AI-based solutions, while encouraging businesses to develop them. AI should be a tool for people and be a force for good in society with the ultimate aim of increasing human well-being. Rules for AI available in the Union market or otherwise affecting people in the Union should therefore be human centric, so that people can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights. Following the publication of the White Paper, the Commission launched a broad stakeholder consultation, which was met with a great interest by a large number of stakeholders who were largely supportive of regulatory intervention to address the challenges and concerns raised by the increasing use of AI.

The proposal also responds to explicit requests from the European Parliament (EP) and the European Council, which have repeatedly expressed calls for legislative action to ensure a well-functioning internal market for artificial intelligence systems ('AI systems') where both benefits and risks of AI are adequately addressed at Union level. It supports the objective of the Union being a global leader in the development of secure, trustworthy and ethical artificial intelligence as stated by the European Council and ensures the protection of ethical principles as specifically requested by the European Parliament.

In 2017, the European Council called for a 'sense of urgency to address emerging trends' including 'issues such as artificial intelligence …, while at the same time ensuring a high level of data protection, digital rights and ethical standards'.
In its 2019 Conclusions on the Coordinated Plan on the development and use of artificial intelligence Made in Europe, the Council further highlighted the importance of ensuring that European citizens' rights are fully respected and called for a review of the existing relevant legislation to make it fit for purpose for the new opportunities and challenges raised by AI. The European Council has also called for a clear determination of the AI applications that should be considered high-risk.

The most recent Conclusions from 21 October 2020 further called for addressing the opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behaviour of certain AI systems, to ensure their compatibility with fundamental rights and to facilitate the enforcement of legal rules.

The European Parliament has also undertaken a considerable amount of work in the area of AI. In October 2020, it adopted a number of resolutions related to AI, including on ethics, liability and copyright. In 2021, those were followed by resolutions on AI in criminal matters and in education, culture and the audio-visual sector. The EP Resolution on a Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies specifically recommends to the Commission to propose legislative action to harness the opportunities and benefits of AI, but also to ensure protection of ethical principles. The resolution includes a text of the legislative proposal for a regulation on ethical principles for the development, deployment and use of AI, robotics and related technologies. In accordance with the political commitment made by President von der Leyen in her Political Guidelines as regards resolutions adopted by the European Parliament under Article 225 TFEU, this proposal takes into account the aforementioned resolution of the European Parliament in full respect of proportionality, subsidiarity and better law making principles.

Against this political context, the Commission puts forward the proposed regulatory framework on Artificial Intelligence with the following specific objectives:

- ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;

- ensure legal certainty to facilitate investment and innovation in AI;

- enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems;

- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

To achieve those objectives, this proposal presents a balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market. The proposal sets a robust and flexible legal framework. On the one hand, it is comprehensive and future-proof in its fundamental regulatory choices, including the principle-based requirements that AI systems should comply with. On the other hand, it puts in place a proportionate regulatory system centred on a well-defined risk-based regulatory approach that does not create unnecessary restrictions to trade, whereby legal intervention is tailored to those concrete situations where there is a justified cause for concern or where such concern can reasonably be anticipated in the near future. At the same time, the legal framework includes flexible mechanisms that enable it to be dynamically adapted as the technology evolves and new concerning situations emerge.

The proposal sets harmonised rules for the development, placement on the market and use of AI systems in the Union following a proportionate risk-based approach.
It proposes a single future-proof definition of AI. Certain particularly harmful AI practices are prohibited as contravening Union values, while specific restrictions and safeguards are proposed in relation to certain uses of remote biometric identification systems for the purpose of law enforcement. The proposal lays down a solid risk methodology to define 'high-risk' AI systems that pose significant risks to the health and safety or fundamental rights of persons. Those AI systems will have to comply with a set of horizontal mandatory requirements for trustworthy AI and follow conformity assessment procedures before they can be placed on the Union market. Predictable, proportionate and clear obligations are also placed on providers and users of those systems to ensure safety and respect of existing legislation protecting fundamental rights throughout the whole AI systems' lifecycle. For some specific AI systems, only minimum transparency obligations are proposed, in particular when chatbots or 'deep fakes' are used.

The proposed rules will be enforced through a governance system at Member States level, building on already existing structures, and a cooperation mechanism at Union level with the establishment of a European Artificial Intelligence Board. Additional measures are also proposed to support innovation, in particular through AI regulatory sandboxes and other measures to reduce the regulatory burden and to support Small and Medium-Sized Enterprises ('SMEs') and start-ups.

1.2. Consistency with existing policy provisions in the policy area

The horizontal nature of the proposal requires full consistency with existing Union legislation applicable to sectors where high-risk AI systems are already used or likely to be used in the near future.

Consistency is also ensured with the EU Charter of Fundamental Rights and the existing secondary Union legislation on data protection, consumer protection, non-discrimination and gender equality. The proposal is without prejudice to and complements the General Data Protection Regulation (Regulation (EU) 2016/679) and the Law Enforcement Directive (Directive (EU) 2016/680) with a set of harmonised rules applicable to the design, development and use of certain high-risk AI systems and restrictions on certain uses of remote biometric identification systems. Furthermore, the proposal complements existing Union law on non-discrimination with specific requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to the design and the quality of data sets used for the development of AI systems, complemented with obligations for testing, risk management, documentation and human oversight throughout the AI systems' lifecycle. The proposal is without prejudice to the application of Union competition law.

As regards high-risk AI systems which are safety components of products, this proposal will be integrated into the existing sectoral safety legislation to ensure consistency, avoid duplications and minimise additional burdens. In particular, as regards high-risk AI systems related to products covered by the New Legislative Framework (NLF) legislation (e.g. machinery, medical devices, toys), the requirements for AI systems set out in this proposal will be checked as part of the existing conformity assessment procedures under the relevant NLF legislation. With regard to the interplay of requirements, while the safety risks specific to AI systems are meant to be covered by the requirements of this proposal, NLF legislation aims at ensuring the overall safety of the final product and may therefore contain specific requirements regarding the safe integration of an AI system into the final product. The proposal for a Machinery Regulation, which is adopted on the same day as this proposal, fully reflects this approach.
As regards high-risk AI systems related to products covered by relevant Old Approach legislation (e.g. aviation, cars), this proposal would not directly apply. However, the ex-ante essential requirements for high-risk AI systems set out in this proposal will have to be taken into account when adopting relevant implementing or delegated legislation under those acts.

As regards AI systems provided or used by regulated credit institutions, the authorities responsible for the supervision of the Union's financial services legislation should be designated as competent authorities for supervising the requirements in this proposal, to ensure a coherent enforcement of the obligations under this proposal and the Union's financial services legislation, where AI systems are to some extent implicitly regulated in relation to the internal governance system of credit institutions. To further enhance consistency, the conformity assessment procedure and some of the providers' procedural obligations under this proposal are integrated into the procedures under Directive 2013/36/EU on access to the activity of credit institutions and the prudential supervision.

This proposal is also consistent with the applicable Union legislation on services, including on intermediary services regulated by the e-Commerce Directive 2000/31/EC and the Commission's recent proposal for the Digital Services Act (DSA).

In relation to AI systems that are components of large-scale IT systems in the Area of Freedom, Security and Justice managed by the European Union Agency for the Operational Management of Large-Scale IT Systems (eu-LISA), the proposal will not apply to those AI systems that have been placed on the market or put into service before one year has elapsed from the date of application of this Regulation, unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned.

1.3. Consistency with other Union policies

The proposal is part of a wider comprehensive package of measures that address problems posed by the development and use of AI, as examined in the White Paper on AI. Consistency and complementarity is therefore ensured with other ongoing or planned initiatives of the Commission that also aim to address those problems, including the revision of sectoral product legislation (e.g. the Machinery Directive, the General Product Safety Directive) and initiatives that address liability issues related to new technologies, including AI systems. Those initiatives will build on and complement this proposal in order to bring legal clarity and foster the development of an ecosystem of trust in AI in Europe.

The proposal is also coherent with the Commission's overall digital strategy in its contribution to promoting technology that works for people, one of the three main pillars of the policy orientation and objectives announced in the Communication 'Shaping Europe's digital future'. It lays down a coherent, effective and proportionate framework to ensure AI is developed in ways that respect people's rights and earn their trust, making Europe fit for the digital age and turning the next ten years into the Digital Decade.

Furthermore, the promotion of AI-driven innovation is closely linked to the Data Governance Act, the Open Data Directive and other initiatives under the EU strategy for data, which will establish trusted mechanisms and services for the re-use, sharing and pooling of data that are essential for the development of data-driven AI models of high quality.

The proposal also strengthens significantly the Union's role to help shape global norms and standards and promote trustworthy AI that is consistent with Union values and interests. It provides the Union with a powerful basis to engage further with its external partners, including third countries, and at international fora on issues relating to AI.

2. LEGAL BASIS, SUBSIDIARITY AND PROPORTIONALITY
2.1. Legal basis

The legal basis for the proposal is in the first place Article 114 of the Treaty on the Functioning of the European Union (TFEU), which provides for the adoption of measures to ensure the establishment and functioning of the internal market.

This proposal constitutes a core part of the EU digital single market strategy. The primary objective of this proposal is to ensure the proper functioning of the internal market by setting harmonised rules in particular on the development, placing on the Union market and the use of products and services making use of AI technologies or provided as stand-alone AI systems. Some Member States are already considering national rules to ensure that AI is safe and is developed and used in compliance with fundamental rights obligations. This will likely lead to two main problems: i) a fragmentation of the internal market on essential elements regarding in particular the requirements for the AI products and services, their marketing, their use, the liability and the supervision by public authorities, and ii) the substantial diminishment of legal certainty for both providers and users of AI systems on how existing and new rules will apply to those systems in the Union. Given the wide circulation of products and services across borders, these two problems can be best solved through EU harmonising legislation.

Indeed, the proposal defines common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market that will be further operationalised through harmonised technical standards. The proposal also addresses the situation after AI systems have been placed on the market by harmonising the way in which ex-post controls are conducted.

In addition, considering that this proposal contains certain specific rules on the protection of individuals with regard to the processing of personal data, notably restrictions of the use of AI systems for 'real-time' remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU.

2.2. Subsidiarity (for non-exclusive competence)

The nature of AI, which often relies on large and varied datasets and which may be embedded in any product or service circulating freely within the internal market, entails that the objectives of this proposal cannot be effectively achieved by Member States alone. Furthermore, an emerging patchwork of potentially divergent national rules will hamper the seamless circulation of products and services related to AI systems across the EU and will be ineffective in ensuring the safety and protection of fundamental rights and Union values across the different Member States. National approaches in addressing the problems will only create additional legal uncertainty and barriers, and will slow market uptake of AI.

The objectives of this proposal can be better achieved at Union level to avoid a further fragmentation of the Single Market into potentially contradictory national frameworks preventing the free circulation of goods and services embedding AI. A solid European regulatory framework for trustworthy AI will also ensure a level playing field and protect all people, while strengthening Europe's competitiveness and industrial basis in AI. Only common action at Union level can also protect the Union's digital sovereignty and leverage its tools and regulatory powers to shape global rules and standards.

2.3. Proportionality

The proposal builds on existing legal frameworks and is proportionate and necessary to achieve its objectives, since it follows a risk-based approach and imposes regulatory burdens only when an AI system is likely to pose high risks to fundamental rights and safety. For other, non-high-risk AI systems, only very limited transparency obligations are imposed, for example in terms of the provision of information to flag the use of an AI system when interacting with humans.
For high-risk AI systems, the requirements of high quality data, documentation and traceability, transparency, human oversight, accuracy and robustness are strictly necessary to mitigate the risks to fundamental rights and safety posed by AI that are not covered by other existing legal frameworks. Harmonised standards and supporting guidance and compliance tools will assist providers and users in complying with the requirements laid down by the proposal and minimise their costs. The costs incurred by operators are proportionate to the objectives achieved and the economic and reputational benefits that operators can expect from this proposal.

2.4. Choice of the instrument

The choice of a regulation as a legal instrument is justified by the need for a uniform application of the new rules, such as the definition of AI, the prohibition of certain harmful AI-enabled practices and the classification of certain AI systems. The direct applicability of a Regulation, in accordance with Article 288 TFEU, will reduce legal fragmentation and facilitate the development of a single market for lawful, safe and trustworthy AI systems. It will do so, in particular, by introducing a harmonised set of core requirements with regard to AI systems classified as high-risk and obligations for providers and users of those systems, improving the protection of fundamental rights and providing legal certainty for operators and consumers alike.

At the same time, the provisions of the regulation are not overly prescriptive and leave room for different levels of Member State action for elements that do not undermine the objectives of the initiative, in particular the internal organisation of the market surveillance system and the uptake of measures to foster innovation.

3. RESULTS OF EX-POST EVALUATIONS, STAKEHOLDER CONSULTATIONS AND IMPACT ASSESSMENTS

3.1. Stakeholder consultation

This proposal is the result of extensive consultation with all major stakeholders, in which the general principles and minimum standards for consultation of interested parties by the Commission were applied.

An online public consultation was launched on 19 February 2020 along with the publication of the White Paper on Artificial Intelligence and ran until 14 June 2020. The objective of that consultation was to collect views and opinions on the White Paper. It targeted all interested stakeholders from the public and private sectors, including governments, local authorities, commercial and non-commercial organisations, social partners, experts, academics and citizens. After analysing all the responses received, the Commission published a summary outcome and the individual responses on its website.

In total, 1215 contributions were received, of which 352 were from companies or business organisations/associations, 406 from individuals (92% of them from the EU), 152 on behalf of academic/research institutions, and 73 from public authorities. Civil society's voices were represented by 160 respondents (among which 9 consumers' organisations, 129 non-governmental organisations and 22 trade unions); 72 respondents contributed as 'others'. Of the 352 business and industry representatives, 222 were companies and business representatives, 41.5% of which were micro, small and medium-sized enterprises. The rest were business associations. Overall, 84% of business and industry replies came from the EU-27. Depending on the question, between 81 and 598 of the respondents used the free text option to insert comments. Over 450 position papers were submitted through the EU Survey website, either in addition to questionnaire answers (over 400) or as stand-alone contributions (over 50).
Overall, there is a general agreement amongst stakeholders on a need for action. A large majority of stakeholders agree that legislative gaps exist or that new legislation is needed. However, several stakeholders warn the Commission to avoid duplication, conflicting obligations and overregulation. There were many comments underlining the importance of a technology neutral and proportionate regulatory framework.

Stakeholders mostly requested a narrow, clear and precise definition for AI. Stakeholders also highlighted that, besides the clarification of the term of AI, it is important to define 'risk', 'high-risk', 'low-risk', 'remote biometric identification' and 'harm'.

Most of the respondents are explicitly in favour of the risk-based approach. Using a risk-based framework was considered a better option than blanket regulation of all AI systems. The types of risks and threats should be based on a sector-by-sector and case-by-case approach. Risks should also be calculated taking into account the impact on rights and safety.

Regulatory sandboxes could be very useful for the promotion of AI and are welcomed by certain stakeholders, especially the business associations.

Among those who formulated their opinion on the enforcement models, more than 50%, especially from the business associations, were in favour of a combination of an ex-ante risk self-assessment and an ex-post enforcement for high-risk AI systems.

3.2. Collection and use of expertise

The proposal builds on two years of analysis and close involvement of stakeholders, including academics, businesses, social partners, non-governmental organisations, Member States and citizens. The preparatory work started in 2018 with the setting up of a High-Level Expert Group on AI (HLEG), which had an inclusive and broad composition of 52 well-known experts tasked to advise the Commission on the implementation of the Commission's Strategy on Artificial Intelligence. In April 2019, the Commission supported the key requirements set out in the HLEG ethics guidelines for Trustworthy AI, which had been revised to take into account more than 500 submissions from stakeholders. The key requirements reflect a widespread and common approach, as evidenced by a plethora of ethical codes and principles developed by many private and public organisations in Europe and beyond, that AI development and use should be guided by certain essential value-oriented principles. The Assessment List for Trustworthy Artificial Intelligence (ALTAI) made those requirements operational in a piloting process with over 350 organisations.

In addition, the AI Alliance was formed as a platform for approximately 4000 stakeholders to debate the technological and societal implications of AI, culminating in a yearly AI Assembly.

The White Paper on AI further developed this inclusive approach, inciting comments from more than 1250 stakeholders, including over 450 additional position papers. As a result, the Commission published an Inception Impact Assessment, which in turn attracted more than 130 comments. Additional stakeholder workshops and events were also organised, the results of which support the analysis in the impact assessment and the policy choices made in this proposal. An external study was also procured to feed into the impact assessment.

3.3. Impact assessment

In line with its "Better Regulation" policy, the Commission conducted an impact assessment for this proposal, examined by the Commission's Regulatory Scrutiny Board. A meeting with the Regulatory Scrutiny Board was held on 16 December 2020, which was followed by a negative opinion. After substantial revision of the impact assessment to address the comments and a resubmission of the impact assessment, the Regulatory Scrutiny Board issued a positive opinion on 21 March 2021. The opinions of the Regulatory Scrutiny Board, the recommendations and an explanation of how they have been taken into account are presented in Annex 1 of the impact assessment.
The Commission examined different policy options to achieve the general objective of the proposal, which is to ensure the proper functioning of the single market by creating the conditions for the development and use of trustworthy AI in the Union. Four policy options of different degrees of regulatory intervention were assessed:

- Option 1: EU legislative instrument setting up a voluntary labelling scheme;

- Option 2: a sectoral, "ad-hoc" approach;

- Option 3: Horizontal EU legislative instrument following a proportionate risk-based approach;

- Option 3+: Horizontal EU legislative instrument following a proportionate risk-based approach + codes of conduct for non-high-risk AI systems;

- Option 4: Horizontal EU legislative instrument establishing mandatory requirements for all AI systems, irrespective of the risk they pose.

According to the Commission's established methodology, each policy option was evaluated against economic and societal impacts, with a particular focus on impacts on fundamental rights. The preferred option is option 3+, a regulatory framework for high-risk AI systems only, with the possibility for all providers of non-high-risk AI systems to follow a code of conduct. The requirements will concern data, documentation and traceability, provision of information and transparency, human oversight, and robustness and accuracy, and would be mandatory for high-risk AI systems. Companies that introduced codes of conduct for other AI systems would do so voluntarily.

The preferred option was considered suitable to address in the most effective way the objectives of this proposal. By requiring a restricted yet effective set of actions from AI developers and users, the preferred option limits the risks of violation of fundamental rights and safety of people and fosters effective supervision and enforcement, by targeting the requirements only to systems where there is a high risk that such violations could occur. As a result, that option keeps compliance costs to a minimum, thus avoiding an unnecessary slowing of uptake due to higher prices and compliance costs. In order to address possible disadvantages for SMEs, this option includes several provisions to support their compliance and reduce their costs, including the creation of regulatory sandboxes and the obligation to consider SMEs' interests when setting fees related to conformity assessment.

The preferred option will increase people's trust in AI, companies will gain in legal certainty, and Member States will see no reason to take unilateral action that could fragment the single market. As a result of higher demand due to higher trust, more available offers due to legal certainty, and the absence of obstacles to cross-border movement of AI systems, the single market for AI will likely flourish. The European Union will continue to develop a fast-growing AI ecosystem of innovative services and products embedding AI technology or stand-alone AI systems, resulting in increased digital autonomy.

Businesses or public authorities that develop or use AI applications that constitute a high risk for the safety or fundamental rights of citizens would have to comply with specific requirements and obligations. Compliance with these requirements would imply costs amounting to approximately EUR 6000 to EUR 7000 for the supply of an average high-risk AI system of around EUR 170000 by 2025. For AI users, there would also be the annual cost for the time spent on ensuring human oversight where this is appropriate, depending on the use case. Those costs have been estimated at approximately EUR 5000 to EUR 8000 per year. Verification costs could amount to another EUR 3000 to EUR 7500 for suppliers of high-risk AI.
Businesses or public authorities that develop or use any AI applications not classified as high risk would only have minimal obligations of information. However, they could choose to join others and together adopt a code of conduct to follow suitable requirements and to ensure that their AI systems are trustworthy. In such a case, costs would be at most as high as for high-risk AI systems, but most probably lower.
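The estimates above can be combined into a rough supplier-side total. The arithmetic below is purely illustrative: it restates the memorandum's own figures (in EUR) and relates them to the quoted average system value; the variable names and the decision to sum the two supplier-side items are ours, not the proposal's.

    # Illustrative arithmetic only, restating the memorandum's estimates (EUR).
    system_value = 170_000               # average high-risk AI system by 2025
    supply_compliance = (6_000, 7_000)   # one-off supplier compliance cost
    verification = (3_000, 7_500)        # possible additional verification cost
    oversight_per_year = (5_000, 8_000)  # annual human-oversight cost for users

    low = supply_compliance[0] + verification[0]
    high = supply_compliance[1] + verification[1]
    print(f"supplier-side one-off cost: EUR {low}-{high}, "
          f"i.e. {low / system_value:.1%}-{high / system_value:.1%} of system value")
    # -> supplier-side one-off cost: EUR 9000-14500, i.e. 5.3%-8.5% of system value
    print(f"user-side oversight: EUR {oversight_per_year[0]}-{oversight_per_year[1]} per year")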
adopt a code of conduct to follow trust in the use of AI and strengthen Furthermore, as applicable in certain
autonomous behaviour) can
suitable requirements, and to ensure enforcement mechanisms (by domains, the proposal will positively
adversely affect a number of
that their AI systems are trustworthy. introducing a European coordination affect the rights of a number of
fundamental rights enshrined in the
In such a case, costs would be at mechanism, providing for appropriate special groups, such as the workers’
EU Charter of Fundamental Rights
most as high as for high-risk AI capacities, and facilitating audits of rights to fair and just working
(‘the Charter’). This proposal seeks
systems, but most probably lower. the AI systems with new requirements conditions (Article 31), a high level
for documentation, traceability to ensure a high level of protection
of consumer protection (Article 28),
The impacts of the policy options on for those fundamental rights and
and transparency). Moreover, the the rights of the child (Article 24)
different categories of stakeholders aims to address various sources
framework will envisage specific and the integration of persons with
(economic operators/ business; of risks through a clearly defined
measures supporting innovation, disabilities (Article 26). The right to a
conformity assessment bodies, risk-based approach. With a set of
including regulatory sandboxes and high level of environmental protection
standardisation bodies and other requirements for trustworthy AI and
specific measures supporting small- and the improvement of the quality
public bodies; individuals/citizens; proportionate obligations on all value
scale users and providers of high-risk of the environment (Article 37) is
researchers) are explained in detail chain participants, the proposal will
AI systems to comply with the new also relevant, including in relation to
in Annex 3 of the Impact assessment enhance and promote the protection
rules. the health and safety of people. The
supporting this proposal. of the rights protected by the
obligations for ex ante testing, risk
The proposal also specifically Charter: the right to human dignity
management and human oversight
3.4. Regulatory fitness aims at strengthening Europe’s (Article 1), respect for private life and
will also facilitate the respect of other
and simplification competitiveness and industrial basis protection of personal data (Articles
fundamental rights by minimising
in AI. Full consistency is ensured with 7 and 8), non-discrimination (Article
the risk of erroneous or biased AI-
This proposal lays down obligation existing sectoral Union legislation 21) and equality between women and
assisted decisions in critical areas
that will apply to providers and applicable to AI systems (e.g. on men (Article 23). It aims to prevent
such as education and training,
users of high-risk AI systems. For products and services) that will a chilling effect on the rights to
employment, important services, law
providers who develop and place bring further clarity and simplify the freedom of expression (Article 11)
enforcement and the judiciary. In
such systems on the Union market, it

AI GOVERNANCE: A CONSOLIDATED REFERENCE | 25


In case infringements of fundamental rights still happen, effective redress for affected persons will be made possible by ensuring transparency and traceability of the AI systems coupled with strong ex post controls.

This proposal imposes some restrictions on the freedom to conduct business (Article 16) and the freedom of art and science (Article 13) to ensure compliance with overriding reasons of public interest such as health, safety, consumer protection and the protection of other fundamental rights ('responsible innovation') when high-risk AI technology is developed and used. Those restrictions are proportionate and limited to the minimum necessary to prevent and mitigate serious safety risks and likely infringements of fundamental rights.

The increased transparency obligations will also not disproportionately affect the right to protection of intellectual property (Article 17(2)), since they will be limited only to the minimum necessary information for individuals to exercise their right to an effective remedy and to the necessary transparency towards supervision and enforcement authorities, in line with their mandates. Any disclosure of information will be carried out in compliance with relevant legislation in the field, including Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure. When public authorities and notified bodies need to be given access to confidential information or source code to examine compliance with substantial obligations, they are placed under binding confidentiality obligations.

4. BUDGETARY IMPLICATIONS

Member States will have to designate supervisory authorities in charge of implementing the legislative requirements. Their supervisory function could build on existing arrangements, for example regarding conformity assessment bodies or market surveillance, but would require sufficient technological expertise and human and financial resources. Depending on the pre-existing structure in each Member State, this could amount to 1 to 25 Full Time Equivalents per Member State.

A detailed overview of the costs involved is provided in the 'financial statement' linked to this proposal.

5. OTHER ELEMENTS

5.1. Implementation plans and monitoring, evaluation and reporting arrangements

Providing for a robust monitoring and evaluation mechanism is crucial to ensure that the proposal will be effective in achieving its specific objectives. The Commission will be in charge of monitoring the effects of the proposal. It will establish a system for registering stand-alone high-risk AI applications in a public EU-wide database. This registration will also enable competent authorities, users and other interested people to verify if the high-risk AI system complies with the requirements laid down in the proposal and to exercise enhanced oversight over those AI systems posing high risks to fundamental rights. To feed this database, AI providers will be obliged to provide meaningful information about their systems and the conformity assessment carried out on those systems.
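Since the database described above must hold meaningful information about each stand-alone high-risk system and its conformity assessment, a minimal registration record can be sketched. All field names and example values below are hypothetical illustrations, not the proposal's schema.

    # A hypothetical sketch of a registration entry in the proposed EU database.
    # The proposal only requires "meaningful information" about the system and
    # its conformity assessment; these fields are illustrative assumptions.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class HighRiskSystemRegistration:
        provider: str               # legal name of the provider
        system_name: str            # trade name of the stand-alone AI system
        intended_purpose: str       # the purpose driving the classification
        annex_iii_area: str         # pre-defined area listed in Annex III
        conformity_assessment: str  # e.g. "internal control checks"
        registered_on: date         # registration precedes market placement

    entry = HighRiskSystemRegistration(
        provider="ExampleCorp",                # hypothetical
        system_name="CV-Screener 2.0",         # hypothetical
        intended_purpose="ranking job applications",
        annex_iii_area="employment",
        conformity_assessment="internal control checks",
        registered_on=date(2025, 1, 15),
    )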
will be limited only to the minimum
or market surveillance, but would users and other interested people
necessary information for individuals The Commission will publish a
require sufficient technological to verify if the high-risk AI system
to exercise their right to an effective report evaluating and reviewing

AI GOVERNANCE: A CONSOLIDATED REFERENCE | 27


5.2. Detailed explanation of the specific provisions of the proposal

5.2.1. SCOPE AND DEFINITIONS (TITLE I)

Title I defines the subject matter of the regulation and the scope of application of the new rules that cover the placing on the market, putting into service and use of AI systems. It also sets out the definitions used throughout the instrument. The definition of AI system in the legal framework aims to be as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI. In order to provide the needed legal certainty, Title I is complemented by Annex I, which contains a detailed list of approaches and techniques for the development of AI to be adapted by the Commission in line with new technological developments. Key participants across the AI value chain are also clearly defined, such as providers and users of AI systems, covering both public and private operators to ensure a level playing field.

5.2.2. PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES (TITLE II)

Title II establishes a list of prohibited AI practices. The regulation follows a risk-based approach, differentiating between uses of AI that create (i) an unacceptable risk, (ii) a high risk, and (iii) low or minimal risk. The list of prohibited practices in Title II comprises all those AI systems whose use is considered unacceptable as contravening Union values, for instance by violating fundamental rights. The prohibitions cover practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness, or that exploit vulnerabilities of specific vulnerable groups such as children or persons with disabilities in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm. Other manipulative or exploitative practices affecting adults that might be facilitated by AI systems could be covered by the existing data protection, consumer protection and digital service legislation that guarantees that natural persons are properly informed and have free choice not to be subject to profiling or other practices that might affect their behaviour. The proposal also prohibits AI-based social scoring for general purposes done by public authorities. Finally, the use of 'real time' remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement is also prohibited unless certain limited exceptions apply.

5.2.3. HIGH-RISK AI SYSTEMS (TITLE III)

Title III contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons. In line with a risk-based approach, those high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex-ante conformity assessment. The classification of an AI system as high-risk is based on the intended purpose of the AI system, in line with existing product safety legislation. Therefore, the classification as high-risk does not only depend on the function performed by the AI system, but also on the specific purpose and modalities for which that system is used.

Chapter 1 of Title III sets the classification rules and identifies two main categories of high-risk AI systems (a schematic sketch follows the list below):

- AI systems intended to be used as safety components of products that are subject to third party ex-ante conformity assessment;

- other stand-alone AI systems with mainly fundamental rights implications that are explicitly listed in Annex III.
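The tiered structure that Titles II, III and IV give the regulation can be summarised schematically, as promised above. This is a minimal sketch in which boolean flags stand in for the legal tests; the actual criteria are those set out in the Titles and Annexes, not the placeholders used here.

    # Schematic triage of the regulation's risk tiers. The boolean flags are
    # stand-ins for legal tests defined at length in Titles II-IV and the
    # Annexes; this illustrates the structure, not the legal criteria.
    from dataclasses import dataclass

    @dataclass
    class AISystem:
        prohibited_practice: bool = False       # Title II (e.g. public social scoring)
        product_safety_component: bool = False  # Title III, first category
        listed_in_annex_iii: bool = False       # Title III, second category
        transparency_trigger: bool = False      # Title IV (chatbots, emotion
                                                # recognition, deep fakes)

    def risk_tier(s: AISystem) -> str:
        if s.prohibited_practice:
            return "unacceptable risk: prohibited"
        if s.product_safety_component or s.listed_in_annex_iii:
            return "high risk: mandatory requirements + ex-ante conformity assessment"
        if s.transparency_trigger:
            return "limited risk: transparency obligations only"
        return "minimal risk: voluntary codes of conduct"

    print(risk_tier(AISystem(listed_in_annex_iii=True)))
    # -> high risk: mandatory requirements + ex-ante conformity assessment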
This list of high-risk AI systems in Annex III contains a limited number of AI systems whose risks have already materialised or are likely to materialise in the near future. To ensure that the regulation can be adjusted to emerging uses and applications of AI, the Commission may expand the list of high-risk AI systems used within certain pre-defined areas, by applying a set of criteria and risk assessment methodology.

Chapter 2 sets out the legal requirements for high-risk AI systems in relation to data and data governance, documentation and record keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security. The proposed minimum requirements are already state-of-the-art for many diligent operators and the result of two years of preparatory work, derived from the Ethics Guidelines of the HLEG, piloted by more than 350 organisations. They are also largely consistent with other international recommendations and principles, which ensures that the proposed AI framework is compatible with those adopted by the EU's international trade partners. The precise technical solutions to achieve compliance with those requirements may be provided by standards or by other technical specifications or otherwise be developed in accordance with general engineering or scientific knowledge at the discretion of the provider of the AI system. This flexibility is particularly important, because it allows providers of AI systems to choose the way to meet their requirements, taking into account the state-of-the-art and technological and scientific progress in this field.

Chapter 3 places a clear set of horizontal obligations on providers of high-risk AI systems. Proportionate obligations are also placed on users and other participants across the AI value chain (e.g., importers, distributors, authorized representatives).

Chapter 4 sets the framework for notified bodies to be involved as independent third parties in conformity assessment procedures, while Chapter 5 explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system. The conformity assessment approach aims to minimise the burden for economic operators as well as for notified bodies, whose capacity needs to be progressively ramped up over time. AI systems intended to be used as safety components of products that are regulated under the New Legislative Framework legislation (e.g. machinery, toys, medical devices, etc.) will be subject to the same ex-ante and ex-post compliance and enforcement mechanisms of the products of which they are a component. The key difference is that the ex-ante and ex-post mechanisms will ensure compliance not only with the requirements established by sectorial legislation, but also with the requirements established by this regulation.

As regards stand-alone high-risk AI systems that are referred to in Annex III, a new compliance and enforcement system will be established. This follows the model of the New Legislative Framework legislation implemented through internal control checks by the providers, with the exception of remote biometric identification systems that would be subject to third party conformity assessment. A comprehensive ex-ante conformity assessment through internal checks, combined with a strong ex-post enforcement, could be an effective and reasonable solution for those systems, given the early phase of the regulatory intervention and the fact that the AI sector is very innovative and expertise for auditing is only now being accumulated. An assessment through internal checks for 'stand-alone' high-risk AI systems would require a full, effective and properly documented ex ante compliance with all requirements of the regulation and compliance with robust quality and risk management systems and post-market monitoring.
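The assessment-route logic described in this passage reduces to a short branching rule. The sketch below paraphrases it; the function name and parameters are illustrative assumptions, and the legal texts, not this code, determine the actual route.

    # Illustrative sketch of the conformity-assessment routes described above.
    def conformity_route(nlf_product_component: bool,
                         remote_biometric_identification: bool) -> str:
        if nlf_product_component:
            # Safety components of NLF products (machinery, toys, medical
            # devices, etc.) follow the existing sectoral procedures, which
            # will also check the requirements of this regulation.
            return "third-party assessment under the relevant NLF legislation"
        if remote_biometric_identification:
            # The stated exception among stand-alone Annex III systems.
            return "third-party conformity assessment"
        # Default for stand-alone Annex III systems.
        return "ex-ante internal control checks + strong ex-post enforcement"

    print(conformity_route(False, False))
    # -> ex-ante internal control checks + strong ex-post enforcement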
After the provider has performed the relevant conformity assessment, it should register those stand-alone high-risk AI systems in an EU database that will be managed by the Commission to increase public transparency and oversight and strengthen ex post supervision by competent authorities. By contrast, for reasons of consistency with the existing product safety legislation, the conformity assessments of AI systems that are safety components of products will follow a system with third party conformity assessment procedures already established under the relevant sectoral product safety legislation. New ex ante re-assessments of the conformity will be needed in case of substantial modifications to the AI systems (and notably changes which go beyond what is pre-determined by the provider in its technical documentation and checked at the moment of the ex-ante conformity assessment).

5.2.4. TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS (TITLE IV)

Title IV concerns certain AI systems to take account of the specific risks of manipulation they pose. Transparency obligations will apply for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with (social) categories based on biometric data, or (iii) generate or manipulate content ('deep fakes'). When persons interact with an AI system or their emotions or characteristics are recognised through automated means, people must be informed of that circumstance. If an AI system is used to generate or manipulate image, audio or video content that appreciably resembles authentic content, there should be an obligation to disclose that the content is generated through automated means, subject to exceptions for legitimate purposes (law enforcement, freedom of expression). This allows persons to make informed choices or step back from a given situation.
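The three Title IV triggers and their disclosure duties pair off naturally. The mapping below paraphrases the memorandum in our own wording; the exceptions for legitimate purposes are noted in a comment rather than modelled.

    # The three Title IV transparency triggers, paraphrased as a lookup table.
    # Exceptions (law enforcement, freedom of expression) are not modelled.
    TITLE_IV_OBLIGATIONS = {
        "interacts with humans":
            "inform the person that they are interacting with an AI system",
        "emotion recognition or biometric categorisation":
            "inform the person that their emotions or characteristics are "
            "recognised through automated means",
        "generates or manipulates content ('deep fakes')":
            "disclose that the content is generated through automated means",
    }

    for trigger, duty in TITLE_IV_OBLIGATIONS.items():
        print(f"{trigger}: {duty}")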
5.2.5. MEASURES IN SUPPORT OF INNOVATION (TITLE V)

Title V contributes to the objective to create a legal framework that is innovation-friendly, future-proof and resilient to disruption. To that end, it encourages national competent authorities to set up regulatory sandboxes and sets a basic framework in terms of governance, supervision and liability. AI regulatory sandboxes establish a controlled environment to test innovative technologies for a limited time on the basis of a testing plan agreed with the competent authorities. Title V also contains measures to reduce the regulatory burden on SMEs and start-ups.

5.2.6. GOVERNANCE AND IMPLEMENTATION (TITLES VI, VII AND VIII)

Title VI sets up the governance systems at Union and national level. At Union level, the proposal establishes a European Artificial Intelligence Board (the 'Board'), composed of representatives from the Member States and the Commission. The Board will facilitate a smooth, effective and harmonised implementation of this regulation by contributing to the effective cooperation of the national supervisory authorities and the Commission and providing advice and expertise to the Commission. It will also collect and share best practices among the Member States.

At national level, Member States will have to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the regulation. The European Data Protection Supervisor will act as the competent authority for the supervision of the Union institutions, agencies and bodies when they fall within the scope of this regulation.

Title VII aims to facilitate the monitoring work of the Commission and national authorities through the establishment of an EU-wide database for stand-alone high-risk AI systems with mainly fundamental rights implications. The database will be operated by the Commission and provided with data by the providers of the AI systems, who will be required to register their systems before placing them on the market or otherwise putting them into service.
Title VIII sets out the monitoring and reporting obligations for providers of AI systems with regard to post-market monitoring and to reporting and investigating AI-related incidents and malfunctioning. Market surveillance authorities would also control the market and investigate compliance with the obligations and requirements for all high-risk AI systems already placed on the market. Market surveillance authorities would have all powers under Regulation (EU) 2019/1020 on market surveillance. Ex-post enforcement should ensure that once the AI system has been put on the market, public authorities have the powers and resources to intervene in case AI systems generate unexpected risks which warrant rapid action. They will also monitor compliance of operators with their relevant obligations under the regulation. The proposal does not foresee the automatic creation of any additional bodies or authorities at Member State level. Member States may therefore appoint (and draw upon the expertise of) existing sectorial authorities, who would be entrusted also with the powers to monitor and enforce the provisions of the regulation.

All this is without prejudice to the existing system and allocation of powers of ex-post enforcement of obligations regarding fundamental rights in the Member States. When necessary for their mandate, existing supervision and enforcement authorities will also have the power to request and access any documentation maintained following this regulation and, where needed, request market surveillance authorities to organise testing of the high-risk AI system through technical means.

5.2.7. CODES OF CONDUCT (TITLE IX)

Title IX creates a framework for the creation of codes of conduct, which aim to encourage providers of non-high-risk AI systems to apply voluntarily the mandatory requirements for high-risk AI systems (as laid out in Title III). Providers of non-high-risk AI systems may create and implement the codes of conduct themselves. Those codes may also include voluntary commitments related, for example, to environmental sustainability, accessibility for persons with disability, stakeholders' participation in the design and development of AI systems, and diversity of development teams.

5.2.8. FINAL PROVISIONS (TITLES X, XI AND XII)

Title X emphasizes the obligation of all parties to respect the confidentiality of information and data and sets out rules for the exchange of information obtained during the implementation of the regulation. Title X also includes measures to ensure the effective implementation of the regulation through effective, proportionate and dissuasive penalties for infringements of its provisions.

Title XI sets out rules for the exercise of delegation and implementing powers. The proposal empowers the Commission to adopt, where appropriate, implementing acts to ensure uniform application of the regulation or delegated acts to update or complement the lists in Annexes I to VII.

Title XII contains an obligation for the Commission to assess regularly the need for an update of Annex III and to prepare regular reports on the evaluation and review of the regulation. It also lays down final provisions, including a differentiated transitional period for the initial date of the applicability of the regulation to facilitate the smooth implementation for all parties concerned.

2021/0106 (COD)

Proposal for a

REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL

LAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE (ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION LEGISLATIVE ACTS
THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,

Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof,

Having regard to the proposal from the European Commission,

After transmission of the draft legislative act to the national parliaments,

Having regard to the opinion of the European Economic and Social Committee,

Having regard to the opinion of the Committee of the Regions,

Acting in accordance with the ordinary legislative procedure,

Whereas:

(1) The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, marketing and use of artificial intelligence in conformity with Union values. This Regulation pursues a number of overriding reasons of public interest, such as a high level of protection of health, safety and fundamental rights, and it ensures the free movement of AI-based goods and services cross-border, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.

(2) Artificial intelligence systems (AI systems) can be easily deployed in multiple sectors of the economy and society, including cross border, and circulate throughout the Union. Certain Member States have already explored the adoption of national rules to ensure that artificial intelligence is safe and is developed and used in compliance with fundamental rights obligations. Differing national rules may lead to fragmentation of the internal market and decrease legal certainty for operators that develop or use AI systems. A consistent and high level of protection throughout the Union should therefore be ensured, while divergences hampering the free circulation of AI systems and related products and services within the internal market should be prevented, by laying down uniform obligations for operators and guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the internal market based on Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of personal data concerning restrictions of the use of AI systems for ‘real-time’ remote biometric identification in publicly accessible spaces for the purpose of law enforcement, it is appropriate to base this Regulation, in as far as those specific rules are concerned, on Article 16 of the TFEU. In light of those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection Board.

(3) Artificial intelligence is a fast evolving family of technologies that can contribute to a wide array of economic and societal benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations and resource allocation, and personalising digital solutions available for individuals and organisations, the use of artificial intelligence can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes, for example in healthcare, farming, education and training, infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy efficiency, and climate change mitigation and adaptation.

(4) At the same time, depending on the circumstances regarding its specific application and use, artificial intelligence may generate risks and cause harm to public interests and rights that are protected by Union law. Such harm might be material or
immaterial.

(5) A Union legal framework laying down harmonised rules on artificial intelligence is therefore needed to foster the development, use and uptake of artificial intelligence in the internal market that at the same time meets a high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law. To achieve that objective, rules regulating the placing on the market and putting into service of certain AI systems should be laid down, thus ensuring the smooth functioning of the internal market and allowing those systems to benefit from the principle of free movement of goods and services. By laying down those rules, this Regulation supports the objective of the Union of being a global leader in the development of secure, trustworthy and ethical artificial intelligence, as stated by the European Council, and it ensures the protection of ethical principles, as specifically requested by the European Parliament.

(6) The notion of AI system should be clearly defined to ensure legal certainty, while providing the flexibility to accommodate future technological developments. The definition should be based on the key functional characteristics of the software, in particular the ability, for a given set of human-defined objectives, to generate outputs such as content, predictions, recommendations, or decisions which influence the environment with which the system interacts, be it in a physical or digital dimension. AI systems can be designed to operate with varying levels of autonomy and be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically integrated into the product (embedded) or serves the functionality of the product without being integrated therein (non-embedded). The definition of AI system should be complemented by a list of specific techniques and approaches used for its development, which should be kept up to date in the light of market and technological developments through the adoption of delegated acts by the Commission to amend that list.
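Purely as an editorial illustration of the functional definition in recital 6, the behaviour it describes can be sketched as an interface; none of the names or types below appear in the Regulation, and this is a minimal sketch, not a normative model.

```python
from dataclasses import dataclass
from typing import Protocol, Sequence


@dataclass
class Output:
    """One of the output kinds listed in recital 6."""
    kind: str        # 'content', 'prediction', 'recommendation' or 'decision'
    payload: object  # the generated artefact itself


class AISystem(Protocol):
    """Functional view of an AI system per recital 6: software that, for a
    given set of human-defined objectives, generates outputs influencing
    the (physical or digital) environment it interacts with."""

    objectives: Sequence[str]  # the human-defined objectives

    def generate(self, observation: object) -> Output:
        """Map an observation of the environment to an output."""
        ...
```

Whether such a system is embedded in a product or serves it externally does not change this functional view, which is precisely why the recital defines the notion by behaviour rather than by deployment.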
of delegated acts by the Commission manners in which they are used, as

AI GOVERNANCE: A CONSOLIDATED REFERENCE | 39


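Recital 8's distinction turns entirely on whether capture, comparison and identification happen without significant delay. A minimal sketch of that criterion follows; the numeric threshold is an assumption for illustration only, since the Regulation deliberately fixes no number.

```python
from datetime import timedelta

# Illustrative threshold only: the text speaks of a 'significant delay',
# precisely so that inserting minor delays cannot take a system out of
# the 'real-time' category.
ASSUMED_SIGNIFICANT_DELAY = timedelta(minutes=5)


def classify_rbi_use(capture_to_identification: timedelta) -> str:
    """Classify a remote biometric identification use as 'real-time' or
    'post' following the functional criterion of recital 8."""
    if capture_to_identification < ASSUMED_SIGNIFICANT_DELAY:
        return "real-time"  # live or near-live material, no significant delay
    return "post"           # previously captured material, delayed matching
```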
(9) For the purposes of this Regulation, the notion of publicly accessible space should be understood as referring to any physical place that is accessible to the public, irrespective of whether the place in question is privately or publicly owned. Therefore, the notion does not cover places that are private in nature and normally not freely accessible for third parties, including law enforcement authorities, unless those parties have been specifically invited or authorised, such as homes, private clubs, offices, warehouses and factories. Online spaces are not covered either, as they are not physical spaces. However, the mere fact that certain conditions for accessing a particular space may apply, such as admission tickets or age restrictions, does not mean that the space is not publicly accessible within the meaning of this Regulation. Consequently, in addition to public spaces such as streets, relevant parts of government buildings and most transport infrastructure, spaces such as cinemas, theatres, shops and shopping centres are normally also publicly accessible. Whether a given space is accessible to the public should however be determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.

(10) In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory manner, irrespective of whether they are established within the Union or in a third country, and to users of AI systems established within the Union.

(11) In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are neither placed on the market, nor put into service, nor used in the Union. This is the case for example of an operator established in the Union that contracts certain services to an operator established outside the Union in relation to an activity to be performed by an AI system that would qualify as high-risk and whose effects impact natural persons located in the Union. In those circumstances, the AI system used by the operator outside the Union could process data lawfully collected in and transferred from the Union, and provide to the contracting operator in the Union the output of that AI system resulting from that processing, without that AI system being placed on the market, put into service or used in the Union. To prevent the circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this Regulation should also apply to providers and users of AI systems that are established in a third country, to the extent the output produced by those systems is used in the Union. Nonetheless, to take into account existing arrangements and special needs for cooperation with foreign partners with whom information and evidence is exchanged, this Regulation should not apply to public authorities of a third country and international organisations when acting in the framework of international agreements concluded at national or European level for law enforcement and judicial cooperation with the Union or with its Member States. Such agreements have been concluded bilaterally between Member States and third countries or between the European Union, Europol and other EU agencies and third countries and international organisations.

(12) This Regulation should also apply to Union institutions, offices, bodies and agencies when acting as a provider or user of an AI system. AI systems exclusively developed or used for military purposes should be excluded from the scope of this Regulation where that use falls under the exclusive remit of the Common Foreign and Security Policy regulated under Title V of the Treaty on the European Union (TEU). This Regulation should be without prejudice to the provisions regarding the liability of intermediary service providers set out in Directive 2000/31/EC of the European Parliament and of the Council [as amended by the Digital Services Act].

(13) In order to ensure a consistent and high level of protection of public interests as regards health, safety and fundamental rights,
common normative standards for all high-risk AI systems should be established. Those standards should be consistent with the Charter of fundamental rights of the European Union (the Charter) and should be non-discriminatory and in line with the Union’s international trade commitments.

(14) In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based approach should be followed. That approach should tailor the type and content of such rules to the intensity and scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain artificial intelligence practices, to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down transparency obligations for certain AI systems.
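The risk-based approach of recital 14 attaches a different set of legal consequences to each tier. A minimal sketch of that mapping, for orientation only; the enum values and obligation labels are editorial shorthand, not terms of the Regulation.

```python
from enum import Enum, auto


class RiskTier(Enum):
    PROHIBITED_PRACTICE = auto()  # banned outright (Title II)
    HIGH_RISK = auto()            # mandatory requirements (Title III)
    TRANSPARENCY_RISK = auto()    # disclosure obligations (Title IV)
    MINIMAL_RISK = auto()         # voluntary codes of conduct (Title IX)


def obligations_for(tier: RiskTier) -> list[str]:
    """Broad consequence attached to each tier of the risk-based approach."""
    return {
        RiskTier.PROHIBITED_PRACTICE: ["may not be placed on the market, put into service or used"],
        RiskTier.HIGH_RISK: ["risk management", "data governance",
                             "technical documentation", "record-keeping",
                             "transparency", "human oversight",
                             "accuracy, robustness and cybersecurity"],
        RiskTier.TRANSPARENCY_RISK: ["disclose interaction with an AI system or artificial content"],
        RiskTier.MINIMAL_RISK: ["voluntary codes of conduct"],
    }[tier]
```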
(15) Aside from the many beneficial uses of artificial intelligence, that technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices are particularly harmful and should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.

(16) The placing on the market, putting into service or use of certain AI systems intended to distort human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden. Such AI systems deploy subliminal components individuals cannot perceive or exploit vulnerabilities of children and people due to their age, physical or mental incapacities. They do so with the intention to materially distort the behaviour of a person and in a manner that causes or is likely to cause harm to that or another person. The intention may not be presumed if the distortion of human behaviour results from factors external to the AI system which are outside of the control of the provider or the user. Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human-machine relations that exposes natural persons to harm and such research is carried out in accordance with recognised ethical standards for scientific research.

(17) AI systems providing social scoring of natural persons for general purpose by public authorities or on their behalf may lead to discriminatory outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the values of equality and justice. Such AI systems evaluate or classify the trustworthiness of natural persons based on their social behaviour in multiple contexts or known or predicted personal or personality characteristics. The social score obtained from such AI systems may lead to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts which are unrelated to the context in which the data was originally generated or collected, or to a detrimental treatment that is disproportionate or unjustified to the gravity of their social behaviour. Such AI systems should therefore be prohibited.

(18) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement is considered particularly intrusive in the rights and freedoms of the concerned persons, to the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. In addition, the immediacy of the impact and the limited opportunities for further checks or corrections in relation to the use of such systems operating in ‘real-time’ carry heightened risks for the rights and freedoms of the persons that are concerned by law enforcement activities.

(19) The use of those systems for the purpose of law enforcement
should therefore be prohibited, except in three exhaustively listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the importance of which outweighs the risks. Those situations involve the search for potential victims of crime, including missing children; certain threats to the life or physical safety of natural persons or of a terrorist attack; and the detection, localisation, identification or prosecution of perpetrators or suspects of the criminal offences referred to in Council Framework Decision 2002/584/JHA if those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years and as they are defined in the law of that Member State. Such threshold for the custodial sentence or detention order in accordance with national law contributes to ensure that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification systems. Moreover, of the 32 criminal offences listed in the Council Framework Decision 2002/584/JHA, some are in practice likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric identification will foreseeably be necessary and proportionate to highly varying degrees for the practical pursuit of the detection, localisation, identification or prosecution of a perpetrator or suspect of the different criminal offences listed and having regard to the likely differences in the seriousness, probability and scale of the harm or possible negative consequences.

(20) In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to establish that, in each of those three exhaustively listed and narrowly defined situations, certain elements should be taken into account, in particular as regards the nature of the situation giving rise to the request and the consequences of the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement should be subject to appropriate limits in time and space, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The reference database of persons should be appropriate for each use case in each of the three situations mentioned above.

(21) Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent administrative authority of a Member State. Such authorisation should in principle be obtained prior to the use, except in duly justified situations of urgency, that is, situations where the need to use the systems in question is such as to make it effectively and objectively impossible to obtain an authorisation before commencing the use. In such situations of urgency, the use should be restricted to the absolute minimum necessary and be subject to appropriate safeguards and conditions, as determined in national law and specified in the context of each individual urgent use case by the law enforcement authority itself. In addition, the law enforcement authority should in such situations seek to obtain an authorisation as soon as possible, whilst providing the reasons for not having been able to request it earlier.

(22) Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation, that such use in the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the Member State in question has decided to expressly provide for the possibility to authorise such use in its detailed rules of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use identified in this Regulation.
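Recitals 21 and 22 together describe a small decision procedure: a Member State opt-in, prior judicial or independent administrative authorisation as the default, and a narrow urgency route with an ex-post authorisation duty. A hedged sketch of that flow, with entirely illustrative names:

```python
from dataclasses import dataclass


@dataclass
class RtbiUseRequest:
    """Context of one proposed 'real-time' remote biometric identification use."""
    member_state_opted_in: bool   # recital 22: national law must provide for the use
    prior_authorisation: bool     # express, specific judicial/administrative approval
    duly_justified_urgency: bool  # recital 21: narrow ex-post authorisation route


def may_commence_use(req: RtbiUseRequest) -> bool:
    """Illustrative gate reflecting recitals 21 and 22."""
    if not req.member_state_opted_in:
        return False              # no possibility under national law at all
    if req.prior_authorisation:
        return True               # the default, ex-ante route
    # Urgency route: restricted to the absolute minimum necessary, and an
    # authorisation must still be sought as soon as possible afterwards.
    return req.duly_justified_urgency
```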
(23) The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. Therefore, such use and processing should only be possible in as far as it is compatible with the framework set by this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for the purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed in Article 10 of Directive (EU) 2016/680. In this context, this Regulation is not intended to provide the legal basis for the processing of personal data under Article 8 of Directive 2016/680. However, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by competent authorities, should not be covered by the specific framework regarding such use for the purpose of law enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law that may give effect to it.

(24) Any processing of biometric data and other personal data involved in the use of AI systems for biometric identification, other than in connection to the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement as regulated by this Regulation, including where those systems are used by competent authorities in publicly accessible spaces for other purposes than law enforcement, should continue to comply with all requirements resulting from Article 9(1) of Regulation (EU) 2016/679, Article 10(1) of Regulation (EU) 2018/1725 and Article 10 of Directive (EU) 2016/680, as applicable.

(25) In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the rules governing the forms of judicial cooperation in criminal matters or police cooperation which require compliance with the provisions laid down on the basis of Article 16 of the TFEU.

(26) In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and TFEU, Denmark is not bound by rules laid down in Article 5(1), point (d), (2) and (3) of this Regulation adopted on the basis of Article 16 of the TFEU, or subject to their application, which relate to the processing of personal data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU.

(27) High-risk AI systems should only be placed on the Union market or put into service if they comply with certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests as recognised and protected by Union law. AI systems identified as high-risk should be limited to those that have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such limitation minimises any potential restriction to international trade, if any.

(28) AI systems could produce
adverse outcomes to health and safety of persons, in particular when such systems operate as components of products. Consistently with the objectives of Union harmonisation legislation to facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant products find their way into the market, it is important that the safety risks that may be generated by a product as a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care, should be able to safely operate and perform their functions in complex environments. Similarly, in the health sector where the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems supporting human decisions should be reliable and accurate. The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include the right to human dignity, respect for private and family life, protection of personal data, freedom of expression and information, freedom of assembly and of association, and non-discrimination, consumer protection, workers’ rights, rights of persons with disabilities, right to an effective remedy and to a fair trial, right of defence and the presumption of innocence, and right to good administration. In addition to those rights, it is important to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and implemented in Union policies should also be considered when assessing the severity of the harm that an AI system can cause, including in relation to the health and safety of persons.

(29) As regards high-risk AI systems that are safety components of products or systems, or which are themselves products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the Council, Regulation (EU) No 167/2013 of the European Parliament and of the Council, Regulation (EU) No 168/2013 of the European Parliament and of the Council, Directive 2014/90/EU of the European Parliament and of the Council, Directive (EU) 2016/797 of the European Parliament and of the Council, Regulation (EU) 2018/858 of the European Parliament and of the Council, Regulation (EU) 2018/1139 of the European Parliament and of the Council, and Regulation (EU) 2019/2144 of the European Parliament and of the Council, it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for high-risk AI systems laid down in this Regulation when adopting any relevant future delegated or implementing acts on the basis of those acts.

(30) As regards AI systems that are safety components of products, or which are themselves products, falling within the scope of certain Union harmonisation legislation, it is appropriate to classify them as high-risk under this Regulation if the product in question undergoes the conformity assessment procedure with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, appliances
burning gaseous fuels, medical devices, and in vitro diagnostic medical devices.

(31) The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the product whose safety component is the AI system, or the AI system itself as a product, is considered ‘high-risk’ under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is notably the case for Regulation (EU) 2017/745 of the European Parliament and of the Council and Regulation (EU) 2017/746 of the European Parliament and of the Council, where a third-party conformity assessment is provided for medium-risk and high-risk products.

(32) As regards stand-alone AI systems, meaning high-risk AI systems other than those that are safety components of products, or which are themselves products, it is appropriate to classify them as high-risk if, in the light of their intended purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into account both the severity of the possible harm and its probability of occurrence, and they are used in a number of specifically pre-defined areas specified in the Regulation. The identification of those systems is based on the same methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems.

(33) Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail discriminatory effects. This is particularly relevant when it comes to age, ethnicity, sex or disabilities. Therefore, ‘real-time’ and ‘post’ remote biometric identification systems should be classified as high-risk. In view of the risks that they pose, both types of remote biometric identification systems should be subject to specific requirements on logging capabilities and human oversight.

(34) As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to appreciable disruptions in the ordinary conduct of social and economic activities.

(35) AI systems used in education or vocational training, notably for determining access or assigning persons to educational and vocational training institutions or to evaluate persons on tests as part of or as a precondition for their education, should be considered high-risk, since they may determine the educational and professional course of a person’s life and therefore affect their ability to secure their livelihood. When improperly designed and used, such systems may violate the right to education and training as well as the right not to be discriminated against and perpetuate historical patterns of discrimination.

(36) AI systems used in employment, workers management and access to self-employment, notably for the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships, should also be classified as high-risk, since those systems may appreciably impact future career prospects and livelihoods of these persons. Relevant work-related contractual relationships should involve employees and persons providing services through platforms as referred to in the Commission Work Programme 2021. Such persons should in principle not be considered users within the meaning of this Regulation. Throughout the recruitment process and in the evaluation, promotion, or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial or ethnic origins or sexual orientation. AI systems used to monitor the performance and
behaviour of these persons may also impact their rights to data protection and privacy.

(37) Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain essential private and public services and benefits necessary for people to fully participate in society or to improve one’s standard of living. In particular, AI systems used to evaluate the credit score or creditworthiness of natural persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources or essential services such as housing, electricity, and telecommunication services. AI systems used for this purpose may lead to discrimination of persons or groups and perpetuate historical patterns of discrimination, for example based on racial or ethnic origins, disabilities, age, sexual orientation, or create new forms of discriminatory impacts. Considering the very limited scale of the impact and the available alternatives on the market, it is appropriate to exempt AI systems for the purpose of creditworthiness assessment and credit scoring when put into service by small-scale providers for their own use. Natural persons applying for or receiving public assistance benefits and services from public authorities are typically dependent on those benefits and services and in a vulnerable position in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services should be denied, reduced, revoked or reclaimed by authorities, they may have a significant impact on persons’ livelihood and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity or an effective remedy. Those systems should therefore be classified as high-risk. Nonetheless, this Regulation should not hamper the development and use of innovative approaches in the public administration, which would stand to benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to legal and natural persons. Finally, AI systems used to dispatch or establish priority in the dispatching of emergency first response services should also be classified as high-risk since they make decisions in very critical situations for the life and health of persons and their property.

(38) Actions by law enforcement authorities involving certain uses of AI systems are characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In particular, if the AI system is not trained with high quality data, does not meet adequate requirements in terms of its accuracy or robustness, or is not properly designed and tested before being put on the market or otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to classify as high-risk a number of AI systems intended to be used in the law enforcement context where accuracy, reliability and transparency are particularly important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the nature of the activities in question and the risks relating thereto, those high-risk AI systems should include in particular AI systems intended to be used by law enforcement authorities for individual risk assessments, polygraphs and similar tools or to detect the emotional state of a natural person, to detect ‘deep fakes’, for the evaluation of the reliability of evidence in criminal proceedings, for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons, or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups, for profiling in the
course of detection, investigation or prosecution of criminal offences, as well as for crime analytics regarding natural persons. AI systems specifically intended to be used for administrative proceedings by tax and customs authorities should not be considered high-risk AI systems used by law enforcement authorities for the purposes of prevention, detection, investigation and prosecution of criminal offences.

(39) AI systems used in migration, asylum and border control management affect people who are often in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore particularly important to guarantee the respect of the fundamental rights of the affected persons, notably their rights to free movement, non-discrimination, protection of private life and personal data, international protection and good administration. It is therefore appropriate to classify as high-risk AI systems intended to be used by the competent public authorities charged with tasks in the fields of migration, asylum and border control management as polygraphs and similar tools or to detect the emotional state of a natural person; for assessing certain risks posed by natural persons entering the territory of a Member State or applying for visa or asylum; for verifying the authenticity of the relevant documents of natural persons; and for assisting competent public authorities in the examination of applications for asylum, visa and residence permits and associated complaints with regard to the objective to establish the eligibility of the natural persons applying for a status. AI systems in the area of migration, asylum and border control management covered by this Regulation should comply with the relevant procedural requirements set by Directive 2013/32/EU of the European Parliament and of the Council, Regulation (EC) No 810/2009 of the European Parliament and of the Council and other relevant legislation.

(40) Certain AI systems intended for the administration of justice and democratic processes should be classified as high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and opacity, it is appropriate to qualify as high-risk AI systems intended to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set of facts. Such qualification should not extend, however, to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice in individual cases, such as anonymisation or pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks or allocation of resources.

(41) The fact that an AI system is classified as high risk under this Regulation should not be interpreted as indicating that the use of the system is necessarily lawful under other acts of Union law or under national law compatible with Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law. This Regulation should not be understood as providing for the legal ground for processing of personal data, including special categories of personal data, where relevant.

(42) To mitigate the risks from high-risk AI systems placed or otherwise put into service on the Union market for users and affected persons, certain mandatory requirements should apply, taking into account the intended purpose of the use of the system and according to the risk management system to be established by the provider.

(43) Requirements should apply to high-risk AI systems as regards the
quality of data sets used, technical documentation and record-keeping, transparency and the provision of information to users, human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively mitigate the risks for health, safety and fundamental rights, as applicable in the light of the intended purpose of the system, and no other less trade restrictive measures are reasonably available, thus avoiding unjustified restrictions to trade.

(44) High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensuring that the high-risk AI system performs as intended and safely and that it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system. They should also have the appropriate statistical properties, including as regards the persons or groups of persons on which the high-risk AI system is intended to be used. In particular, training, validation and testing data sets should take into account, to the extent required in the light of their intended purpose, the features, characteristics or elements that are particular to the specific geographical, behavioural or functional setting or context within which the AI system is intended to be used. In order to protect the right of others from the discrimination that might result from the bias in AI systems, the providers should be able to process also special categories of personal data, as a matter of substantial public interest, in order to ensure the bias monitoring, detection and correction in relation to high-risk AI systems.
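The criteria of recital 44 (relevant, representative, free of errors, complete, with appropriate statistical properties per affected group) translate naturally into automated dataset checks. The following is a simplified sketch using pandas; the duplicate proxy and the minimum-share threshold are editorial assumptions, not figures from the Regulation.

```python
import pandas as pd


def dataset_quality_report(df: pd.DataFrame, group_column: str,
                           min_group_share: float = 0.05) -> dict:
    """Rough checks inspired by recital 44: completeness, freedom from
    errors (duplicates used as a crude proxy) and representativeness of
    the groups on which the system is intended to be used."""
    report = {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),  # completeness
        "duplicate_rows": int(df.duplicated().sum()),  # error proxy
    }
    # Flag groups falling below an (assumed) minimum share of the data.
    shares = df[group_column].value_counts(normalize=True)
    report["underrepresented_groups"] = shares[shares < min_group_share].index.tolist()
    return report
```

A report of this kind would typically feed the data governance and bias-monitoring practices the recital calls for, rather than replace them.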
(45) For the development of high-risk AI systems, certain actors, such as providers, notified bodies and other relevant entities, such as digital innovation hubs, testing experimentation facilities and researchers, should be able to access and use high quality datasets within their respective fields of activities which are related to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing between businesses and with government in the public interest will be instrumental to provide trustful, accountable and non-discriminatory access to high quality data for the training, validation and testing of AI systems. For example, in health, the European health data space will facilitate non-discriminatory access to health data and the training of artificial intelligence algorithms on those datasets, in a privacy-preserving, secure, timely, transparent and trustworthy manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, providing or supporting the access to data may also support the provision of high-quality data for the training, validation and testing of AI systems.

(46) Having information on how high-risk AI systems have been developed and how they perform throughout their lifecycle is essential to verify compliance with the requirements under this Regulation. This requires keeping records and the availability of technical documentation containing information which is necessary to assess the compliance of the AI system with the relevant requirements. Such information should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, testing and validation processes used, as well as documentation on the relevant risk management system. The technical documentation should be kept up to date.

(47) To address the opacity that may make certain AI systems incomprehensible to or too complex for natural persons, a certain degree of transparency should be required for high-risk AI systems. Users should be able to interpret the system output and use it appropriately. High-risk AI systems should therefore
be accompanied by relevant documentation and instructions of use and include concise and clear information, including in relation to possible risks to fundamental rights and discrimination, where appropriate.

(48) High-risk AI systems should be designed and developed in such a way that natural persons can oversee their functioning. For this purpose, appropriate human oversight measures should be identified by the provider of the system before its placing on the market or putting into service. In particular, where appropriate, such measures should guarantee that the system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary competence, training and authority to carry out that role.

(49) High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity in accordance with the generally acknowledged state of the art. The level of accuracy and the accuracy metrics should be communicated to the users.

(50) The technical robustness is a key requirement for high-risk AI systems. They should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.

(51) Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, behaviour or performance or to compromise their security properties by malicious third parties exploiting the system’s vulnerabilities. Cyberattacks against AI systems can leverage AI-specific assets, such as training data sets (e.g. data poisoning) or trained models (e.g. adversarial attacks), or exploit vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, suitable measures should therefore be taken by the providers of high-risk AI systems, also taking into account as appropriate the underlying ICT infrastructure.

(52) As part of Union harmonisation legislation, rules applicable to the placing on the market, putting into service and use of high-risk AI systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the Council setting out the requirements for accreditation and the market surveillance of products, Decision No 768/2008/EC of the European Parliament and of the Council on a common framework for the marketing of products and Regulation (EU) 2019/1020 of the European Parliament and of the Council on market surveillance and compliance of products (‘New Legislative Framework for the marketing of products’).

(53) It is appropriate that a specific natural or legal person, defined as the provider, takes the responsibility for the placing on the market or putting into service of a high-risk AI system, regardless of whether that natural or legal person is the person who designed or developed the system.

(54) The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system. Public authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the quality management system as part of the quality management system adopted at a national or regional level, as appropriate, taking into account the specificities of the sector and the competences and organisation of the public authority in question.
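Recital 51 above singles out two AI-specific attack surfaces: poisoned training sets and adversarial inputs against trained models. The sketch below illustrates one standard robustness probe against the latter, a fast-gradient-sign (FGSM) style test; it is a generic illustration of the attack class named in the recital, not a measure the Regulation mandates.

```python
import numpy as np


def fgsm_perturb(x: np.ndarray, grad_wrt_x: np.ndarray, epsilon: float) -> np.ndarray:
    """Fast-gradient-sign perturbation, a standard adversarial probe."""
    return x + epsilon * np.sign(grad_wrt_x)


def survives_adversarial_nudge(predict, x: np.ndarray, grad_wrt_x: np.ndarray,
                               epsilon: float = 0.05) -> bool:
    """True if the model's label is unchanged by a small adversarial step.
    `predict` maps an input array to a label; `grad_wrt_x` is the loss
    gradient at x, computed with whatever framework the provider uses."""
    return predict(x) == predict(fgsm_perturb(x, grad_wrt_x, epsilon))
```

Comparable probes exist for data poisoning (e.g. screening training data for anomalous or duplicated contributions), and both would sit alongside conventional ICT security controls on the underlying infrastructure.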
(55) Where a high-risk AI system that is a safety component of a product which is covered by a relevant New Legislative Framework sectorial legislation is not placed on the market or put into service independently from the product, the manufacturer of the final product as defined under the relevant New Legislative Framework legislation should comply with the obligations of the provider established in this Regulation and notably ensure that the AI system embedded in the final product complies with the requirements of this Regulation.

(56) To enable enforcement of this Regulation and create a level playing field for operators, and taking into account the different forms of making available of digital products, it is important to ensure that, under all circumstances, a person established in the Union can provide authorities with all the necessary information on the compliance of an AI system. Therefore, prior to making their AI systems available in the Union, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative established in the Union.

(57) In line with New Legislative Framework principles, specific obligations for relevant economic operators, such as importers and distributors, should be set to ensure legal certainty and facilitate regulatory compliance by those relevant operators.

(58) Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it is appropriate to set specific responsibilities for users. Users should in particular use high-risk AI systems in accordance with the instructions of use, and certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and with regard to record-keeping, as appropriate.

(59) It is appropriate to envisage that the user of the AI system should be the natural or legal person, public authority, agency or other body under whose authority the AI system is operated, except where the use is made in the course of a personal non-professional activity.

(60) In the light of the complexity of the artificial intelligence value chain, relevant third parties, notably the ones involved in the sale and the supply of software, software tools and components, pre-trained models and data, or providers of network services, should cooperate, as appropriate, with providers and users to enable their compliance with the obligations under this Regulation and with competent authorities established under this Regulation.

(61) Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this Regulation. Compliance with harmonised standards as defined in Regulation (EU) No 1025/2012 of the European Parliament and of the Council should be a means for providers to demonstrate conformity with the requirements of this Regulation. However, the Commission could adopt common technical specifications in areas where no harmonised standards exist or where they are insufficient.

(62) In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to a conformity assessment prior to their placing on the market or putting into service.

(63) It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk AI systems related to products which are covered by existing Union harmonisation legislation following the New Legislative Framework approach, the compliance of those AI systems with the requirements of this Regulation should be assessed as part of the conformity assessment already foreseen under that legislation. The applicability of the requirements of this Regulation should thus not affect the specific logic, methodology or general structure of conformity assessment under the relevant specific New Legislative Framework
legislation. This approach is fully reflected in the interplay between this Regulation and the [Machinery Regulation]. While safety risks of AI systems ensuring safety functions in machinery are addressed by the requirements of this Regulation, certain specific requirements in the [Machinery Regulation] will ensure the safe integration of the AI system into the overall machinery, so as not to compromise the safety of the machinery as a whole. The [Machinery Regulation] applies the same definition of AI system as this Regulation.

(64) Given the more extensive experience of professional pre-market certifiers in the field of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI systems other than those related to products. Therefore, the conformity assessment of such systems should be carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems intended to be used for the remote biometric identification of persons, for which the involvement of a notified body in the conformity assessment should be foreseen, to the extent they are not prohibited.

(65) In order to carry out third-party conformity assessment for AI systems intended to be used for the remote biometric identification of persons, notified bodies should be designated under this Regulation by the national competent authorities, provided they are compliant with a set of requirements, notably on independence, competence and absence of conflicts of interests.

(66) In line with the commonly established notion of substantial modification for products regulated by Union harmonisation legislation, it is appropriate that an AI system undergoes a new conformity assessment whenever a change occurs which may affect the compliance of the system with this Regulation or when the intended purpose of the system changes. In addition, as regards AI systems which continue to ‘learn’ after being placed on the market or put into service (i.e. they automatically adapt how functions are carried out), it is necessary to provide rules establishing that changes to the algorithm and its performance that have been pre-determined by the provider and assessed at the moment of the conformity assessment should not constitute a substantial modification.

(67) High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can move freely within the internal market. Member States should not create unjustified obstacles to the placing on the market or putting into service of high-risk AI systems that comply with the requirements laid down in this Regulation and bear the CE marking.

(68) Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of persons and for society as a whole. It is thus appropriate that under exceptional reasons of public security or protection of life and health of natural persons and the protection of industrial and commercial property, Member States could authorise the placing on the market or putting into service of AI systems which have not undergone a conformity assessment.

(69) In order to facilitate the work of the Commission and the Member States in the artificial intelligence field as well as to increase the transparency towards the public, providers of high-risk AI systems other than those related to products falling within the scope of relevant existing Union harmonisation legislation, should be required to register their high-risk AI system in an EU database, to be established and managed by the Commission. The Commission should be the controller of that database, in accordance with Regulation (EU) 2018/1725 of the European Parliament and of the Council. In order to ensure the full functionality of the database, when deployed, the procedure for setting the database should include the elaboration of functional specifications by the Commission and
an independent audit report.

(70) Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use of these systems should therefore be subject to specific transparency obligations without prejudice to the requirements and obligations for high-risk AI systems. In particular, natural persons should be notified that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. Moreover, natural persons should be notified when they are exposed to an emotion recognition system or a biometric categorisation system. Such information and notifications should be provided in accessible formats for persons with disabilities. Further, users, who use an AI system to generate or manipulate image, audio or video content that appreciably resembles existing persons, places or events and would falsely appear to a person to be authentic, should disclose that the content has been artificially created or manipulated by labelling the artificial intelligence output accordingly and disclosing its artificial origin.

(71) Artificial intelligence is a rapidly developing family of technologies that requires novel forms of regulatory oversight and a safe space for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk mitigation measures. To ensure a legal framework that is innovation-friendly, future-proof and resilient to disruption, national competent authorities from one or more Member States should be encouraged to establish artificial intelligence regulatory sandboxes to facilitate the development and testing of innovative AI systems under strict regulatory oversight before these systems are placed on the market or otherwise put into service.

(72) The objectives of the regulatory sandboxes should be to foster AI innovation by establishing a controlled experimentation and testing environment in the development and pre-marketing phase with a view to ensuring compliance of the innovative AI systems with this Regulation and other relevant Union and Member States legislation; to enhance legal certainty for innovators and the competent authorities’ oversight and understanding of the opportunities, emerging risks and the impacts of AI use, and to accelerate access to markets, including by removing barriers for small and medium enterprises (SMEs) and start-ups. To ensure uniform implementation across the Union and economies of scale, it is appropriate to establish common rules for the regulatory sandboxes’ implementation and a framework for cooperation between the relevant authorities involved in the supervision of the sandboxes. This Regulation should provide the legal basis for the use of personal data collected for other purposes for developing certain AI systems in the public interest within the AI regulatory sandbox, in line with Article 6(4) of Regulation (EU) 2016/679, and Article 6 of Regulation (EU) 2018/1725, and without prejudice to Article 4(2) of Directive (EU) 2016/680. Participants in the sandbox should ensure appropriate safeguards and cooperate with the competent authorities, including by following their guidance and acting expeditiously and in good faith to mitigate any high-risks to safety and fundamental rights that may arise during the development and experimentation in the sandbox. The conduct of the participants in the sandbox should be taken into account when competent authorities decide whether to impose an administrative fine under Article 83(2) of Regulation 2016/679 and Article 57 of Directive 2016/680.

(73) In order to promote and protect innovation, it is important that the interests of small-scale providers and users of AI systems are taken into particular account. To this objective, Member States should develop initiatives, which are targeted at those operators, including on awareness raising and information communication. Moreover, the
specific interests and needs of small-scale providers shall be taken into account when Notified Bodies set conformity assessment fees. Translation costs related to mandatory documentation and communication with authorities may constitute a significant cost for providers and other operators, notably those of a smaller scale. Member States should possibly ensure that one of the languages determined and accepted by them for relevant providers’ documentation and for communication with operators is one which is broadly understood by the largest possible number of cross-border users.

(74) In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as well as to facilitate compliance of providers and notified bodies with their obligations under this Regulation, the AI-on demand platform, the European Digital Innovation Hubs and the Testing and Experimentation Facilities established by the Commission and the Member States at national or EU level should possibly contribute to the implementation of this Regulation. Within their respective mission and fields of competence, they may provide in particular technical and scientific support to providers and notified bodies.

(75) It is appropriate that the Commission facilitates, to the extent possible, access to Testing and Experimentation Facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that Union harmonisation legislation. This is notably the case for expert panels, expert laboratories and reference laboratories in the field of medical devices pursuant to Regulation (EU) 2017/745 and Regulation (EU) 2017/746.

(76) In order to facilitate a smooth, effective and harmonised implementation of this Regulation a European Artificial Intelligence Board should be established. The Board should be responsible for a number of advisory tasks, including issuing opinions, recommendations, advice or guidance on matters related to the implementation of this Regulation, including on technical specifications or existing standards regarding the requirements established in this Regulation and providing advice to and assisting the Commission on specific questions related to artificial intelligence.

(77) Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities for the purpose of supervising the application and implementation of this Regulation. In order to increase organisation efficiency on the side of Member States and to set an official point of contact vis-à-vis the public and other counterparts at Member State and Union levels, in each Member State one national authority should be designated as national supervisory authority.

(78) In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk AI systems for improving their systems and the design and development process or can take any possible corrective action in a timely manner, all providers should have a post-market monitoring system in place. This system is also key to ensure that the possible risks emerging from AI systems which continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. In this context, providers should also be required to have a system in place to report to the relevant authorities any serious incidents or any breaches to national and Union law protecting fundamental rights resulting from the use of their AI systems.

(79) In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products established by Regulation (EU) 2019/1020 should apply in its entirety. Where
necessary for their mandate, national public authorities or bodies, which supervise the application of Union law protecting fundamental rights, including equality bodies, should also have access to any documentation created under this Regulation.

(80) Union legislation on financial services includes internal governance and risk management rules and requirements which are applicable to regulated financial institutions in the course of provision of those services, including when they make use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation and relevant rules and requirements of the Union financial services legislation, the authorities responsible for the supervision and enforcement of the financial services legislation, including where applicable the European Central Bank, should be designated as competent authorities for the purpose of supervising the implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or used by regulated and supervised financial institutions. To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated under Directive 2013/36/EU of the European Parliament and of the Council, it is also appropriate to integrate the conformity assessment procedure and some of the providers’ procedural obligations in relation to risk management, post marketing monitoring and documentation into the existing obligations and procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in relation to the quality management system of providers and the monitoring obligation placed on users of high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU.

(81) The development of AI systems other than high-risk AI systems in accordance with the requirements of this Regulation may lead to a larger uptake of trustworthy artificial intelligence in the Union. Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements related, for example, to environmental sustainability, accessibility to persons with disability, stakeholders’ participation in the design and development of AI systems, and diversity of the development teams. The Commission may develop initiatives, including of a sectorial nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data for AI development, including on data access infrastructure, semantic and technical interoperability of different types of data.

(82) It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus are not required to comply with the requirements set out herein are nevertheless safe when placed on the market or put into service. To contribute to this objective, the Directive 2001/95/EC of the European Parliament and of the Council would apply as a safety net.

(83) In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all parties involved in the application of this Regulation should respect the confidentiality of information and data obtained in carrying out their tasks.

(84) Member States should take all necessary measures to ensure that the provisions of this Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their infringement. For certain specific infringements, Member States should take into account the margins and criteria set out in this Regulation. The European Data Protection Supervisor should have the power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.

(85) In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt
acts in accordance with Article 290 TFEU should be delegated to the Commission to amend the techniques and approaches referred to in Annex I to define AI systems, the Union harmonisation legislation listed in Annex II, the high-risk AI systems listed in Annex III, the provisions regarding technical documentation listed in Annex IV, the content of the EU declaration of conformity in Annex V, the provisions regarding the conformity assessment procedures in Annexes VI and VII and the provisions establishing the high-risk AI systems to which the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation should apply. It is of particular importance that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and that those consultations be conducted in accordance with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. In particular, to ensure equal participation in the preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as Member States’ experts, and their experts systematically have access to meetings of Commission expert groups dealing with the preparation of delegated acts.

(86) In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 of the European Parliament and of the Council.

(87) Since the objective of this Regulation cannot be sufficiently achieved by the Member States and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order to achieve that objective.

(88) This Regulation should apply from … [OP – please insert the date established in Art. 85]. However, the infrastructure related to the governance and the conformity assessment system should be operational before that date, therefore the provisions on notified bodies and governance structure should apply from … [OP – please insert the date – three months following the entry into force of this Regulation]. In addition, Member States should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions on penalties should apply from [OP – please insert the date – twelve months following the entry into force of this Regulation].

(89) The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance with Article 42(2) of Regulation (EU) 2018/1725 and delivered an opinion on […].

HAVE ADOPTED THIS REGULATION:

TITLE I
GENERAL PROVISIONS

Article 1
Subject matter

This Regulation lays down:

(a) harmonised rules for the placing on the market, the putting into service and the use of artificial intelligence systems (‘AI systems’) in the Union;

(b) prohibitions of certain artificial intelligence practices;

(c) specific requirements for high-risk AI systems and obligations for operators of such systems;

(d) harmonised transparency rules for AI systems intended to interact with natural persons, emotion recognition systems and biometric categorisation systems, and AI systems used to generate or manipulate image, audio
or video content;

(e) rules on market monitoring and surveillance.

Article 2
Scope

1. This Regulation applies to:

(a) providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country;

(b) users of AI systems located within the Union;

(c) providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union;

2. For high-risk AI systems that are safety components of products or systems, or which are themselves products or systems, falling within the scope of the following acts, only Article 84 of this Regulation shall apply:

(a) Regulation (EC) 300/2008;

(b) Regulation (EU) No 167/2013;

(c) Regulation (EU) No 168/2013;

(d) Directive 2014/90/EU;

(e) Directive (EU) 2016/797;

(f) Regulation (EU) 2018/858;

(g) Regulation (EU) 2018/1139;

(h) Regulation (EU) 2019/2144.

3. This Regulation shall not apply to AI systems developed or used exclusively for military purposes.

4. This Regulation shall not apply to public authorities in a third country nor to international organisations falling within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the framework of international agreements for law enforcement and judicial cooperation with the Union or with one or more Member States.

5. This Regulation shall not affect the application of the provisions on the liability of intermediary service providers set out in Chapter II, Section IV of Directive 2000/31/EC of the European Parliament and of the Council [as to be replaced by the corresponding provisions of the Digital Services Act].

Article 3
Definitions

For the purpose of this Regulation, the following definitions apply:

(1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;

(2) ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge;

(3) ‘small-scale provider’ means a provider that is a micro or small enterprise within the meaning of Commission Recommendation 2003/361/EC;

(4) ‘user’ means any natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity;

(5) ‘authorised representative’ means any natural or legal person established in the Union who has received a written mandate from a provider of an AI system to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation;

(6) ‘importer’ means any natural or legal person established in the Union that places on the market or puts into service an AI system that bears the name or trademark of a natural or legal person established outside the Union;

(7) ‘distributor’ means any natural or
legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market without affecting its properties;

(8) ‘operator’ means the provider, the user, the authorised representative, the importer and the distributor;

(9) ‘placing on the market’ means the first making available of an AI system on the Union market;

(10) ‘making available on the market’ means any supply of an AI system for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;

(11) ‘putting into service’ means the supply of an AI system for first use directly to the user or for own use on the Union market for its intended purpose;

(12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation;

(13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems;

(14) ‘safety component of a product or system’ means a component of a product or of a system which fulfils a safety function for that product or system or the failure or malfunctioning of which endangers the health and safety of persons or property;

(15) ‘instructions for use’ means the information provided by the provider to inform the user of in particular an AI system’s intended purpose and proper use, inclusive of the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used;

(16) ‘recall of an AI system’ means any measure aimed at achieving the return to the provider of an AI system made available to users;

(17) ‘withdrawal of an AI system’ means any measure aimed at preventing the distribution, display and offer of an AI system;

(18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;

(19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring;

(20) ‘conformity assessment’ means the process of verifying whether the requirements set out in Title III, Chapter 2 of this Regulation relating to an AI system have been fulfilled;

(21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including testing, certification and inspection;

(22) ‘notified body’ means a conformity assessment body designated in accordance with this Regulation and other relevant Union harmonisation legislation;

(23) ‘substantial modification’ means a change to the AI system following its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in Title III, Chapter 2 of this Regulation or results in a modification to the intended purpose for which the AI system has been assessed;

(24) ‘CE marking of conformity’ (CE marking) means a marking by which a provider indicates that an AI system is in conformity with the requirements set out in Title III, Chapter 2 of this Regulation and other applicable Union legislation harmonising the conditions for the marketing of products (‘Union harmonisation legislation’) providing for its affixing;

(25) ‘post-market monitoring’ means all activities carried out by providers of AI systems to proactively collect and review experience gained from
the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions;

(26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020;

(27) ‘harmonised standard’ means a European standard as defined in Article 2(1)(c) of Regulation (EU) No 1025/2012;

(28) ‘common specifications’ means a document, other than a standard, containing technical solutions providing a means to comply with certain requirements and obligations established under this Regulation;

(29) ‘training data’ means data used for training an AI system through fitting its learnable parameters, including the weights of a neural network;

(30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process, among other things, in order to prevent overfitting; whereas the validation dataset can be a separate dataset or part of the training dataset, either as a fixed or variable split;

(31) ‘testing data’ means data used for providing an independent evaluation of the trained and validated AI system in order to confirm the expected performance of that system before its placing on the market or putting into service;

(32) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an output;

(33) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data;

(34) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data;

(35) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific categories, such as sex, age, hair colour, eye colour, tattoos, ethnic origin or sexual or political orientation, on the basis of their biometric data;

(36) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database, and without prior knowledge of the user of the AI system whether the person will be present and can be identified;

(37) ‘‘real-time’ remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay. This comprises not only instant identification, but also limited short delays in order to avoid circumvention.

(38) ‘‘post’ remote biometric identification system’ means a remote biometric identification system other than a ‘real-time’ remote biometric identification system;

(39) ‘publicly accessible space’ means any physical place accessible to the public, regardless of whether certain conditions for access may apply;

(40) ‘law enforcement authority’ means:

(a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security; or

(b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;
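Points (29) to (31) above assign data three distinct roles rather than prescribing any particular tooling. As a minimal illustration of a ‘fixed split’ in the sense of point (30), the Python sketch below partitions a collected dataset once, before training begins; the proportions, the seed and the function name are assumptions made for this example, not requirements of the Regulation.

    import random

    def fixed_split(records, train=0.7, validation=0.15, seed=42):
        """Partition records into the three dataset roles of Article 3(29)-(31).

        A 'fixed split' in the sense of point (30): the validation set is
        carved out once, before training. Proportions and seed are
        illustrative assumptions only.
        """
        shuffled = records[:]
        random.Random(seed).shuffle(shuffled)      # reproducible shuffle
        n_train = int(len(shuffled) * train)
        n_val = int(len(shuffled) * validation)
        return {
            "training": shuffled[:n_train],                   # fits learnable parameters
            "validation": shuffled[n_train:n_train + n_val],  # tunes non-learnable parameters
            "testing": shuffled[n_train + n_val:],            # independent final evaluation
        }

    splits = fixed_split(list(range(1000)))
    assert not set(splits["testing"]) & set(splits["training"])  # testing data stays unseen

A ‘variable split’ in the sense of point (30) would instead re-draw the validation portion on each training run, for example via cross-validation.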
(41) ‘law enforcement’ means activities carried out by law enforcement authorities for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security;

(42) ‘national supervisory authority’ means the authority to which a Member State assigns the responsibility for the implementation and application of this Regulation, for coordinating the activities entrusted to that Member State, for acting as the single contact point for the Commission, and for representing the Member State at the European Artificial Intelligence Board;

(43) ‘national competent authority’ means the national supervisory authority, the notifying authority and the market surveillance authority;

(44) ‘serious incident’ means any incident that directly or indirectly leads, might have led or might lead to any of the following:

(a) the death of a person or serious damage to a person’s health, to property or the environment,

(b) a serious and irreversible disruption of the management and operation of critical infrastructure.

Article 4
Amendments to Annex I

The Commission is empowered to adopt delegated acts in accordance with Article 73 to amend the list of techniques and approaches listed in Annex I, in order to update that list to market and technological developments on the basis of characteristics that are similar to the techniques and approaches listed therein.

TITLE II
PROHIBITED ARTIFICIAL INTELLIGENCE PRACTICES

Article 5

1. The following artificial intelligence practices shall be prohibited:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following:

(i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected;

(ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives:

(i) the targeted search for specific potential victims of crime, including
missing children;

(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack;

(iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

2. The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall take into account the following elements:

(a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm caused in the absence of the use of the system;

(b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the seriousness, probability and scale of those consequences.

In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement for any of the objectives referred to in paragraph 1 point d) shall comply with necessary and proportionate safeguards and conditions in relation to the use, in particular as regards the temporal, geographic and personal limitations.

3. As regards paragraphs 1, point (d) and 2, each individual use for the purpose of law enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior authorisation granted by a judicial authority or by an independent administrative authority of the Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of national law referred to in paragraph 4. However, in a duly justified situation of urgency, the use of the system may be commenced without an authorisation and the authorisation may be requested only during or after the use.

The competent judicial or administrative authority shall only grant the authorisation where it is satisfied, based on objective evidence or clear indications presented to it, that the use of the ‘real-time’ remote biometric identification system at issue is necessary for and proportionate to achieving one of the objectives specified in paragraph 1, point (d), as identified in the request. In deciding on the request, the competent judicial or administrative authority shall take into account the elements referred to in paragraph 2.

4. A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement within the limits and under the conditions listed in paragraphs 1, point (d), 2 and 3. That Member State shall lay down in its national law the necessary detailed rules for the request, issuance and exercise of, as well as supervision relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect of which of the objectives listed in paragraph 1, point (d), including which of the criminal offences referred to in point (iii) thereof, the competent authorities may be authorised to use those systems for the purpose of law enforcement.

TITLE III
HIGH-RISK AI SYSTEMS

CHAPTER 1
CLASSIFICATION OF AI SYSTEMS AS HIGH-RISK

Article 6
Classification rules for high-risk AI systems

1. Irrespective of whether an AI
system is placed on the market or put into service independently from the products referred to in points (a) and (b), that AI system shall be considered high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II;

(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk.

Article 7
Amendments to Annex III

1. The Commission is empowered to adopt delegated acts in accordance with Article 73 to update the list in Annex III by adding high-risk AI systems where both of the following conditions are fulfilled:

(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III;

(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

2. When assessing for the purposes of paragraph 1 whether an AI system poses a risk of harm to the health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems already referred to in Annex III, the Commission shall take into account the following criteria:

(a) the intended purpose of the AI system;

(b) the extent to which an AI system has been used or is likely to be used;

(c) the extent to which the use of an AI system has already caused harm to the health and safety or adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such harm or adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;

(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;

(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;

(f) the extent to which potentially harmed or adversely impacted persons are in a vulnerable position in relation to the user of an AI system, in particular due to an imbalance of power, knowledge, economic or social circumstances, or age;

(g) the extent to which the outcome produced with an AI system is easily reversible, whereby outcomes having an impact on the health or safety of persons shall not be considered as easily reversible;

(h) the extent to which existing Union legislation provides for:

(i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for damages;

(ii) effective measures to prevent or substantially minimise those risks.
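The two-limb test of Article 6, with its alternative Annex III route, can be read as a simple decision procedure. The sketch below is one illustrative Python rendering of that logic, not an official screening tool; the record structure and the field names are assumptions made for the example.

    from dataclasses import dataclass

    @dataclass
    class AISystem:
        # Illustrative fields; the Regulation does not prescribe this structure.
        is_safety_component_or_product: bool   # Article 6(1)(a)
        product_needs_third_party_ca: bool     # Article 6(1)(b)
        annex_iii_area: str | None             # e.g. an area listed in Annex III

    def is_high_risk(system: AISystem) -> bool:
        """Apply the classification rules of Article 6(1) and 6(2)."""
        # Article 6(1): both conditions must be fulfilled.
        via_product_route = (system.is_safety_component_or_product
                             and system.product_needs_third_party_ca)
        # Article 6(2): systems in an area listed in Annex III are high-risk.
        via_annex_iii = system.annex_iii_area is not None
        return via_product_route or via_annex_iii

    print(is_high_risk(AISystem(False, False, "biometric identification")))  # True

Note that under Article 7 the Annex III route is not static: the Commission may extend the list, applying the criteria in paragraph 2 above.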
CHAPTER 2
REQUIREMENTS FOR HIGH-RISK AI SYSTEMS

Article 8
Compliance with the requirements

1. High-risk AI systems shall comply with the requirements established in this Chapter.

2. The intended purpose of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.

Article 9
Risk management system

1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.

2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular systematic updating. It shall comprise the following steps:

(a) identification and analysis of the known and foreseeable risks associated with each high-risk AI system;

(b) estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse;

(c) evaluation of other possibly arising risks based on the analysis of data gathered from the post-market monitoring system referred to in Article 61;

(d) adoption of suitable risk management measures in accordance with the provisions of the following paragraphs.

3. The risk management measures referred to in paragraph 2, point (d) shall give due consideration to the effects and possible interactions resulting from the combined application of the requirements set out in this Chapter 2. They shall take into account the generally acknowledged state of the art, including as reflected in relevant harmonised standards or common specifications.

4. The risk management measures referred to in paragraph 2, point (d) shall be such that any residual risk associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable, provided that the high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse. Those residual risks shall be communicated to the user.

In identifying the most appropriate risk management measures, the following shall be ensured:

(a) elimination or reduction of risks as far as possible through adequate design and development;

(b) where appropriate, implementation of adequate mitigation and control measures in relation to risks that cannot be eliminated;

(c) provision of adequate information pursuant to Article 13, in particular as regards the risks referred to in paragraph 2, point (b) of this Article, and, where appropriate, training to users.

In eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given to the technical knowledge, experience, education, training to be expected by the user and the environment in which the system is intended to be used.

5. High-risk AI systems shall be tested for the purposes of identifying the most appropriate risk management measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and they are in compliance with the requirements set out in this Chapter.

6. Testing procedures shall be suitable to achieve the intended purpose of the AI system and do not need to go beyond what is necessary to achieve that purpose.

7. The testing of the high-risk AI systems shall be performed, as appropriate, at any point in time throughout the development process, and, in any event, prior to the placing on the market or the putting into service. Testing shall be made against preliminarily defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.

8. When implementing the risk management system described in paragraphs 1 to 7, specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children.

9. For credit institutions regulated by Directive 2013/36/EU, the aspects described in paragraphs 1 to 8 shall be part of the risk management procedures established by those institutions pursuant to Article 74 of that Directive.
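The loop described in Article 9(2) - identify, estimate, evaluate, mitigate - maps naturally onto a risk register that is revisited throughout the system's lifecycle. The sketch below shows one possible shape for such a register; the severity and probability scales and the acceptability threshold are assumptions for illustration only, since Article 9(4) leaves the judgement of acceptability to the provider.

    from dataclasses import dataclass, field

    @dataclass
    class Risk:
        hazard: str
        severity: int        # assumed 1-5 scale (not prescribed by the Regulation)
        probability: int     # assumed 1-5 scale
        mitigations: list[str] = field(default_factory=list)

        def residual_score(self) -> int:
            # Crude assumption: each adopted mitigation lowers the score by one.
            return max(1, self.severity * self.probability - len(self.mitigations))

    def review(register: list[Risk], acceptable: int = 6) -> list[Risk]:
        """One pass of the continuous iterative process of Article 9(2):
        returns the risks whose residual risk is not yet judged acceptable
        (Article 9(4)) and therefore still need design or mitigation work."""
        return [r for r in register if r.residual_score() > acceptable]

    register = [Risk("misidentification of a person", 4, 3, ["human verification"])]
    print([r.hazard for r in review(register)])   # still above the assumed threshold

Each review pass would, in practice, also fold in post-market monitoring data as required by Article 9(2)(c).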
Article 10
Data and data governance

1. High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5.

2. Training, validation and testing data sets shall be subject to appropriate data governance and management practices. Those practices shall concern in particular,

(a) the relevant design choices;

(b) data collection;

(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;

(d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent;

(e) a prior assessment of the availability, quantity and suitability of the data sets that are needed;

(f) examination in view of possible biases;

(g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed.

3. Training, validation and testing data sets shall be relevant, representative, free of errors and complete. They shall have the appropriate statistical properties, including, where applicable, as regards the persons or groups of persons on which the high-risk AI system is intended to be used. These characteristics of the data sets may be met at the level of individual data sets or a combination thereof.

4. Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used.

5. To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.

6. Appropriate data governance and management practices shall apply for the development of high-risk AI systems other than those which make use of techniques involving the training of models in order to ensure that those high-risk AI systems comply with paragraph 2.
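Several of the practices listed in Article 10(2) - notably the examination for possible biases in point (f) and the representativeness requirement of paragraph 3 - lend themselves to automated checks. The sketch below compares group frequencies in a data set against the shares expected for the population on which the system is intended to be used; the tolerance value and the record layout are illustrative assumptions, and such a check is a starting point rather than a complete bias examination.

    from collections import Counter

    def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
        """Flag groups whose share in the data set deviates from the share
        expected for the intended population (Article 10(3)); one input to
        the bias examination of Article 10(2)(f)."""
        counts = Counter(r[group_key] for r in records)
        total = sum(counts.values())
        gaps = {}
        for group, expected in reference_shares.items():
            observed = counts.get(group, 0) / total if total else 0.0
            if abs(observed - expected) > tolerance:
                gaps[group] = (observed, expected)
        return gaps

    data = [{"age_band": "18-30"}] * 80 + [{"age_band": "65+"}] * 20
    print(representation_gaps(data, "age_band", {"18-30": 0.5, "65+": 0.5}))

The same check would be run separately on the training, validation and testing sets, since paragraph 3 applies to each of them.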

AI GOVERNANCE: A CONSOLIDATED REFERENCE | 87


requirements set out in this Chapter Record-keeping (a) recording of the period of each 2. High-risk AI systems shall be
and provide national competent use of the system (start date and time accompanied by instructions
authorities and notified bodies with all 1. High-risk AI systems shall be and end date and time of each use); for use in an appropriate digital
the necessary information to assess designed and developed with format or otherwise that include
capabilities enabling the automatic (b) the reference database against
the compliance of the AI system with concise, complete, correct and clear
recording of events (‘logs’) while the which input data has been checked
those requirements. It shall contain, information that is relevant, accessible
high-risk AI systems is operating. by the system;
at a minimum, the elements set out in and comprehensible to users.
Annex IV. Those logging capabilities shall
(c) the input data for which the
conform to recognised standards or 3. The information referred to in
search has led to a match;
2. Where a high-risk AI system common specifications. paragraph 2 shall specify:
related to a product, to which the (d) the identification of the natural
2. The logging capabilities shall (a) the identity and the contact details
legal acts listed in Annex II, section persons involved in the verification
ensure a level of traceability of the of the provider and, where applicable,
A apply, is placed on the market or of the results, as referred to in Article
AI system’s functioning throughout of its authorised representative;
put into service one single technical 14 (5).
documentation shall be drawn up its lifecycle that is appropriate to the
(b) the characteristics, capabilities
containing all the information set out intended purpose of the system. Article 13 and limitations of performance of the
in Annex IV as well as the information
3. In particular, logging capabilities high-risk AI system, including:
Transparency and provision of
required under those legal acts.
shall enable the monitoring of the information to users (i) its intended purpose;
3. The Commission is empowered to operation of the high-risk AI system
with respect to the occurrence of 1. High-risk AI systems shall be
adopt delegated acts in accordance (ii) the level of accuracy, robustness
situations that may result in the AI designed and developed in such a
with Article 73 to amend Annex IV and cybersecurity referred to in
system presenting a risk within the way to ensure that their operation
where necessary to ensure that, in Article 15 against which the high-
meaning of Article 65(1) or lead is sufficiently transparent to enable
the light of technical progress, the risk AI system has been tested
to a substantial modification, and users to interpret the system’s
technical documentation provides and validated and which can be
facilitate the post-market monitoring output and use it appropriately. An
all the necessary information to expected, and any known and
referred to in Article 61. appropriate type and degree of
assess the compliance of the system foreseeable circumstances that may
transparency shall be ensured, with a
with the requirements set out in this have an impact on that expected
4. For high-risk AI systems referred to view to achieving compliance with the
Chapter. level of accuracy, robustness and
in paragraph 1, point (a) of Annex III, relevant obligations of the user and
cybersecurity;
Article 12 the logging capabilities shall provide, of the provider set out in Chapter 3 of
at a minimum: this Title. (iii) any known or foreseeable

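For the remote biometric identification systems caught by Annex III, point 1(a), Article 12(4) fixes the minimum content of each log entry but not its format. A record satisfying points (a) to (d) could be as simple as the following sketch; the JSON layout, the field names and the sample values are assumptions of this example.

    import json
    from datetime import datetime, timezone

    def make_log_entry(start, end, reference_db, matched_inputs, verifiers):
        """Assemble the minimum record required by Article 12(4)(a)-(d)."""
        return json.dumps({
            "use_period": {"start": start.isoformat(), "end": end.isoformat()},  # (a)
            "reference_database": reference_db,                                  # (b)
            "matched_input_data": matched_inputs,                                # (c)
            "verifying_persons": verifiers,  # (d); cf. the two-person check of Article 14(5)
        })

    entry = make_log_entry(
        datetime(2023, 9, 1, 9, 0, tzinfo=timezone.utc),
        datetime(2023, 9, 1, 9, 5, tzinfo=timezone.utc),
        "reference-watchlist-v2",          # hypothetical database identifier
        ["frame-0142"],                    # hypothetical matched input reference
        ["officer-a", "officer-b"],
    )
    print(entry)

Because Article 12(1) requires conformity with recognised standards or common specifications, a production design would align such records with whatever logging standard is eventually harmonised.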
Article 13
Transparency and provision of information to users

1. High-risk AI systems shall be designed and developed in such a way to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of this Title.

2. High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that include concise, complete, correct and clear information that is relevant, accessible and comprehensible to users.

3. The information referred to in paragraph 2 shall specify:

(a) the identity and the contact details of the provider and, where applicable, of its authorised representative;

(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:

(i) its intended purpose;

(ii) the level of accuracy, robustness and cybersecurity referred to in Article 15 against which the high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;

(iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental rights;

(iv) its performance as regards the persons or groups of persons on which the system is intended to be used;

(v) when appropriate, specifications for the input data, or any other relevant information in terms of the training, validation and testing data sets used, taking into account the intended purpose of the AI system.

(c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the moment of the initial conformity assessment, if any;

(d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users;

(e) the expected lifetime of the high-risk AI system and any necessary maintenance and care measures to ensure the proper functioning of that AI system, including as regards software updates.

Article 14
Human oversight

1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.

2. Human oversight shall aim at preventing or minimising the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter.

3. Human oversight shall be ensured through either one or all of the following measures:

(a) identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service;

(b) identified by the provider before placing the high-risk AI system on the market or putting it into service and that are appropriate to be implemented by the user.

4. The measures referred to in paragraph 3 shall enable the individuals to whom human oversight is assigned to do the following, as appropriate to the circumstances:

(a) fully understand the capacities and limitations of the high-risk AI system and be able to duly monitor its operation, so that signs of anomalies, dysfunctions and unexpected performance can be detected and addressed as soon as possible;

(b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (‘automation bias’), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons;

(c) be able to correctly interpret the high-risk AI system’s output, taking into account in particular the characteristics of the system and the interpretation tools and methods available;

(d) be able to decide, in any particular situation, not to use the high-risk AI system or otherwise disregard, override or reverse the output of the high-risk AI system;

(e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a “stop” button or a similar procedure.

5. For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 shall be such as to ensure that, in addition, no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons.
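Points (d) and (e) of paragraph 4 imply that the oversight interface must let a natural person withhold, override or halt the system's output. One minimal way to wire such a gate around a model is sketched below; the classify function, the reviewer callback and the decision values are placeholders invented for this example, not a prescribed design.

    def classify(case):
        # Placeholder model; stands in for any high-risk AI system output.
        return {"decision": "reject", "confidence": 0.91}

    def overseen_decision(case, reviewer):
        """Human-oversight gate in the spirit of Article 14(4)(d)-(e):
        the assigned reviewer may accept, override, or stop the system."""
        output = classify(case)
        choice = reviewer(case, output)          # a natural person decides
        if choice == "stop":
            raise SystemExit("operation interrupted by 'stop' control")  # (e)
        if choice == "override":
            return {"decision": "escalate to manual handling"}           # (d)
        return output                            # reviewer lets the output stand

    print(overseen_decision({"id": 1}, lambda case, out: "override"))

For the Annex III, point 1(a) systems covered by paragraph 5, the same gate would additionally require confirmation from two distinct reviewers before any action is taken.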
Article 15
Accuracy, robustness and cybersecurity

1. High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.

2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use.

3. High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems.

The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include backup or fail-safe plans.

High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations ('feedback loops') are duly addressed with appropriate mitigation measures.

4. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities.

The technical solutions aimed at ensuring the cybersecurity of high-risk AI systems shall be appropriate to the relevant circumstances and the risks.

The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent and control for attacks trying to manipulate the training dataset ('data poisoning'), inputs designed to cause the model to make a mistake ('adversarial examples'), or model flaws.
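The attack classes named in paragraph 4 can be made concrete. As a rough, non-normative sketch, a provider might fingerprint the approved training set so that later tampering ('data poisoning') is detected before retraining; the function names below are illustrative assumptions, not a prescribed technique:

```python
import hashlib

def dataset_fingerprint(records: list[bytes]) -> str:
    """Fingerprint the training set so that later tampering
    ('data poisoning') can be detected before retraining."""
    digest = hashlib.sha256()
    for record in sorted(records):
        digest.update(record)
    return digest.hexdigest()

approved = [b"sample-1", b"sample-2", b"sample-3"]
baseline = dataset_fingerprint(approved)

def verify_before_training(records: list[bytes], expected: str) -> None:
    """Refuse to train if the data no longer matches the approved set."""
    if dataset_fingerprint(records) != expected:
        raise RuntimeError("training data changed since approval; "
                           "investigate possible poisoning before training")

verify_before_training(approved, baseline)             # passes
tampered = approved + [b"malicious-sample"]
try:
    verify_before_training(tampered, baseline)
except RuntimeError as err:
    print(err)
```

This addresses only the integrity of the training data; defences against adversarial examples and model flaws require separate, model-specific measures.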
CHAPTER 3
OBLIGATIONS OF PROVIDERS AND USERS OF HIGH-RISK AI SYSTEMS AND OTHER PARTIES

Article 16
Obligations of providers of high-risk AI systems

Providers of high-risk AI systems shall:

(a) ensure that their high-risk AI systems are compliant with the requirements set out in Chapter 2 of this Title;

(b) have a quality management system in place which complies with Article 17;

(c) draw up the technical documentation of the high-risk AI system;

(d) when under their control, keep the logs automatically generated by their high-risk AI systems;

(e) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure, prior to its placing on the market or putting into service;

(f) comply with the registration obligations referred to in Article 51;

(g) take the necessary corrective actions, if the high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title;

(h) inform the national competent authorities of the Member States in which they made the AI system available or put it into service and, where applicable, the notified body of the non-compliance and of any corrective actions taken;

(i) affix the CE marking to their high-risk AI systems to indicate the conformity with this Regulation in accordance with Article 49;

(j) upon request of a national competent authority, demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title.
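Article 16 is, in effect, a checklist. A provider could track it as structured data; the following Python sketch is one illustrative way of doing so (the field names are invented, and items (g) and (h) only apply where a non-compliance actually occurs):

```python
from dataclasses import dataclass, fields

@dataclass
class ProviderObligations:
    """One flag per Article 16 obligation, (a) through (j)."""
    requirements_met: bool = False            # (a) Chapter 2 requirements
    qms_in_place: bool = False                # (b) Article 17 QMS
    technical_documentation: bool = False     # (c)
    logs_kept: bool = False                   # (d) when under provider control
    conformity_assessed: bool = False         # (e) prior to placing on market
    registered: bool = False                  # (f) Article 51 registration
    corrective_actions_taken: bool = True     # (g) only if non-compliant
    authorities_informed: bool = True         # (h) only if non-compliant
    ce_marking_affixed: bool = False          # (i) Article 49
    can_demonstrate_conformity: bool = False  # (j) on authority request

def outstanding(obligations: ProviderObligations) -> list[str]:
    """List the obligations that are not yet satisfied."""
    return [f.name for f in fields(obligations)
            if not getattr(obligations, f.name)]

status = ProviderObligations(requirements_met=True, qms_in_place=True)
print("still open:", outstanding(status))
```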


Article 17
Quality management system

1. Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures and instructions, and shall include at least the following aspects:

(a) a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system;

(b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the high-risk AI system;

(c) techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of the high-risk AI system;

(d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system, and the frequency with which they have to be carried out;

(e) technical specifications, including standards, to be applied and, where the relevant harmonised standards are not applied in full, the means to be used to ensure that the high-risk AI system complies with the requirements set out in Chapter 2 of this Title;

(f) systems and procedures for data management, including data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data that is performed before and for the purposes of the placing on the market or putting into service of high-risk AI systems;

(g) the risk management system referred to in Article 9;

(h) the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 61;

(i) procedures related to the reporting of serious incidents and of malfunctioning in accordance with Article 62;

(j) the handling of communication with national competent authorities, competent authorities, including sectoral ones, providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;

(k) systems and procedures for record keeping of all relevant documentation and information;

(l) resource management, including security of supply related measures;

(m) an accountability framework setting out the responsibilities of the management and other staff with regard to all aspects listed in this paragraph.

2. The implementation of aspects referred to in paragraph 1 shall be proportionate to the size of the provider's organisation.

3. For providers that are credit institutions regulated by Directive 2013/36/EU, the obligation to put a quality management system in place shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive. In that context, any harmonised standards referred to in Article 40 of this Regulation shall be taken into account.

Article 18
Obligation to draw up technical documentation

1. Providers of high-risk AI systems shall draw up the technical documentation referred to in Article 11 in accordance with Annex IV.

2. Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the technical documentation as part of the documentation concerning internal governance, arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
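One way to operationalise Article 17(1) is to treat points (a) to (m) as a completeness check over the written policies, procedures and instructions. The mapping below is an illustrative sketch, not an official taxonomy:

```python
QMS_ASPECTS = {
    "a": "strategy for regulatory compliance",
    "b": "design, design control and design verification",
    "c": "development, quality control and quality assurance",
    "d": "examination, test and validation procedures",
    "e": "technical specifications and standards",
    "f": "data management procedures",
    "g": "risk management system (Article 9)",
    "h": "post-market monitoring system (Article 61)",
    "i": "serious incident reporting (Article 62)",
    "j": "communication with competent authorities",
    "k": "record keeping",
    "l": "resource management and security of supply",
    "m": "accountability framework",
}

def missing_aspects(documented: set[str]) -> set[str]:
    """Return the Article 17(1) aspects not yet covered by written
    policies, procedures and instructions."""
    return set(QMS_ASPECTS) - documented

print(missing_aspects({"a", "f", "g"}))  # aspects still to be documented
```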


Article 19
Conformity assessment

1. Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43, prior to their placing on the market or putting into service. Where the compliance of the AI systems with the requirements set out in Chapter 2 of this Title has been demonstrated following that conformity assessment, the providers shall draw up an EU declaration of conformity in accordance with Article 48 and affix the CE marking of conformity in accordance with Article 49.

2. For high-risk AI systems referred to in point 5(b) of Annex III that are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.

Article 20
Automatically generated logs

1. Providers of high-risk AI systems shall keep the logs automatically generated by their high-risk AI systems, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.

2. Providers that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs automatically generated by their high-risk AI systems as part of the documentation under Article 74 of that Directive.
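Article 20 leaves the retention period open ("appropriate in the light of the intended purpose"), so a provider has to choose and justify concrete figures per intended purpose. The sketch below assumes hypothetical retention windows purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods: the Regulation only requires a period
# "appropriate in the light of the intended purpose" and applicable law,
# so these figures are placeholders a provider would have to justify.
RETENTION = {"credit-scoring": timedelta(days=5 * 365),
             "recruitment": timedelta(days=2 * 365)}

def purge_expired(log_entries, purpose: str, now=None):
    """Keep only automatically generated log entries still inside the
    retention window for this intended purpose (Article 20 sketch)."""
    now = now or datetime.now(timezone.utc)
    window = RETENTION[purpose]
    return [(ts, line) for ts, line in log_entries if now - ts <= window]

logs = [(datetime(2017, 1, 1, tzinfo=timezone.utc), "inference #1"),
        (datetime(2023, 6, 1, tzinfo=timezone.utc), "inference #2")]
print(purge_expired(logs, "credit-scoring",
                    now=datetime(2023, 9, 1, tzinfo=timezone.utc)))
```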
Article 21
Corrective actions

Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate. They shall inform the distributors of the high-risk AI system in question and, where applicable, the authorised representative and importers accordingly.

Article 22
Duty of information

Where the high-risk AI system presents a risk within the meaning of Article 65(1) and that risk is known to the provider of the system, that provider shall immediately inform the national competent authorities of the Member States in which it made the system available and, where applicable, the notified body that issued a certificate for the high-risk AI system, in particular of the non-compliance and of any corrective actions taken.

Article 23
Cooperation with competent authorities

Providers of high-risk AI systems shall, upon request by a national competent authority, provide that authority with all the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the requirements set out in Chapter 2 of this Title, in an official Union language determined by the Member State concerned. Upon a reasoned request from a national competent authority, providers shall also give that authority access to the logs automatically generated by the high-risk AI system, to the extent such logs are under their control by virtue of a contractual arrangement with the user or otherwise by law.

Article 24
Obligations of product manufacturers

Where a high-risk AI system related to products to which the legal acts listed in Annex II, section A, apply, is placed on the market or put into service together with the product manufactured in accordance with those legal acts and under the name of the product manufacturer, the manufacturer of the product shall take the responsibility of the compliance of the AI system with this Regulation and, as far as the AI system is concerned, have the same obligations imposed by the present Regulation on the provider.


Article 25
Authorised representatives

1. Prior to making their systems available on the Union market, where an importer cannot be identified, providers established outside the Union shall, by written mandate, appoint an authorised representative which is established in the Union.

2. The authorised representative shall perform the tasks specified in the mandate received from the provider. The mandate shall empower the authorised representative to carry out the following tasks:

(a) keep a copy of the EU declaration of conformity and the technical documentation at the disposal of the national competent authorities and national authorities referred to in Article 63(7);

(b) provide a national competent authority, upon a reasoned request, with all the information and documentation necessary to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law;

(c) cooperate with competent national authorities, upon a reasoned request, on any action the latter takes in relation to the high-risk AI system.

Article 26
Obligations of importers

1. Before placing a high-risk AI system on the market, importers of such system shall ensure that:

(a) the appropriate conformity assessment procedure has been carried out by the provider of that AI system;

(b) the provider has drawn up the technical documentation in accordance with Annex IV;

(c) the system bears the required conformity marking and is accompanied by the required documentation and instructions of use.

2. Where an importer considers or has reason to consider that a high-risk AI system is not in conformity with this Regulation, it shall not place that system on the market until that AI system has been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the importer shall inform the provider of the AI system and the market surveillance authorities to that effect.

3. Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can be contacted on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, as applicable.

4. Importers shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise its compliance with the requirements set out in Chapter 2 of this Title.

5. Importers shall provide national competent authorities, upon a reasoned request, with all necessary information and documentation to demonstrate the conformity of a high-risk AI system with the requirements set out in Chapter 2 of this Title in a language which can be easily understood by that national competent authority, including access to the logs automatically generated by the high-risk AI system to the extent such logs are under the control of the provider by virtue of a contractual arrangement with the user or otherwise by law. They shall also cooperate with those authorities on any action a national competent authority takes in relation to that system.

Article 27
Obligations of distributors

1. Before making a high-risk AI system available on the market, distributors shall verify that the high-risk AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instructions of use, and that the provider and the importer of the system, as applicable, have complied with the obligations set out in this Regulation.
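Article 26(1) reads naturally as a pre-placement gate. A minimal sketch of such a check, with invented dictionary keys standing in for the importer's own records, might be:

```python
def importer_preplacement_checks(system: dict) -> list[str]:
    """Sketch of Article 26(1): issues that must be resolved before an
    importer places a high-risk AI system on the market."""
    problems = []
    if not system.get("conformity_assessment_done"):
        problems.append("(a) conformity assessment not carried out")
    if not system.get("technical_documentation_annex_iv"):
        problems.append("(b) Annex IV technical documentation missing")
    if not (system.get("conformity_marking")
            and system.get("instructions_of_use")):
        problems.append("(c) marking or required documentation missing")
    return problems

candidate = {"conformity_assessment_done": True, "conformity_marking": True}
print(importer_preplacement_checks(candidate))  # lists what still blocks placement
```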


2. Where a distributor considers or has reason to consider that a high-risk AI system is not in conformity with the requirements set out in Chapter 2 of this Title, it shall not make the high-risk AI system available on the market until that system has been brought into conformity with those requirements. Furthermore, where the system presents a risk within the meaning of Article 65(1), the distributor shall inform the provider or the importer of the system, as applicable, to that effect.

3. Distributors shall ensure that, while a high-risk AI system is under their responsibility, where applicable, storage or transport conditions do not jeopardise the compliance of the system with the requirements set out in Chapter 2 of this Title.

4. A distributor that considers or has reason to consider that a high-risk AI system which it has made available on the market is not in conformity with the requirements set out in Chapter 2 of this Title shall take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall it or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. Where the high-risk AI system presents a risk within the meaning of Article 65(1), the distributor shall immediately inform the national competent authorities of the Member States in which it has made the product available to that effect, giving details, in particular, of the non-compliance and of any corrective actions taken.

5. Upon a reasoned request from a national competent authority, distributors of high-risk AI systems shall provide that authority with all the information and documentation necessary to demonstrate the conformity of a high-risk system with the requirements set out in Chapter 2 of this Title. Distributors shall also cooperate with that national competent authority on any action taken by that authority.

Article 28
Obligations of distributors, importers, users or any other third-party

1. Any distributor, importer, user or other third-party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances:

(a) they place on the market or put into service a high-risk AI system under their name or trademark;

(b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service;

(c) they make a substantial modification to the high-risk AI system.

2. Where the circumstances referred to in paragraph 1, point (b) or (c), occur, the provider that initially placed the high-risk AI system on the market or put it into service shall no longer be considered a provider for the purposes of this Regulation.


Article 29
Obligations of users of high-risk AI systems

1. Users of high-risk AI systems shall use such systems in accordance with the instructions of use accompanying the systems, pursuant to paragraphs 2 and 5.

2. The obligations in paragraph 1 are without prejudice to other user obligations under Union or national law and to the user's discretion in organising its own resources and activities for the purpose of implementing the human oversight measures indicated by the provider.

3. Without prejudice to paragraph 1, to the extent the user exercises control over the input data, that user shall ensure that input data is relevant in view of the intended purpose of the high-risk AI system.

4. Users shall monitor the operation of the high-risk AI system on the basis of the instructions of use. When they have reasons to consider that the use in accordance with the instructions of use may result in the AI system presenting a risk within the meaning of Article 65(1), they shall inform the provider or distributor and suspend the use of the system. They shall also inform the provider or distributor when they have identified any serious incident or any malfunctioning within the meaning of Article 62 and interrupt the use of the AI system. In case the user is not able to reach the provider, Article 62 shall apply mutatis mutandis.

For users that are credit institutions regulated by Directive 2013/36/EU, the monitoring obligation set out in the first subparagraph shall be deemed to be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive.
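Paragraph 4 couples monitoring with two triggers: suspected Article 65(1) risk, and serious incidents or malfunctioning under Article 62. Each requires notification plus suspension or interruption of use. A compact, illustrative Python sketch of that control flow (the class and parameter names are assumptions made for this note):

```python
class UserMonitor:
    """Sketch of Article 29(4): a user-side monitoring hook that suspends
    use and notifies the provider or distributor when continued use
    would present an Article 65(1) risk or an Article 62 incident occurs."""

    def __init__(self, notify):
        self.notify = notify          # callable(msg) supplied by the user
        self.suspended = False

    def observe(self, presents_risk: bool, serious_incident: bool) -> None:
        if presents_risk:
            self.notify("use may present a risk within Article 65(1)")
            self.suspended = True     # suspend the use of the system
        if serious_incident:
            self.notify("serious incident or malfunctioning (Article 62)")
            self.suspended = True     # interrupt the use of the AI system

monitor = UserMonitor(notify=print)
monitor.observe(presents_risk=False, serious_incident=False)  # nothing to do
monitor.observe(presents_risk=True, serious_incident=False)
print("suspended:", monitor.suspended)
```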


5. Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control. The logs shall be kept for a period that is appropriate in the light of the intended purpose of the high-risk AI system and applicable legal obligations under Union or national law.

Users that are credit institutions regulated by Directive 2013/36/EU shall maintain the logs as part of the documentation concerning internal governance arrangements, processes and mechanisms pursuant to Article 74 of that Directive.

6. Users of high-risk AI systems shall use the information provided under Article 13 to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, where applicable.

CHAPTER 4
NOTIFYING AUTHORITIES AND NOTIFIED BODIES

Article 30
Notifying authorities

1. Each Member State shall designate or establish a notifying authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.

2. Member States may designate a national accreditation body referred to in Regulation (EC) No 765/2008 as a notifying authority.

3. Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with conformity assessment bodies and the objectivity and impartiality of their activities are safeguarded.

4. Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.

5. Notifying authorities shall not offer or provide any activities that conformity assessment bodies perform or any consultancy services on a commercial or competitive basis.

6. Notifying authorities shall safeguard the confidentiality of the information they obtain.

7. Notifying authorities shall have a sufficient number of competent personnel at their disposal for the proper performance of their tasks.

8. Notifying authorities shall make sure that conformity assessments are carried out in a proportionate manner, avoiding unnecessary burdens for providers and that notified bodies perform their activities taking due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of the AI system in question.
Article 31
Application of a conformity assessment body for notification

1. Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member State in which they are established.

2. The application for notification shall be accompanied by a description of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies for which the conformity assessment body claims to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting that the conformity assessment body fulfils the requirements laid down in Article 33. Any valid document related to existing designations of the applicant notified body under any other Union harmonisation legislation shall be added.


3. Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the notifying authority with the documentary evidence necessary for the verification, recognition and regular monitoring of its compliance with the requirements laid down in Article 33. For notified bodies which are designated under any other Union harmonisation legislation, all documents and certificates linked to those designations may be used to support their designation procedure under this Regulation, as appropriate.

Article 32
Notification procedure

1. Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements laid down in Article 33.

2. Notifying authorities shall notify the Commission and the other Member States using the electronic notification tool developed and managed by the Commission.

3. The notification shall include full details of the conformity assessment activities, the conformity assessment module or modules and the artificial intelligence technologies concerned.

4. The conformity assessment body concerned may perform the activities of a notified body only where no objections are raised by the Commission or the other Member States within one month of a notification.

5. Notifying authorities shall notify the Commission and the other Member States of any subsequent relevant changes to the notification.

Article 33
Notified bodies

1. Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment procedures referred to in Article 43.

2. Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are necessary to fulfil their tasks.

3. The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall be such as to ensure that there is confidence in the performance by and in the results of the conformity assessment activities that the notified bodies conduct.

4. Notified bodies shall be independent of the provider of a high-risk AI system in relation to which it performs conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic interest in the high-risk AI system that is assessed, as well as of any competitors of the provider.

5. Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities.

6. Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, subcontractors and any associated body or personnel of external bodies respect the confidentiality of the information which comes into their possession during the performance of conformity assessment activities, except when disclosure is required by law. The staff of notified bodies shall be bound to observe professional secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the notifying authorities of the Member State in which their activities are carried out.

7. Notified bodies shall have procedures for the performance of activities which take due account of the size of an undertaking, the sector in which it operates, its structure and the degree of complexity of the AI system in question.

8. Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability is assumed by the Member State concerned in accordance with national law or that Member State is directly responsible for the conformity assessment.


9. Notified bodies shall be capable of carrying out all the tasks falling to them under this Regulation with the highest degree of professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified bodies themselves or on their behalf and under their responsibility.

10. Notified bodies shall have sufficient internal competences to be able to effectively evaluate the tasks conducted by external parties on their behalf. To that end, at all times and for each conformity assessment procedure and each type of high-risk AI system in relation to which they have been designated, the notified body shall have permanent availability of sufficient administrative, technical and scientific personnel who possess experience and knowledge relating to the relevant artificial intelligence technologies, data and data computing and to the requirements set out in Chapter 2 of this Title.

11. Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part directly or be represented in European standardisation organisations, or ensure that they are aware and up to date in respect of relevant standards.

12. Notified bodies shall make available and submit upon request all relevant documentation, including the providers' documentation, to the notifying authority referred to in Article 30 to allow it to conduct its assessment, designation, notification, monitoring and surveillance activities and to facilitate the assessment outlined in this Chapter.

Article 34
Subsidiaries of and subcontracting by notified bodies

1. Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 33 and shall inform the notifying authority accordingly.

2. Notified bodies shall take full responsibility for the tasks performed by subcontractors or subsidiaries wherever these are established.

3. Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider.

4. Notified bodies shall keep at the disposal of the notifying authority the relevant documents concerning the assessment of the qualifications of the subcontractor or the subsidiary and the work carried out by them under this Regulation.

Article 35
Identification numbers and lists of notified bodies designated under this Regulation

1. The Commission shall assign an identification number to notified bodies. It shall assign a single number, even where a body is notified under several Union acts.

2. The Commission shall make publicly available the list of the bodies notified under this Regulation, including the identification numbers that have been assigned to them and the activities for which they have been notified. The Commission shall ensure that the list is kept up to date.


Article 36
Changes to notifications

1. Where a notifying authority has suspicions or has been informed that a notified body no longer meets the requirements laid down in Article 33, or that it is failing to fulfil its obligations, that authority shall without delay investigate the matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body under investigation no longer meets the requirements laid down in Article 33 or that it is failing to fulfil its obligations, it shall restrict, suspend or withdraw the notification as appropriate, depending on the seriousness of the failure. It shall also immediately inform the Commission and the other Member States accordingly.

2. In the event of restriction, suspension or withdrawal of notification, or where the notified body has ceased its activity, the notifying authority shall take appropriate steps to ensure that the files of that notified body are either taken over by another notified body or kept available for the responsible notifying authorities at their request.

Article 37
Challenge to the competence of notified bodies

1. The Commission shall, where necessary, investigate all cases where there are reasons to doubt whether a notified body complies with the requirements laid down in Article 33.

2. The notifying authority shall provide the Commission, on request, with all relevant information relating to the notification of the notified body concerned.

3. The Commission shall ensure that all confidential information obtained in the course of its investigations pursuant to this Article is treated confidentially.

4. Where the Commission ascertains that a notified body does not meet or no longer meets the requirements laid down in Article 33, it shall adopt a reasoned decision requesting the notifying Member State to take the necessary corrective measures, including withdrawal of notification if necessary. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 74(2).

Article 38
Coordination of notified bodies

1. The Commission shall ensure that, with regard to the areas covered by this Regulation, appropriate coordination and cooperation between notified bodies active in the conformity assessment procedures of AI systems pursuant to this Regulation are put in place and properly operated in the form of a sectoral group of notified bodies.

2. Member States shall ensure that the bodies notified by them participate in the work of that group, directly or by means of designated representatives.

Article 39
Conformity assessment bodies of third countries

Conformity assessment bodies established under the law of a third country with which the Union has concluded an agreement may be authorised to carry out the activities of notified bodies under this Regulation.

CHAPTER 5
STANDARDS, CONFORMITY ASSESSMENT, CERTIFICATES, REGISTRATION

Article 40
Harmonised standards

High-risk AI systems which are in conformity with harmonised standards or parts thereof, the references of which have been published in the Official Journal of the European Union, shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those standards cover those requirements.


Article 41
Common specifications

1. Where harmonised standards referred to in Article 40 do not exist or where the Commission considers that the relevant harmonised standards are insufficient or that there is a need to address specific safety or fundamental right concerns, the Commission may, by means of implementing acts, adopt common specifications in respect of the requirements set out in Chapter 2 of this Title. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).

2. The Commission, when preparing the common specifications referred to in paragraph 1, shall gather the views of relevant bodies or expert groups established under relevant sectorial Union law.

3. High-risk AI systems which are in conformity with the common specifications referred to in paragraph 1 shall be presumed to be in conformity with the requirements set out in Chapter 2 of this Title, to the extent those common specifications cover those requirements.

4. Where providers do not comply with the common specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that are at least equivalent thereto.

Article 42
Presumption of conformity with certain requirements

1. Taking into account their intended purpose, high-risk AI systems that have been trained and tested on data concerning the specific geographical, behavioural and functional setting within which they are intended to be used shall be presumed to be in compliance with the requirement set out in Article 10(4).

2. High-risk AI systems that have been certified or for which a statement of conformity has been issued under a cybersecurity scheme pursuant to Regulation (EU) 2019/881 of the European Parliament and of the Council and the references of which have been published in the Official Journal of the European Union shall be presumed to be in compliance with the cybersecurity requirements set out in Article 15 of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those requirements.

Article 43
Conformity assessment

1. For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has applied harmonised standards referred to in Article 40, or, where applicable, common specifications referred to in Article 41, the provider shall follow one of the following procedures:

(a) the conformity assessment procedure based on internal control referred to in Annex VI;

(b) the conformity assessment procedure based on assessment of the quality management system and assessment of the technical documentation, with the involvement of a notified body, referred to in Annex VII.

Where, in demonstrating the compliance of a high-risk AI system with the requirements set out in Chapter 2 of this Title, the provider has not applied or has applied only in part harmonised standards referred to in Article 40, or where such harmonised standards do not exist and common specifications referred to in Article 41 are not available, the provider shall follow the conformity assessment procedure set out in Annex VII.

For the purpose of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the notified bodies. However, when the system is intended to be put into service by law enforcement, immigration or asylum authorities as well as EU institutions, bodies or agencies, the market surveillance authority referred to in Article 63(5) or (6), as applicable, shall act as a notified body.

2. For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified body. For high-risk AI systems referred to in point 5(b) of Annex III, placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU, the conformity assessment shall be carried out as part of the procedure referred to in Articles 97 to 101 of that Directive.
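Stripped of its special cases, paragraphs 1 and 2 of Article 43 route a high-risk system to a conformity assessment procedure based on where it sits in Annex III and whether harmonised standards or common specifications were applied. The following sketch captures only that core routing and deliberately omits the Annex II, credit-institution and law-enforcement branches described above and below:

```python
def conformity_procedure(annex_iii_point: int,
                         harmonised_standards_applied: bool) -> str:
    """Very rough routing sketch of Article 43(1)-(2). Real routing also
    depends on Annex II legal acts, credit-institution status, and who
    puts the system into service; those branches are omitted here."""
    if annex_iii_point == 1:  # biometric systems, Annex III point 1
        if harmonised_standards_applied:
            # the provider may choose between the two procedures
            return "Annex VI (internal control) or Annex VII (notified body)"
        return "Annex VII (notified body involvement required)"
    if 2 <= annex_iii_point <= 8:
        return "Annex VI (internal control, no notified body)"
    raise ValueError("not a high-risk system listed in Annex III")

print(conformity_procedure(1, harmonised_standards_applied=False))
print(conformity_procedure(4, harmonised_standards_applied=True))
```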


3. For high-risk AI systems, to which legal acts listed in Annex II, section A, apply, the provider shall follow the relevant conformity assessment as required under those legal acts. The requirements set out in Chapter 2 of this Title shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. and the fifth paragraph of point 4.6 of Annex VII shall also apply.

For the purpose of that assessment, notified bodies which have been notified under those legal acts shall be entitled to control the conformity of the high-risk AI systems with the requirements set out in Chapter 2 of this Title, provided that the compliance of those notified bodies with requirements laid down in Article 33(4), (9) and (10) has been assessed in the context of the notification procedure under those legal acts.

Where the legal acts listed in Annex II, section A, enable the manufacturer of the product to opt out from a third-party conformity assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, that manufacturer may make use of that option only if he has also applied harmonised standards or, where applicable, common specifications referred to in Article 41, covering the requirements set out in Chapter 2 of this Title.

4. High-risk AI systems shall undergo a new conformity assessment procedure whenever they are substantially modified, regardless of whether the modified system is intended to be further distributed or continues to be used by the current user.

For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, shall not constitute a substantial modification.

5. The Commission is empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating Annexes VI and Annex VII in order to introduce elements of the conformity assessment procedures that become necessary in light of technical progress.

6. The Commission is empowered to adopt delegated acts to amend paragraphs 1 and 2 in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems as well as the availability of adequate capacities and resources among notified bodies.

Article 44
Certificates

1. Certificates issued by notified bodies in accordance with Annex VII shall be drawn up in an official Union language determined by the Member State in which the notified body is established or in an official Union language otherwise acceptable to the notified body.

2. Certificates shall be valid for the period they indicate, which shall not exceed five years. On application by the provider, the validity of a certificate may be extended for further periods, each not exceeding five years, based on a re-assessment in accordance with the applicable conformity assessment procedures.

3. Where a notified body finds that an AI system no longer meets the requirements set out in Chapter 2 of this Title, it shall, taking account of the principle of proportionality, suspend or withdraw the certificate issued or impose any restrictions on it, unless compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within an appropriate deadline set by the notified body. The notified body shall give reasons for its decision.

Article 45
Appeal against decisions of notified bodies

Member States shall ensure that an appeal procedure against decisions of the notified bodies is available to parties having a legitimate interest in that decision.

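Article 44(2) above caps certificate validity at five years per period, with extensions conditional on a re-assessment. A small illustrative date calculation (ignoring leap-day edge cases; the function names are invented for this note):

```python
from datetime import date

MAX_VALIDITY_YEARS = 5  # Article 44(2): the indicated period, at most 5 years

def certificate_expiry(issued: date, years: int) -> date:
    """Return the expiry date of a notified-body certificate, capping the
    requested validity at five years (Article 44(2) sketch)."""
    years = min(years, MAX_VALIDITY_YEARS)
    return issued.replace(year=issued.year + years)

def extend(current_expiry: date, reassessed: bool, years: int = 5) -> date:
    """Each extension requires a re-assessment and is itself capped at
    five years."""
    if not reassessed:
        raise ValueError("extension requires re-assessment")
    return certificate_expiry(current_expiry, years)

expiry = certificate_expiry(date(2024, 1, 15), years=7)  # capped to 2029
print(expiry, extend(expiry, reassessed=True))
```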


Article 46
Information obligations of notified bodies

1. Notified bodies shall inform the notifying authority of the following:

(a) any Union technical documentation assessment certificates, any supplements to those certificates, and quality management system approvals issued in accordance with the requirements of Annex VII;

(b) any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality management system approval issued in accordance with the requirements of Annex VII;

(c) any circumstances affecting the scope of or conditions for notification;

(d) any request for information which they have received from market surveillance authorities regarding conformity assessment activities;

(e) on request, conformity assessment activities performed within the scope of their notification and any other activity performed, including cross-border activities and subcontracting.

2. Each notified body shall inform the other notified bodies of:

(a) quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality system approvals which it has issued;

(b) EU technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued.

3. Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities covering the same artificial intelligence technologies with relevant information on issues relating to negative and, on request, positive conformity assessment results.

Article 47
Derogation from conformity assessment procedure

1. By way of derogation from Article 43, any market surveillance authority may authorise the placing on the market or putting into service of specific high-risk AI systems within the territory of the Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, environmental protection and the protection of key industrial and infrastructural assets. That authorisation shall be for a limited period of time, while the necessary conformity assessment procedures are being carried out, and shall terminate once those procedures have been completed. The completion of those procedures shall be undertaken without undue delay.

2. The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that the high-risk AI system complies with the requirements of Chapter 2 of this Title. The market surveillance authority shall inform the Commission and the other Member States of any authorisation issued pursuant to paragraph 1.

3. Where, within 15 calendar days of receipt of the information referred to in paragraph 2, no objection has been raised by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of a Member State in accordance with paragraph 1, that authorisation shall be deemed justified.

4. Where, within 15 calendar days of receipt of the notification referred to in paragraph 2, objections are raised by a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the Commission considers the authorisation to be contrary to Union law or the conclusion of the Member States regarding the compliance of the system as referred to in paragraph 2 to be unfounded, the Commission shall without delay enter into consultation with the relevant Member State; the operator(s) concerned shall be consulted and have the possibility to present their views. In view thereof, the Commission shall decide whether the authorisation is justified or not. The Commission shall address its decision to the Member State concerned and the relevant operator or operators.

5. If the authorisation is considered unjustified, this shall be withdrawn by the market surveillance authority of the Member State concerned.


6. By way of derogation from paragraphs 1 to 5, for high-risk AI systems intended to be used as safety components of devices, or which are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, Article 59 of Regulation (EU) 2017/745 and Article 54 of Regulation (EU) 2017/746 shall apply also with regard to the derogation from the conformity assessment of the compliance with the requirements set out in Chapter 2 of this Title.

Article 48
EU declaration of conformity

1. The provider shall draw up a written EU declaration of conformity for each AI system and keep it at the disposal of the national competent authorities for 10 years after the AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be given to the relevant national competent authorities upon request.

2. The EU declaration of conformity shall state that the high-risk AI system in question meets the requirements set out in Chapter 2 of this Title. The EU declaration of conformity shall contain the information set out in Annex V and shall be translated into an official Union language or languages required by the Member State(s) in which the high-risk AI system is made available.

3. Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union legislations applicable to the high-risk AI system. The declaration shall contain all the information required for identification of the Union harmonisation legislation to which the declaration relates.

4. By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the requirements set out in Chapter 2 of this Title. The provider shall keep the EU declaration of conformity up-to-date as appropriate.

5. The Commission shall be empowered to adopt delegated acts in accordance with Article 73 for the purpose of updating the content of the EU declaration of conformity set out in Annex V in order to introduce elements that become necessary in light of technical progress.

Article 49
CE marking of conformity

1. The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the accompanying documentation, as appropriate.

2. The CE marking referred to in paragraph 1 of this Article shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.

3. Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for the conformity assessment procedures set out in Article 43. The identification number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the requirements for CE marking.

Article 50
Document retention

The provider shall, for a period ending 10 years after the AI system has been placed on the market or put into service, keep at the disposal of the national competent authorities:

(a) the technical documentation referred to in Article 11;

(b) the documentation concerning the quality management system referred to in Article 17;

(c) the documentation concerning the changes approved by notified bodies, where applicable;

(d) the decisions and other documents issued by the notified bodies, where applicable;

(e) the EU declaration of conformity referred to in Article 48.
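Article 50 fixes a single retention horizon for the five document classes it lists: ten years from placing on the market or putting into service. A minimal sketch (ignoring leap-day edge cases):

```python
from datetime import date

def retention_deadline(placed_on_market: date) -> date:
    """Sketch of Article 50: the listed documents must stay at the
    disposal of the national competent authorities for a period ending
    10 years after placing on the market or putting into service."""
    return placed_on_market.replace(year=placed_on_market.year + 10)

RETAINED_DOCUMENTS = [
    "technical documentation (Article 11)",
    "quality management system documentation (Article 17)",
    "changes approved by notified bodies",
    "decisions and other documents issued by notified bodies",
    "EU declaration of conformity (Article 48)",
]

print(retention_deadline(date(2023, 9, 1)))  # -> 2033-09-01
```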


Article 51
Registration

Before placing on the market or putting into service a high-risk AI system referred to in Article 6(2), the provider or, where applicable, the authorised representative shall register that system in the EU database referred to in Article 60.

TITLE IV
TRANSPARENCY OBLIGATIONS FOR CERTAIN AI SYSTEMS

Article 52
Transparency obligations for certain AI systems

1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.

2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.

3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake'), shall disclose that the content has been artificially generated or manipulated.

However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.

4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.
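Article 52's transparency duties translate readily into simple disclosure gates. The sketch below models paragraphs 1 and 3 only, and reduces the exemptions to boolean flags for illustration, which is far coarser than the legal tests themselves:

```python
def interaction_banner(obvious_from_context: bool,
                       law_enforcement_exemption: bool) -> str | None:
    """Sketch of Article 52(1): tell natural persons they are interacting
    with an AI system unless that is obvious or a legal exemption applies."""
    if obvious_from_context or law_enforcement_exemption:
        return None
    return "You are interacting with an AI system."

def label_generated_media(is_deep_fake: bool,
                          authorised_use: bool) -> str | None:
    """Sketch of Article 52(3): disclose artificially generated or
    manipulated content unless the use falls under the listed exceptions."""
    if is_deep_fake and not authorised_use:
        return "This content has been artificially generated or manipulated."
    return None

print(interaction_banner(obvious_from_context=False,
                         law_enforcement_exemption=False))
print(label_generated_media(is_deep_fake=True, authorised_use=False))
```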


TITLE V
MEASURES IN SUPPORT OF INNOVATION

Article 53
AI regulatory sandboxes

1. AI regulatory sandboxes established by one or more Member States competent authorities or the European Data Protection Supervisor shall provide a controlled environment that facilitates the development, testing and validation of innovative AI systems for a limited time before their placement on the market or putting into service pursuant to a specific plan. This shall take place under the direct supervision and guidance by the competent authorities with a view to ensuring compliance with the requirements of this Regulation and, where relevant, other Union and Member States legislation supervised within the sandbox.

2. Member States shall ensure that, to the extent the innovative AI systems involve the processing of personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing or supporting access to data, the national data protection authorities and those other national authorities are associated to the operation of the AI regulatory sandbox.

3. The AI regulatory sandboxes shall not affect the supervisory and corrective powers of the competent authorities. Any significant risks to health and safety and fundamental rights identified during the development and testing of such systems shall result in immediate mitigation and, failing that, in the suspension of the development and testing process until such mitigation takes place.

4. Participants in the AI regulatory sandbox shall remain liable under applicable Union and Member States liability legislation for any harm inflicted on third parties as a result from the experimentation taking place in the sandbox.

5. Member States' competent authorities that have established AI regulatory sandboxes shall coordinate their activities and cooperate within the framework of the European Artificial Intelligence Board. They shall submit annual reports to the Board and the Commission on the results from the implementation of those schemes, including good practices, lessons learnt and recommendations on their setup and, where relevant, on the application of this Regulation and other Union legislation supervised within the sandbox.

6. The modalities and the conditions of the operation of the AI regulatory sandboxes, including the eligibility criteria and the procedure for the application, selection, participation and exiting from the sandbox, and the rights and obligations of the participants shall be set out in implementing acts. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 74(2).

Article 54
Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory sandbox

1. In the AI regulatory sandbox, personal data lawfully collected for other purposes shall be processed for the purposes of developing and testing certain innovative AI systems in the sandbox under the following conditions:

(a) the innovative AI systems shall be developed for safeguarding substantial public interest in one or more of the following areas:

(i) the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including the safeguarding against and the prevention of threats to public security, under the control and responsibility of the competent authorities. The processing shall be based on Member State or Union law;

(ii) public safety and public health, including disease prevention, control and treatment;

(iii) a high level of protection and improvement of the quality of the environment;

(b) the data processed are necessary for complying with one or more of the requirements referred to in Title III, Chapter 2 where those requirements cannot be effectively fulfilled by processing anonymised, synthetic or other non-personal data;

(c) there are effective monitoring mechanisms to identify if any high risks to the fundamental rights of the data subjects may arise during the sandbox experimentation, as well as a response mechanism to promptly mitigate those risks and, where necessary, stop the processing;

(d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected data processing environment under the control of the participants and only authorised persons have access to that data;

(e) any personal data processed are not transmitted, transferred or otherwise accessed by other parties;

(f) any processing of personal data in the context of the sandbox does not lead to measures or decisions affecting the data subjects;

(g) any personal data processed in the context of the sandbox are deleted once the participation in the sandbox has terminated or the personal data has reached the end of its retention period;

(h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in the sandbox and 1 year after its termination, solely for the purpose of and only as long as necessary for fulfilling accountability and documentation obligations under this Article or other applicable Union or Member States legislation;

(i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation in Annex IV;

(j) a short summary of the AI project developed in the sandbox, its objectives and expected results published on the website of the competent authorities.

2. Paragraph 1 is without prejudice to Union or Member States legislation excluding processing for other purposes than those explicitly mentioned in that legislation.
(i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI system is kept together with the testing results as part of the technical documentation in Annex IV;

(j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the website of the competent authorities.

2. Paragraph 1 is without prejudice to Union or Member States legislation excluding processing for other purposes than those explicitly mentioned in that legislation.

Article 55
Measures for small-scale providers and users

1. Member States shall undertake the following actions:

(a) provide small-scale providers and start-ups with priority access to the AI regulatory sandboxes to the extent that they fulfil the eligibility conditions;

(b) organise specific awareness raising activities about the application of this Regulation tailored to the needs of the small-scale providers and users;

(c) where appropriate, establish a dedicated channel for communication with small-scale providers and users and other innovators to provide guidance and respond to queries about the implementation of this Regulation.

2. The specific interests and needs of the small-scale providers shall be taken into account when setting the fees for conformity assessment under Article 43, reducing those fees proportionately to their size and market size.

TITLE VI
GOVERNANCE

CHAPTER 1
EUROPEAN ARTIFICIAL INTELLIGENCE BOARD

Article 56
Establishment of the European Artificial Intelligence Board

1. A ‘European Artificial Intelligence Board’ (the ‘Board’) is established.

2. The Board shall provide advice and assistance to the Commission in order to:

(a) contribute to the effective cooperation of the national supervisory authorities and the Commission with regard to matters covered by this Regulation;

(b) coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;

(c) assist the national supervisory authorities and the Commission in ensuring the consistent application of this Regulation.

Article 57
Structure of the Board

1. The Board shall be composed of the national supervisory authorities, who shall be represented by the head or equivalent high-level official of that authority, and the European Data Protection Supervisor. Other national authorities may be invited to the meetings, where the issues discussed are of relevance for them.

2. The Board shall adopt its rules of procedure by a simple majority of its members, following the consent of the Commission. The rules of procedure shall also contain the operational aspects related to the execution of the Board’s tasks as listed in Article 58. The Board may establish sub-groups as appropriate for the purpose of examining specific questions.

3. The Board shall be chaired by the Commission. The Commission shall convene the meetings and prepare the agenda in accordance with the tasks of the Board pursuant to this Regulation and with its rules of procedure. The Commission shall provide administrative and analytical support for the activities of the Board pursuant to this Regulation.

4. The Board may invite external experts and observers to attend its meetings and may hold exchanges with interested third parties to inform its activities to an appropriate extent. To that end the Commission may facilitate exchanges between the Board and other Union bodies, offices, agencies and advisory groups.

Article 58
Tasks of the Board

When providing advice and assistance to the Commission in the context of Article 56(2), the Board shall in particular:

(a) collect and share expertise and best practices among Member States;

(b) contribute to uniform administrative practices in the Member States, including for the functioning of regulatory sandboxes referred to in Article 53;

(c) issue opinions, recommendations or written contributions on matters related to the implementation of this Regulation, in particular

(i) on technical specifications or existing standards regarding the requirements set out in Title III, Chapter 2,

(ii) on the use of harmonised standards or common specifications referred to in Articles 40 and 41,

(iii) on the preparation of guidance documents, including the guidelines concerning the setting of administrative fines referred to in Article 71.

CHAPTER 2
NATIONAL COMPETENT AUTHORITIES

Article 59
Designation of national competent authorities

1. National competent authorities shall be established or designated by each Member State for the purpose of ensuring the application and implementation of this Regulation. National competent authorities shall be organised so as to safeguard the objectivity and impartiality of their activities and tasks.

2. Each Member State shall designate a national supervisory authority among the national competent authorities. The national supervisory authority shall act as notifying authority and market surveillance authority unless a Member State has organisational and administrative reasons to designate more than one authority.

3. Member States shall inform the Commission of their designation or designations and, where applicable, the reasons for designating more than one authority.

4. Member States shall ensure that national competent authorities are provided with adequate financial and human resources to fulfil their tasks under this Regulation. In particular, national competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements.

5. Member States shall report to the Commission on an annual basis on the status of the financial and human resources of the national competent authorities with an assessment of their adequacy. The Commission shall transmit that information to the Board for discussion and possible recommendations.

6. The Commission shall facilitate the exchange of experience between national competent authorities.

7. National competent authorities may provide guidance and advice on the implementation of this Regulation, including to small-scale providers. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system in areas covered by other Union legislation, the competent national authorities under that Union legislation shall be consulted, as appropriate. Member States may also establish one central contact point for communication with operators.

8. When Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as the competent authority for their supervision.

TITLE VII
EU DATABASE FOR STAND-ALONE HIGH-RISK AI SYSTEMS

Article 60
EU database for stand-alone high-risk AI systems

1. The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing information referred to in paragraph 2 concerning high-risk AI systems referred to in Article 6(2) which are registered in accordance with Article 51.

2. The data listed in Annex VIII shall be entered into the EU database by the providers. The Commission shall provide them with technical and administrative support.

3. Information contained in the EU database shall be accessible to the public.

4. The EU database shall contain personal data only insofar as necessary for collecting and processing information in accordance with this Regulation. That information shall include the names and contact details of natural persons who are responsible for registering the system and have the legal authority to represent the provider.

5. The Commission shall be the controller of the EU database. It shall also ensure adequate technical and administrative support to providers.

TITLE VIII
POST-MARKET MONITORING, INFORMATION SHARING, MARKET SURVEILLANCE

CHAPTER 1
POST-MARKET MONITORING

Article 61
Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems

1. Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system.

2. The post-market monitoring system shall actively and systematically collect, document and analyse relevant data provided by users or collected through other sources on the performance of high-risk AI systems throughout their lifetime, and allow the provider to evaluate the continuous compliance of AI systems with the requirements set out in Title III, Chapter 2.

3. The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be included in the plan.

4. For high-risk AI systems covered by the legal acts referred to in Annex II, where a post-market monitoring system and plan is already established under that legislation, the elements described in paragraphs 1, 2 and 3 shall be integrated into that system and plan as appropriate.

The first subparagraph shall also apply to high-risk AI systems referred to in point 5(b) of Annex III placed on the market or put into service by credit institutions regulated by Directive 2013/36/EU.
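For orientation only: the concrete template for the post-market monitoring plan is left to a future Commission implementing act under Article 61(3), so any structure sketched today is an assumption. The Python skeleton below merely mirrors the elements Article 61 itself names — active, systematic collection of performance data over the system's lifetime and periodic re-evaluation of compliance with Title III, Chapter 2; all field names are hypothetical.

```python
# Hypothetical skeleton of a post-market monitoring plan record.
# The field names are our own assumptions, not the official template.
from dataclasses import dataclass, field

@dataclass
class PostMarketMonitoringPlan:
    system_name: str
    data_sources: list[str]          # data provided by users or other sources (Art. 61(2))
    performance_metrics: list[str]   # what is collected, documented and analysed
    review_interval_months: int      # cadence for re-evaluating continuous compliance
    monitored_requirements: list[str] = field(default_factory=list)  # Title III, Ch. 2 items

plan = PostMarketMonitoringPlan(
    system_name="example-high-risk-system",
    data_sources=["user feedback", "production telemetry"],
    performance_metrics=["accuracy drift", "error rate by subgroup"],
    review_interval_months=6,
    monitored_requirements=["risk management", "data governance", "record-keeping"],
)
```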

CHAPTER 2
SHARING OF INFORMATION ON INCIDENTS AND MALFUNCTIONING

Article 62
Reporting of serious incidents and of malfunctioning

1. Providers of high-risk AI systems placed on the Union market shall report any serious incident or any malfunctioning of those systems which constitutes a breach of obligations under Union law intended to protect fundamental rights to the market surveillance authorities of the Member States where that incident or breach occurred.

Such notification shall be made immediately after the provider has established a causal link between the AI system and the incident or malfunctioning, or the reasonable likelihood of such a link, and, in any event, not later than 15 days after the provider becomes aware of the serious incident or of the malfunctioning.

2. Upon receiving a notification related to a breach of obligations under Union law intended to protect fundamental rights, the market surveillance authority shall inform the national public authorities or bodies referred to in Article 64(3). The Commission shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1. That guidance shall be issued 12 months after the entry into force of this Regulation, at the latest.

3. For high-risk AI systems referred to in point 5(b) of Annex III which are placed on the market or put into service by providers that are credit institutions regulated by Directive 2013/36/EU, and for high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulation (EU) 2017/745 and Regulation (EU) 2017/746, the notification of serious incidents or malfunctioning shall be limited to those that constitute a breach of obligations under Union law intended to protect fundamental rights.
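The reporting clock in Article 62(1) is a simple outer bound: notification is due immediately once the causal link (or its reasonable likelihood) is established, and in any event within 15 days of the provider becoming aware of the incident. A minimal sketch of that deadline arithmetic in Python; the function name is our own:

```python
from datetime import date, timedelta

def latest_report_date(provider_aware_on: date) -> date:
    """Outer bound under Article 62(1): no later than 15 days after the
    provider becomes aware of the serious incident or malfunctioning.
    Notification may be due earlier, as soon as a causal link (or its
    reasonable likelihood) is established."""
    return provider_aware_on + timedelta(days=15)

print(latest_report_date(date(2024, 3, 1)))  # 2024-03-16
```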

CHAPTER 3
ENFORCEMENT

Article 63
Market surveillance and control of AI systems in the Union market

1. Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. However, for the purpose of the effective enforcement of this Regulation:

(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators identified in Title III, Chapter 3 of this Regulation;

(b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling within the scope of this Regulation.

2. The national supervisory authority shall report to the Commission on a regular basis the outcomes of relevant market surveillance activities. The national supervisory authority shall report, without delay, to the Commission and relevant national competition authorities any information identified in the course of market surveillance activities that may be of potential interest for the application of Union law on competition rules.

3. For high-risk AI systems related to products to which the legal acts listed in Annex II, section A apply, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market surveillance activities designated under those legal acts.

4. For AI systems placed on the market, put into service or used by financial institutions regulated by Union legislation on financial services, the market surveillance authority for the purposes of this Regulation shall be the relevant authority responsible for the financial supervision of those institutions under that legislation.

5. For AI systems listed in point 1(a), in so far as the systems are used for law enforcement purposes, and in points 6 and 7 of Annex III, Member States shall designate as market surveillance authorities for the purposes of this Regulation either the competent data protection supervisory authorities under Directive (EU) 2016/680 or Regulation 2016/679, or the national competent authorities supervising the activities of the law enforcement, immigration or asylum authorities putting into service or using those systems.

6. Where Union institutions, agencies and bodies fall within the scope of this Regulation, the European Data Protection Supervisor shall act as their market surveillance authority.

7. Member States shall facilitate the coordination between market surveillance authorities designated under this Regulation and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed in Annex II or other Union legislation that might be relevant for the high-risk AI systems referred to in Annex III.

Article 64
Access to data and documentation

1. In the context of their activities, the market surveillance authorities shall be granted full access to the training, validation and testing datasets used by the provider, including through application programming interfaces (‘API’) or other appropriate technical means and tools enabling remote access.

2. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the market surveillance authorities shall be granted access to the source code of the AI system.

3. National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights in relation to the use of high-risk AI systems referred to in Annex III shall have the power to request and access any documentation created or maintained under this Regulation when access to that documentation is necessary for the fulfilment of the competences under their mandate within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance authority of the Member State concerned of any such request.

4. By 3 months after the entering into force of this Regulation, each Member State shall identify the public authorities or bodies referred to in paragraph 3 and make a list publicly available on the website of the national supervisory authority. Member States shall notify the list to the Commission and all other Member States and keep the list up to date.

5. Where the documentation referred to in paragraph 3 is insufficient to ascertain whether a breach of obligations under Union law intended to protect fundamental rights has occurred, the public authority or body referred to in paragraph 3 may make a reasoned request to the market surveillance authority to organise testing of the high-risk AI system through technical means. The market surveillance authority shall organise the testing with the close involvement of the requesting public authority or body within reasonable time following the request.

6. Any information and documentation obtained by the national public authorities or bodies referred to in paragraph 3 pursuant to the provisions of this Article shall be treated in compliance with the confidentiality obligations set out in Article 70.

Article 65
Procedure for dealing with AI systems presenting a risk at national level

1. AI systems presenting a risk shall be understood as a product presenting a risk as defined in Article 3, point 19 of Regulation (EU) 2019/1020, insofar as risks to the health or safety or to the protection of fundamental rights of persons are concerned.

2. Where the market surveillance authority of a Member State has sufficient reasons to consider that an AI system presents a risk as referred to in paragraph 1, they shall carry out an evaluation of the AI system concerned in respect of its compliance with all the requirements and obligations laid down in this Regulation. When risks to the protection of fundamental rights are present, the market surveillance authority shall also inform the relevant national public authorities or bodies referred to in Article 64(3). The relevant operators shall cooperate as necessary with the market surveillance authorities and the other national public authorities or bodies referred to in Article 64(3).

Where, in the course of that evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.

The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) 2019/1020 shall apply to the measures referred to in the second subparagraph.

3. Where the market surveillance authority considers that non-compliance is not restricted to its national territory, it shall inform the Commission and the other Member States of the results of the evaluation and of the actions which it has required the operator to take.

4. The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that it has made available on the market throughout the Union.

5. Where the operator of an AI system does not take adequate corrective action within the period referred to in paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI system's being made available on its national market, to withdraw the product from that market or to recall it. That authority shall inform the Commission and the other Member States, without delay, of those measures.

6. The information referred to in paragraph 5 shall include all available details, in particular the data necessary for the identification of the non-compliant AI system, the origin of the AI system, the nature of the non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the non-compliance is due to one or more of the following:

(a) a failure of the AI system to meet requirements set out in Title III, Chapter 2;

(b) shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring a presumption of conformity.

7. The market surveillance authorities of the Member States other than the market surveillance authority of the Member State initiating the procedure shall without delay inform the Commission and the other Member States of any measures adopted and of any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of disagreement with the notified national measure, of their objections.

8. Where, within three months of receipt of the information referred to in paragraph 5, no objection has been raised by either a Member State or the Commission in respect of a provisional measure taken by a Member State, that measure shall be deemed justified. This is without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation (EU) 2019/1020.

9. The market surveillance authorities of all Member States shall ensure that appropriate restrictive measures are taken in respect of the product concerned, such as withdrawal of the product from their market, without delay.

Article 66
Union safeguard procedure

1. Where, within three months of receipt of the notification referred to in Article 65(5), objections are raised by a Member State against a measure taken by another Member State, or where the Commission considers the measure to be contrary to Union law, the Commission shall without delay enter into consultation with the relevant Member State and operator or operators and shall evaluate the national measure. On the basis of the results of that evaluation, the Commission shall decide whether the national measure is justified or not within 9 months from the notification referred to in Article 65(5) and notify such decision to the Member State concerned.

2. If the national measure is considered justified, all Member States shall take the measures necessary to ensure that the non-compliant AI system is withdrawn from their market, and shall inform the Commission accordingly. If the national measure is considered unjustified, the Member State concerned shall withdraw the measure.

3. Where the national measure is considered justified and the non-compliance of the AI system is attributed to shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.

Article 67
Compliant AI systems which present a risk

1. Where, having performed an evaluation under Article 65, the market surveillance authority of a Member State finds that although an AI system is in compliance with this Regulation, it presents a risk to the health or safety of persons, to the compliance with obligations under Union or national law intended to protect fundamental rights or to other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure that the AI system concerned, when placed on the market or put into service, no longer presents that risk, to withdraw the AI system from the market or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe.

2. The provider or other relevant operators shall ensure that corrective action is taken in respect of all the AI systems concerned that they have made available on the market throughout the Union within the timeline prescribed by the market surveillance authority of the Member State referred to in paragraph 1.

3. The Member State shall immediately inform the Commission and the other Member States. That information shall include all available details, in particular the data necessary for the identification of the AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and duration of the national measures taken.

4. The Commission shall without delay enter into consultation with the Member States and the relevant operator and shall evaluate the national measures taken. On the basis of the results of that evaluation, the Commission shall decide whether the measure is justified or not and, where necessary, propose appropriate measures.

5. The Commission shall address its decision to the Member States.

Article 68
Formal non-compliance

1. Where the market surveillance authority of a Member State makes one of the following findings, it shall require the relevant provider to put an end to the non-compliance concerned:

(a) the conformity marking has been affixed in violation of Article 49;

(b) the conformity marking has not been affixed;

(c) the EU declaration of conformity has not been drawn up;

(d) the EU declaration of conformity has not been drawn up correctly;

(e) the identification number of the notified body, which is involved in the conformity assessment procedure, where applicable, has not been affixed.

2. Where the non-compliance referred to in paragraph 1 persists, the Member State concerned shall take all appropriate measures to restrict or prohibit the high-risk AI system being made available on the market or ensure that it is recalled or withdrawn from the market.
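Because the findings in Article 68(1) form a fixed checklist, they translate naturally into a validation routine. The sketch below is purely illustrative — the dataclass fields and the function name are our own assumptions, not anything the Regulation defines:

```python
# Hypothetical illustration of the Article 68(1) findings; names are ours.
from dataclasses import dataclass

@dataclass
class ConformityDossier:
    marking_affixed: bool
    marking_valid: bool              # affixed in accordance with Article 49
    declaration_drawn_up: bool
    declaration_correct: bool
    notified_body_involved: bool
    notified_body_id_affixed: bool   # only relevant where a notified body was involved

def formal_non_compliance_findings(d: ConformityDossier) -> list[str]:
    """Return which Article 68(1) findings apply to a dossier."""
    findings = []
    if d.marking_affixed and not d.marking_valid:
        findings.append("(a) conformity marking affixed in violation of Article 49")
    if not d.marking_affixed:
        findings.append("(b) conformity marking not affixed")
    if not d.declaration_drawn_up:
        findings.append("(c) EU declaration of conformity not drawn up")
    elif not d.declaration_correct:
        findings.append("(d) EU declaration of conformity not drawn up correctly")
    if d.notified_body_involved and not d.notified_body_id_affixed:
        findings.append("(e) notified body identification number not affixed")
    return findings

dossier = ConformityDossier(
    marking_affixed=False, marking_valid=False,
    declaration_drawn_up=True, declaration_correct=False,
    notified_body_involved=True, notified_body_id_affixed=False,
)
print(formal_non_compliance_findings(dossier))
# ['(b) conformity marking not affixed',
#  '(d) EU declaration of conformity not drawn up correctly',
#  '(e) notified body identification number not affixed']
```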

TITLE IX
CODES OF CONDUCT

Article 69
Codes of conduct

1. The Commission and the Member States shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2, on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.

2. The Commission and the Board shall encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems of requirements related, for example, to environmental sustainability, accessibility for persons with a disability, stakeholders participation in the design and development of the AI systems and diversity of development teams, on the basis of clear objectives and key performance indicators to measure the achievement of those objectives.

3. Codes of conduct may be drawn up by individual providers of AI systems or by organisations representing them or by both, including with the involvement of users and any interested stakeholders and their representative organisations. Codes of conduct may cover one or more AI systems taking into account the similarity of the intended purpose of the relevant systems.

4. The Commission and the Board shall take into account the specific interests and needs of the small-scale providers and start-ups when encouraging and facilitating the drawing up of codes of conduct.

TITLE X
CONFIDENTIALITY AND PENALTIES

Article 70
Confidentiality

1. National competent authorities and notified bodies involved in the application of this Regulation shall respect the confidentiality of information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:

(a) intellectual property rights, and confidential business information or trade secrets of a natural or legal person, including source code, except where the cases referred to in Article 5 of Directive 2016/943 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure apply;

(b) the effective implementation of this Regulation, in particular for the purpose of inspections, investigations or audits;

(c) public and national security interests;

(d) integrity of criminal or administrative proceedings.

2. Without prejudice to paragraph 1, information exchanged on a confidential basis between the national competent authorities and between national competent authorities and the Commission shall not be disclosed without the prior consultation of the originating national competent authority and the user when high-risk AI systems referred to in points 1, 6 and 7 of Annex III are used by law enforcement, immigration or asylum authorities, when such disclosure would jeopardise public and national security interests.

When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in points 1, 6 and 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 63(5) and (6), as applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or any copy thereof.

3. Paragraphs 1 and 2 shall not affect the rights and obligations of the Commission, Member States and notified bodies with regard to the exchange of information and the dissemination of warnings, nor the obligations of the parties concerned to provide information under criminal law of the Member States.

4. The Commission and Member States may exchange, where necessary, confidential information with regulatory authorities of third countries with which they have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.

Article 71
Penalties

1. In compliance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive. They shall take into particular account the interests of small-scale providers and start-ups and their economic viability.

2. The Member States shall notify the Commission of those rules and of those measures and shall notify it, without delay, of any subsequent amendment affecting them.

3. The following infringements shall be subject to administrative fines of up to 30 000 000 EUR or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:

(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;

(b) non-compliance of the AI system with the requirements laid down in Article 10.

4. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 20 000 000 EUR or, if the offender is a company, up to 4 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.

5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.

6. When deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:

(a) the nature, gravity and duration of the infringement and of its consequences;

(b) whether administrative fines have been already applied by other market surveillance authorities to the same operator for the same infringement;

(c) the size and market share of the operator committing the infringement.

7. Each Member State shall lay down rules on whether and to what extent administrative fines may be imposed on public authorities and bodies established in that Member State.

8. Depending on the legal system of the Member States, the rules on administrative fines may be applied in such a manner that the fines are imposed by competent national courts or other bodies as applicable in those Member States. The application of such rules in those Member States shall have an equivalent effect.
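The three ceilings in Article 71(3)–(5) share one arithmetic pattern: a flat cap or, for a company, a percentage of total worldwide annual turnover for the preceding financial year, whichever is higher. A minimal Python sketch of that computation; the tier labels are our own shorthand, and actual fines are set case by case under paragraph 6:

```python
def maximum_fine_eur(tier: str, turnover_eur: float | None = None) -> float:
    """Ceilings under Article 71(3)-(5): flat cap, or for a company the
    higher of the flat cap and a share of worldwide annual turnover."""
    tiers = {
        "article_5_or_10": (30_000_000, 0.06),        # prohibited practices / data governance
        "other_obligations": (20_000_000, 0.04),      # any other requirement or obligation
        "misleading_information": (10_000_000, 0.02), # incorrect or incomplete replies
    }
    flat_cap, turnover_share = tiers[tier]
    if turnover_eur is None:  # offender is not a company
        return flat_cap
    return max(flat_cap, turnover_share * turnover_eur)

# A company with EUR 1 bn turnover breaching Article 5:
print(maximum_fine_eur("article_5_or_10", 1_000_000_000))  # 60000000.0 (6 % > EUR 30 m)
```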

Article 72
Administrative fines on Union institutions, agencies and bodies

1. The European Data Protection Supervisor may impose administrative fines on Union institutions, agencies and bodies falling within the scope of this Regulation. When deciding whether to impose an administrative fine and deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation shall be taken into account and due regard shall be given to the following:

(a) the nature, gravity and duration of the infringement and of its consequences;

(b) the cooperation with the European Data Protection Supervisor in order to remedy the infringement and mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously ordered by the European Data Protection Supervisor against the Union institution or agency or body concerned with regard to the same subject matter;

(c) any similar previous infringements by the Union institution, agency or body.

2. The following infringements shall be subject to administrative fines of up to 500 000 EUR:

(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5;

(b) non-compliance of the AI system with the requirements laid down in Article 10.

3. The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 250 000 EUR.

4. Before taking decisions pursuant to this Article, the European Data Protection Supervisor shall give the Union institution, agency or body which is the subject of the proceedings conducted by the European Data Protection Supervisor the opportunity of being heard on the matter regarding the possible infringement. The European Data Protection Supervisor shall base his or her decisions only on elements and circumstances on which the parties concerned have been able to comment. Complainants, if any, shall be associated closely with the proceedings.

5. The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or undertakings in the protection of their personal data or business secrets.

6. Funds collected by imposition of fines in this Article shall be the income of the general budget of the Union.

TITLE XI
DELEGATION OF POWER AND COMMITTEE PROCEDURE

Article 73
Exercise of the delegation

1. The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.

2. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall be conferred on the Commission for an indeterminate period of time from [entering into force of the Regulation].

3. The delegation of power referred to in Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) may be revoked at any time by the European Parliament or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified therein. It shall not affect the validity of any delegated acts already in force.

4. As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to the Council.

5. Any delegated act adopted pursuant to Article 4, Article 7(1), Article 11(3), Article 43(5) and (6) and Article 48(5) shall enter into force only if no objection has been expressed by either the European Parliament or the Council within a period of three months of notification of that act to the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council have both informed the

Commission that they will not object. That period shall be extended by three months at the initiative of the European Parliament or of the Council.

Article 74
Committee procedure

1. The Commission shall be assisted by a committee. That committee shall be a committee within the meaning of Regulation (EU) No 182/2011.

2. Where reference is made to this paragraph, Article 5 of Regulation (EU) No 182/2011 shall apply.
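A short worked illustration of the objection window in Article 73(5) above: a delegated act enters into force only if neither the European Parliament nor the Council objects within three months of notification, a period either institution may extend by a further three months. A hedged Python sketch; the helper and function names are ours:

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Move a date forward by whole calendar months, clamping the day
    to the length of the target month."""
    y, m = divmod(d.year * 12 + d.month - 1 + months, 12)
    return date(y, m + 1, min(d.day, calendar.monthrange(y, m + 1)[1]))

def objection_window_ends(notified_on: date, extended: bool = False) -> date:
    """Article 73(5): three months from notification of the delegated act,
    extendable by three more at the initiative of the Parliament or Council."""
    return add_months(notified_on, 6 if extended else 3)

print(objection_window_ends(date(2024, 11, 30)))        # 2025-02-28
print(objection_window_ends(date(2024, 11, 30), True))  # 2025-05-30
```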

TITLE XII
FINAL PROVISIONS

Article 75
Amendment to Regulation (EC) No 300/2008

In Article 4(3) of Regulation (EC) No 300/2008, the following subparagraph is added:

“When adopting detailed measures related to technical specifications and procedures for approval and use of security equipment concerning Artificial Intelligence systems in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Chapter 2, Title III of that Regulation shall be taken into account.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Article 76
Amendment to Regulation (EU) No 167/2013

In Article 17(5) of Regulation (EU) No 167/2013, the following subparagraph is added:

“When adopting delegated acts pursuant to the first subparagraph concerning artificial intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Article 77
Amendment to Regulation (EU) No 168/2013

In Article 22(5) of Regulation (EU) No 168/2013, the following subparagraph is added:

“When adopting delegated acts pursuant to the first subparagraph concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX on [Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Article 78
Amendment to Directive 2014/90/EU

In Article 8 of Directive 2014/90/EU, the following paragraph is added:

“4. For Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, when carrying out its activities pursuant to paragraph 1 and when adopting technical specifications and testing standards in accordance with paragraphs 2 and 3, the Commission shall take into account the requirements set out in Title III, Chapter 2 of that Regulation.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Article 79
Amendment to Directive (EU) 2016/797

In Article 5 of Directive (EU) 2016/797, the following paragraph is added:

“12. When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to paragraph 11 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Article 80
Amendment to Regulation (EU) 2018/858

In Article 5 of Regulation (EU) 2018/858 the following paragraph is added:

“4. When adopting delegated acts pursuant to paragraph 3 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Article 81
Amendment to Regulation (EU) 2018/1139

Regulation (EU) 2018/1139 is amended as follows:

(1) In Article 17, the following paragraph is added:

“3. Without prejudice to paragraph 2, when adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

(2) In Article 19, the following paragraph is added:

“4. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”

(3) In Article 43, the following paragraph is added:

“4. When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”

(4) In Article 47, the following paragraph is added:

“3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”

(5) In Article 57, the following paragraph is added:

“When adopting those implementing acts concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”

(6) In Article 58, the following paragraph is added:

“3. When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence], the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.”

Article 82
Amendment to Regulation (EU) 2019/2144

In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added:

“3. When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence systems which are safety components in the meaning of Regulation (EU) YYY/XX [on Artificial Intelligence] of the European Parliament and of the Council*, the requirements set out in Title III, Chapter 2 of that Regulation shall be taken into account.

* Regulation (EU) YYY/XX [on Artificial Intelligence] (OJ …).”

Article 83
AI systems already placed on the market or put into service

1. This Regulation shall not apply to the AI systems which are components of the large-scale IT systems established by the legal acts listed in Annex IX that have been placed on the market or put into service before [12 months after the date of application of this Regulation referred to in Article 85(2)], unless the replacement or amendment of those legal acts leads to a significant change in the design or intended purpose of the AI system or AI systems concerned.

The requirements laid down in this Regulation shall be taken into account, where applicable, in the evaluation of each large-scale IT system established by the legal acts listed in Annex IX to be undertaken as provided for in those respective acts.

2. This Regulation shall apply to the high-risk AI systems, other than the ones referred to in paragraph 1, that have been placed on the market or put into service before [date of application of this Regulation referred to in Article 85(2)], only if, from that date, those systems are subject to significant changes in their design or intended purpose.

Article 84
Evaluation and review

1. The Commission shall assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation.

2. By [three years after the date of application of this Regulation referred to in Article 85(2)] and every four years thereafter, the Commission shall submit a report on the evaluation and review of this Regulation to the European Parliament and to the Council. The reports shall be made public.

3. The reports referred to in paragraph 2 shall devote specific attention to the following:

(a) the status of the financial and human resources of the national competent authorities in order to effectively perform the tasks assigned to them under this Regulation;

(b) the state of penalties, and notably administrative fines as referred to in Article 71(1), applied by Member States to infringements of the provisions of this Regulation.

4. Within [three years after the date of application of this Regulation referred to in Article 85(2)] and every four years thereafter, the Commission shall evaluate the impact and effectiveness of codes of conduct to foster the application of the requirements set out in Title III, Chapter 2 and possibly other additional requirements for AI systems other than high-risk AI systems.

5. For the purpose of paragraphs 1 to 4, the Board, the Member States and national competent authorities shall provide the Commission with information upon its request.

6. In carrying out the evaluations and reviews referred to in paragraphs 1 to 4, the Commission shall take into account the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources.

7. The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into account developments in technology and in the light of the state of progress in the information society.

Article 85
Entry into force and application

1. This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the European Union.

2. This Regulation shall apply from [24 months following the entering into force of the Regulation].
AI GOVERNANCE: A CONSOLIDATED REFERENCE | 148 AI GOVERNANCE: A CONSOLIDATED REFERENCE | 149


3. By way of derogation from paragraph 2:

(a) Title III, Chapter 4 and Title VI shall apply from [three months following the entry into force of this Regulation];

(b) Article 71 shall apply from [twelve months following the entry into force of this Regulation].

This Regulation shall be binding in its entirety and directly applicable in all Member States.

Done at Brussels,

For the European Parliament
The President

For the Council
The President

LEGISLATIVE FINANCIAL STATEMENT

1. FRAMEWORK OF THE PROPOSAL/INITIATIVE

1.1. Title of the proposal/initiative
1.2. Policy area(s) concerned
1.3. The proposal/initiative relates to:
1.4. Objective(s)
1.4.1. General objective(s)
1.4.2. Specific objective(s)
1.4.3. Expected result(s) and impact
1.4.4. Indicators of performance
1.5. Grounds for the proposal/initiative
1.5.1. Requirement(s) to be met in the short or long term including a detailed timeline for roll-out of the implementation of the initiative
1.5.2. Added value of Union involvement (it may result from different factors, e.g. coordination gains, legal certainty, greater effectiveness or complementarities). For the purposes of this point 'added value of Union involvement' is the value resulting from Union intervention which is additional to the value that would have been otherwise created by Member States alone
1.5.3. Lessons learned from similar experiences in the past
1.5.4. Compatibility with the Multiannual Financial Framework and possible synergies with other appropriate instruments
1.5.5. Assessment of the different available financing options, including scope for redeployment
1.6. Duration and financial impact of the proposal/initiative
1.7. Management mode(s) planned

2. MANAGEMENT MEASURES

2.1. Monitoring and reporting rules
2.2. Management and control system
2.2.1. Justification of the management mode(s), the funding implementation mechanism(s), the payment modalities and the control strategy proposed
2.2.2. Information concerning the risks identified and the internal control system(s) set up to mitigate them
2.2.3. Estimation and justification of the cost-effectiveness of the controls (ratio of "control costs ÷ value of the related funds managed"), and assessment of the expected levels of risk of error (at payment & at closure)
2.3. Measures to prevent fraud and irregularities

3. ESTIMATED FINANCIAL IMPACT OF THE PROPOSAL/INITIATIVE

3.1. Heading(s) of the multiannual financial framework and expenditure budget line(s) affected
3.2. Estimated financial impact of the proposal on appropriations
3.2.1. Summary of estimated impact on operational appropriations
3.2.2. Estimated output funded with operational appropriations
3.2.3. Summary of estimated impact on administrative appropriations
3.2.4. Compatibility with the current multiannual financial framework
3.2.5. Third-party contributions
3.3. Estimated impact on revenue
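Reading Article 85(1)–(3) above together yields a small derived timeline: entry into force on the twentieth day after publication, general application 24 months later, with Title III, Chapter 4 and Title VI applying at three months and Article 71 at twelve. A hedged Python sketch of that arithmetic, using a hypothetical publication date; all names are ours:

```python
from datetime import date, timedelta
import calendar

def add_months(d: date, months: int) -> date:
    """Shift a date by whole calendar months, clamping the day."""
    y, m = divmod(d.year * 12 + d.month - 1 + months, 12)
    return date(y, m + 1, min(d.day, calendar.monthrange(y, m + 1)[1]))

def ai_act_milestones(published_on: date) -> dict[str, date]:
    eif = published_on + timedelta(days=20)  # twentieth day following publication, Art. 85(1)
    return {
        "entry_into_force": eif,
        "title_iii_ch4_and_title_vi_apply": add_months(eif, 3),  # Art. 85(3)(a)
        "article_71_applies": add_months(eif, 12),               # Art. 85(3)(b)
        "general_application": add_months(eif, 24),              # Art. 85(2)
    }

# Hypothetical publication date, for illustration only:
for milestone, when in ai_act_milestones(date(2024, 7, 12)).items():
    print(milestone, when)
```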

LEGISLATIVE FINANCIAL STATEMENT

View the Legislative Financial Statement of the European Commission's proposal at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

ANNEX I
ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES
referred to in Article 3, point 1

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

ANNEX II
LIST OF UNION HARMONISATION LEGISLATION

Section A – List of Union harmonisation legislation based on the New Legislative Framework

1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repealed by the Machinery Regulation];

2. Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ L 170, 30.6.2009, p. 1);

3. Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);

4. Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251);

5. Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive atmospheres (OJ L 96, 29.3.2014, p. 309);

6. Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment and repealing Directive 1999/5/EC (OJ L 153, 22.5.2014, p. 62);

7. Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, 27.6.2014, p. 164);

8. Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);

9. Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);

10. Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);

11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);

12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).

Section B. List of other Union harmonisation legislation

1. Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72);

2. Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52);

3. Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1);

4. Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146);

5. Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system within the European Union (OJ L 138, 26.5.2016, p. 44);

6. Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1);

7. Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in points (a) and (b) of Article 2(1) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to control them remotely, are concerned;

8. Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).

ANNEX III
HIGH-RISK AI SYSTEMS REFERRED TO IN ARTICLE 6(2)

High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:



(a) AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons;

2. Management and operation of critical infrastructure:

(a) AI systems intended to be used as safety components in the management and operation of road traffic and the supply of water, gas, heating and electricity.

3. Education and vocational training:

(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;

(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.

4. Employment, workers management and access to self-employment:

(a) AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, evaluating candidates in the course of interviews or tests;

(b) AI intended to be used for making decisions on promotion and termination of work-related contractual relationships, for task allocation and for monitoring and evaluating performance and behavior of persons in such relationships.

5. Access to and enjoyment of essential private services and public services and benefits:

(a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

(b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems put into service by small scale providers for their own use;

(c) AI systems intended to be used to dispatch, or to establish priority in the dispatching of emergency first response services, including by firefighters and medical aid.

6. Law enforcement:

(a) AI systems intended to be used by law enforcement authorities for making individual risk assessments of natural persons in order to assess the risk of a natural person for offending or reoffending or the risk for potential victims of criminal offences;

(b) AI systems intended to be used by law enforcement authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

(c) AI systems intended to be used by law enforcement authorities to detect deep fakes as referred to in article 52(3);

(d) AI systems intended to be used by law enforcement authorities for evaluation of the reliability of evidence in the course of investigation or prosecution of criminal offences;

(e) AI systems intended to be used by law enforcement authorities for predicting the occurrence or reoccurrence of an actual or potential criminal offence based on profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 or assessing personality traits and characteristics or past criminal behaviour of natural persons or groups;

(f) AI systems intended to be used by law enforcement authorities for profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of detection, investigation or prosecution of criminal offences;

(g) AI systems intended to be used for crime analytics regarding natural persons, allowing law enforcement authorities to search complex related and unrelated large data sets available in different data sources or in different data formats in order to identify unknown patterns or discover hidden relationships in the data.



7. Migration, asylum and border control management:

(a) AI systems intended to be used by competent public authorities as polygraphs and similar tools or to detect the emotional state of a natural person;

(b) AI systems intended to be used by competent public authorities to assess a risk, including a security risk, a risk of irregular immigration, or a health risk, posed by a natural person who intends to enter or has entered into the territory of a Member State;

(c) AI systems intended to be used by competent public authorities for the verification of the authenticity of travel documents and supporting documentation of natural persons and detect non-authentic documents by checking their security features;

(d) AI systems intended to assist competent public authorities for the examination of applications for asylum, visa and residence permits and associated complaints with regard to the eligibility of the natural persons applying for a status.

8. Administration of justice and democratic processes:

(a) AI systems intended to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.
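Annex III is, in effect, an enumerated classification scheme, so a governance inventory tool can treat the eight areas as a closed vocabulary and tag candidate systems against them. The following minimal Python sketch illustrates that idea only; the enum labels, the keyword map and the tag_areas helper are this example's own inventions, not terms defined by the Regulation, and real tooling would need to encode every sub-point (a), (b), and so on.

```python
from enum import Enum

class AnnexIIIArea(Enum):
    """The eight high-risk areas listed in Annex III (paraphrased labels)."""
    BIOMETRIC_IDENTIFICATION = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION_TRAINING = 3
    EMPLOYMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_BORDER_CONTROL = 7
    JUSTICE_DEMOCRACY = 8

# Hypothetical keyword map covering only a few sub-points, for illustration.
KEYWORDS = {
    AnnexIIIArea.BIOMETRIC_IDENTIFICATION: {"remote biometric identification"},
    AnnexIIIArea.EMPLOYMENT: {"recruitment", "screening", "promotion", "termination"},
    AnnexIIIArea.ESSENTIAL_SERVICES: {"creditworthiness", "credit score", "public assistance"},
}

def tag_areas(intended_purpose: str) -> list[AnnexIIIArea]:
    """Return every Annex III area whose keywords occur in the stated purpose."""
    purpose = intended_purpose.lower()
    return [area for area, words in KEYWORDS.items()
            if any(word in purpose for word in words)]

print(tag_areas("AI system to establish a credit score for loan applicants"))
# -> [<AnnexIIIArea.ESSENTIAL_SERVICES: 5>]
```

A keyword match of this kind can only flag candidates for human review; the legal classification itself turns on the full wording of each sub-point.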
ANNEX IV

TECHNICAL DOCUMENTATION referred to in Article 11(1)

The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to the relevant AI system:

1. A general description of the AI system including:

(a) its intended purpose, the person/s developing the system, the date and the version of the system;

(b) how the AI system interacts or can be used to interact with hardware or software that is not part of the AI system itself, where applicable;

(c) the versions of relevant software or firmware and any requirement related to version update;

(d) the description of all forms in which the AI system is placed on the market or put into service;

(e) the description of hardware on which the AI system is intended to run;

(f) where the AI system is a component of products, photographs or illustrations showing external features, marking and internal layout of those products;

(g) instructions of use for the user and, where applicable, installation instructions;

2. A detailed description of the elements of the AI system and of the process for its development, including:

(a) the methods and steps performed for the development of the AI system, including, where relevant, recourse to pre-trained systems or tools provided by third parties and how these have been used, integrated or modified by the provider;

(b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key design choices including the rationale and assumptions made, also with regard to persons or groups of persons on which the system is intended to be used; the main classification choices; what the system is designed to optimise for and the relevance of the different parameters; the decisions about any possible trade-off made regarding the technical solutions adopted to comply with the requirements set out in Title III, Chapter 2;

(c) the description of the system architecture explaining how software components build on or feed into each other and integrate into the overall processing; the computational resources used to develop, train, test and validate the AI system;

(d) where relevant, the data requirements in terms of datasheets describing the training methodologies and techniques and the training data sets used, including information about the provenance of those data sets, their scope and main characteristics;



how the data was obtained and selected; labelling procedures (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);

(e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of the technical measures needed to facilitate the interpretation of the outputs of AI systems by the users, in accordance with Articles 13(3)(d);

(f) where applicable, a detailed description of pre-determined changes to the AI system and its performance, together with all the relevant information related to the technical solutions adopted to ensure continuous compliance of the AI system with the relevant requirements set out in Title III, Chapter 2;

(g) the validation and testing procedures used, including information about the validation and testing data used and their main characteristics; metrics used to measure accuracy, robustness, cybersecurity and compliance with other relevant requirements set out in Title III, Chapter 2 as well as potentially discriminatory impacts; test logs and all test reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred to under point (f).

3. Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of persons on which the system is intended to be used and the overall expected level of accuracy in relation to its intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the outputs of AI systems by the users; specifications on input data, as appropriate;

4. A detailed description of the risk management system in accordance with Article 9;

5. A description of any change made to the system through its lifecycle;

6. A list of the harmonised standards applied in full or in part, the references of which have been published in the Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description of the solutions adopted to meet the requirements set out in Title III, Chapter 2, including a list of other relevant standards and technical specifications applied;

7. A copy of the EU declaration of conformity;

8. A detailed description of the system in place to evaluate the AI system performance in the post-market phase in accordance with Article 61, including the post-market monitoring plan referred to in Article 61(3).
ANNEX V

EU DECLARATION OF CONFORMITY

The EU declaration of conformity referred to in Article 48 shall contain all of the following information:

1. AI system name and type and any additional unambiguous reference allowing identification and traceability of the AI system;

2. Name and address of the provider or, where applicable, their authorised representative;

3. A statement that the EU declaration of conformity is issued under the sole responsibility of the provider;

4. A statement that the AI system in question is in conformity with this Regulation and, if applicable, with any other relevant Union legislation that provides for the issuing of an EU declaration of conformity;

5. References to any relevant harmonised standards used or any other common specification in relation to which conformity is declared;

6. Where applicable, the name and identification number of the notified body, a description of the conformity assessment procedure performed and



identification of the certificate issued;

7. Place and date of issue of the declaration, name and function of the person who signed it as well as an indication for, and on behalf of whom, that person signed, signature.
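The seven content requirements of Annex V map naturally onto a flat, machine-readable record. Purely as an illustration, with invented keys and sample values (none of which are mandated by the Regulation), such a record might look like this:

```python
# Hypothetical machine-readable EU declaration of conformity (Annex V, items 1-7).
eu_declaration = {
    "system_name_and_type": "ExampleCorp CV Screener v2.1",          # item 1
    "provider": "ExampleCorp GmbH, Example Str. 1, Berlin",           # item 2
    "issued_under_sole_responsibility_of_provider": True,             # item 3
    "conformity_statement": "In conformity with this Regulation",     # item 4
    "harmonised_standards_or_common_specifications": [],              # item 5
    "notified_body": None,                                            # item 6, where applicable
    "place_date_name_function_signature": (
        "Berlin", "2023-09-01", "J. Doe", "CEO", "signed"),           # item 7
}

# A trivial completeness check: every item except the conditional item 6
# must carry a value before the declaration can be issued.
required = [key for key in eu_declaration if key != "notified_body"]
assert all(eu_declaration[key] not in (None, "") for key in required)
```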
ANNEX VI

CONFORMITY ASSESSMENT PROCEDURE BASED ON INTERNAL CONTROL

1. The conformity assessment procedure based on internal control is the conformity assessment procedure based on points 2 to 4.

2. The provider verifies that the established quality management system is in compliance with the requirements of Article 17.

3. The provider examines the information contained in the technical documentation in order to assess the compliance of the AI system with the relevant essential requirements set out in Title III, Chapter 2.

4. The provider also verifies that the design and development process of the AI system and its post-market monitoring as referred to in Article 61 is consistent with the technical documentation.

ANNEX VII

CONFORMITY BASED ON ASSESSMENT OF QUALITY MANAGEMENT SYSTEM AND ASSESSMENT OF TECHNICAL DOCUMENTATION

1. Introduction

Conformity based on assessment of quality management system and assessment of the technical documentation is the conformity assessment procedure based on points 2 to 5.

2. Overview

The approved quality management system for the design, development and testing of AI systems pursuant to Article 17 shall be examined in accordance with point 3 and shall be subject to surveillance as specified in point 5. The technical documentation of the AI system shall be examined in accordance with point 4.

3. Quality management system

3.1. The application of the provider shall include:

(a) the name and address of the provider and, if the application is lodged by the authorised representative, their name and address as well;

(b) the list of AI systems covered under the same quality management system;

(c) the technical documentation for each AI system covered under the same quality management system;

(d) the documentation concerning the quality management system which shall cover all the aspects listed under Article 17;

(e) a description of the procedures in place to ensure that the quality management system remains adequate and effective;

(f) a written declaration that the same application has not been lodged with any other notified body.

3.2. The quality management system shall be assessed by the notified body, which shall determine whether it satisfies the requirements referred to in Article 17.

The decision shall be notified to the provider or its authorised representative.

The notification shall contain the conclusions of the assessment of the quality management system and the reasoned assessment decision.

3.3. The quality management system as approved shall continue to be implemented and maintained by the provider so that it remains adequate and efficient.

3.4. Any intended change to the approved quality management system or the list of AI systems covered by the latter shall be brought to the attention



of the notified body by the provider.

The proposed changes shall be examined by the notified body, which shall decide whether the modified quality management system continues to satisfy the requirements referred to in point 3.2 or whether a reassessment is necessary.

The notified body shall notify the provider of its decision. The notification shall contain the conclusions of the examination of the changes and the reasoned assessment decision.

4. Control of the technical documentation.

4.1. In addition to the application referred to in point 3, an application with a notified body of their choice shall be lodged by the provider for the assessment of the technical documentation relating to the AI system which the provider intends to place on the market or put into service and which is covered by the quality management system referred to under point 3.

4.2. The application shall include:

(a) the name and address of the provider;

(b) a written declaration that the same application has not been lodged with any other notified body;

(c) the technical documentation referred to in Annex IV.

4.3. The technical documentation shall be examined by the notified body. To this purpose, the notified body shall be granted full access to the training and testing datasets used by the provider, including through application programming interfaces (API) or other appropriate means and tools enabling remote access.

4.4. In examining the technical documentation, the notified body may require that the provider supplies further evidence or carries out further tests so as to enable a proper assessment of conformity of the AI system with the requirements set out in Title III, Chapter 2. Whenever the notified body is not satisfied with the tests carried out by the provider, the notified body shall directly carry out adequate tests, as appropriate.

4.5. Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Title III, Chapter 2 and upon a reasoned request, the notified body shall also be granted access to the source code of the AI system.

4.6. The decision shall be notified to the provider or its authorised representative. The notification shall contain the conclusions of the assessment of the technical documentation and the reasoned assessment decision.

Where the AI system is in conformity with the requirements set out in Title III, Chapter 2, an EU technical documentation assessment certificate shall be issued by the notified body. The certificate shall indicate the name and address of the provider, the conclusions of the examination, the conditions (if any) for its validity and the data necessary for the identification of the AI system.

The certificate and its annexes shall contain all relevant information to allow the conformity of the AI system to be evaluated, and to allow for control of the AI system while in use, where applicable.

Where the AI system is not in conformity with the requirements set out in Title III, Chapter 2, the notified body shall refuse to issue an EU technical documentation assessment certificate and shall inform the applicant accordingly, giving detailed reasons for its refusal.

Where the AI system does not meet the requirement relating to the data used to train it, re-training of the AI system will be needed prior to the application for a new conformity assessment. In this case, the reasoned assessment decision of the notified body refusing to issue the EU technical documentation assessment certificate shall contain specific considerations on the quality data used to train the AI system, notably on the reasons for non-compliance.

4.7. Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended purpose shall be approved by the notified body which issued the EU technical documentation



assessment certificate. The provider shall inform such notified body of its intention to introduce any of the above-mentioned changes or if it becomes otherwise aware of the occurrence of such changes. The intended changes shall be assessed by the notified body which shall decide whether those changes require a new conformity assessment in accordance with Article 43(4) or whether they could be addressed by means of a supplement to the EU technical documentation assessment certificate. In the latter case, the notified body shall assess the changes, notify the provider of its decision and, where the changes are approved, issue to the provider a supplement to the EU technical documentation assessment certificate.

5. Surveillance of the approved quality management system.

5.1. The purpose of the surveillance carried out by the notified body referred to in Point 3 is to make sure that the provider duly fulfils the terms and conditions of the approved quality management system.

5.2. For assessment purposes, the provider shall allow the notified body to access the premises where the design, development, testing of the AI systems is taking place. The provider shall further share with the notified body all necessary information.

5.3. The notified body shall carry out periodic audits to make sure that the provider maintains and applies the quality management system and shall provide the provider with an audit report. In the context of those audits, the notified body may carry out additional tests of the AI systems for which an EU technical documentation assessment certificate was issued.

ANNEX VIII

INFORMATION TO BE SUBMITTED UPON THE REGISTRATION OF HIGH-RISK AI SYSTEMS IN ACCORDANCE WITH ARTICLE 51

The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be registered in accordance with Article 51.

1. Name, address and contact details of the provider;

2. Where submission of information is carried out by another person on behalf of the provider, the name, address and contact details of that person;

3. Name, address and contact details of the authorised representative, where applicable;

4. AI system trade name and any additional unambiguous reference allowing identification and traceability of the AI system;

5. Description of the intended purpose of the AI system;

6. Status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);

7. Type, number and expiry date of the certificate issued by the notified body and the name or identification number of that notified body, when applicable;

8. A scanned copy of the certificate referred to in point 7, when applicable;

9. Member States in which the AI system is or has been placed on the market, put into service or made available in the Union;

10. A copy of the EU declaration of conformity referred to in Article 48;

11. Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas of law enforcement and migration, asylum and border control management referred to in Annex III, points 1, 6 and 7.

12. URL for additional information (optional).
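Annex VIII effectively specifies the schema for the registration entry kept under Article 51. A hypothetical sketch of such a record follows; the key names and sample values are this illustration's own and carry no status under the Regulation.

```python
# Illustrative registration record mirroring Annex VIII, items 1-12.
registration_entry = {
    "provider_contact": "ExampleCorp GmbH, Berlin, info@example.com",  # 1
    "submitter_contact": None,             # 2: where another person submits
    "authorised_representative": None,     # 3: where applicable
    "trade_name_and_reference": "ExampleCorp CV Screener v2.1",        # 4
    "intended_purpose": "Screening and ranking of job applications",   # 5
    "status": "on the market",             # 6: or in service / withdrawn / recalled
    "certificate": None,                   # 7: type, number, expiry, notified body
    "certificate_scan": None,              # 8: when applicable
    "member_states": ["DE", "FR"],         # 9: where placed on the market
    "declaration_of_conformity": "docs/eu_declaration.pdf",            # 10
    "instructions_for_use": "docs/ifu.pdf",  # 11: omitted for Annex III points 1, 6, 7
    "url_for_additional_information": None,  # 12: optional
}
```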
ANNEX IX

UNION LEGISLATION ON LARGE-SCALE IT SYSTEMS IN THE AREA OF FREEDOM, SECURITY AND JUSTICE

1. Schengen Information System



(a) Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28 November 2018 on the use of the Schengen Information System for the return of illegally staying third-country nationals (OJ L 312, 7.12.2018, p. 1).

(b) Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of border checks, and amending the Convention implementing the Schengen Agreement, and amending and repealing Regulation (EC) No 1987/2006 (OJ L 312, 7.12.2018, p. 14).

(c) Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the establishment, operation and use of the Schengen Information System (SIS) in the field of police cooperation and judicial cooperation in criminal matters, amending and repealing Council Decision 2007/533/JHA, and repealing Regulation (EC) No 1986/2006 of the European Parliament and of the Council and Commission Decision 2010/261/EU (OJ L 312, 7.12.2018, p. 56).

2. Visa Information System

(a) Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL amending Regulation (EC) No 767/2008, Regulation (EC) No 810/2009, Regulation (EU) 2017/2226, Regulation (EU) 2016/399, Regulation XX/2018 [Interoperability Regulation], and Decision 2004/512/EC and repealing Council Decision 2008/633/JHA - COM(2018) 302 final. To be updated once the Regulation is adopted (April/May 2021) by the co-legislators.

3. Eurodac

(a) Amended proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the establishment of 'Eurodac' for the comparison of biometric data for the effective application of Regulation (EU) XXX/XXX [Regulation on Asylum and Migration Management] and of Regulation (EU) XXX/XXX [Resettlement Regulation], for identifying an illegally staying third-country national or stateless person and on requests for the comparison with Eurodac data by Member States' law enforcement authorities and Europol for law enforcement purposes and amending Regulations (EU) 2018/1240 and (EU) 2019/818 – COM(2020) 614 final.

4. Entry/Exit System

(a) Regulation (EU) 2017/2226 of the European Parliament and of the Council of 30 November 2017 establishing an Entry/Exit System (EES) to register entry and exit data and refusal of entry data of third-country nationals crossing the external borders of the Member States and determining the conditions for access to the EES for law enforcement purposes, and amending the Convention implementing the Schengen Agreement and Regulations (EC) No 767/2008 and (EU) No 1077/2011 (OJ L 327, 9.12.2017, p. 20).

5. European Travel Information and Authorisation System

(a) Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, 19.9.2018, p. 1).

(b) Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12 September 2018 amending Regulation (EU) 2016/794 for the purpose of establishing a European Travel Information and Authorisation System (ETIAS) (OJ L 236, 19.9.2018, p. 72).

6. European Criminal Records Information System on third-country nationals and stateless persons

(a) Regulation (EU) 2019/816 of the European Parliament and of the Council of 17 April 2019 establishing a centralised system for the identification of Member States holding conviction information on third-country nationals and stateless persons (ECRIS-TCN) to supplement the European Criminal Records



Information System and amending Regulation (EU) 2018/1726 (OJ L 135, 22.5.2019, p. 1).

7. Interoperability

(a) Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa (OJ L 135, 22.5.2019, p. 27).

(b) Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration (OJ L 135, 22.5.2019, p. 85).


Recommendation of the Council on Artificial Intelligence

Series: OECD Legal Instruments

© OECD 2022

This document is published under the responsibility of the Secretary-General of the OECD. It reproduces an OECD Legal Instrument and may contain additional material. The opinions expressed and arguments employed in the additional material do not necessarily reflect the official views of OECD Member countries.

This document, as well as any data and any map included herein, are without prejudice to the status of or sovereignty over any territory, to the delimitation of international frontiers and boundaries and to the name of any territory, city or area.

For access to the official and up-to-date texts of OECD Legal Instruments, as well as other related information, please consult the Compendium of OECD Legal Instruments at http://legalinstruments.oecd.org.

Photo credit: © kras99/Shutterstock.com

This document is provided free of charge. It may be reproduced and distributed free of charge without requiring any further permissions, as long as it is not altered in any way. It may not be sold.

This document is available in the two OECD official languages (English and French). It may be translated into other languages, as long as the translation is labelled "unofficial translation" and includes the following disclaimer: "This translation has been prepared by [NAME OF TRANSLATION AUTHOR] for informational purpose only and its accuracy cannot be guaranteed by the OECD. The only official versions are the English and French texts available on the OECD website http://legalinstruments.oecd.org"
Background Information

The Recommendation on Artificial Intelligence (AI) – the first intergovernmental standard on AI – was adopted by the OECD Council at Ministerial level on 22 May 2019 on the proposal of the Committee on Digital Economy Policy (CDEP). The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. Complementing existing OECD standards in areas such as privacy, digital security risk management, and responsible business conduct, the Recommendation focuses on AI-specific issues and sets a standard that is implementable and sufficiently flexible to stand the test of time in this rapidly evolving field. In June 2019, at the Osaka Summit, G20 Leaders welcomed G20 AI Principles, drawn from the OECD Recommendation.

The Recommendation identifies five complementary values-based principles for the responsible stewardship of trustworthy AI and calls on AI actors to promote and implement them:

- inclusive growth, sustainable development and well-being;
- human-centred values and fairness;
- transparency and explainability;
- robustness, security and safety;
- and accountability.

In addition to and consistent with these value-based principles, the Recommendation also provides five recommendations to policy-makers pertaining to national policies and international co-operation for trustworthy AI, namely:

- investing in AI research and development;
- fostering a digital ecosystem for AI;
- shaping an enabling policy environment for AI;
- building human capacity and preparing for labour market transformation;
- and international co-operation for trustworthy AI.

The Recommendation also includes a provision for the development of metrics to measure AI research, development and deployment, and for building an evidence base to assess progress in its implementation.

The OECD's work on Artificial Intelligence and rationale for developing the OECD Recommendation on Artificial Intelligence

Artificial Intelligence (AI) is a general-purpose technology that has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges. It is deployed in many sectors ranging from production, finance and transport to healthcare and security.

Alongside benefits, AI also raises challenges for our societies and economies, notably regarding economic shifts and inequalities, competition, transitions in the labour market, and implications for democracy and human rights.

The OECD has undertaken empirical and policy activities on AI in support of the policy debate over the past two years, starting with a Technology Foresight Forum on AI in 2016 and an international conference on AI: Intelligent Machines, Smart Policies in 2017. The Organisation also conducted analytical and measurement work that provides an overview of the AI technical landscape, maps economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.

This work has demonstrated the need to shape a stable policy environment at the international level to foster trust in and adoption of AI in society. Against this background, the OECD Committee on Digital Economy Policy (CDEP) agreed to develop a draft Council Recommendation to promote a human-centric approach to trustworthy AI, that fosters research,



preserves economic incentives to innovate, and applies to all stakeholders.

Complementing existing OECD standards already relevant to AI – such as those on privacy and data protection, digital security risk management, and responsible business conduct – the Recommendation focuses on policy issues that are specific to AI and strives to set a standard that is implementable and flexible enough to stand the test of time in a rapidly evolving field. The Recommendation contains five high-level values-based principles and five recommendations for national policies and international co-operation. It also proposes a common understanding of key terms, such as “AI system” and “AI actors”, for the purposes of the Recommendation.

More specifically, the Recommendation includes two substantive sections:

1. Principles for responsible stewardship of trustworthy AI: the first section sets out five complementary principles relevant to all stakeholders: i) inclusive growth, sustainable development and well-being; ii) human-centred values and fairness; iii) transparency and explainability; iv) robustness, security and safety; and v) accountability. This section further calls on AI actors to promote and implement these principles according to their roles.

2. National policies and international co-operation for trustworthy AI: consistent with the five aforementioned principles, this section provides five recommendations to Members and non-Members having adhered to the draft Recommendation (hereafter the “Adherents”) to implement in their national policies and international co-operation: i) investing in AI research and development; ii) fostering a digital ecosystem for AI; iii) shaping an enabling policy environment for AI; iv) building human capacity and preparing for labour market transformation; and v) international co-operation for trustworthy AI.

An inclusive and participatory process for developing the Recommendation

The development of the Recommendation was participatory in nature, incorporating input from a broad range of sources throughout the process. In May 2018, the CDEP agreed to form an expert group to scope principles to foster trust in and adoption of AI, with a view to developing a draft Council Recommendation in the course of 2019. The AI Group of experts at the OECD (AIGO) was subsequently established, comprising over 50 experts from different disciplines and different sectors (government, industry, civil society, trade unions, the technical community and academia) - see http://www.oecd.org/going-digital/ai/oecd-aigo-membership-list.pdf for the full list. Between September 2018 and February 2019 the group held four meetings: in Paris, France, in September and November 2018, in Cambridge, MA, United States, at the Massachusetts Institute of Technology (MIT) in January 2019, back to back with the MIT AI Policy Congress, and finally in Dubai, United Arab Emirates, at the World Government Summit in February 2019. The work benefited from the diligence, engagement and substantive contributions of the experts participating in AIGO, as well as from their multi-stakeholder and multidisciplinary backgrounds.

Drawing on the final output document of the AIGO, a draft Recommendation was developed in the CDEP and with the consultation of other relevant OECD bodies. The CDEP approved a final draft Recommendation and agreed to transmit it to the OECD Council for adoption in a special meeting on 14-15 March 2019. The OECD Council adopted the Recommendation at its meeting at Ministerial level on 22-23 May 2019.

Follow-up, monitoring of implementation and dissemination tools

The OECD Recommendation on AI provides the first intergovernmental standard for AI policies and a foundation on which to conduct further analysis and develop tools to support governments in their implementation efforts. In this regard, it instructs the CDEP to monitor the implementation of the



Recommendation and report to the Council on its implementation and continued relevance five years after its adoption and regularly thereafter. The CDEP is also instructed to continue its work on AI, building on this Recommendation, and taking into account work in other international fora, such as UNESCO, the European Union, the Council of Europe and the initiative to build an International Panel on AI (see https://pm.gc.ca/eng/news/2018/12/06/mandate-international-panel-artificial-intelligence and https://www.gouvernement.fr/en/france-and-canada-create-new-expert-international-panel-on-artificial-intelligence).

In order to support implementation of the Recommendation, the Council instructed the CDEP to develop practical guidance for implementation, to provide a forum for exchanging information on AI policy and activities, and to foster multi-stakeholder and interdisciplinary dialogue. This will be achieved largely through the OECD AI Policy Observatory, an inclusive hub for public policy on AI that aims to help countries encourage, nurture and monitor the responsible development of trustworthy artificial intelligence systems for the benefit of society. It will combine resources from across the OECD with those of partners from all stakeholder groups to provide multidisciplinary, evidence-based policy analysis on AI. The Observatory is planned to be launched late 2019 and will include a live database of AI strategies, policies and initiatives that countries and other stakeholders can share and update, enabling the comparison of their key elements in an interactive manner. It will also be continuously updated with AI metrics, measurements, policies and good practices that could lead to further updates in the practical guidance for implementation.

The Recommendation is open to non-OECD Member adherence, underscoring the global relevance of OECD AI policy work as well as the Recommendation’s call for international co-operation.

Artificial Intelligence (AI) tools and systems can support countries in their response to the COVID-19 crisis. For example, AI can help policymakers and the medical community understand the COVID-19 virus and accelerate research on treatments by rapidly analysing large volumes of research data. It can also be employed to help detect, diagnose and prevent the spread of the virus. Conversational and interactive AI systems help respond to the health crisis through personalised information, advice and treatment. Finally, AI tools can help monitor the economic crisis and the recovery – for example, via satellite, social networking and other data (e.g. Google’s Community Mobility Reports) – and can help learn from the crisis and build early warning systems for future outbreaks. However, in order to make the most of these innovative solutions, AI systems need to be designed, developed and deployed in a trustworthy manner, consistent with the Recommendation: they should respect human rights and privacy; be transparent, explainable, robust, secure and safe; and actors involved in their development and use should remain accountable.

For more information, see: Using artificial intelligence to help combat COVID-19; and Tracking and tracing COVID: Protecting privacy and data while using apps and biometrics.

For further information please consult: oecd.ai. Contact information: ai@oecd.org.

THE COUNCIL,

HAVING REGARD to Article 5 b) of the Convention on the Organisation for Economic Co-operation and Development of 14 December 1960;

HAVING REGARD to the OECD Guidelines for Multinational Enterprises [OECD/LEGAL/0144]; Recommendation of the Council concerning Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data [OECD/LEGAL/0188]; Recommendation of the Council concerning Guidelines for Cryptography Policy [OECD/LEGAL/0289]; Recommendation of the Council for Enhanced Access and More Effective Use of Public Sector Information [OECD/LEGAL/0362]; Recommendation of



the Council on Digital Security Risk Management for Economic and Social Prosperity [OECD/LEGAL/0415]; Recommendation of the Council on Consumer Protection in E-commerce [OECD/LEGAL/0422]; Declaration on the Digital Economy: Innovation, Growth and Social Prosperity (Cancún Declaration) [OECD/LEGAL/0426]; Declaration on Strengthening SMEs and Entrepreneurship for Productivity and Inclusive Growth [OECD/LEGAL/0439]; as well as the 2016 Ministerial Statement on Building more Resilient and Inclusive Labour Markets, adopted at the OECD Labour and Employment Ministerial Meeting;

HAVING REGARD to the Sustainable Development Goals set out in the 2030 Agenda for Sustainable Development adopted by the United Nations General Assembly (A/RES/70/1) as well as the 1948 Universal Declaration of Human Rights;

HAVING REGARD to the important work being carried out on artificial intelligence (hereafter, “AI”) in other international governmental and non-governmental fora;

RECOGNISING that AI has pervasive, far-reaching and global implications that are transforming societies, economic sectors and the world of work, and are likely to increasingly do so in the future;

RECOGNISING that AI has the potential to improve the welfare and well-being of people, to contribute to positive sustainable global economic activity, to increase innovation and productivity, and to help respond to key global challenges;

RECOGNISING that, at the same time, these transformations may have disparate effects within, and between societies and economies, notably regarding economic shifts, competition, transitions in the labour market, inequalities, and implications for democracy and human rights, privacy and data protection, and digital security;

RECOGNISING that trust is a key enabler of digital transformation; that, although the nature of future AI applications and their implications may be hard to foresee, the trustworthiness of AI systems is a key factor for the diffusion and adoption of AI; and that a well-informed whole-of-society public debate is necessary for capturing the beneficial potential of the technology, while limiting the risks associated with it;

UNDERLINING that certain existing national and international legal, regulatory and policy frameworks already have relevance to AI, including those related to human rights, consumer and personal data protection, intellectual property rights, responsible business conduct, and competition, while noting that the appropriateness of some frameworks may need to be assessed and new approaches developed;

RECOGNISING that given the rapid development and implementation of AI, there is a need for a stable policy environment that promotes a human-centric approach to trustworthy AI, that fosters research, preserves economic incentives to innovate, and that applies to all stakeholders according to their role and the context;

CONSIDERING that embracing the opportunities offered, and addressing the challenges raised, by AI applications, and empowering stakeholders to engage is essential to fostering adoption of trustworthy AI in society, and to turning AI trustworthiness into a competitive parameter in the global marketplace;

On the proposal of the Committee on Digital Economy Policy:

I. AGREES that for the purpose of this Recommendation the following terms should be understood as follows:

AI system: An AI system is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

AI system lifecycle: AI system lifecycle phases involve: i) ‘design, data and models’; which is a context-dependent sequence encompassing planning and design, data collection and processing, as well as model building; ii) ‘verification and validation’; iii)



‘deployment’; and iv) ‘operation and monitoring’. These phases often take place in an iterative manner and are not necessarily sequential. The decision to retire an AI system from operation may occur at any point during the operation and monitoring phase.
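Because the phases are explicitly iterative and not necessarily sequential, a faithful software model of this definition is a set of phases with free movement between them rather than a fixed pipeline. A minimal Python sketch follows; the phase names paraphrase the Recommendation, and the transition and retirement rules are this illustration's own reading of it.

```python
from enum import Enum

class LifecyclePhase(Enum):
    DESIGN_DATA_AND_MODELS = "design, data and models"
    VERIFICATION_AND_VALIDATION = "verification and validation"
    DEPLOYMENT = "deployment"
    OPERATION_AND_MONITORING = "operation and monitoring"

def may_transition(current: LifecyclePhase, nxt: LifecyclePhase) -> bool:
    # Phases are iterative and not necessarily sequential, so any move is allowed.
    return True

def may_retire(current: LifecyclePhase) -> bool:
    # Retirement may occur at any point during operation and monitoring.
    return current is LifecyclePhase.OPERATION_AND_MONITORING
```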
AI knowledge: AI knowledge refers to the skills and resources, such as data, code, algorithms, models, research, know-how, training programmes, governance, processes and best practices, required to understand and participate in the AI system lifecycle.

AI actors: AI actors are those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI.

Stakeholders: Stakeholders encompass all organisations and individuals involved in, or affected by, AI systems, directly or indirectly. AI actors are a subset of stakeholders.

Section 1: Principles for responsible stewardship of trustworthy AI

II. RECOMMENDS that Members and non-Members adhering to this Recommendation (hereafter the “Adherents”) promote and implement the following principles for responsible stewardship of trustworthy AI, which are relevant to all stakeholders.

III. CALLS ON all AI actors to promote and implement, according to their respective roles, the following Principles for responsible stewardship of trustworthy AI.

IV. UNDERLINES that the following principles are complementary and should be considered as a whole.

1.1. Inclusive growth, sustainable development and well-being

Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.

1.2. Human-centred values and fairness

a) AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.

b) To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of art.

1.3. Transparency and explainability

AI Actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of art:

i. to foster a general understanding of AI systems,

ii. to make stakeholders aware of their interactions with AI systems, including in the workplace,

iii. to enable those affected by an AI system to understand the outcome, and,

iv. to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

1.4. Robustness, security and safety

a) AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.

b) To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle,



to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.

c) AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.

1.5. Accountability

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.

Section 2: National policies and international co-operation for trustworthy AI

V. RECOMMENDS that Adherents implement the following recommendations, consistent with the principles in section 1, in their national policies and international co-operation, with special attention to small and medium-sized enterprises (SMEs).

2.1. Investing in AI research and development

a) Governments should consider long-term public investment, and encourage private investment, in research and development, including interdisciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.

b) Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards.

2.2. Fostering a digital ecosystem for AI

Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.

2.3. Shaping an enabling policy environment for AI

a) Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate.

b) Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.

2.4. Building human capacity and preparing for labour market transformation

a) Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.

b) Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, and access to new opportunities in the labour market.

c) Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.



2.5. International co-operation for trustworthy AI

a) Governments, including developing countries and with stakeholders, should actively co-operate to advance these principles and to progress on responsible stewardship of trustworthy AI.

b) Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.

c) Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.

d) Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.

VI. INVITES the Secretary-General and Adherents to disseminate this Recommendation.

VII. INVITES non-Adherents to take due account of, and adhere to, this Recommendation.

VIII. INSTRUCTS the Committee on Digital Economy Policy:

a) to continue its important work on artificial intelligence building on this Recommendation and taking into account work in other international fora, and to further develop the measurement framework for evidence-based AI policies;

b) to develop and iterate further practical guidance on the implementation of this Recommendation, and to report to the Council on progress made no later than end December 2019;

c) to provide a forum for exchanging information on AI policy and activities including experience with the implementation of this Recommendation, and to foster multi-stakeholder and interdisciplinary dialogue to promote trust in and adoption of AI; and

d) to monitor, in consultation with other relevant Committees, the implementation of this Recommendation and report thereon to the Council no later than five years following its adoption and regularly thereafter.

About the OECD

The OECD is a unique forum where governments work together to address the economic, social and environmental challenges of globalisation. The OECD is also at the forefront of efforts to understand and to help governments respond to new developments and concerns, such as corporate governance, the information economy and the challenges of an ageing population. The Organisation provides a setting where governments can compare policy experiences, seek answers to common problems, identify good practice and work to co-ordinate domestic and international policies.

The OECD Member countries are: Australia, Austria, Belgium, Canada, Chile, Colombia, Costa Rica, the Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Israel, Italy, Japan, Korea, Latvia, Lithuania, Luxembourg, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, the Slovak Republic, Slovenia, Spain, Sweden, Switzerland, Türkiye, the United Kingdom and the United States. The European Union takes part in the work of the OECD.

OECD Legal Instruments

Since the creation of the OECD in 1961, around 460 substantive legal instruments have been developed within its framework. These include OECD Acts (i.e. the Decisions and Recommendations adopted by the OECD Council in accordance with the OECD Convention) and other legal instruments developed within the OECD framework (e.g. Declarations, international agreements).

All substantive OECD legal instruments, whether in force or abrogated, are listed in the online



Compendium of OECD Legal Instruments. They are presented in five categories:

- Decisions are adopted by Council and are legally binding on all Members except those which abstain at the time of adoption. They set out specific rights and obligations and may contain monitoring mechanisms.

- Recommendations are adopted by Council and are not legally binding. They represent a political commitment to the principles they contain and entail an expectation that Adherents will do their best to implement them.

- Substantive Outcome Documents are adopted by the individual listed Adherents rather than by an OECD body, as the outcome of a ministerial, high-level or other meeting within the framework of the Organisation. They usually set general principles or long-term goals and have a solemn character.

- International Agreements are negotiated and concluded within the framework of the Organisation. They are legally binding on the Parties.

- Arrangement, Understanding and Others: several other types of substantive legal instruments have been developed within the OECD framework over time, such as the Arrangement on Officially Supported Export Credits, the International Understanding on Maritime Transport Principles and the Development Assistance Committee (DAC) Recommendations.

Artificial Intelligence Risk Management Framework (AI RMF 1.0)
NIST AI 100-1
Artificial Intelligence Risk Management Framework (AI RMF 1.0)

This publication is available free of charge from: https://doi.org/10.6028/NIST.AI.100-1

January 2023

U.S. Department of Commerce
Gina M. Raimondo, Secretary

National Institute of Standards and Technology
Laurie E. Locascio, NIST Director and Under Secretary of Commerce for Standards and Technology

Certain commercial entities, equipment, or materials may be identified in this document in order to describe an experimental procedure or concept adequately. Such identification is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the entities, materials, or equipment are necessarily the best available for the purpose.

Update Schedule and Versions

The Artificial Intelligence Risk Management Framework (AI RMF) is intended to be a living document. NIST will review the content and usefulness of the Framework regularly to determine if an update is appropriate; a review with formal input from the AI community is expected to take place no later than 2028. The Framework will employ a two-number versioning system to track and identify major and minor changes. The first number will represent the generation of the AI RMF and its companion documents (e.g., 1.0) and will change only with major revisions. Minor revisions will be tracked using ".n" after the generation number (e.g., 1.1). All changes will be tracked using a Version Control Table which identifies the history, including version number, date of change, and description of change. NIST plans to update the AI RMF Playbook frequently. Comments on the AI RMF Playbook may be sent via email to AIframework@nist.gov at any time and will be reviewed and integrated on a semi-annual basis.

Executive Summary

Artificial intelligence (AI) technologies have significant potential to transform society and people's lives – from commerce and health to transportation and cybersecurity to the environment and our planet. AI technologies can drive inclusive economic growth and support scientific advancements that improve the conditions of our world. AI technologies, however, also pose risks that can negatively impact individuals, groups, organizations, communities, society, the environment, and the planet. Like risks for other types of technology, AI risks can emerge in a variety of ways and can be characterized as long- or short-term, high- or low-probability, systemic or localized, and high- or low-impact.

The AI RMF refers to an AI system as an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (Adapted from: OECD Recommendation on AI:2019; ISO/IEC 22989:2022).

While there are myriad standards and best practices to help organizations mitigate the risks of traditional software or information-based systems, the risks posed by AI systems are in many ways unique (See Appendix B). AI systems, for example, may be trained on data that can change over time, sometimes significantly and unexpectedly, affecting system functionality and trustworthiness in ways that are hard to understand. AI systems and the contexts in which they are deployed are frequently complex, making it difficult to detect and respond to failures when they occur. AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay of technical aspects combined with societal factors related to how a system is used, its interactions with other AI systems, who operates it, and the social context in which it is deployed.
These risks make AI a uniquely challenging technology to deploy and utilize both for organizations and within society. Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities. With proper controls, AI systems can mitigate and manage inequitable outcomes.

AI risk management is a key component of responsible development and use of AI systems. Responsible AI practices can help align the decisions about AI system design, development, and uses with intended aim and values. Core concepts in responsible AI emphasize human centricity, social responsibility, and sustainability. AI risk management can drive responsible uses and practices by prompting organizations and their internal teams who design, develop, and deploy AI to think more critically about context and potential or unexpected negative and positive impacts. Understanding and managing the risks of AI systems will help to enhance trustworthiness, and in turn, cultivate public trust.

Social responsibility can refer to the organization's responsibility “for the impacts of its decisions and activities on society and the environment through transparent and ethical behavior” (ISO 26000:2010). Sustainability refers to the “state of the global system, including environmental, social, and economic aspects, in which the needs of the present are met without compromising the ability of future generations to meet their own needs” (ISO/IEC TR 24368:2022). Responsible AI is meant to result in technology that is also equitable and accountable. The expectation is that organizational practices are carried out in accord with “professional responsibility,” defined by ISO as an approach that “aims to ensure that professionals who design, develop, or deploy AI systems and applications or AI-based products or systems, recognize their unique position to exert influence on people, society, and the future of AI” (ISO/IEC TR 24368:2022).

As directed by the National Artificial Intelligence Initiative Act of 2020 (P.L. 116-283), the goal of the AI RMF is to offer a resource to the organizations designing, developing, deploying, or using AI systems to help manage the many risks of AI and promote trustworthy and responsible development and use of AI systems. The Framework is intended to be voluntary, rights-preserving, non-sector-specific, and use-case agnostic, providing flexibility to organizations of all sizes and in all sectors and throughout society to implement the approaches in the Framework.

The Framework is designed to equip organizations and individuals – referred to here as AI actors – with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems over time. AI actors are defined by the Organisation for Economic Co-operation and Development (OECD) as “those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI” [OECD (2019) Artificial Intelligence in Society—OECD iLibrary] (See Appendix A).

The AI RMF is intended to be practical, to adapt to the AI landscape as AI technologies continue to develop, and to be operationalized by organizations in varying degrees and capacities so society can benefit from AI while also being protected from its potential harms.

The Framework and supporting resources will be updated, expanded, and improved based on evolving technology, the standards landscape around the world, and AI community experience and feedback. NIST will continue to align the AI RMF and related guidance with applicable international standards, guidelines, and practices. As the AI RMF is put into use, additional lessons will be learned to inform future updates and additional resources.

The Framework is divided into two parts. Part 1 discusses how organizations can frame the risks related to AI and describes the intended audience. Next, AI risks and trustworthiness are analyzed, outlining the characteristics of trustworthy AI systems, which include valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy enhanced, and fair with their harmful biases managed.


Part 2 comprises the “Core” of the Framework. It describes four specific functions to help organizations address the risks of AI systems in practice. These functions – GOVERN, MAP, MEASURE, and MANAGE – are broken down further into categories and subcategories. While GOVERN applies to all stages of organizations' AI risk management processes and procedures, the MAP, MEASURE, and MANAGE functions can be applied in AI system-specific contexts and at specific stages of the AI lifecycle.

Additional resources related to the Framework are included in the AI RMF Playbook, which is available via the NIST AI RMF website: https://www.nist.gov/itl/ai-risk-management-framework.

Development of the AI RMF by NIST in collaboration with the private and public sectors is directed and consistent with its broader AI efforts called for by the National AI Initiative Act of 2020, the National Security Commission on Artificial Intelligence recommendations, and the Plan for Federal Engagement in Developing Technical Standards and Related Tools. Engagement with the AI community during this Framework's development – via responses to a formal Request for Information, three widely attended workshops, public comments on a concept paper and two drafts of the Framework, discussions at multiple public forums, and many small group meetings – has informed development of the AI RMF 1.0 as well as AI research and development and evaluation conducted by NIST and others. Priority research and additional guidance that will enhance this Framework will be captured in an associated AI Risk Management Framework Roadmap to which NIST and the broader community can contribute.

Part 1: Foundational Information

1. Framing Risk

AI risk management offers a path to minimize potential negative impacts of AI systems, such as threats to civil liberties and rights, while also providing opportunities to maximize positive impacts. Addressing, documenting, and managing AI risks and potential negative impacts effectively can lead to more trustworthy AI systems.

1.1 Understanding and Addressing Risks, Impacts, and Harms

In the context of the AI RMF, risk refers to the composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event. The impacts, or consequences, of AI systems can be positive, negative, or both and can result in opportunities or threats (Adapted from: ISO 31000:2018). When considering the negative impact of a potential event, risk is a function of 1) the negative impact, or magnitude of harm, that would arise if the circumstance or event occurs and 2) the likelihood of occurrence (Adapted from: OMB Circular A-130:2016). Negative impact or harm can be experienced by individuals, groups, communities, organizations, society, the environment, and the planet.

“Risk management refers to coordinated activities to direct and control an organization with regard to risk” (Source: ISO 31000:2018).

While risk management processes generally address negative impacts, this Framework offers approaches to minimize anticipated negative impacts of AI systems and identify opportunities to maximize positive impacts. Effectively managing the risk of potential harms could lead to more trustworthy AI systems and unleash potential benefits to people (individuals, communities, and society), organizations, and systems/ecosystems. Risk management can enable AI developers and users to understand impacts and account for the inherent limitations and uncertainties in their models and systems, which in turn can improve overall system performance and trustworthiness and the likelihood that AI technologies will be used in ways that are beneficial.
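As an editorial illustration (not part of the Framework text), the composite definition above can be made concrete with a small worked example. The 1-5 ordinal scales and the multiplicative scoring rule in this Python sketch are assumptions introduced here, not AI RMF guidance:

    # Illustrative only: composite risk as a function of the magnitude of harm
    # and the likelihood of occurrence, each rated on an assumed 1-5 scale.
    def risk_score(harm_magnitude: int, likelihood: int) -> int:
        if not (1 <= harm_magnitude <= 5 and 1 <= likelihood <= 5):
            raise ValueError("ratings must be on a 1-5 ordinal scale")
        return harm_magnitude * likelihood

    # A high-impact (4) but unlikely (2) failure scores 8 of a possible 25.
    print(risk_score(harm_magnitude=4, likelihood=2))

Any monotonic combination of the two factors would serve the same purpose; the point is that both impact and likelihood enter the measure.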


The AI RMF is designed to address new risks as they emerge. This flexibility is particularly important where impacts are not easily foreseeable and applications are evolving. While some AI risks and benefits are well-known, it can be challenging to assess negative impacts and the degree of harms. Figure 1 provides examples of potential harms that can be related to AI systems.

[Figure 1 summarizes three categories of potential harm. Harm to people – Individual: harm to a person's civil liberties, rights, physical or psychological safety, or economic opportunity; Group/Community: harm to a group, such as discrimination against a population sub-group; Societal: harm to democratic participation or educational access. Harm to an organization – harm to an organization's business operations; harm to an organization from security breaches or monetary loss; harm to an organization's reputation. Harm to an ecosystem – harm to interconnected and interdependent elements and resources; harm to the global financial system, supply chain, or interrelated systems; harm to natural resources, the environment, and planet.]

Fig. 1. Examples of potential harms related to AI systems. Trustworthy AI systems and their responsible use can mitigate negative risks and contribute to benefits for people, organizations, and ecosystems.

AI risk management efforts should consider that humans may assume that AI systems work – and work well – in all settings. For example, whether correct or not, AI systems are often perceived as being more objective than humans or as offering greater capabilities than general software.

1.2 Challenges for AI Risk Management

Several challenges are described below. They should be taken into account when managing risks in pursuit of AI trustworthiness.

1.2.1 Risk Measurement

AI risks or failures that are not well-defined or adequately understood are difficult to measure quantitatively or qualitatively. The inability to appropriately measure AI risks does not imply that an AI system necessarily poses either a high or low risk. Some risk measurement challenges include:

Risks related to third-party software, hardware, and data: Third-party data or systems can accelerate research and development and facilitate technology transition. They also may complicate risk measurement. Risk can emerge both from third-party data, software or hardware itself and how it is used. Risk metrics or methodologies used by the organization developing the AI system may not align with the risk metrics or methodologies used by the organization deploying or operating the system. Also, the organization developing the AI system may not be transparent about the risk metrics or methodologies it used. Risk measurement and management can be complicated by how customers use or integrate third-party data or systems into AI products or services, particularly without sufficient internal governance structures and technical safeguards. Regardless, all parties and AI actors should manage risk in the AI systems they develop, deploy, or use as standalone or integrated components.

Tracking emergent risks: Organizations' risk management efforts will be enhanced by identifying and tracking emergent risks and considering techniques for measuring them. AI system impact assessment approaches can help AI actors understand potential impacts or harms within specific contexts.


Availability of reliable metrics: The current lack of consensus on robust and verifiable measurement methods for risk and trustworthiness, and applicability to different AI use cases, is an AI risk measurement challenge. Potential pitfalls when seeking to measure negative risk or harms include the reality that development of metrics is often an institutional endeavor and may inadvertently reflect factors unrelated to the underlying impact. In addition, measurement approaches can be oversimplified, gamed, lack critical nuance, become relied upon in unexpected ways, or fail to account for differences in affected groups and contexts.

Approaches for measuring impacts on a population work best if they recognize that contexts matter, that harms may affect varied groups or sub-groups differently, and that communities or other sub-groups who may be harmed are not always direct users of a system.

Risk at different stages of the AI lifecycle: Measuring risk at an earlier stage in the AI lifecycle may yield different results than measuring risk at a later stage; some risks may be latent at a given point in time and may increase as AI systems adapt and evolve. Furthermore, different AI actors across the AI lifecycle can have different risk perspectives. For example, an AI developer who makes AI software available, such as pre-trained models, can have a different risk perspective than an AI actor who is responsible for deploying that pre-trained model in a specific use case. Such deployers may not recognize that their particular uses could entail risks which differ from those perceived by the initial developer. All involved AI actors share responsibilities for designing, developing, and deploying a trustworthy AI system that is fit for purpose.

Risk in real-world settings: While measuring AI risks in a laboratory or a controlled environment may yield important insights pre-deployment, these measurements may differ from risks that emerge in operational, real-world settings.

Inscrutability: Inscrutable AI systems can complicate risk measurement. Inscrutability can be a result of the opaque nature of AI systems (limited explainability or interpretability), lack of transparency or documentation in AI system development or deployment, or inherent uncertainties in AI systems.

Human baseline: Risk management of AI systems that are intended to augment or replace human activity, for example decision making, requires some form of baseline metrics for comparison. This is difficult to systematize since AI systems carry out different tasks – and perform tasks differently – than humans.

1.2.2 Risk Tolerance

While the AI RMF can be used to prioritize risk, it does not prescribe risk tolerance. Risk tolerance refers to the organization's or AI actor's (see Appendix A) readiness to bear the risk in order to achieve its objectives. Risk tolerance can be influenced by legal or regulatory requirements (Adapted from: ISO GUIDE 73). Risk tolerance and the level of risk that is acceptable to organizations or society are highly contextual and application and use-case specific. Risk tolerances can be influenced by policies and norms established by AI system owners, organizations, industries, communities, or policy makers. Risk tolerances are likely to change over time as AI systems, policies, and norms evolve. Different organizations may have varied risk tolerances due to their particular organizational priorities and resource considerations.

Emerging knowledge and methods to better inform harm/cost-benefit tradeoffs will continue to be developed and debated by businesses, governments, academia, and civil society. To the extent that challenges for specifying AI risk tolerances remain unresolved, there may be contexts where a risk management framework is not yet readily applicable for mitigating negative AI risks.

The Framework is intended to be flexible and to augment existing risk practices which should align with applicable laws, regulations, and norms. Organizations should follow existing regulations and guidelines for risk criteria, tolerance, and response established by organizational, domain, discipline, sector, or professional requirements.


Some sectors or industries may have established definitions of harm or established documentation, reporting, and disclosure requirements. Within sectors, risk management may depend on existing guidelines for specific applications and use case settings. Where established guidelines do not exist, organizations should define reasonable risk tolerance. Once tolerance is defined, this AI RMF can be used to manage risks and to document risk management processes.

1.2.3 Risk Prioritization

Attempting to eliminate negative risk entirely can be counterproductive in practice because not all incidents and failures can be eliminated. Unrealistic expectations about risk may lead organizations to allocate resources in a manner that makes risk triage inefficient or impractical or wastes scarce resources. A risk management culture can help organizations recognize that not all AI risks are the same, and resources can be allocated purposefully. Actionable risk management efforts lay out clear guidelines for assessing trustworthiness of each AI system an organization develops or deploys. Policies and resources should be prioritized based on the assessed risk level and potential impact of an AI system. The extent to which an AI system may be customized or tailored to the specific context of use by the AI deployer can be a contributing factor.

When applying the AI RMF, risks which the organization determines to be highest for the AI systems within a given context of use call for the most urgent prioritization and most thorough risk management process. In cases where an AI system presents unacceptable negative risk levels – such as where significant negative impacts are imminent, severe harms are actually occurring, or catastrophic risks are present – development and deployment should cease in a safe manner until risks can be sufficiently managed. If an AI system's development, deployment, and use cases are found to be low-risk in a specific context, that may suggest potentially lower prioritization.

Risk prioritization may differ between AI systems that are designed or deployed to directly interact with humans as compared to AI systems that are not. Higher initial prioritization may be called for in settings where the AI system is trained on large datasets comprised of sensitive or protected data such as personally identifiable information, or where the outputs of the AI systems have direct or indirect impact on humans. AI systems designed to interact only with computational systems and trained on non-sensitive datasets (for example, data collected from the physical environment) may call for lower initial prioritization. Nonetheless, regularly assessing and prioritizing risk based on context remains important because non-human-facing AI systems can have downstream safety or social implications.

Residual risk – defined as risk remaining after risk treatment (Source: ISO GUIDE 73) – directly impacts end users or affected individuals and communities. Documenting residual risks will call for the system provider to fully consider the risks of deploying the AI product and will inform end users about potential negative impacts of interacting with the system.
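A minimal sketch of what documenting risks against a defined tolerance might look like is given below. It is illustrative only: the record fields, the numeric scores, and the triage rule are assumptions introduced here, not Framework requirements. The residual score follows the ISO GUIDE 73 sense of risk remaining after treatment:

    # Illustrative risk-register sketch: compare residual risk (after
    # treatment) to a defined tolerance and record a disposition.
    from dataclasses import dataclass

    @dataclass
    class RiskRecord:
        description: str
        inherent_score: int   # before treatment, e.g., from a 1-25 scoring rule
        residual_score: int   # remaining after risk treatment (ISO GUIDE 73)
        tolerance: int        # maximum score the organization will bear

        def disposition(self) -> str:
            if self.residual_score <= self.tolerance:
                return "accept, document, and monitor"
            return "treat further, or cease development/deployment safely"

    register = [
        RiskRecord("Harmful bias in loan recommendations", 20, 12, 6),
        RiskRecord("Stale sensor data degrades accuracy", 9, 4, 6),
    ]
    for record in register:
        print(f"{record.description}: {record.disposition()}")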


1.2.4 Organizational Integration and Management of Risk

AI risks should not be considered in isolation. Different AI actors have different responsibilities and awareness depending on their roles in the lifecycle. For example, organizations developing an AI system often will not have information about how the system may be used. AI risk management should be integrated and incorporated into broader enterprise risk management strategies and processes. Treating AI risks along with other critical risks, such as cybersecurity and privacy, will yield a more integrated outcome and organizational efficiencies.

The AI RMF may be utilized along with related guidance and frameworks for managing AI system risks or broader enterprise risks. Some risks related to AI systems are common across other types of software development and deployment. Examples of overlapping risks include: privacy concerns related to the use of underlying data to train AI systems; the energy and environmental implications associated with resource-heavy computing demands; security concerns related to the confidentiality, integrity, and availability of the system and its training and output data; and general security of the underlying software and hardware for AI systems.

Organizations need to establish and maintain the appropriate accountability mechanisms, roles and responsibilities, culture, and incentive structures for risk management to be effective. Use of the AI RMF alone will not lead to these changes or provide the appropriate incentives. Effective risk management is realized through organizational commitment at senior levels and may require cultural change within an organization or industry. In addition, small to medium-sized organizations managing AI risks or implementing the AI RMF may face different challenges than large organizations, depending on their capabilities and resources.

2. Audience

Identifying and managing AI risks and potential impacts – both positive and negative – requires a broad set of perspectives and actors across the AI lifecycle. Ideally, AI actors will represent a diversity of experience, expertise, and backgrounds and comprise demographically and disciplinarily diverse teams. The AI RMF is intended to be used by AI actors across the AI lifecycle and dimensions.

The OECD has developed a framework for classifying AI lifecycle activities according to five key socio-technical dimensions, each with properties relevant for AI policy and governance, including risk management [OECD (2022) OECD Framework for the Classification of AI systems — OECD Digital Economy Papers]. Figure 2 shows these dimensions, slightly modified by NIST for purposes of this framework. The NIST modification highlights the importance of test, evaluation, verification, and validation (TEVV) processes throughout an AI lifecycle and generalizes the operational context of an AI system.

AI dimensions displayed in Figure 2 are the Application Context, Data and Input, AI Model, and Task and Output. AI actors involved in these dimensions who perform or manage the design, development, deployment, evaluation, and use of AI systems and drive AI risk management efforts are the primary AI RMF audience. Representative AI actors across the lifecycle dimensions are listed in Figure 3 and described in detail in Appendix A. Within the AI RMF, all AI actors work together to manage risks and achieve the goals of trustworthy and responsible AI. AI actors with TEVV-specific expertise are integrated throughout the AI lifecycle and are especially likely to benefit from the Framework. Performed regularly, TEVV tasks can provide insights relative to technical, societal, legal, and ethical standards or norms, and can assist with anticipating impacts and assessing and tracking emergent risks. As a regular process within an AI lifecycle, TEVV allows for both mid-course remediation and post-hoc risk management.

[Figure 2 shows two inner circles with the AI system's key dimensions – People & Planet at the center, surrounded by Application Context, Data & Input, AI Model, and Task & Output – and an outer circle with the lifecycle stages: Plan & Design, Collect & Process Data, Build & Use Model, Verify & Validate, Deploy & Use, and Operate & Monitor.]

Fig. 2. Lifecycle and Key Dimensions of an AI System. Modified from OECD (2022) OECD Framework for the Classification of AI systems — OECD Digital Economy Papers. The two inner circles show AI systems' key dimensions and the outer circle shows AI lifecycle stages. Ideally, risk management efforts start with the Plan and Design function in the application context and are performed throughout the AI system lifecycle. See Figure 3 for representative AI actors.
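For readers who track lifecycle-stage responsibilities programmatically, the classification in Figure 2 can be encoded as a simple mapping, as sketched below. The dictionary layout is an editorial assumption (the "Use or Impacted By" entry comes from Figure 3's People & Planet column); only the stage and dimension names come from the figures:

    # Illustrative only: lifecycle stage -> key dimension, per Figures 2 and 3,
    # e.g., to drive stage-specific TEVV checklists.
    AI_LIFECYCLE = {
        "Plan & Design": "Application Context",
        "Collect & Process Data": "Data & Input",
        "Build & Use Model": "AI Model",
        "Verify & Validate": "AI Model",
        "Deploy & Use": "Task & Output",
        "Operate & Monitor": "Application Context",
        "Use or Impacted By": "People & Planet",
    }

    for stage, dimension in AI_LIFECYCLE.items():
        print(f"{stage} -> {dimension}")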


The People & Planet dimension at the center of Figure 2 represents human rights and the broader well-being of society and the planet. The AI actors in this dimension comprise a separate AI RMF audience who informs the primary audience. These AI actors may include trade associations, standards developing organizations, researchers, advocacy groups, environmental groups, civil society organizations, end users, and potentially impacted individuals and communities. These actors can:

- assist in providing context and understanding potential and actual impacts;

- be a source of formal or quasi-formal norms and guidance for AI risk management;

- designate boundaries for AI operation (technical, societal, legal, and ethical); and

- promote discussion of the tradeoffs needed to balance societal values and priorities related to civil liberties and rights, equity, the environment and the planet, and the economy.

Successful risk management depends upon a sense of collective responsibility among AI actors shown in Figure 3. The AI RMF functions, described in Section 5, require diverse perspectives, disciplines, professions, and experiences. Diverse teams contribute to more open sharing of ideas and assumptions about the purposes and functions of technology – making these implicit aspects more explicit. This broader collective perspective creates opportunities for surfacing problems and identifying existing and emergent risks.

[Figure 3 tabulates, for each lifecycle stage and dimension, the TEVV tasks, activities, and representative AI actors:

Plan & Design (Application Context). TEVV includes audit & impact assessment. Activities: articulate and document the system's concept and objectives, underlying assumptions, and context in light of legal and regulatory requirements and ethical considerations. Representative actors: system operators; end users; domain experts; AI designers; impact assessors; TEVV experts; product managers; compliance experts; auditors; governance experts; organizational management; C-suite executives; impacted individuals/communities; evaluators.

Collect & Process Data (Data & Input). TEVV includes internal & external validation. Activities: gather, validate, and clean data and document the metadata and characteristics of the dataset, in light of objectives, legal and ethical considerations. Representative actors: data scientists; data engineers; data providers; domain experts; socio-cultural analysts; human factors experts; TEVV experts.

Build & Use Model and Verify & Validate (AI Model). TEVV includes model testing. Activities: create or select algorithms; train models; verify and validate, calibrate, and interpret model output. Representative actors: modelers; model engineers; data scientists; developers; domain experts; with consultation of socio-cultural analysts familiar with the application context and TEVV experts.

Deploy & Use (Task & Output). TEVV includes integration, compliance testing & validation. Activities: pilot, check compatibility with legacy systems, verify regulatory compliance, manage organizational change, and evaluate user experience. Representative actors: system integrators; developers; systems engineers; software engineers; domain experts; procurement experts; third-party suppliers; C-suite executives; with consultation of human factors experts, socio-cultural analysts, governance experts, TEVV experts.

Operate & Monitor (Application Context). TEVV includes audit & impact assessment. Activities: operate the AI system and continuously assess its recommendations and impacts (both intended and unintended) in light of objectives, legal and regulatory requirements, and ethical considerations. Representative actors: system operators, end users, and practitioners; impact assessors; TEVV experts; system funders; product managers; compliance experts; auditors; governance experts; organizational management; impacted individuals/communities; evaluators.

Use or Impacted By (People & Planet). Activities: use the system/technology; monitor and assess impacts; seek mitigation of impacts; advocate for rights. Representative actors: end users, operators, and practitioners; impacted individuals/communities; general public; policy makers; standards organizations; trade associations; advocacy groups; environmental groups; civil society organizations; researchers.]

Fig. 3. AI actors across AI lifecycle stages. See Appendix A for detailed descriptions of AI actor tasks, including details about testing, evaluation, verification, and validation tasks. Note that AI actors in the AI Model dimension (Figure 2) are separated as a best practice, with those building and using the models separated from those verifying and validating the models.


3. AI Risks and Trustworthiness

For AI systems to be trustworthy, they often need to be responsive to a multiplicity of criteria that are of value to interested parties. Approaches which enhance AI trustworthiness can reduce negative AI risks. This Framework articulates the following characteristics of trustworthy AI and offers guidance for addressing them. Characteristics of trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed. Creating trustworthy AI requires balancing each of these characteristics based on the AI system's context of use. While all characteristics are socio-technical system attributes, accountability and transparency also relate to the processes and activities internal to an AI system and its external setting. Neglecting these characteristics can increase the probability and magnitude of negative consequences.

[Figure 4 depicts the trustworthiness characteristics – Safe; Secure & Resilient; Explainable & Interpretable; Privacy-Enhanced; Fair, with Harmful Bias Managed – resting on Valid & Reliable as the base, with Accountable & Transparent spanning all of them.]

Fig. 4. Characteristics of trustworthy AI systems. Valid & Reliable is a necessary condition of trustworthiness and is shown as the base for other trustworthiness characteristics. Accountable & Transparent is shown as a vertical box because it relates to all other characteristics.

Trustworthiness characteristics (shown in Figure 4) are inextricably tied to social and organizational behavior, the datasets used by AI systems, selection of AI models and algorithms and the decisions made by those who build them, and the interactions with the humans who provide insight from and oversight of such systems. Human judgment should be employed when deciding on the specific metrics related to AI trustworthiness characteristics and the precise threshold values for those metrics.

Addressing AI trustworthiness characteristics individually will not ensure AI system trustworthiness; tradeoffs are usually involved, rarely do all characteristics apply in every setting, and some will be more or less important in any given situation. Ultimately, trustworthiness is a social concept that ranges across a spectrum and is only as strong as its weakest characteristics.

When managing AI risks, organizations can face difficult decisions in balancing these characteristics. For example, in certain scenarios tradeoffs may emerge between optimizing for interpretability and achieving privacy. In other cases, organizations might face a tradeoff between predictive accuracy and interpretability. Or, under certain conditions such as data sparsity, privacy-enhancing techniques can result in a loss in accuracy, affecting decisions about fairness and other values in certain domains. Dealing with tradeoffs requires taking into account the decision-making context. These analyses can highlight the existence and extent of tradeoffs between different measures, but they do not answer questions about how to navigate the tradeoff. Those depend on the values at play in the relevant context and should be resolved in a manner that is both transparent and appropriately justifiable.

There are multiple approaches for enhancing contextual awareness in the AI lifecycle. For example, subject matter experts can assist in the evaluation of TEVV findings and work with product and deployment teams to align TEVV parameters to requirements and deployment conditions. When properly resourced, increasing the breadth and diversity of input from interested parties and relevant AI actors throughout the AI lifecycle can enhance opportunities for informing contextually sensitive evaluations, and for identifying AI system benefits and positive impacts. These practices can increase the likelihood that risks arising in social contexts are managed appropriately.

Understanding and treatment of trustworthiness characteristics depends on an AI actor's particular role within the AI lifecycle. For any given AI system, an AI designer or developer may have a different perception of the characteristics than the deployer.


Trustworthiness characteristics explained in this document influence each other. Highly secure but unfair systems, accurate but opaque and uninterpretable systems, and inaccurate but secure, privacy-enhanced, and transparent systems are all undesirable. A comprehensive approach to risk management calls for balancing tradeoffs among the trustworthiness characteristics. It is the joint responsibility of all AI actors to determine whether AI technology is an appropriate or necessary tool for a given context or purpose, and how to use it responsibly. The decision to commission or deploy an AI system should be based on a contextual assessment of trustworthiness characteristics and the relative risks, impacts, costs, and benefits, and informed by a broad set of interested parties.

3.1 Valid and Reliable

Validation is the “confirmation, through the provision of objective evidence, that the requirements for a specific intended use or application have been fulfilled” (Source: ISO 9000:2015). Deployment of AI systems which are inaccurate, unreliable, or poorly generalized to data and settings beyond their training creates and increases negative AI risks and reduces trustworthiness.

Reliability is defined in the same standard as the “ability of an item to perform as required, without failure, for a given time interval, under given conditions” (Source: ISO/IEC TS 5723:2022). Reliability is a goal for overall correctness of AI system operation under the conditions of expected use and over a given period of time, including the entire lifetime of the system.

Accuracy and robustness contribute to the validity and trustworthiness of AI systems, and can be in tension with one another in AI systems.

Accuracy is defined by ISO/IEC TS 5723:2022 as “closeness of results of observations, computations, or estimates to the true values or the values accepted as being true.” Measures of accuracy should consider computational-centric measures (e.g., false positive and false negative rates), human-AI teaming, and demonstrate external validity (generalizable beyond the training conditions). Accuracy measurements should always be paired with clearly defined and realistic test sets – that are representative of conditions of expected use – and details about test methodology; these should be included in associated documentation. Accuracy measurements may include disaggregation of results for different data segments.

Robustness or generalizability is defined as the “ability of a system to maintain its level of performance under a variety of circumstances” (Source: ISO/IEC TS 5723:2022). Robustness is a goal for appropriate system functionality in a broad set of conditions and circumstances, including uses of AI systems not initially anticipated. Robustness requires not only that the system perform exactly as it does under expected uses, but also that it should perform in ways that minimize potential harms to people if it is operating in an unexpected setting.

Validity and reliability for deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended. Measurement of validity, accuracy, robustness, and reliability contribute to trustworthiness and should take into consideration that certain types of failures can cause greater harm. AI risk management efforts should prioritize the minimization of potential negative impacts, and may need to include human intervention in cases where the AI system cannot detect or correct errors.
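The computational-centric measures named above lend themselves to a short illustration. The following sketch is not from the AI RMF – the example predictions and segment labels are invented – but it shows false positive and false negative rates disaggregated by data segment, as the section recommends:

    # Illustrative only: FPR/FNR computed overall per data segment, assuming
    # binary ground-truth and predicted labels.
    from collections import defaultdict

    def rates(pairs):
        """Return (false positive rate, false negative rate) for (y_true, y_pred) pairs."""
        fp = sum(1 for t, p in pairs if t == 0 and p == 1)
        tn = sum(1 for t, p in pairs if t == 0 and p == 0)
        fn = sum(1 for t, p in pairs if t == 1 and p == 0)
        tp = sum(1 for t, p in pairs if t == 1 and p == 1)
        fpr = fp / (fp + tn) if fp + tn else 0.0
        fnr = fn / (fn + tp) if fn + tp else 0.0
        return fpr, fnr

    # (segment, y_true, y_pred) triples; disaggregation surfaces segment-level gaps
    predictions = [("A", 1, 1), ("A", 0, 1), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1)]
    by_segment = defaultdict(list)
    for segment, y_true, y_pred in predictions:
        by_segment[segment].append((y_true, y_pred))
    for segment, pairs in by_segment.items():
        fpr, fnr = rates(pairs)
        print(f"segment {segment}: FPR={fpr:.2f} FNR={fnr:.2f}")

Reporting such rates per segment, alongside the test-set description and methodology, is one way to meet the documentation expectations described above.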


3.2 Safe

AI systems should “not under defined conditions, lead to a state in which human life, health, property, or the environment is endangered” (Source: ISO/IEC TS 5723:2022). Safe operation of AI systems is improved through:

- responsible design, development, and deployment practices;

- clear information to deployers on responsible use of the system;

- responsible decision-making by deployers and end users; and

- explanations and documentation of risks based on empirical evidence of incidents.

Different types of safety risks may require tailored AI risk management approaches based on context and the severity of potential risks presented. Safety risks that pose a potential risk of serious injury or death call for the most urgent prioritization and most thorough risk management process.

Employing safety considerations during the lifecycle and starting as early as possible with planning and design can prevent failures or conditions that can render a system dangerous. Other practical approaches for AI safety often relate to rigorous simulation and in-domain testing, real-time monitoring, and the ability to shut down, modify, or have human intervention into systems that deviate from intended or expected functionality.

AI safety risk management approaches should take cues from efforts and guidelines for safety in fields such as transportation and healthcare, and align with existing sector- or application-specific guidelines or standards.
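As a rough illustration of the monitoring-and-shutdown approach described above, the sketch below flags escalation to human intervention once a system's outputs leave an expected envelope too often. The deviation check, the threshold, and the response strings are hypothetical placeholders, not AI RMF requirements:

    # Illustrative only: a minimal runtime monitor with an assumed output
    # envelope and an assumed violation threshold before escalation.
    def monitor(outputs, expected_range=(0.0, 1.0), max_violations=3):
        """Flag a shutdown once outputs deviate from the expected envelope too often."""
        violations = 0
        for value in outputs:
            if not (expected_range[0] <= value <= expected_range[1]):
                violations += 1
                if violations >= max_violations:
                    return "shut down / escalate to human intervention"
        return "continue operating"

    print(monitor([0.2, 0.9, 1.7, -0.4, 2.3]))  # three out-of-range values -> shut down

Real systems would of course monitor richer signals than a single scalar, but the pattern – detect deviation from intended functionality, then modify or shut down – mirrors the practical approaches the section lists.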


3.3 Secure and Resilient

AI systems, as well as the ecosystems in which they are deployed, may be said to be resilient if they can withstand unexpected adverse events or unexpected changes in their environment or use – or if they can maintain their functions and structure in the face of internal and external change and degrade safely and gracefully when this is necessary (Adapted from: ISO/IEC TS 5723:2022). Common security concerns relate to adversarial examples, data poisoning, and the exfiltration of models, training data, or other intellectual property through AI system endpoints. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure. Guidelines in the NIST Cybersecurity Framework and Risk Management Framework are among those which are applicable here.

Security and resilience are related but distinct characteristics. While resilience is the ability to return to normal function after an unexpected adverse event, security includes resilience but also encompasses protocols to avoid, protect against, respond to, or recover from attacks. Resilience relates to robustness and goes beyond the provenance of the data to encompass unexpected or adversarial use (or abuse or misuse) of the model or data.

3.4 Accountable and Transparent

Trustworthy AI depends upon accountability. Accountability presupposes transparency. Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system – regardless of whether they are even aware that they are doing so. Meaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle and tailored to the role or knowledge of AI actors or individuals interacting with or using the AI system. By promoting higher levels of understanding, transparency increases confidence in the AI system.

This characteristic's scope spans from design decisions and training data to model training, the structure of the model, its intended use cases, and how and when deployment, post-deployment, or end user decisions were made and by whom. Transparency is often necessary for actionable redress related to AI system outputs that are incorrect or otherwise lead to negative impacts. Transparency should consider human-AI interaction: for example, how a human operator or user is notified when a potential or actual adverse outcome caused by an AI system is detected. A transparent system is not necessarily an accurate, privacy-enhanced, secure, or fair system. However, it is difficult to determine whether an opaque system possesses such characteristics, and to do so over time as complex systems evolve.

The role of AI actors should be considered when seeking accountability for the outcomes of AI systems. The relationship between risk and accountability associated with AI and technological systems more broadly differs across cultural, legal, sectoral, and societal contexts. When consequences are severe, such as when life and liberty are at stake, AI developers and deployers should consider proportionally and proactively adjusting their transparency and accountability practices. Maintaining organizational practices and governing structures for harm reduction, like risk management, can help lead to more accountable systems.

Measures to enhance transparency and accountability should also consider the impact of these efforts on the implementing entity, including the level of necessary resources and the need to safeguard proprietary information.

Maintaining the provenance of training data and supporting attribution of the AI system's decisions to subsets of training data can assist with both transparency and accountability. Training data may also be subject to copyright and should follow applicable intellectual property rights laws.

As transparency tools for AI systems and related documentation continue to evolve, developers of AI systems are encouraged to test different types of transparency tools in cooperation with AI deployers to ensure that AI systems are used as intended.

3.5 Explainable and Interpretable

Explainability refers to a representation of the mechanisms underlying AI systems' operation, whereas interpretability refers to the meaning of AI systems' output in the context of their designed functional purposes. Together, explainability and interpretability assist those operating or overseeing an AI system, as well as users of an AI system, to gain deeper insights into the functionality and trustworthiness of the system, including its outputs. The underlying assumption is that perceptions of negative risk stem from a lack of ability to make sense of, or contextualize, system output appropriately. Explainable and interpretable AI systems offer information that will help end users understand the purposes and potential impact of an AI system.

Risk from lack of explainability may be managed by describing how AI systems function, with descriptions tailored to individual differences such as the user's role, knowledge, and skill level. Explainable systems can be debugged and monitored more easily, and they lend themselves to more thorough documentation, audit, and governance.

Risks to interpretability often can be addressed by communicating a description of why an AI system made a particular prediction or recommendation. (See “Four Principles of Explainable Artificial Intelligence” and “Psychological Foundations of Explainability and Interpretability in Artificial Intelligence” found here.)

Transparency, explainability, and interpretability are distinct characteristics that support each other. Transparency can answer the question of “what happened” in the system. Explainability can answer the question of “how” a decision was made in the system. Interpretability can answer the question of “why” a decision was made by the system and its meaning or context to the user.
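One minimal way to "describe how AI systems function" is to report per-feature contributions for a simple linear scoring model, as sketched below. This is an editorial illustration only – the feature names and weights are invented, and real systems typically need far richer explanation methods – but it shows the shape of an explanation answering "how" a score was produced:

    # Illustrative only: per-feature contributions (weight x value) for one
    # prediction of an assumed linear scoring model.
    weights = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}
    applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.4}

    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())

    print(f"score = {score:.2f}")
    for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {feature}: {c:+.2f}")  # largest-magnitude drivers first

A plain-language restatement of the top contributions, tailored to the user's role and skill level as the section suggests, would then supply the interpretability side of the pairing.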


3.6 Privacy-Enhanced

Privacy refers generally to the norms and practices that help to safeguard human autonomy, identity, and dignity. These norms and practices typically address freedom from intrusion, limiting observation, or individuals' agency to consent to disclosure or control of facets of their identities (e.g., body, data, reputation). (See The NIST Privacy Framework: A Tool for Improving Privacy through Enterprise Risk Management.)

Privacy values such as anonymity, confidentiality, and control generally should guide choices for AI system design, development, and deployment. Privacy-related risks may influence security, bias, and transparency and come with tradeoffs with these other characteristics. Like safety and security, specific technical features of an AI system may promote or reduce privacy. AI systems can also present new risks to privacy by allowing inference to identify individuals or previously private information about individuals.

Privacy-enhancing technologies (“PETs”) for AI, as well as data minimizing methods such as de-identification and aggregation for certain model outputs, can support design for privacy-enhanced AI systems. Under certain conditions such as data sparsity, privacy-enhancing techniques can result in a loss in accuracy, affecting decisions about fairness and other values in certain domains.
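The data-minimizing methods mentioned above can be illustrated with a small sketch: dropping direct identifiers and suppressing small aggregate groups before release. The field names and the minimum group size of 5 are assumptions introduced here, not AI RMF guidance, and real de-identification requires attention to quasi-identifiers beyond this toy example:

    # Illustrative only: simple de-identification and small-group suppression.
    from collections import Counter

    records = [
        {"name": "A. Example", "zip": "30301", "age_band": "30-39", "outcome": 1},
        {"name": "B. Example", "zip": "30301", "age_band": "30-39", "outcome": 0},
    ]

    # De-identification: remove direct identifiers before downstream use.
    deidentified = [{k: v for k, v in r.items() if k != "name"} for r in records]

    # Aggregation: publish only group counts, suppressing groups below a threshold.
    counts = Counter((r["zip"], r["age_band"]) for r in deidentified)
    MIN_GROUP = 5
    released = {group: n for group, n in counts.items() if n >= MIN_GROUP}
    print(released or "all groups suppressed (below minimum size)")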


3.7 Fair – with Harmful Bias Managed

Fairness in AI includes concerns for equality and equity by addressing issues such as harmful bias and discrimination. Standards of fairness can be complex and difficult to define because perceptions of fairness differ among cultures and may shift depending on application. Organizations' risk management efforts will be enhanced by recognizing and considering these differences. Systems in which harmful biases are mitigated are not necessarily fair. For example, systems in which predictions are somewhat balanced across demographic groups may still be inaccessible to individuals with disabilities or affected by the digital divide or may exacerbate existing disparities or systemic biases.

Bias is broader than demographic balance and data representativeness. NIST has identified three major categories of AI bias to be considered and managed: systemic, computational and statistical, and human-cognitive. Each of these can occur in the absence of prejudice, partiality, or discriminatory intent. Systemic bias can be present in AI datasets, the organizational norms, practices, and processes across the AI lifecycle, and the broader society that uses AI systems. Computational and statistical biases can be present in AI datasets and algorithmic processes, and often stem from systematic errors due to non-representative samples. Human-cognitive biases relate to how an individual or group perceives AI system information to make a decision or fill in missing information, or how humans think about purposes and functions of an AI system. Human-cognitive biases are omnipresent in decision-making processes across the AI lifecycle and system use, including the design, implementation, operation, and maintenance of AI.

Bias exists in many forms and can become ingrained in the automated systems that help make decisions about our lives. While bias is not always a negative phenomenon, AI systems can potentially increase the speed and scale of biases and perpetuate and amplify harms to individuals, groups, communities, organizations, and society. Bias is tightly associated with the concepts of transparency as well as fairness in society. (For more information about bias, including the three categories, see NIST Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence.)
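The observation above – that balanced predictions alone do not establish fairness – still leaves per-group balance as one useful signal among many. The sketch below is illustrative only (the groups and predictions are invented) and compares per-group positive prediction rates:

    # Illustrative only: per-group positive prediction rates and their gap,
    # one computational signal among the broader fairness considerations above.
    from collections import defaultdict

    predictions = [("group_1", 1), ("group_1", 0), ("group_1", 1),
                   ("group_2", 0), ("group_2", 0), ("group_2", 1)]

    by_group = defaultdict(list)
    for group, y_pred in predictions:
        by_group[group].append(y_pred)

    rates = {g: sum(v) / len(v) for g, v in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    print(rates, f"demographic gap = {gap:.2f}")

As the section stresses, a small gap here says nothing about accessibility, systemic bias, or context-specific standards of fairness; such a check should supplement, not replace, the broader analysis.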


4. Effectiveness of the AI RMF

Evaluations of AI RMF effectiveness – including ways to measure bottom-line improvements in the trustworthiness of AI systems – will be part of future NIST activities, in conjunction with the AI community.

Organizations and other users of the Framework are encouraged to periodically evaluate whether the AI RMF has improved their ability to manage AI risks, including but not limited to their policies, processes, practices, implementation plans, indicators, measurements, and expected outcomes. NIST intends to work collaboratively with others to develop metrics, methodologies, and goals for evaluating the AI RMF's effectiveness, and to broadly share results and supporting information. Framework users are expected to benefit from:

- enhanced processes for governing, mapping, measuring, and managing AI risk, and clearly documenting outcomes;

- improved awareness of the relationships and tradeoffs among trustworthiness characteristics, socio-technical approaches, and AI risks;

- explicit processes for making go/no-go system commissioning and deployment decisions;

- established policies, processes, practices, and procedures for improving organizational accountability efforts related to AI system risks;

- enhanced organizational culture which prioritizes the identification and management of AI system risks and potential impacts to individuals, communities, organizations, and society;

- better information sharing within and across organizations about risks, decision-making processes, responsibilities, common pitfalls, TEVV practices, and approaches for continuous improvement;

- greater contextual knowledge for increased awareness of downstream risks;

- strengthened engagement with interested parties and relevant AI actors; and

- augmented capacity for TEVV of AI systems and associated risks.

Part 2: Core and Profiles

5. AI RMF Core

The AI RMF Core provides outcomes and actions that enable dialogue, understanding, and activities to manage AI risks and responsibly develop trustworthy AI systems. As illustrated in Figure 5, the Core is composed of four functions: GOVERN, MAP, MEASURE, and MANAGE. Each of these high-level functions is broken down into categories and subcategories. Categories and subcategories are subdivided into specific actions and outcomes. Actions do not constitute a checklist, nor are they necessarily an ordered set of steps.

[Figure 5 shows GOVERN at the center – a culture of risk management is cultivated and present – surrounded by MAP (context is recognized and risks related to context are identified), MEASURE (identified risks are assessed, analyzed, or tracked), and MANAGE (risks are prioritized and acted upon based on a projected impact).]

Fig. 5. Functions organize AI risk management activities at their highest level to govern, map, measure, and manage AI risks. Governance is designed to be a cross-cutting function to inform and be infused throughout the other three functions.

Risk management should be continuous, timely, and performed throughout the AI system lifecycle dimensions. AI RMF Core functions should be carried out in a way that reflects diverse and multidisciplinary perspectives, potentially including the views of AI actors outside the organization. Having a diverse team contributes to more open sharing of ideas and assumptions about purposes and functions of the technology being designed, developed, deployed, or evaluated – which can create opportunities to surface problems and identify existing and emergent risks.

An online companion resource to the AI RMF, the NIST AI RMF Playbook, is available to help organizations navigate the AI RMF and achieve its outcomes through suggested tactical actions they can apply within their own contexts. Like the AI RMF, the Playbook is voluntary and organizations can utilize the suggestions according to their needs and interests. Playbook users can create tailored guidance selected from suggested material for their own use and contribute their suggestions for sharing with the broader community. Along with the AI RMF, the Playbook is part of the NIST Trustworthy and Responsible AI Resource Center.

Framework users may apply these functions as best suits their needs for managing AI risks based on their resources and capabilities. Some organizations may choose to select from among the categories and subcategories; others may choose and have the capacity to apply all categories and subcategories. Assuming a governance structure is in place, functions may be performed in any order across the AI lifecycle as deemed to add value by a user of the framework. After instituting the outcomes in GOVERN, most users of the AI RMF would start with the MAP function and continue to MEASURE or MANAGE. However users integrate the functions, the process should be iterative, with cross-referencing between functions as necessary. Similarly, there are categories and subcategories with elements that apply to multiple functions, or that logically should take place before certain subcategory decisions.

5.1 Govern

The GOVERN function:

- cultivates and implements a culture of risk management within organizations designing, developing, deploying, evaluating, or acquiring AI systems;

- outlines processes, documents, and organizational schemes that anticipate, identify, and manage the risks a system can pose, including to users and others across society – and procedures to achieve those outcomes;

- incorporates processes to assess potential impacts;

- provides a structure by which AI risk management functions can align with organizational principles, policies, and strategic priorities;

- connects technical aspects of AI system design and development to organizational values and principles, and enables organizational practices and competencies for the individuals involved in acquiring, training, deploying, and monitoring such systems; and

- addresses full product lifecycle and associated processes, including legal and other issues concerning use of third-party software or hardware systems and data.

GOVERN is a cross-cutting function that is infused throughout AI risk management and enables the other functions of the process. Aspects of GOVERN, especially those related to compliance or evaluation, should be integrated into each of the other functions. Attention to governance is a continual and intrinsic requirement for effective AI risk management over an AI system's lifespan and the organization's hierarchy.

Strong governance can drive and enhance internal practices and norms to facilitate organizational risk culture. Governing authorities can determine the overarching policies that direct an organization's mission, goals, values, culture, and risk tolerance. Senior leadership sets the tone for risk management within an organization, and with it, organizational culture. Management aligns the technical aspects of AI risk management to policies and operations. Documentation can enhance transparency, improve human review processes, and bolster accountability in AI system teams.

After putting in place the structures, systems, processes, and teams described in the GOVERN function, organizations should benefit from a purpose-driven culture focused on risk understanding and management. It is incumbent on Framework users to continue to execute the GOVERN function as knowledge, cultures, and needs or expectations from AI actors evolve over time.

Practices related to governing AI risks are described in the NIST AI RMF Playbook. Table 1 lists the GOVERN function's categories and subcategories.

GOVERN 1: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.


GOVERN 1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.

GOVERN 1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.

GOVERN 1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.

GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned and organizational roles and responsibilities clearly defined, including determining the frequency of periodic review.

GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.

GOVERN 1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization's trustworthiness.

GOVERN 2: Accountability structures are in place so that the appropriate teams and individuals are empowered, responsible, and trained for mapping, measuring, and managing AI risks.

GOVERN 2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.

GOVERN 2.2: The organization's personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.

GOVERN 2.3: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.

GOVERN 3: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in the mapping, measuring, and managing of AI risks throughout the lifecycle.

GOVERN 3.1: Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).

GOVERN 3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.

GOVERN 4: Organizational teams are committed to a culture that considers and communicates AI risk.

GOVERN 4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts.

GOVERN 4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and they communicate about the impacts more broadly.

GOVERN 4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.

GOVERN 5: Processes are in place for robust engagement with relevant AI actors.

GOVERN 5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.

GOVERN 5.2: Mechanisms are established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.

GOVERN 6: Policies and procedures are in place to address AI risks and benefits arising from third-party software and data and other supply chain issues.

GOVERN 6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party's intellectual property or other rights.

GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.
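As a concrete illustration of the inventory mechanism GOVERN 1.6 calls for, and of the third-party checks in GOVERN 6, the sketch below records AI systems in a minimal register. It is a hypothetical example, not a prescribed format; the fields, risk tiers, and the check itself are assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        """One entry in an AI system inventory (illustrative fields only)."""
        name: str
        owner: str
        lifecycle_stage: str                 # e.g., "design", "deployed"
        risk_tier: str                       # hypothetical tier, e.g., "high"
        third_party_components: list[str] = field(default_factory=list)

    inventory = [
        AISystemRecord("resume-screener", "HR analytics", "deployed", "high",
                       ["vendor-embedding-model"]),
        AISystemRecord("doc-summarizer", "legal ops", "design", "medium"),
    ]

    # A GOVERN 6.2-style check: flag high-risk systems with third-party
    # dependencies so contingency processes can be verified for them.
    for rec in inventory:
        if rec.risk_tier == "high" and rec.third_party_components:
            print(f"verify contingency plan for {rec.name}: "
                  f"{rec.third_party_components}")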


5.2 Map

The MAP function establishes the context to frame risks related to an AI system. The AI lifecycle consists of many interdependent activities involving a diverse set of actors (see Figure 3). In practice, AI actors in charge of one part of the process often do not have full visibility or control over other parts and their associated contexts. The interdependencies between these activities, and among the relevant AI actors, can make it difficult to reliably anticipate impacts of AI systems. For example, early decisions in identifying purposes and objectives of an AI system can alter its behavior and capabilities, and the dynamics of deployment setting (such as end users or impacted individuals) can shape the impacts of AI system decisions. As a result, the best intentions within one dimension of the AI lifecycle can be undermined via interactions with decisions and conditions in other, later activities.

This complexity and varying levels of visibility can introduce uncertainty into risk management practices. Anticipating, assessing, and otherwise addressing potential sources of negative risk can mitigate this uncertainty and enhance the integrity of the decision process. The information gathered while carrying out the MAP function enables negative risk prevention and informs decisions for processes such as model management, as well as an initial decision about appropriateness or the need for an AI solution. Outcomes in the MAP function are the basis for the MEASURE and MANAGE functions. Without contextual knowledge, and awareness of risks within the identified contexts, risk management is difficult to perform. The MAP function is intended to enhance an organization's ability to identify risks and broader contributing factors.

Implementation of this function is enhanced by incorporating perspectives from a diverse internal team and engagement with those external to the team that developed or deployed the AI system. Engagement with external collaborators, end users, potentially impacted communities, and others may vary based on the risk level of a particular AI system, the makeup of the internal team, and organizational policies. Gathering such broad perspectives can help organizations proactively prevent negative risks and develop more trustworthy AI systems by:

- improving their capacity for understanding contexts;

- checking their assumptions about context of use;

- enabling recognition of when systems are not functional within or out of their intended context;

- identifying positive and beneficial uses of their existing AI systems;

- improving understanding of limitations in AI and ML processes;

- identifying constraints in real-world applications that may lead to negative impacts;

- identifying known and foreseeable negative impacts related to intended use of AI systems; and

- anticipating risks of the use of AI systems beyond intended use.

After completing the MAP function, Framework users should have sufficient contextual knowledge about AI system impacts to inform an initial go/no-go decision about whether to design, develop, or deploy an AI system. If a decision is made


to proceed, organizations should utilize the MEASURE and MANAGE functions along with policies and procedures put into place in the GOVERN function to assist in AI risk management efforts. It is incumbent on Framework users to continue applying the MAP function to AI systems as context, capabilities, risks, benefits, and potential impacts evolve over time.

Practices related to mapping AI risks are described in the NIST AI RMF Playbook. Table 2 lists the MAP function's categories and subcategories.

MAP 1: Context is established and understood.

MAP 1.1: Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related TEVV and system metrics.

MAP 1.2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.

MAP 1.3: The organization's mission and relevant goals for AI technology are understood and documented.

MAP 1.4: The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.

MAP 1.5: Organizational risk tolerances are determined and documented.

MAP 1.6: System requirements (e.g., "the system shall respect the privacy of its users") are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.


MAP 2: Categorization of the AI system is performed.

MAP 2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).

MAP 2.2: Information about the AI system's knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making decisions and taking subsequent actions.

MAP 2.3: Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness, and construct validation.

MAP 3: AI capabilities, targeted usage, goals, and expected benefits and costs compared with appropriate benchmarks are understood.

MAP 3.1: Potential benefits of intended AI system functionality and performance are examined and documented.

MAP 3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.

MAP 3.3: Targeted application scope is specified and documented based on the system's capability, established context, and AI system categorization.

MAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented.

MAP 3.5: Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the GOVERN function.

MAP 4: Risks and benefits are mapped for all components of the AI system including third-party software and data.

MAP 4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party's intellectual property or other rights.

MAP 4.2: Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.

MAP 5: Impacts to individuals, groups, communities, organizations, and society are characterized.

MAP 5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.

MAP 5.2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.
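MAP 5.1 asks organizations to characterize the likelihood and magnitude of each identified impact. The sketch below is one minimal way to record and rank such impacts, assuming a hypothetical 1-5 ordinal scale and a simple likelihood-times-magnitude score; the AI RMF does not prescribe any particular scoring scheme.

    # Hypothetical ordinal scales (1 = lowest, 5 = highest); illustrative only.
    impacts = [
        {"impact": "biased screening outcomes", "beneficial": False,
         "likelihood": 4, "magnitude": 5},
        {"impact": "faster application turnaround", "beneficial": True,
         "likelihood": 5, "magnitude": 2},
    ]

    for item in impacts:
        item["score"] = item["likelihood"] * item["magnitude"]  # 1..25

    # Rank harmful impacts first; this can feed MEASURE 1.1's direction to
    # start measurement with the most significant AI risks.
    harmful = sorted((i for i in impacts if not i["beneficial"]),
                     key=lambda i: i["score"], reverse=True)
    for i in harmful:
        print(i["impact"], i["score"])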


5.3 Measure

The MEASURE function employs quantitative, qualitative, or mixed-method tools, techniques, and methodologies to analyze, assess, benchmark, and monitor AI risk and related impacts. It uses knowledge relevant to AI risks identified in the MAP function and informs the MANAGE function. AI systems should be tested before their deployment and regularly while in operation. AI risk measurements include documenting aspects of systems' functionality and trustworthiness.

Measuring AI risks includes tracking metrics for trustworthy characteristics, social impact, and human-AI configurations. Processes developed or adopted in the MEASURE function should include rigorous software testing and performance assessment methodologies with associated measures of uncertainty, comparisons to performance benchmarks, and formalized reporting and documentation of results. Processes for independent review can improve the effectiveness of testing and can mitigate internal biases and potential conflicts of interest.
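To make "measures of uncertainty" and "comparisons to performance benchmarks" concrete, the following sketch bootstraps a confidence interval around accuracy on a deployment-like evaluation set. It is illustrative only; the per-example outcomes and the benchmark value are hypothetical stand-ins.

    import random

    def bootstrap_accuracy_ci(correct, n_boot=1000, alpha=0.05):
        """Point accuracy plus a (1 - alpha) bootstrap interval.
        `correct` holds 1 for each right prediction, 0 otherwise."""
        point = sum(correct) / len(correct)
        stats = sorted(
            sum(random.choices(correct, k=len(correct))) / len(correct)
            for _ in range(n_boot)
        )
        lo = stats[int(n_boot * alpha / 2)]
        hi = stats[int(n_boot * (1 - alpha / 2)) - 1]
        return point, lo, hi

    outcomes = [1] * 880 + [0] * 120   # stand-in for documented test results
    acc, lo, hi = bootstrap_accuracy_ci(outcomes)
    BENCHMARK = 0.85                   # hypothetical agreed benchmark
    print(f"accuracy={acc:.3f}, 95% CI=({lo:.3f}, {hi:.3f}), "
          f"meets benchmark: {lo >= BENCHMARK}")

Reporting the interval alongside the point estimate, and the benchmark it was compared against, supports the formalized reporting and documentation of results described above.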


Where tradeoffs among the trustworthy characteristics arise, measurement provides a traceable basis to inform management decisions. Options may include recalibration, impact mitigation, or removal of the system from design, development, production, or use, as well as a range of compensating, detective, deterrent, directive, and recovery controls.

After completing the MEASURE function, objective, repeatable, or scalable test, evaluation, verification, and validation (TEVV) processes including metrics, methods, and methodologies are in place, followed, and documented. Metrics and measurement methodologies should adhere to scientific, legal, and ethical norms and be carried out in an open and transparent process. New types of measurement, qualitative and quantitative, may need to be developed. The degree to which each measurement type provides unique and meaningful information to the assessment of AI risks should be considered. Framework users will enhance their capacity to comprehensively evaluate system trustworthiness, identify and track existing and emergent risks, and verify efficacy of the metrics. Measurement outcomes will be utilized in the MANAGE function to assist risk monitoring and response efforts. It is incumbent on Framework users to continue applying the MEASURE function to AI systems as knowledge, methodologies, risks, and impacts evolve over time.

Practices related to measuring AI risks are described in the NIST AI RMF Playbook. Table 3 lists the MEASURE function's categories and subcategories.

MEASURE 1: Appropriate methods and metrics are identified and applied.

MEASURE 1.1: Approaches and metrics for measurement of AI risks enumerated during the MAP function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.

MEASURE 1.2: Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and potential impacts on affected communities.

MEASURE 1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.

MEASURE 2: AI systems are evaluated for trustworthy characteristics.

MEASURE 2.1: Test sets, metrics, and details about the tools used during TEVV are documented.

MEASURE 2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.

MEASURE 2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.

MEASURE 2.4: The functionality and behavior of the AI system and its components – as identified in the MAP function – are monitored when in production.

MEASURE 2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.

MEASURE 2.6: The AI system is evaluated regularly for safety risks – as identified in the MAP function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.

MEASURE 2.7: AI system security and resilience – as identified in the MAP function – are evaluated and documented.

MEASURE 2.8: Risks associated with transparency and accountability – as identified in the MAP function – are examined and documented.

MEASURE 2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the MAP function – to inform responsible use and governance.

MEASURE 2.10: Privacy risk of the AI system – as identified in the MAP function – is examined and documented.

MEASURE 2.11: Fairness and bias – as identified in the MAP function – are evaluated and results are documented.

MEASURE 2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the MAP function – are assessed and documented.

MEASURE 2.13: Effectiveness of the employed TEVV metrics and processes in the MEASURE function are evaluated and documented.


MEASURE 3: Mechanisms for tracking identified AI risks over time are in place.

MEASURE 3.1: Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.

MEASURE 3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.

MEASURE 3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.

MEASURE 4: Feedback about efficacy of measurement is gathered and assessed.

MEASURE 4.1: Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.

MEASURE 4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.

MEASURE 4.3: Measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context-relevant risks and trustworthiness characteristics are identified and documented.

5.4 Manage

The MANAGE function entails allocating risk resources to mapped and measured risks on a regular basis and as defined by the GOVERN function. Risk treatment comprises plans to respond to, recover from, and communicate about incidents or events.

Contextual information gleaned from expert consultation and input from relevant AI actors – established in GOVERN and carried out in MAP – is utilized in this function to decrease the likelihood of system failures and negative impacts. Systematic documentation practices established in GOVERN and utilized in MAP and MEASURE bolster AI risk management efforts and increase transparency and accountability. Processes for assessing emergent risks are in place, along with mechanisms for continual improvement.

After completing the MANAGE function, plans for prioritizing risk and regular monitoring and improvement will be in place. Framework users will have enhanced capacity to manage the risks of deployed AI systems and to allocate risk management resources based on assessed and prioritized risks. It is incumbent on Framework users to continue to apply the MANAGE function to deployed AI systems as methods, contexts, risks, and needs or expectations from relevant AI actors evolve over time.

Practices related to managing AI risks are described in the NIST AI RMF Playbook. Table 4 lists the MANAGE function's categories and subcategories.

MANAGE 1: AI risks based on assessments and other analytical output from the MAP and MEASURE functions are prioritized, responded to, and managed.

MANAGE 1.1: A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed.

MANAGE 1.2: Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.

MANAGE 1.3: Responses to the AI risks deemed high priority, as identified by the MAP function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.

MANAGE 1.4: Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.

MANAGE 2: Strategies to maximize AI benefits and minimize negative impacts are planned, prepared, implemented, documented, and informed by input from relevant AI actors.

MANAGE 2.1: Resources required to manage AI risks are taken into account – along with viable non-AI alternative systems, approaches, or methods – to reduce the magnitude or likelihood of potential impacts.

MANAGE 2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems.

MANAGE 2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified.

MANAGE 2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.

MANAGE 3: AI risks and benefits from third-party entities are managed.

MANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.

MANAGE 3.2: Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.

MANAGE 4: Risk treatments, including response and recovery, and communication plans for the identified and measured AI risks are documented and monitored regularly.

MANAGE 4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.

MANAGE 4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.

MANAGE 4.3: Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.

6. AI RMF Profiles

AI RMF use-case profiles are implementations of the AI RMF functions, categories, and subcategories for a specific setting or application based on the requirements, risk tolerance, and resources of the Framework user: for example, an AI RMF hiring profile or an AI RMF fair housing profile. Profiles may illustrate and offer insights into how risk can be managed at various stages of the AI lifecycle or in specific sector, technology, or end-use applications. AI RMF profiles assist organizations in deciding how they might best manage AI risk that is well-aligned with their goals, considers legal/regulatory requirements and best practices, and reflects risk management priorities.

AI RMF temporal profiles are descriptions of either the current state or the desired, target state of specific AI risk management activities within a given sector, industry, organization, or application context. An AI RMF Current Profile indicates how AI is currently being managed and the related risks in terms of current outcomes. A Target Profile indicates the outcomes needed to achieve the desired or target AI risk management goals.

Comparing Current and Target Profiles likely reveals gaps to be addressed to meet AI risk management objectives. Action plans can be developed to address these gaps to fulfill outcomes in a given category or subcategory. Prioritization of gap mitigation is driven by the user's needs and risk management processes. This risk-based approach also enables Framework users to compare their approaches with other approaches and to gauge the resources needed (e.g., staffing, funding) to achieve AI risk management goals in a cost-effective, prioritized manner.
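A minimal sketch of such a Current-versus-Target comparison follows; the per-subcategory maturity labels ("absent", "partial", "implemented") are hypothetical and are not defined by the AI RMF.

    # Hypothetical maturity status per subcategory; labels are illustrative.
    current_profile = {"GOVERN 1.6": "partial", "MAP 5.1": "absent",
                       "MEASURE 2.11": "implemented"}
    target_profile = {"GOVERN 1.6": "implemented", "MAP 5.1": "implemented",
                      "MEASURE 2.11": "implemented"}

    RANK = {"absent": 0, "partial": 1, "implemented": 2}

    gaps = {
        sub: (current_profile.get(sub, "absent"), want)
        for sub, want in target_profile.items()
        if RANK[current_profile.get(sub, "absent")] < RANK[want]
    }
    # Each gap becomes a candidate action-plan item, prioritized per the
    # user's needs and risk management processes.
    for sub, (have, want) in sorted(gaps.items()):
        print(f"{sub}: {have} -> {want}")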


AI RMF cross-sectoral profiles cover risks of models or applications that can be used across use cases or sectors. Cross-sectoral profiles can also cover how to govern, map, measure, and manage risks for activities or business processes common across sectors, such as the use of large language models, cloud-based services, or acquisition.

This Framework does not prescribe profile templates, allowing for flexibility in implementation.

Appendix A: Descriptions of AI Actor Tasks from Figures 2 and 3

AI Design tasks are performed during the Application Context and Data and Input phases of the AI lifecycle in Figure 2. AI Design actors create the concept and objectives of AI systems and are responsible for the planning, design, and data collection and processing tasks of the AI system so that the AI system is lawful and fit-for-purpose. Tasks include articulating and documenting the system's concept and objectives, underlying assumptions, context, and requirements; gathering and cleaning data; and documenting the metadata and characteristics of the dataset. AI actors in this category include data scientists, domain experts, socio-cultural analysts, experts in the field of diversity, equity, inclusion, and accessibility, members of impacted communities, human factors experts (e.g., UX/UI design), governance experts, data engineers, data providers, system funders, product managers, third-party entities, evaluators, and legal and privacy governance.

AI Development tasks are performed during the AI Model phase of the lifecycle in Figure 2. AI Development actors provide the initial infrastructure of AI systems and are responsible for model building and interpretation tasks, which involve the creation, selection, calibration, training, and/or testing of models or algorithms. AI actors in this category include machine learning experts, data scientists, developers, third-party entities, legal and privacy governance experts, and experts in the socio-cultural and contextual factors associated with the deployment setting.

AI Deployment tasks are performed during the Task and Output phase of the lifecycle in Figure 2. AI Deployment actors are responsible for contextual decisions relating to how the AI system is used to assure deployment of the system into production. Related tasks include piloting the system, checking compatibility with legacy systems, ensuring regulatory compliance, managing organizational change, and evaluating user experience. AI actors in this category include system integrators, software developers, end users, operators and practitioners, evaluators, and domain experts with expertise in human factors, socio-cultural analysis, and governance.

Operation and Monitoring tasks are performed in the Application Context/Operate and Monitor phase of the lifecycle in Figure 2. These tasks are carried out by AI actors who are responsible for operating the AI system and working with others to regularly assess system output and impacts. AI actors in this category include system operators, domain experts, AI designers, users who interpret or incorporate the output of AI systems, product developers, evaluators and auditors, compliance experts, organizational management, and members of the research community.

Test, Evaluation, Verification, and Validation (TEVV) tasks are performed throughout the AI lifecycle. They are carried out by AI actors who examine the AI system or its components, or detect and remediate problems. Ideally, AI actors carrying out verification and validation tasks are distinct from those who perform test and evaluation actions. Tasks can be incorporated into a phase as early as design, where tests are planned in accordance with the design requirement.

- TEVV tasks for design, planning, and data may center on internal and external validation of assumptions for system design, data collection, and measurements relative to the intended context of deployment or application.

- TEVV tasks for development (i.e., model building) include model validation and assessment.


- TEVV tasks for deployment include system validation and integration in production, with testing, and recalibration for systems and process integration, user experience, and compliance with existing legal, regulatory, and ethical specifications.

- TEVV tasks for operations involve ongoing monitoring for periodic updates, testing, and subject matter expert (SME) recalibration of models, the tracking of incidents or errors reported and their management, the detection of emergent properties and related impacts, and processes for redress and response.

Human Factors tasks and activities are found throughout the dimensions of the AI lifecycle. They include human-centered design practices and methodologies, promoting the active involvement of end users and other interested parties and relevant AI actors, incorporating context-specific norms and values in system design, evaluating and adapting end user experiences, and broad integration of humans and human dynamics in all phases of the AI lifecycle. Human factors professionals provide multidisciplinary skills and perspectives to understand context of use, inform interdisciplinary and demographic diversity, engage in consultative processes, design and evaluate user experience, perform human-centered evaluation and testing, and inform impact assessments.

Domain Expert tasks involve input from multidisciplinary practitioners or scholars who provide knowledge or expertise in – and about – an industry sector, economic sector, context, or application area where an AI system is being used. AI actors who are domain experts can provide essential guidance for AI system design and development, and interpret outputs in support of work performed by TEVV and AI impact assessment teams.

AI Impact Assessment tasks include assessing and evaluating requirements for AI system accountability, combating harmful bias, examining impacts of AI systems, product safety, liability, and security, among others. AI actors such as impact assessors and evaluators provide technical, human factor, socio-cultural, and legal expertise.

Procurement tasks are conducted by AI actors with financial, legal, or policy management authority for acquisition of AI models, products, or services from a third-party developer, vendor, or contractor.

Governance and Oversight tasks are assumed by AI actors with management, fiduciary, and legal authority and responsibility for the organization in which an AI system is designed, developed, and/or deployed. Key AI actors responsible for AI governance include organizational management, senior leadership, and the Board of Directors. These actors are parties that are concerned with the impact and sustainability of the organization as a whole.

Additional AI Actors

Third-party entities include providers, developers, vendors, and evaluators of data, algorithms, models, and/or systems and related services for another organization or the organization's customers or clients. Third-party entities are responsible for AI design and development tasks, in whole or in part. By definition, they are external to the design, development, or deployment team of the organization that acquires its technologies or services. The technologies acquired from third-party entities may be complex or opaque, and risk tolerances may not align with the deploying or operating organization.

End users of an AI system are the individuals or groups that use the system for specific purposes. These individuals or groups interact with an AI system in a specific context. End users can range in competency from AI experts to first-time technology end users.

Affected individuals/communities encompass all individuals, groups, communities, or organizations directly or indirectly affected by AI systems or decisions based on the output of AI systems. These individuals do not necessarily interact with the deployed system or application.


Other AI actors may provide formal or quasi-formal norms or guidance for specifying and managing AI risks. They can include trade associations, standards developing organizations, advocacy groups, researchers, environmental groups, and civil society organizations.

The general public is most likely to directly experience positive and negative impacts of AI technologies. They may provide the motivation for actions taken by the AI actors. This group can include individuals, communities, and consumers associated with the context in which an AI system is developed or deployed.

Appendix B: How AI Risks Differ from Traditional Software Risks

As with traditional software, risks from AI-based technology can be bigger than an enterprise, span organizations, and lead to societal impacts. AI systems also bring a set of risks that are not comprehensively addressed by current risk frameworks and approaches. Some AI system features that present risks also can be beneficial. For example, pre-trained models and transfer learning can advance research and increase accuracy and resilience when compared to other models and approaches. Identifying contextual factors in the MAP function will assist AI actors in determining the level of risk and potential management efforts.

Compared to traditional software, AI-specific risks that are new or increased include the following:

- The data used for building an AI system may not be a true or appropriate representation of the context or intended use of the AI system, and the ground truth may either not exist or not be available. Additionally, harmful bias and other data quality issues can affect AI system trustworthiness, which could lead to negative impacts.

- AI system dependency and reliance on data for training tasks, combined with increased volume and complexity typically associated with such data.

- Intentional or unintentional changes during training may fundamentally alter AI system performance.

- Datasets used to train AI systems may become detached from their original and intended context or may become stale or outdated relative to deployment context.

- AI system scale and complexity (many systems contain billions or even trillions of decision points) housed within more traditional software applications.

- Use of pre-trained models that can advance research and improve performance can also increase levels of statistical uncertainty and cause issues with bias management, scientific validity, and reproducibility.

- Higher degree of difficulty in predicting failure modes for emergent properties of large-scale pre-trained models.

- Privacy risk due to enhanced data aggregation capability for AI systems.

- AI systems may require more frequent maintenance and triggers for conducting corrective maintenance due to data, model, or concept drift (see the sketch after this list).

- Increased opacity and concerns about reproducibility.

- Underdeveloped software testing standards and inability to document AI-based practices to the standard expected of traditionally engineered software for all but the simplest of cases.

- Difficulty in performing regular AI-based software testing, or determining what to test, since AI systems are not subject to the same controls as traditional code development.

- Computational costs for developing AI systems and their impact on the environment and planet.

- Inability to predict or detect the side effects of AI-based systems beyond statistical measures.
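To illustrate the drift point flagged in the list above, the sketch below compares live input statistics against a training-time baseline and raises a crude maintenance trigger. It is illustrative only: the feature values and alert threshold are hypothetical, and production monitoring would typically use richer distributional tests.

    import statistics

    def drift_alert(baseline, live, threshold=0.25):
        """Flag when the live feature mean shifts by more than `threshold`
        baseline standard deviations - a simple corrective-maintenance
        trigger, illustrative only."""
        mu = statistics.fmean(baseline)
        sigma = statistics.stdev(baseline)
        shift = abs(statistics.fmean(live) - mu) / sigma
        return shift > threshold

    baseline = [42.0, 44.5, 41.2, 43.8, 42.9, 44.1, 43.3]  # training-time inputs
    live = [48.9, 50.2, 49.5, 51.0, 48.4]                  # recent production inputs
    if drift_alert(baseline, live):
        print("input drift detected: schedule corrective maintenance")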


management considerations and approaches are applicable in the design, development, deployment, evaluation, and use of AI systems. Privacy and cybersecurity risks are also considered as part of broader enterprise risk management considerations, which may incorporate AI risks. As part of the effort to address AI trustworthiness characteristics such as "Secure and Resilient" and "Privacy-Enhanced," organizations may consider leveraging available standards and guidance that provide broad guidance to organizations to reduce security and privacy risks, such as, but not limited to, the NIST Cybersecurity Framework, the NIST Privacy Framework, the NIST Risk Management Framework, and the Secure Software Development Framework. These frameworks have some features in common with the AI RMF. Like most risk management approaches, they are outcome-based rather than prescriptive and are often structured around a Core set of functions, categories, and subcategories. While there are significant differences between these frameworks based on the domain addressed – and because AI risk management calls for addressing many other types of risks – frameworks like those mentioned above may inform security and privacy considerations in the MAP, MEASURE, and MANAGE functions of the AI RMF.

At the same time, guidance available before publication of this AI RMF does not comprehensively address many AI system risks. For example, existing frameworks and guidance are unable to:

- adequately manage the problem of harmful bias in AI systems;

- confront the challenging risks related to generative AI;

- comprehensively address security concerns related to evasion, model extraction, membership inference, availability, or other machine learning attacks;

- account for the complex attack surface of AI systems or other security abuses enabled by AI systems; and

- consider risks associated with third-party AI technologies, transfer learning, and off-label use where AI systems may be trained for decision-making outside an organization's security controls or trained in one domain and then "fine-tuned" for another.

Both AI and traditional software technologies and systems are subject to rapid innovation. Technology advances should be monitored and deployed to take advantage of those developments and work towards a future of AI that is both trustworthy and responsible.

Appendix C: AI Risk Management and Human-AI Interaction

Organizations that design, develop, or deploy AI systems for use in operational settings may enhance their AI risk management by understanding current limitations of human-AI interaction. The AI RMF provides opportunities to clearly define and differentiate the various human roles and responsibilities when using, interacting with, or managing AI systems.

Many of the data-driven approaches that AI systems rely on attempt to convert or represent individual and social observational and decision-making practices into measurable quantities. Representing complex human phenomena with mathematical models can come at the cost of removing necessary context. This loss of context may in turn make it difficult to understand individual and societal impacts that are key to AI risk management efforts.

Issues that merit further consideration and research include:

1. Human roles and responsibilities in decision making and overseeing AI systems need to be clearly defined and differentiated. Human-AI configurations can span from fully autonomous to fully manual. AI systems can autonomously make decisions, defer decision making to a human expert, or be used by a human decision maker as an additional opinion. Some AI systems may not require human oversight, such as models used to improve video compression. Other systems may specifically require human oversight.


2. Decisions that go into the design, development, deployment, evaluation, and use of AI systems reflect systemic and human cognitive biases. AI actors bring their cognitive biases, both individual and group, into the process. Biases can stem from end-user decision-making tasks and be introduced across the AI lifecycle via human assumptions, expectations, and decisions during design and modeling tasks. These biases, which are not necessarily always harmful, may be exacerbated by AI system opacity and the resulting lack of transparency. Systemic biases at the organizational level can influence how teams are structured and who controls the decision-making processes throughout the AI lifecycle. These biases can also influence downstream decisions by end users, decision makers, and policy makers and may lead to negative impacts.

3. Human-AI interaction results vary. Under certain conditions – for example, in perceptual-based judgment tasks – the AI part of the human-AI interaction can amplify human biases, leading to more biased decisions than the AI or human alone. When these variations are judiciously taken into account in organizing human-AI teams, however, they can result in complementarity and improved overall performance.

4. Presenting AI system information to humans is complex. Humans perceive and derive meaning from AI system output and explanations in different ways, reflecting different individual preferences, traits, and skills.

The GOVERN function provides organizations with the opportunity to clarify and define the roles and responsibilities for the humans in the Human-AI team configurations and those who are overseeing the AI system performance. The GOVERN function also creates mechanisms for organizations to make their decision-making processes more explicit, to help counter systemic biases.

The MAP function suggests opportunities to define and document processes for operator and practitioner proficiency with AI system performance and trustworthiness concepts, and to define relevant technical standards and certifications. Implementing MAP function categories and subcategories may help organizations improve their internal competency for analyzing context, identifying procedural and system limitations, exploring and examining impacts of AI-based systems in the real world, and evaluating decision-making processes throughout the AI lifecycle. The GOVERN and MAP functions describe the importance of interdisciplinarity and demographically diverse teams and utilizing feedback from potentially impacted individuals and communities. AI actors called out in the AI RMF who perform human factors tasks and activities can assist technical teams by anchoring in design and development practices to user intentions and representatives of the broader AI community, and societal values. These actors further help to incorporate context-specific norms and values in system design and evaluate end user experiences – in conjunction with AI systems.

AI risk management approaches for human-AI configurations will be augmented by ongoing research and evaluation. For example, the degree to which humans are empowered and incentivized to challenge AI system output requires further studies. Data about the frequency and rationale with which humans overrule AI system output in deployed systems may be useful to collect and analyze.

Appendix D: Attributes of the AI RMF

NIST described several key attributes of the AI RMF when work on the Framework first began. These attributes have remained intact and were used to guide the AI RMF's development. They are provided here as a reference. The AI RMF strives to:

1. Be risk-based, resource-efficient, pro-innovation, and voluntary.

2. Be consensus-driven and developed and regularly updated through an open, transparent process. All stakeholders should have the opportunity to contribute to the AI RMF's development.

3. Use clear and plain language that is understandable by a broad audience, including senior


executives, government officials, non-governmental organization leadership, and those who are not AI professionals – while still of sufficient technical depth to be useful to practitioners. The AI RMF should allow for communication of AI risks across an organization, between organizations, with customers, and to the public at large.

4. Provide common language and understanding to manage AI risks. The AI RMF should offer taxonomy, terminology, definitions, metrics, and characterizations for AI risk.

5. Be easily usable and fit well with other aspects of risk management. Use of the Framework should be intuitive and readily adaptable as part of an organization's broader risk management strategy and processes. It should be consistent or aligned with other approaches to managing AI risks.

6. Be useful to a wide range of perspectives, sectors, and technology domains. The AI RMF should be universally applicable to any AI technology and to context-specific use cases.

7. Be outcome-focused and non-prescriptive. The Framework should provide a catalog of outcomes and approaches rather than prescribe one-size-fits-all requirements.

8. Take advantage of and foster greater awareness of existing standards, guidelines, best practices, methodologies, and tools for managing AI risks – as well as illustrate the need for additional, improved resources.

9. Be law- and regulation-agnostic. The Framework should support organizations' abilities to operate under applicable domestic and international legal or regulatory regimes.

10. Be a living document. The AI RMF should be readily updated as technology, understanding, and approaches to AI trustworthiness and uses of AI change and as stakeholders learn from implementing AI risk management generally and this framework in particular.

