
The EU Rules on

Artificial Intelligence:
Five Actions for
Business Leaders

Peter Sondergaard
CHAIRMAN OF THE BOARD, 2021.AI

EU DRAFT AI CLASSIFICATIONS

Proposal to classify AI systems into three risk categories

AI Risk Categories

1. Unacceptable Risk
• Social scoring in public or private systems
• Biometric, real-time systems for manipulation and law enforcement
• Subliminal and manipulative systems and algorithms

2. High Risk
• AI in critical infrastructure, and biometric ID of people
• AI in education, training, recruiting, and employee management systems
• AI in justice and democratic processes

3. Limited or Minimal Risk
• AI in chatbots, video games, spam filters, etc.
• AI in process-based applications like ERP or CRM systems
• Other AI environments
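For teams building an inventory of their AI use cases, the three draft tiers above can be pictured as a simple classification lookup. A minimal Python sketch; the use-case labels here are hypothetical shorthand for illustration, not the proposal's legal definitions:

```python
from enum import Enum

class RiskCategory(Enum):
    """The three risk tiers in the draft EU proposal."""
    UNACCEPTABLE = 1
    HIGH = 2
    LIMITED_OR_MINIMAL = 3

# Illustrative mapping of internal use-case labels to draft categories.
USE_CASE_RISK = {
    "social_scoring": RiskCategory.UNACCEPTABLE,
    "real_time_biometric_law_enforcement": RiskCategory.UNACCEPTABLE,
    "subliminal_manipulation": RiskCategory.UNACCEPTABLE,
    "critical_infrastructure": RiskCategory.HIGH,
    "recruiting": RiskCategory.HIGH,
    "justice": RiskCategory.HIGH,
    "chatbot": RiskCategory.LIMITED_OR_MINIMAL,
    "spam_filter": RiskCategory.LIMITED_OR_MINIMAL,
    "erp_process": RiskCategory.LIMITED_OR_MINIMAL,
}

def classify(use_case: str) -> RiskCategory:
    # Unmapped use cases default to the lowest tier pending legal review.
    return USE_CASE_RISK.get(use_case, RiskCategory.LIMITED_OR_MINIMAL)
```

In practice the mapping would be maintained with legal counsel, since the boundaries between tiers are a legal judgment, not a technical one.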

© 2021 The Sondergaard Group. All rights reserved.


EU DRAFT AI PROPOSAL

Ten requirements for High-risk AI systems in the EU*

1. Risk Management System: Implementation of a risk management system to document and continuously maintain the operations of the AI environment.

2. Human Oversight: AI systems shall include the ability for human oversight through software interfaces.

3. Data & Data Governance: Training, validating, and testing models with data are subject to appropriate data governance and management practices.

4. Accuracy, Robustness & Cybersecurity: AI systems need to be designed with accuracy, robustness (i.e., resilience), and cybersecurity, and these principles must be maintained throughout the system's lifecycle.

5. Technical Documentation: Before launch, technical documentation must be available allowing clients and regulatory authorities to assess the functional risk.

6. Quality Management & Conformity Assessments: The AI system must have a quality management system in place and have a conformity assessment performed prior to launch.

7. Record Keeping: AI systems must have automatic event logs that monitor system conformance and operations.

8. Country- and EU-level Registration: All AI systems must register at the country level, and this information will equally be recorded at the EU level. A CE marking will accompany all High-risk AI systems.

9. Transparency & Provision of Information to Users: An appropriate level of operational transparency, including the ability to access the output of the system, must be available to the user.

10. Monitoring and Enforcement: Providers of systems are required to collect, document, and analyze data from systems in the market.

*Note: The Proposal will first need to be approved and adopted.
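One way to picture the Record Keeping requirement is an automatic, structured event log written on every prediction, so conformance and operations can be reviewed after the fact. A minimal Python sketch using only the standard library; the `log_prediction` helper and its fields are illustrative assumptions, not terms from the proposal:

```python
import json
import logging
from datetime import datetime, timezone

# Every prediction is appended to a structured, machine-readable log.
logger = logging.getLogger("ai_event_log")

def log_prediction(model_id: str, inputs: dict, output, version: str) -> dict:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": version,
        "inputs": inputs,
        "output": output,
    }
    logger.info(json.dumps(event))  # one JSON record per event
    return event
```

A real deployment would ship these records to durable, tamper-evident storage; the point of the sketch is only that the logging must be automatic and per-event, not manual.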


EXECUTIVE ACCOUNTABILITY

Who should be involved in AI governance?

• CEO: Responsible for the AI Governance charter and organizational accountability
• CDO: Responsible for the evolution of the AI Governance charter and data governance
• CFO: Responsible for the AI cost and financial risk
• CMO: Responsible for AI for customer and brand charters
• CLC: Responsible for AI legal and risk factors for the organization
• CHRO: Responsible for the creation of the AI employee policy and charter


THE FIVE ACTIONS TO CONSIDER

1. The board's involvement:

The EU proposal has made AI a question of risk, or the assessment of risk. Ultimately, it becomes the responsibility of the board or, in the case of public sector organizations, of the political oversight function. Boards need to determine how to deal with AI risk, ethics, and governance continuously, and who at the board level should assume ownership.

Action: Organizations with business in the EU must have the EU regulation and AI risk as an item on the board agenda at least once a year going forward.

2. Business leader accountability:

AI implementations constantly evolve as more advanced models learn, morph in function, and ultimately support humans, so oversight must evolve with them. There will be a constant need for business leaders to monitor and understand AI products and solutions. Furthermore, given the increasing ubiquity of AI in organizations, this responsibility and accountability can't be centralized but rests with all business leaders.

Action: All organizations implementing AI need to begin educating senior business leaders on the impact of AI and AI risk on their business function, and on their responsibilities concerning it.

3. Continuous monitoring:

As mentioned, ongoing monitoring becomes necessary immediately upon adoption in national legislation. And given the requirement to maintain an inventory of AI implementations and their compliance, organizations will need to create a centralized repository of AI models and their behaviour, ensuring that a record is kept of past activity for high-risk AI products and solutions. Continuous monitoring and compliance reporting are only achieved through a platform approach to AI models.

Action: Organizations need to create a recording system to track and monitor AI products and solutions and their compliance. The senior leadership team will need to address AI regulation this year as an agenda item at a leadership team meeting and assign accountability across the organization.
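The recording system described above could start as little more than a registry of model records, each with an accountable owner and a compliance status. A minimal Python sketch; the field names, status values, and `AIRegistry` class are illustrative assumptions, not terms from the proposal:

```python
from dataclasses import dataclass
from enum import Enum

class ComplianceStatus(Enum):
    PENDING_ASSESSMENT = "pending_assessment"
    CONFORMITY_ASSESSED = "conformity_assessed"
    NON_COMPLIANT = "non_compliant"

@dataclass
class AIModelRecord:
    """One entry in a centralized repository of AI models."""
    model_id: str
    owner: str            # accountable business leader
    risk_category: str    # e.g. "high", "limited"
    status: ComplianceStatus = ComplianceStatus.PENDING_ASSESSMENT

class AIRegistry:
    def __init__(self):
        self._records: dict[str, AIModelRecord] = {}

    def register(self, record: AIModelRecord) -> None:
        self._records[record.model_id] = record

    def high_risk_pending(self) -> list[str]:
        # High-risk models still awaiting a conformity assessment.
        return [r.model_id for r in self._records.values()
                if r.risk_category == "high"
                and r.status is ComplianceStatus.PENDING_ASSESSMENT]
```

Even a simple structure like this makes it possible to answer the board-level question "which high-risk systems are not yet assessed?" in one query, which is the essence of the platform approach the text argues for.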

4. The role of purchasing:

AI models will eventually be found in all software and data products, and likely in many physical products purchased, such as cars or medical equipment. Consequently, the purchasing organization becomes a point of record and monitoring for product certification (such as the CE certification of AI-enabled products). It will need to be ready to capture the requirements of the new rules in contracts, and it will equally become the point at which AI models and products with embedded AI are recorded in the organization's asset registry.

Action: Empower Purchasing to capture AI products and solutions entering the company and to record them in the organization's asset registry.

5. Communication:

Most AI solutions and products will fall into the "limited" or "minimal" risk categories and will not require the same level of oversight as high-risk AI products and solutions. However, as the EU proposal also supports, proactive communication with the different stakeholders about the organization's AI principles, especially in the actual usage of AI-enabled systems and products, is advisable. The leadership should create communication for the three essential stakeholder constituencies: customers, employees, and suppliers. In addition, for organizations that produce physical products with embedded AI, it is essential to ensure that AI ethics and risk are part of the product descriptions.

Action: Implement and communicate an AI charter for each of the three critical stakeholders of the organization: customers, suppliers, and employees.

2021.AI serves the growing enterprise need for full management
and oversight of applied AI. Our data science expertise, combined
with the Grace Enterprise AI Platform, offers a true AI differentiator
for clients and partners worldwide. Grace helps data scientists
solve some of the most complex business problems while also
providing organizations with the most comprehensive data and AI
Governance capability for responsible, transparent, and trustworthy
model development. 2021.AI is headquartered in Copenhagen with
employees in 5 locations globally.

Ryesgade 3F, DK-2200, Copenhagen N, Denmark | CVR 3783 6303 | +45 93 91 20 21 | © 2021.AI, all rights reserved.
