
BUILDING TRUST IN AI SYSTEMS: WHERE ARE WE NOW?
A DISCUSSION OF APPROACHES IN MOTION

Bakul Patel, Senior Director, Global Digital Health Strategy & Regulatory, Google
Koen Cobbaert, Senior Manager – Quality, Standards & Regulations, Philips
Rohit Nayak, Co-Founder & CEO, Band Connect Inc. | CEO, Electronic Registry Systems, Inc.
Discussion Outline
• Why Trust? Why is it important?
• Different names and flavors – Transparency, Trustworthiness, Explainability
• Driving Forces: GDPR, XAI, EU AI Act, AI Bill of Rights

This discussion reviews both regulatory policy and the steps being taken by a significant industry player.
Driving Forces for Explainable AI

Update: AI Bill of Rights
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Explainable AI

• EU GDPR – Right to Explanation
• IEEE – Standard for XAI (eXplainable Artificial Intelligence) for Achieving Clarity and Interoperability of AI Systems Design
• IEEE – Guide for an Architectural Framework for Explainable Artificial Intelligence
• DARPA's XAI Initiative
• …
EU AI Act
What is it? Why are we talking about it? Where does it stand?
• Legislative draft available
• Currently in the legislative process
• Adoption expected Q2 2023
• Date of application Q2 2025 (?)
AI System – Commission Definition
Proposed AIA Art. 3(1)
an AI system is
software that is developed with one or more of the techniques and
approaches listed in Annex I and can, for a given set of human-defined
objectives, generate outputs such as content, predictions,
recommendations, or decisions influencing the environments they interact
with.

Annex I (can be updated through delegated acts):

a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
c) Statistical approaches, Bayesian estimation, search and optimization methods.
The definition reads as: AI system = any software application.

In listing technologies considered AI, Annex I tries to compensate for a vague AI system definition, but as technologies can be added or removed over time, it increases legal uncertainty.

Annex I technologies: machine learning; inference and deductive engines; reasoning and expert systems; search & optimization methods; logic- and inductive programming.

The AI Act contains mandatory requirements for:

High-Risk AI systems
= regulated products or safety components of regulated products
which are subject to third-party assessment under the relevant sectoral legislation

and for AI systems with transparency risks.

Implication: medical devices that are, or that contain, software as a safety component, and that are class IIa/B or higher, are subject to the AI Act.
Legislative Lasagna

Today, the technical documentation of a closed-loop insulin pump needs to demonstrate compliance with three different legislations before the CE mark can be affixed.
*In the future, three additional legislations may come on top.
Standards with high operationalization value for implementing AI Act requirements

Overlaps & conflicts: extra costs, for little or no added value

• ISO/IEC 4213 Information technology — Artificial intelligence — Assessment of ML classification performance
• ISO/IEC 5259-3 Data quality for analytics and ML — Part 3: Data quality management requirements and guidelines
• ISO/IEC 5338 Information technology — Artificial intelligence — AI system life cycle processes (overlaps with IEC 62304)
• ISO/IEC 5469 Artificial intelligence — Functional safety and AI systems
• ISO/IEC 23894-2 Information technology — Artificial intelligence — Risk management (overlaps with ISO 14971)
• ISO/IEC 24027 Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making
• ISO/IEC 24029-1 Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 1: Overview
• ISO/IEC 38507 Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations
• ISO/IEC 42001 Information technology — Artificial intelligence — Management system (overlaps with ISO 13485)

List compiled by AI Watch, a joint initiative of the European Commission and its Joint Research Centre.
The ISO/IEC SC 42 standards listed above are still under development.
Lessons from Industry
Where are we headed?

Artificial Intelligence Principles @Google

1. Be socially beneficial.
2. Avoid creating or reinforcing unfair bias.
3. Be built and tested for safety.
4. Be accountable to people.
5. Incorporate privacy design principles.
6. Uphold high standards of scientific excellence.
7. Be made available for uses that accord with these principles.

Which includes things we will not do:
We will not pursue certain AI applications…

ai.google/principles/
https://arxiv.org/pdf/1702.08608.pdf

Model Cards for Model Reporting
https://arxiv.org/pdf/1810.03993.pdf

https://cloud.google.com/explainable-ai/
Explainable AI
Understand AI output and build trust

Explainable AI is a set of tools and frameworks to help you understand and interpret predictions made by your machine learning models, natively integrated with a number of Google's products and services. With it, you can debug and improve model performance, and help others understand your models' behavior. You can also generate feature attributions for model predictions in AutoML Tables, BigQuery ML and Vertex AI, and visually investigate model behavior using the What-If Tool.
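The idea behind feature attribution can be illustrated with a simple permutation-based sketch: shuffle one input feature at a time and measure how much a model's accuracy drops. The toy data and the fixed rule-based `predict` function below are hypothetical; this is an illustration of the concept, not the Vertex AI attribution API.

```python
# Illustrative sketch of feature attribution by permutation:
# shuffle one feature at a time and measure the accuracy drop.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
# Ground truth depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def predict(X):
    # A fixed "model" matching the data-generating rule.
    return (2.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

baseline = (predict(X) == y).mean()

attributions = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature j's link to y
    attributions.append(baseline - (predict(Xp) == y).mean())

print(attributions)  # largest drop for feature 0; exactly 0 for unused feature 2
```

A feature the model ignores shows zero accuracy drop, while the dominant feature shows the largest drop, which is the intuition behind attribution scores.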
What-If Tool

Interactive, visual debugging of black-box models

Website

Probe classification and regression models, performing what-if analysis and analyzing fairness.
Language Interpretability Tool

Interactive, extensible, visual debugging of NLP models and beyond

Website

Successor to the What-If Tool.

Probe models of all types (with a focus on NLP), explore model internals, prediction explanations, fairness, counterfactual generation, and more.
Know Your Data

KYD is an ML-based dataset exploration tool for rich, unstructured data:
- Automatically computes signals
- Surfaces the most biased data features automatically through sorting and coloring
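The "surface the most biased features" idea can be sketched with a toy signal: for each feature, compute the gap in positive-label rates across its values, then sort features by that gap. The dataset, feature names, and the `signal` function are hypothetical; KYD itself computes much richer ML-based signals.

```python
# Hedged sketch: rank features by a simple bias signal and sort.
import random

random.seed(0)
rows = [{"age_group": random.choice(["young", "old"]),
         "region": random.choice(["north", "south"]),
         "label": None} for _ in range(1000)]
for r in rows:
    # Inject a bias: the label correlates with age_group, not region.
    p = 0.8 if r["age_group"] == "young" else 0.3
    r["label"] = 1 if random.random() < p else 0

def signal(feature):
    # Max gap in positive-label rate across the feature's values.
    rates = []
    for v in {r[feature] for r in rows}:
        group = [r["label"] for r in rows if r[feature] == v]
        rates.append(sum(group) / len(group))
    return max(rates) - min(rates)

ranked = sorted(["age_group", "region"], key=signal, reverse=True)
print(ranked)  # age_group ranks first: its label-rate gap is large
```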
Fairness Indicators

Open-source library that enables users to evaluate model performance for specific user groups ("sliced" analysis):
- Comes pre-loaded with common fairness metrics
- Provides an interactive dashboard for rapid analysis & sharing insights with others
- Run analyses and visualize results in Jupyter notebooks or as part of TFX pipelines
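The core of "sliced" analysis is computing a metric per user group rather than one aggregate number. A minimal sketch, with hypothetical prediction records (Fairness Indicators itself provides many more metrics plus the dashboard):

```python
# Minimal sketch of sliced evaluation: one accuracy value per group.
from collections import defaultdict

# (group, true_label, predicted_label) records -- hypothetical.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 1),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 0),
]

by_group = defaultdict(list)
for group, y_true, y_pred in records:
    by_group[group].append(y_true == y_pred)

sliced_accuracy = {g: sum(hits) / len(hits) for g, hits in by_group.items()}
print(sliced_accuracy)  # {'A': 0.75, 'B': 0.5}
```

A single aggregate accuracy (0.625 here) would hide the gap between the two groups, which is exactly what slicing is meant to expose.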
Model Remediation Library

Open-source library that enables users to train classifiers that equalize performance (provide "equal treatment") across a dimension, e.g. demographic group.

Based on the MinDiff modeling method (paper: "Toward a better trade-off between performance and fairness with kernel-based distribution matching")
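Conceptually, MinDiff adds a penalty to the training loss that pushes the model's score distributions for two groups closer together. A very simplified sketch, using a mean-gap penalty as a stand-in for the kernel-based distribution matching of the paper; all numbers are hypothetical:

```python
# Conceptual sketch of a MinDiff-style loss: task loss plus a penalty
# on the gap between score distributions for two groups.
import numpy as np

rng = np.random.default_rng(0)
scores_a = rng.normal(0.7, 0.1, 100)  # model scores, group A (toy)
scores_b = rng.normal(0.4, 0.1, 100)  # model scores, group B (toy)
task_loss = 0.25                      # stand-in for the usual loss

min_diff_weight = 1.5
# Simplified penalty: difference of group means (the real method
# matches full distributions with a kernel-based MMD term).
min_diff_penalty = abs(scores_a.mean() - scores_b.mean())
total_loss = task_loss + min_diff_weight * min_diff_penalty
print(round(total_loss, 3))
```

During training, minimizing `total_loss` trades off task performance against shrinking the between-group score gap; the weight controls that trade-off.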
Model Cards

Model Cards offer a transparency framework for organizing & communicating key information about a model in a standardized way.

The open-source Model Card Toolkit library facilitates and streamlines the creation of model cards.

"Model Cards for Model Reporting" paper (2019)
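The "standardized way" can be pictured as a fixed schema a model's key facts slot into. A sketch with hypothetical field names and values, loosely following the sections of the Model Cards paper (this is not the Model Card Toolkit API):

```python
# Illustrative model card as a standardized structure.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    limitations: str
    evaluation_data: str
    metrics: dict = field(default_factory=dict)

card = ModelCard(
    name="pneumonia-classifier",          # hypothetical example
    version="1.2.0",
    intended_use="Triage support; not a standalone diagnostic.",
    limitations="Trained on adult chest X-rays only.",
    evaluation_data="Held-out multi-site test set.",
    metrics={"auc": 0.91, "sensitivity": 0.88},
)
print(json.dumps(asdict(card), indent=2))
```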
Data Cards

Data Cards offer a structured way to document datasets & facilitate informed decision making for various stakeholders.

The Data Cards Playbook is a people-centered resource to help teams create customizable dataset documentation.
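The dataset-side counterpart looks similar: structured documentation of where the data came from and what it should (not) be used for. The fields and values below are hypothetical examples, not the Data Cards Playbook's schema:

```python
# Illustrative data card: structured dataset documentation.
import json

data_card = {
    "name": "chest-xray-triage-train",  # hypothetical dataset
    "description": "De-identified adult chest X-rays with triage labels.",
    "collection": "Three hospital sites, 2018-2021.",
    "known_limitations": ["Adults only", "Single imaging vendor"],
    "intended_uses": ["Training triage-support models"],
    "sensitive_attributes": ["age", "sex"],
}
print(json.dumps(data_card, indent=2))
```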
DISCUSSION
BUILDING TRUST IN AI SYSTEMS: WHERE
ARE WE NOW?
