
Module 3: Ethical Principles

Unit 2: Key Ethical Principles
Learning Outcomes
By the end of this unit, you will be able to:

• Outline the key ethical principles for AI for health


Key Ethical Principles

The WHO Expert Group has identified six ethical principles to guide the
development and use of AI technology for health.

1. Protect autonomy
2. Promote human well-being, human safety and the public interest
3. Ensure transparency, explainability and intelligibility
4. Foster responsibility and accountability
5. Ensure inclusiveness and equity
6. Promote artificial intelligence that is responsive and sustainable
1. Protect Autonomy

Protect Autonomy

Adoption of AI can lead to situations where decision-making could be transferred to machines.

The principle of autonomy requires that any extension of machine autonomy does not undermine human autonomy.

Humans should remain in full control of health care systems and medical decisions.
Protect Autonomy

AI systems should be designed to:

• Conform to ethical principles and human rights
• Assist humans in making informed decisions
Protect Autonomy: Human Oversight

The level of human oversight required may depend on the risks associated with an AI system.

Oversight should include effective, transparent monitoring of human values and moral considerations when relying on AI.


Protect Autonomy: Human Oversight

Human oversight could include deciding whether to:

• Use an AI system for a particular decision
• Vary the level of human discretion and decision-making when relying on AI
• Develop AI technologies that can rank decisions


Protect Autonomy: Data Protection

Respect for autonomy also means:

• Protecting privacy and confidentiality
• Ensuring the free and informed consent of the individual
Protect Autonomy: Informed Consent

AI should not be used to manipulate humans in a health care system without free and informed (valid) consent.

The use of AI algorithms in diagnosis, prognosis, prevention and treatment plans should be incorporated into the consent process.

Essential services should not be denied if an individual withholds consent.
2. Promote Human Well-Being, Human Safety and the Public Interest
Promote Human Well-Being

AI technologies should not harm people.

They should satisfy regulatory requirements for safety, accuracy and efficacy before deployment.

Measures should be in place to ensure quality control and improvement.


Promote Human Well-Being

Funders, developers and medical professionals have a duty to measure and monitor the performance of AI to:

• Ensure that AI technologies work as designed
• Assess whether they have any detrimental impact on individual patients or groups
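As an illustration only, this kind of monitoring can be automated as a routine comparison of live performance against the level demonstrated at approval. The Python sketch below is a minimal example; the record fields, baseline value and tolerance are assumptions made for the illustration, not part of the WHO guidance.

```python
# Illustrative post-deployment monitoring: compare recent accuracy
# with the accuracy measured at approval. Field names, the baseline
# value and the tolerance are assumptions for this example.
from dataclasses import dataclass

@dataclass
class LoggedCase:
    prediction: int  # model output, e.g. 1 = condition present
    outcome: int     # clinically confirmed result, recorded later

def accuracy(cases: list) -> float:
    return sum(c.prediction == c.outcome for c in cases) / len(cases)

def check_drift(recent: list, approval_accuracy: float = 0.92,
                tolerance: float = 0.03) -> None:
    acc = accuracy(recent)
    if acc < approval_accuracy - tolerance:
        # "Works as designed" can no longer be assumed: escalate
        # to human review before continued clinical use.
        print(f"ALERT: accuracy {acc:.2f} below approval level "
              f"{approval_accuracy:.2f}; escalate for review")
    else:
        print(f"OK: accuracy {acc:.2f} within tolerance")

# A week of logged cases (synthetic values for illustration).
week = [LoggedCase(1, 1), LoggedCase(0, 0), LoggedCase(1, 0), LoggedCase(0, 0)]
check_drift(week)
```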


Promote Human Well-Being

AI technologies that provide insights which inform diagnosis, prognosis or other treatment should be carefully managed and balanced against any “duty to warn”.
Promote Human Well-Being

Appropriate safeguards should be in place to protect individuals from stigmatization or discrimination due to their health status.
3. Ensure Transparency, Explainability and Intelligibility
Transparency, Explainability and Intelligibility

AI should be understandable, or intelligible, to developers, medical professionals and regulators.

Two broad approaches to ensuring intelligibility are improving the transparency and the explainability of AI technology.
Transparency
Transparency requires sufficient information to be documented before the deployment of an AI technology.

This helps to:

• Improve system quality
• Protect patient and public health safety

It must be possible to audit an AI technology.


Transparency

Transparency should include accurate information about:

• The assumptions and limitations of the technology
• Operating protocols
• The properties of the data
• The development of the algorithmic model
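In practice, these items are often captured in a structured document sometimes called a “model card”. The sketch below is a minimal, hypothetical illustration in Python; the schema and every field value are assumptions for this example, not a WHO-specified format.

```python
# A minimal, hypothetical "model card" covering the items listed
# above. The schema and all field values are illustrative, not a
# WHO-mandated format.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    name: str
    assumptions_and_limitations: list
    operating_protocol: str
    data_properties: dict   # provenance, size, demographics, ...
    model_development: str  # training procedure, validation, versioning

card = ModelCard(
    name="skin-lesion-classifier v1.2 (hypothetical)",
    assumptions_and_limitations=[
        "Trained on dermoscopic images only",
        "Not validated for patients under 18",
    ],
    operating_protocol="Decision support only; a clinician makes the final call",
    data_properties={"images": 25000, "collection_sites": 4,
                     "skin_types": "Fitzpatrick I-IV only"},
    model_development="CNN fine-tuned from a public backbone; "
                      "held-out test set; external validation pending",
)

# Publishing the card in a machine-readable form supports auditing.
print(json.dumps(asdict(card), indent=2))
```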



Explainability
AI technologies should be explainable, meaning:

• Clear information should be supplied to those who request or require it
• System information and communication must be tailored to affected populations


Explainability
The complexity of many AI technologies might frustrate both the explainer and the recipient.

In some instances, there is a possible trade-off between:

• Full explainability of an algorithm (at the cost of accuracy)
• Improved accuracy (at the cost of explainability)


Transparency, Explainability and Intelligibility

All algorithms should be rigorously tested, in the settings in which the technology will be used, to:

• Ensure they meet safety and efficacy standards
• Identify performance differences based on human characteristics

There should be robust, independent oversight of such tests and evaluation.
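One concrete way to surface performance differences based on human characteristics is to compute the same test metric separately for each subgroup. A minimal sketch follows, assuming labelled test cases tagged with a group attribute; the group labels, field names and gap threshold are all illustrative assumptions.

```python
# Illustrative subgroup evaluation: compute sensitivity per group
# and flag large gaps. Group labels, field names and the gap
# threshold are assumptions for this example.
from collections import defaultdict

def sensitivity(cases):
    """Fraction of truly positive cases that the system detects."""
    positives = [c for c in cases if c["truth"] == 1]
    return sum(c["prediction"] == 1 for c in positives) / len(positives)

def subgroup_report(cases, max_gap=0.05):
    by_group = defaultdict(list)
    for case in cases:
        by_group[case["group"]].append(case)
    scores = {g: sensitivity(cs) for g, cs in by_group.items()}
    best = max(scores.values())
    for group, score in sorted(scores.items()):
        flag = "  <-- investigate" if best - score > max_gap else ""
        print(f"{group:>10}: sensitivity {score:.2f}{flag}")

# Synthetic test cases tagged with a (hypothetical) skin-type group.
cases = [
    {"group": "type I-II", "truth": 1, "prediction": 1},
    {"group": "type I-II", "truth": 1, "prediction": 1},
    {"group": "type V-VI", "truth": 1, "prediction": 0},
    {"group": "type V-VI", "truth": 1, "prediction": 1},
]
subgroup_report(cases)
```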
Transparency, Explainability and Intelligibility

Health care institutions, health systems and public health agencies should regularly publish information about:

• How the decision to adopt an AI technology was made
• How the technology will be evaluated, and how often
• Its uses and known limitations
• Its role in decision-making
4. Foster Responsibility and Accountability
Foster Responsibility and Accountability

Although AI technologies usually perform narrow, specific tasks, it is the responsibility of human stakeholders to ensure that:

• The technologies can perform those tasks
• They are used under appropriate conditions
Responsibility

To use AI technologies responsibly, health care providers require:

• A clear, transparent specification of the tasks that systems can perform
• The conditions under which they can achieve the desired level of performance
Responsibility: Human Warranty

Responsibility can be assured by the application of “human warranty”.

The goal is to ensure that the AI technology:

• Is medically effective
• Can be evaluated

An example of such legislation is the bioethics law introduced in France in 2021.
Accountability

When something does go wrong, there should be accountability.

This should include:

• Appropriate redress mechanisms for individuals and groups adversely affected by algorithmically informed decisions
• Prompt access to effective remedies from those attributed with responsibility for the technologies’ use
Responsibility and Accountability
The use of AI technologies in medicine requires attribution of responsibility, in which responsibility is distributed among numerous agents.

When medical decisions by AI technologies harm individuals, responsibility and accountability processes should clearly identify the relative roles of manufacturers and medical professional users.
5. Ensure Inclusiveness and Equity
Inclusiveness and Equity

Inclusiveness means that AI used in health care should be designed to encourage the widest possible appropriate, equitable use and access.
Inclusiveness and Equity

AI technology should be made available in both high-income countries (HIC) and low- and middle-income countries (LMIC). To ensure this is feasible:

• AI developers and vendors should account for diversity of language and for ability and forms of communication, in order to avoid barriers to use.
• Industry and government should ensure that the “digital divide” is not widened, in order to ensure equitable access.
Inclusiveness and Equity
AI technologies should not be biased.

Bias is a threat to inclusiveness and equity because it represents a departure from equal treatment.

For example, a system designed to diagnose cancerous skin lesions that is trained with data on one skin colour may not generate accurate results for other skin colours.
Inclusiveness and Equity

AI developers should ensure that AI data are free of sampling bias and are:

• Accurate
• Complete
• Diverse

These parties have a duty to address potential bias and to avoid introducing or worsening health disparities.
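A simple first check for sampling bias is to compare the composition of the training data with the population the system will serve. The sketch below illustrates that idea; all group labels, counts and population shares are invented for the example.

```python
# Illustrative sampling-bias check: compare group shares in the
# training data with the population the system will serve. All
# numbers below are invented for this example.
from collections import Counter

def composition_gaps(train_groups, population_shares, max_gap=0.10):
    counts = Counter(train_groups)
    total = sum(counts.values())
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > max_gap:
            print(f"WARNING: '{group}' is {observed:.0%} of training "
                  f"data but about {expected:.0%} of the population")

train_groups = (["type I-II"] * 900 + ["type III-IV"] * 80
                + ["type V-VI"] * 20)
population = {"type I-II": 0.45, "type III-IV": 0.35, "type V-VI": 0.20}
composition_gaps(train_groups, population)
```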
Inclusiveness and Equity
AI technologies must be monitored and evaluated to identify disproportionate effects on specific groups of people, such as when technologies mirror or exacerbate existing bias and discrimination.

Special provisions should be made to protect the rights and welfare of vulnerable persons, with appropriate mechanisms for redress.
6. Promote AI that is Responsive and Sustainable
Responsive and Sustainable
Responsiveness requires designers, developers and medical professionals to continuously, systematically and transparently examine an AI technology.

This will help determine whether it is responding adequately to agreed ethical frameworks.
Responsive and Sustainable

Responsiveness requires AI to be consistent with wider efforts to promote health systems and environmental and workplace sustainability.
Responsive and Sustainable

AI technologies should be introduced only if they can be fully integrated and sustained in the health care system.

AI systems should be designed to minimize their environmental effects and to align with the UN Sustainable Development Goals.

Governments and companies should address anticipated workplace disruptions.
You have now completed Unit 2 of Module 3: Ethical Principles.

Next: End-of-module quiz.
