
Computational Topology-Neural Networks Meeting

Rocio Gonzalez-Díaz, Universidad de Sevilla

Sevilla, November 2-5, 2021

https://sites.google.com/view/comptopnnmeeting/2021

Standardization activities on Trustworthy AI


Maurizio Mongelli, CNR-IEIIT
maurizio.mongelli@ieiit.cnr.it

1
Index of the presentation
• Trustworthy AI
– standardization
– EASA
• building blocks of the certification process*

• Example: visual landing


– Metrics, data quality, learning process

• Challenges
– reference to recent EASA tender
3
* We are member of the SAE G-34/EUROCAE WG-114 AI in Aviation Committee: https://bit.ly/3mRVky0
Trustworthy AI

4
Trustworthy AI
Tesla Autopilot https://bit.ly/3jRPV85

• Trustworthy AI
• "Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems. Without AI systems – and the human beings behind them – being demonstrably worthy of trust, unwanted consequences may ensue and their uptake might be hindered, preventing the realisation of the potentially vast social and economic benefits that they can bring."
https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

• ISO/IEC JTC 1/SC 42 https://bit.ly/3mr7pd2, SOTIF (automotive, https://bit.ly/3nFTAHa), EASA (avionics, https://bit.ly/3Er2cYY): provide guidance for developing Artificial Intelligence applications
M. Mongelli, "Robustness of AI models and the way to AI certification," SAE G-34/WG-114 Tech Talk Regular
Sessions, Sept. 2021, https://bit.ly/3CwZEYO.
European Aviation Safety Agency
(EASA)

7
EASA concept paper

https://bit.ly/3Er2cYY
8
Roadmap

9
Levels of automation

10
Building blocks

11
Building blocks

12
Risk analysis

13
Risk analysis with ML in the loop

Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS), https://arxiv.org/abs/2102.01564.
14
www.safecop.eu ITA use case: https://www.youtube.com/watch?v=9tvJCrLMnbI
Risk mitigation (vehicle platooning)

15
Risk mitigation (vehicle platooning)

16
Risk mitigation (vehicle platooning)

M. Mongelli, "Design of countermeasure to packet falsification in vehicle platooning by explainable artificial intelligence," Computer Communications, 2021, https://doi.org/10.1016/j.comcom.2021.06.026.
17
ConOps and ODD

18
ConOps and ODD

19
"Concept of Design Assurance for Neural Networks" (CoDANN), https://bit.ly/3vX4egt.
ConOps and ODD

20
Concept of Operation (ConOps)

21
Concept of Operation (ConOps)

22
Concept of Operation (ConOps)

23
Operational Design Domain (ODD)

24
W-shaped process for AI certification

25
(traditional) V-cycle

26
Traditional V-model

27
Traditional V-model

28
W-cycle

29
W-shaped process (why W?)

30
Example: visual landing

31
Visual landing ConOps

32
Visual landing ConOps

33
ConOps and CNN

34
Use case classification

35
Use case classification

36
Metrics

37
Metrics in requirements

38
Metrics in requirements

39
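The metric slides themselves are figures; as a purely hypothetical illustration (the 1% threshold and the counts below are invented, not taken from the deck), a safety requirement such as "the missed-detection rate on the validation set shall not exceed 1%" can be checked directly from confusion-matrix counts:

```python
def false_negative_rate(tp: int, fn: int) -> float:
    """FNR = FN / (FN + TP): the fraction of real events the model misses."""
    return fn / (fn + tp)

# Illustrative counts from a hypothetical validation run
tp, fn = 990, 10
fnr = false_negative_rate(tp, fn)
requirement_met = fnr <= 0.01   # hypothetical requirement threshold
print(f"FNR = {fnr:.3f}, requirement met: {requirement_met}")
```

Expressing the requirement as a checkable bound on a metric is what lets the verification phase of the lifecycle be automated.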
Data quality

40
Data quality

41
Data quality

42
Data analyst vs safety engineer

43
Interaction between
data analyst and safety engineer

Guidance on the Assurance of Machine Learning in Autonomous Systems (AMLAS), https://arxiv.org/abs/2102.01564.
44
Learning process

45
Learning process: generalization

46
Learning process: generalization

47
Learning process: generalization

48
Learning theory

49
Learning theory: input

Mathematical Foundations of Supervised Learning, shorturl.at/drzEL.


Learning theory: output
Learning theory: goal
Learning theory
Learning theory: ERM
Learning theory: errors
Learning theory: errors
Learning theory: bound?
Learning theory: bound?
Learning theory: PAC
Learning theory: Vapnik-Chervonenkis

ℙ_S( sup_{h∈ℱ} |ℛ(h) − ℛ̂(h)| ≤ √( (𝒱(ℱ) + log(1/δ)) / n ) ) ≥ 1 − δ

𝒱(ℱ) represents a measure of the complexity of the class ℱ.

Roughly speaking:
1) the size of the training set should be n ≥ (1/ε) · log(1/δ) · 𝒱(ℱ);
2) 𝒱(ℱ) (the VC dimension) is the maximum number of points that can be shattered, i.e. separated in every possible way, by functions in ℱ.
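The rough sample-size rule above can be turned into a back-of-the-envelope calculator. This sketch just evaluates n ≥ (1/ε) · log(1/δ) · 𝒱(ℱ) as reconstructed from the slide; the bound omits constants, so the resulting numbers are indicative only:

```python
import math

def sample_size(vc_dim: float, eps: float, delta: float) -> int:
    """Rule-of-thumb training-set size n >= (1/eps) * log(1/delta) * V(F),
    following the 'roughly speaking' bound on the slide (constants omitted)."""
    return math.ceil((1.0 / eps) * math.log(1.0 / delta) * vc_dim)

# e.g. a linear classifier in d = 10 dimensions has VC dimension d + 1 = 11
print(sample_size(vc_dim=11, eps=0.05, delta=0.05))  # 660
```

Note how the required n grows only logarithmically in 1/δ but linearly in the class complexity 𝒱(ℱ).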

Learning theory: ℱ complexity
Learning theory: XOR via perceptron
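The classical point behind this slide title is that a single perceptron cannot represent XOR. A brute-force check (my own construction, not from the slides) confirms that no linear threshold unit on a dense weight grid exceeds 3/4 accuracy on the XOR truth table:

```python
import itertools

# XOR truth table
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 0]

def accuracy(w1, w2, b):
    """Accuracy of the linear threshold unit step(w1*x1 + w2*x2 + b) on XOR."""
    preds = [1 if w1 * x1 + w2 * x2 + b > 0 else 0 for x1, x2 in X]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# Sweep a grid of weights and biases: no perceptron gets all four right.
grid = [i / 10 for i in range(-20, 21)]
best = max(accuracy(w1, w2, b) for w1, w2, b in itertools.product(grid, repeat=3))
print(best)  # 0.75: XOR needs a hidden layer
```

Geometrically, no single line separates {(0,1), (1,0)} from {(0,0), (1,1)}, which is why a hidden layer (and hence higher 𝒱(ℱ)) is needed.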
Learning theory: trade-off
Learning theory: bound from test set
Learning process: robustness

67
Robustness and resilient work domains

68
Enhancing the Reliability of Out-of-distribution Image Detection in Neural Networks, https://arxiv.org/abs/1706.02690.
Learning process: robustness

69
Learning process: robustness

70
Learning process: robustness

71
Challenges (EASA tender Nov. 2021)

https://www.easa.europa.eu/the-agency/procurement/calls-for-tender/easa2021hvp18

72
Challenges: generalization

73
Challenges: data quality

74
Challenges: robustness

75
THANK YOU … Q&A

All Rights Reserved © Rulex, Inc. 2015


77
Scope and objectives

79
Learning process: robustness
Out-of-Distribution Detector for Neural Networks:
https://github.com/facebookresearch/odin

80
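ODIN, the detector linked above, combines temperature-scaled softmax with a small input perturbation. This is a pure-Python sketch of the temperature-scaling part only; the perturbation step and any realistic decision threshold are omitted, and the logits below are made up for illustration:

```python
import math

def odin_score(logits, temperature=1000.0):
    """Max softmax probability after temperature scaling (the core of ODIN).
    In-distribution inputs tend to keep a higher score than OOD inputs.
    (ODIN additionally perturbs the input; that step is omitted here.)"""
    z = [l / temperature for l in logits]
    m = max(z)
    exps = [math.exp(v - m) for v in z]   # shift by max for numerical stability
    return max(exps) / sum(exps)

confident = [12.0, 1.0, 0.5]   # sharp logits: likely in-distribution
uncertain = [2.1, 2.0, 1.9]    # flat logits: OOD-like
print(odin_score(confident) > odin_score(uncertain))  # True
```

Thresholding this score gives the accept/reject decision that keeps the model inside its resilient work domain.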
Use case driven

81
eXplainable AI (XAI)

82
eXplainable AI

83
Traditional V-model

84
EASA FUG4L1 objectives: XAI

85
EASA FUG4L1 objectives: XAI

86
EASA FUG4L1 objectives: XAI,
human interaction

87
EASA FUG4L1 objectives: XAI,
questionable, explanation is inherent to the
classification problem

88
EASA FUG4L1 objectives: XAI,
confidence of the rules…

89
EASA FUG4L1 objectives: XAI,
… and rule distance

S. Narteni, M. Ferretti, V. Rampa, M. Mongelli, "Bag-of-Words Similarity in eXplainable AI," submitted to ACM K-CAP 2021, https://bit.ly/38DQigz.
90
EASA FUG4L1 objectives: reliability

91
EASA FUG4L1 objectives:
Trustworthiness Analysis

92
