
What is interpretability?

To address the "black box" problem of machine learning models, researchers have proposed interpretable machine learning. Beyond predictive accuracy, interpretability is an important measure of whether a machine learning model can be trusted.
The core idea of interpretable machine learning (IML) is that, when choosing a model, both its predictive accuracy and its interpretability should be taken into account.

ML (black box model)            IML
Accuracy                        Interpretability
  AUC                             Human-readable
  MSE                             Reason for the result
  F-measure                       Transparency and fairness
  …                               …
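
As a minimal sketch (assuming Python with scikit-learn and purely illustrative toy predictions), the accuracy-side metrics in the comparison above can be computed as follows:

# Minimal sketch: computing the accuracy-side metrics (AUC, MSE, F-measure)
# with scikit-learn on toy predictions. The numbers are illustrative only.
from sklearn.metrics import roc_auc_score, mean_squared_error, f1_score

y_true = [0, 0, 1, 1, 1, 0]                       # ground-truth labels
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3]          # predicted probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]  # thresholded class labels

print("AUC:", roc_auc_score(y_true, y_score))
print("MSE:", mean_squared_error(y_true, y_score))
print("F-measure:", f1_score(y_true, y_pred))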
Model supervision policy
Many countries and regions have issued laws and policies related to model supervision,
which in practice requires machine learning models to be interpretable:

 The People's Bank of China (China's central bank) issued the "FinTech Development Plan (2019-2021)" in August 2019. The plan
sets out requirements for technology in the financial industry: applications of financial technology must be safe,
controllable, advanced, and efficient.
 The United States and the European Union have also enacted laws and regulations governing AI applications. As early as
2011, the Federal Reserve and the U.S. Office of the Comptroller of the Currency jointly issued model risk
management guidelines (SR Letter 11-7: Supervisory Guidance on Model Risk Management).
 The European Union formally implemented the GDPR in 2018. The full name of the GDPR is the General
Data Protection Regulation, which aims to strengthen the protection of personal data and privacy. Article 22 stipulates
that individuals have the right to request an explanation of decisions made by automated (AI) systems.
Types of Interpretability
Intrinsic Interpretability:
1) The structure of the model itself is relatively simple: users can see the model's internal structure directly,
the model's output is self-explanatory, and interpretability is built in when the model is designed.
2) Common intrinsically interpretable models include logistic regression and shallow decision trees.

Transform to human-readable rules:

if petal length <= 2.45:
    class is setosa
else if petal width <= 1.75:
    class is versicolor
else:
    class is virginica
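
As a hedged sketch (assuming Python with scikit-learn and its built-in Iris dataset), a shallow decision tree of this kind can be fitted and printed as human-readable rules; split thresholds such as 2.45 and 1.75 come out of the fitted tree itself:

# Sketch: fit a depth-2 decision tree on the Iris data and dump its learned
# splits as nested if/else rules, mirroring the rule set shown above.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text renders the tree as indented conditions on the input features.
print(export_text(tree, feature_names=iris.feature_names))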
