BACHELOR OF TECHNOLOGY
(Computer Science and Engineering)
SUBMITTED BY:
AAKASH SINGH, NEELAM PANJWANI, NITKA RANA, VIVEK RAJ
2027177, 2027210, 2026995, 2027035
FEB 2023
Mrs. Aarzoo Rajora
Assistant Professor
Title: Enhancing Explainability in Artificial Intelligence through Causal
Reasoning
Abstract:
Introduction:
Artificial intelligence (AI) has made significant strides in recent years, with applications in
healthcare, finance, transportation, and other industries. However, the lack of transparency
and interpretability of AI models has become a major concern for end-users, stakeholders,
and regulators. AI models are often viewed as black boxes, where the input features and
the underlying logic that drives the model's predictions are not easily understood. This
lack of transparency can lead to mistrust and scepticism towards AI, limiting its adoption
and hindering its potential benefits.
Methodology:
We begin by describing our dataset, which is derived from the Framingham Heart Study.
The dataset contains information on several risk factors for heart disease, such as age,
blood pressure, and cholesterol levels. We use this dataset to train a logistic regression
model to predict the risk of heart disease.
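The training step described above can be sketched as follows. Since the Framingham data itself is not reproduced here, the example uses synthetic stand-in data with the same three risk factors; the coefficients and sample sizes are illustrative assumptions, not values from the study.

```python
# Sketch of the risk-prediction step: logistic regression on synthetic
# stand-in data (age, blood pressure, cholesterol are the only features;
# the data-generating coefficients below are illustrative assumptions).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
age = rng.normal(55, 10, n)     # years
bp = rng.normal(130, 15, n)     # systolic blood pressure, mmHg
chol = rng.normal(200, 30, n)   # total cholesterol, mg/dL

# Synthetic outcome: risk of heart disease rises with all three factors.
logit = 0.05 * (age - 55) + 0.03 * (bp - 130) + 0.01 * (chol - 200) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([age, bp, chol])
model = LogisticRegression().fit(X, y)

# Predicted probability of heart disease for one hypothetical individual.
p = model.predict_proba([[60, 140, 220]])[0, 1]
```

The fitted coefficients recover the direction of each synthetic risk factor, and `predict_proba` returns the per-individual risk used in the rest of the pipeline.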
Next, we apply causal reasoning to identify cause-and-effect relationships among the
input features and the model's predictions. We do this by constructing a causal graph that
represents the causal relationships between the input features and the output variable. We
use the graph to identify the causal paths that lead from the input features to the output
variable. We then use these causal paths to explain the model's predictions.
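The path-identification step can be illustrated with a small hand-specified graph. The graph below is an assumed example (age influencing the outcome both directly and through blood pressure and cholesterol); it is not the graph fitted in the study.

```python
# Minimal causal-graph sketch. Edges point from cause to effect; the
# structure is an illustrative assumption, not the study's fitted graph.
graph = {
    "age": ["blood_pressure", "cholesterol", "heart_disease"],
    "blood_pressure": ["heart_disease"],
    "cholesterol": ["heart_disease"],
    "heart_disease": [],
}

def causal_paths(graph, source, target, path=None):
    """Enumerate every directed path from source to target in a DAG."""
    path = (path or []) + [source]
    if source == target:
        return [path]
    paths = []
    for child in graph.get(source, []):
        paths.extend(causal_paths(graph, child, target, path))
    return paths

paths = causal_paths(graph, "age", "heart_disease")
# Age reaches the outcome directly and via blood pressure and cholesterol,
# so three causal paths explain age's contribution to the prediction.
```

Each returned path is one causal route from a feature to the outcome, which is exactly the object used to explain an individual prediction.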
We also use the causal graph to identify the most important features that contribute to
the model's predictions. We do this by computing the causal effect of each feature on the
output variable. This allows us to identify the features that have the strongest causal
influence on the model's predictions.
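One simple way to realize the effect computation described above is an interventional contrast: force a feature to a "high" versus "low" value for every individual (a do()-style intervention on the model inputs) and compare the average predicted risk. This particular estimator, along with the synthetic data and the contrast values, is an assumption for illustration rather than the exact procedure used in the study.

```python
# Sketch of ranking features by an interventional effect estimate on a
# fitted risk model. Data, coefficients, and contrast values are
# synthetic stand-ins chosen for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
# Columns: age, blood pressure, cholesterol.
X = rng.normal([55, 130, 200], [10, 15, 30], size=(n, 3))
logit = 0.08 * (X[:, 0] - 55) + 0.04 * (X[:, 1] - 130) \
    + 0.005 * (X[:, 2] - 200) - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))
model = LogisticRegression().fit(X, y)

def average_causal_effect(model, X, j, low, high):
    """Mean predicted risk when feature j is forced high vs. low."""
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, j], X_lo[:, j] = high, low
    return (model.predict_proba(X_hi)[:, 1]
            - model.predict_proba(X_lo)[:, 1]).mean()

names = ["age", "blood_pressure", "cholesterol"]
# Contrast each feature at roughly +/- one standard deviation.
spans = [(45, 65), (115, 145), (170, 230)]
effects = {name: average_causal_effect(model, X, j, lo, hi)
           for j, (name, (lo, hi)) in enumerate(zip(names, spans))}
```

Sorting `effects` by magnitude then yields the feature ranking; in a real analysis the intervention would be propagated through the causal graph (so that, for example, intervening on age also shifts blood pressure) rather than applied to the model inputs alone.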
Results:
Our results show that the proposed approach enhances the interpretability of the AI
model while maintaining a high level of privacy and security. We demonstrate that the
causal graph provides insights into the underlying causal mechanisms of the model,
enabling a better understanding of the reasons behind the model's predictions.
We also show that the causal graph enables us to identify the most important features
that contribute to the model's predictions. Our results indicate that age, blood pressure,
and cholesterol levels are the most important risk factors for heart disease. This
information can be used to develop targeted interventions to reduce the risk of heart
disease in high-risk individuals.
Conclusion:
In conclusion, we have demonstrated that causal reasoning has the potential to enhance
the explainability of AI models. Our approach provides a better understanding of the
underlying causal mechanisms of the model and enables us to identify the most important
features that contribute to the model's predictions. This information can be used to
develop targeted interventions and improve healthcare outcomes. We believe that the
proposed approach has the potential to enable the widespread adoption of AI across
various industries, while maintaining a high level of privacy and security.
However, there are still several challenges to be addressed in enhancing the explainability
of AI models through causal reasoning. One of the main challenges is the difficulty in
identifying the true causal relationships in complex datasets with many variables. Another
challenge is the need for domain knowledge and expertise in constructing causal graphs
and interpreting the results.