
Role of Explainable Artificial Intelligence Approaches in

Cyber Security

Nishant Bawane
nishant.bawane.mtech2022@sitpune.edu.in
PRN:22070147002

Department of Electronics & Telecommunication Engineering,


Embedded System

Symbiosis Institute of Technology, Pune

Guided By
Dr. Durgesh Nandan

(SIT) Nishant Bawane November 23, 2023 1 / 18


Overview
Introduction
Literature Review
Importance of Explainable AI
Preliminaries
Explainable Models
Explainable Classification of Cybersecurity
Security Properties of XAI
The Concept of XAI
Application and Results
Conclusion and Future Scope
Reference



Introduction

Overview: Deep learning's extensive use in cybersecurity, particularly
in speech recognition and computer vision applications, is explored.
This signifies its pivotal role in advanced technologies.
Challenges: The major hurdle discussed is the inability of AI
models to articulate clear explanations for their decisions, posing a
significant challenge to their adoption and trust.
Significance of XAI: Emphasizes the critical importance of
Explainable AI (XAI) as a research domain, driven by the urgent
need to make AI models interpretable for humans in practical
scenarios such as loan approval.
Research Focus: The research's primary emphasis is a thorough
literature assessment. The objective is to examine the intricate
relationship between cybersecurity and XAI, addressing challenges
and opportunities at this dynamic intersection.



Literature Review

AI in Cybersecurity: AI plays a crucial role in network, computer,
and mobile security.
XAI in Cybersecurity: The paper underscores XAI's importance in
intrusion detection and malware classification for interpretable
cybersecurity decisions.
Security and XAI: Examining attacks on XAI pipelines, the paper
focuses on defenses that ensure trustworthiness in AI-driven
cybersecurity.
Open Issues: Addressing XAI's lack of uniformity, the literature notes
ongoing efforts through six "W" questions and practical strategies.



Importance of Explainable AI

Operational Challenges: Security operators are overwhelmed by a
high volume of security notifications, many of which are false
positives.
XAI Solution: Introduces XAI as a way to reduce alert fatigue,
enabling better threat recognition.
Enhancing Trust: Emphasizes the role of XAI in providing
transparency to AI models, ultimately building trust in their decisions.
Literature Assessment: Reviews existing literature on the security of
XAI, particularly its applications in intrusion detection and malware
classification.



Importance of Explainable AI

Figure: Relationship between artificial intelligence and explainable
artificial intelligence.



Preliminaries

Key Terms: Defines crucial terms such as explainability and
understandability in the context of AI.
XAI Techniques: Introduces prominent XAI methods such as
Layer-wise Relevance Propagation (LRP) and Sensitivity Analysis.
Privacy Considerations: Discusses the significance of privacy,
emphasizing GDPR compliance and data anonymization.
Decision-Making Impact: Explores how XAI contributes to better
decision-making processes.
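Sensitivity Analysis, one of the XAI techniques named above, scores each input feature by how strongly a small perturbation changes the model's output. A minimal numerical sketch, where the scoring function and its weights are hypothetical stand-ins for a trained classifier:

```python
import numpy as np

def model_score(x):
    # Hypothetical stand-in for a trained classifier's score function.
    w = np.array([0.1, 2.0, -0.5])
    return 1.0 / (1.0 + np.exp(-x @ w))  # sigmoid over a linear score

def sensitivity(f, x, eps=1e-4):
    """Central-difference sensitivity |df/dx_i| for each feature i."""
    grads = np.zeros_like(x)
    for i in range(len(x)):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        grads[i] = (f(xp) - f(xm)) / (2 * eps)
    return np.abs(grads)

x = np.array([1.0, 0.5, -1.0])
scores = sensitivity(model_score, x)
# The second feature carries the largest weight, so it dominates the ranking.
```

Gradient-based saliency maps in deep networks follow the same idea, with automatic differentiation replacing the finite differences.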



Explainable Models

Introduction to Models: Describes models such as Linear
Regression, Logistic Regression, Generalized Linear Models, and
Decision Trees that can inherently explain their decisions.
Use Cases: Illustrates scenarios where these models can enhance
interpretability.
Comparison of Models: Evaluates the strengths and limitations of
each model with respect to explainability.
XAI Impact: Discusses how these models can provide explanations
without relying on external XAI methods.
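As a sketch of such inherent explainability (using made-up feature names and synthetic data, not the paper's): a logistic regression's fitted coefficients directly report each feature's influence on the log-odds, so no external XAI method is required.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy alert data; "failed_logins" alone drives the label.
X = rng.normal(size=(200, 2))
y = (X[:, 1] > 0.5).astype(int)

clf = LogisticRegression().fit(X, y)

# The signed coefficients are the explanation: magnitude = strength,
# sign = direction of influence on the "attack" log-odds.
for name, coef in zip(["packet_rate", "failed_logins"], clf.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```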



Explainable Classification of Cybersecurity

Transparency and Trust: Clear justifications for AI decisions.
Surrogate Models: Interpretable stand-in models that approximate a
black-box classifier's behavior for properties such as confidentiality,
integrity, and availability.
Global Explanations: Understanding how an entire model makes
decisions.
Real-world Application: Application of XAI in cybersecurity for
anomaly detection.
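A global surrogate model, as a sketch: a shallow decision tree is fitted to a black-box model's own predictions rather than the true labels, yielding a readable approximation whose fidelity to the black box can be measured. The dataset and models here are illustrative, not from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = ((X[:, 0] + X[:, 2]) > 0).astype(int)

# Opaque model whose decisions we want to explain globally.
black_box = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

# Surrogate: a shallow, human-readable tree trained to mimic the
# black box's outputs (not the ground-truth labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
```

A surrogate is only as trustworthy as its fidelity, so this agreement rate should always be reported alongside the extracted explanation.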



Security Properties of XAI

Fairness: Evaluating and addressing biases in AI models.
Integrity: Attacks that deceive classifiers by altering feature inputs.
Privacy Concerns: Model inversion, membership inference, and their
privacy implications.
Robustness: Importance of XAI in resisting attacks; examines
security features and potential research opportunities.
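The membership-inference risk mentioned above can be sketched in a few lines: an overfit model is typically more confident on its training points ("members") than on unseen points, and that confidence gap is exactly the signal the attack exploits. The data and model here are illustrative only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))
y = (X[:, 0] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

model = RandomForestClassifier(random_state=2).fit(X_tr, y_tr)

# Mean top-class confidence on members vs. non-members: a membership-
# inference attacker thresholds on this gap to guess who was in the
# training set.
conf_members = model.predict_proba(X_tr).max(axis=1).mean()
conf_outsiders = model.predict_proba(X_te).max(axis=1).mean()
```

Because explanations expose even more model internals than raw confidences, XAI outputs can widen this gap, which is why the privacy properties above matter.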



The Concept of XAI

Introduction: XAI, or Explainable Artificial Intelligence, plays a
pivotal role across diverse fields, offering transparency and
interpretability in complex AI systems.
Ongoing DARPA Project: DARPA's XAI program is at the forefront
of advancing AI capabilities. It strives to usher in "third-wave AI
systems," empowering machines to dynamically understand their
environment and create explanatory models.
Challenges and Solutions: The ongoing XAI initiative encounters
academic hurdles and machine learning challenges. It seeks to
overcome these obstacles, emphasizing the need for robust strategies
in developing AI systems.
Intersection with Cybersecurity: XAI's significance extends to the
realm of cybersecurity, where its capabilities are crucial in addressing
security challenges. By enhancing interpretability, XAI contributes to
the reliability and trustworthiness of AI-driven security systems.
The Concept of XAI

Figure: DARPA XAI Block Diagram



The Concept of XAI

Figure: XAI concept.



Application and Results

Enhanced Intrusion Detection: Using XAI on the KDD dataset, our
approach improves trust in intrusion detection, revealing the
significance of features with entropy measures.
Interpreting Decision Tree Insights: We interpret Decision Tree
rules for intrusion classification, enhancing transparency and providing
actionable insights.
Benchmarking Accuracy Comparison: We benchmark Decision
Tree accuracy against advanced techniques, offering practical
insights beyond standard machine learning benchmarks.
Privacy Challenges in XAI: Addressing privacy concerns, we identify
model inversion and membership inference challenges, urging a careful
balance between explanatory quality and confidentiality in XAI
applications.
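The Decision Tree rule interpretation described above can be illustrated as follows; the feature names mimic KDD-style flow attributes, but the data is synthetic rather than the actual KDD dataset.

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for KDD-style connection records.
X, y = make_classification(n_samples=300, n_features=4,
                           n_informative=2, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the fitted tree as nested if/then rules that a
# security analyst can audit directly.
rules = export_text(
    tree, feature_names=["duration", "src_bytes", "dst_bytes", "count"])
print(rules)
```

With `criterion="entropy"`, the same tree's split quality and `feature_importances_` are driven by the entropy-style measures the study relies on.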



Conclusion and Future Scope

Closing the Gap Between AI and Cybersecurity: Emphasizing the
need for deeper integration of AI and cybersecurity, our study
advocates addressing real-world conditions in XAI methodologies.
Ineffectiveness of Current XAI Defenses: Highlighting existing
vulnerabilities, we point out the current ineffectiveness of defenses
for XAI approaches, urging advancements that enhance the security
of explainable methods.
Critical Role of XAI in Cybersecurity: Recognizing the significance
of XAI for business policies and decision-making concerns in
cybersecurity, our study underscores the necessity for fair, open,
and unbiased treatment.
Empowering Security Analysts with XAI: XAI's critical role in
cybersecurity is evident in its ability to empower security analysts by
identifying and explaining patterns of behavior indicative of cyber
threats, thereby facilitating quicker threat detection and investigation.
References
1. A. Adadi and M. Berrada, "Peeking Inside the Black-Box: A
Survey on Explainable Artificial Intelligence (XAI)," IEEE Access,
vol. 6, pp. 52138-52160, 2018, doi: 10.1109/ACCESS.2018.2870052.
2. D. L. Aguilar, M. A. Medina-Pérez, O. Loyola-González, K.-K. R.
Choo, and E. Bucheli-Susarrey, "Towards an Interpretable
Autoencoder: A Decision-Tree-Based Autoencoder and its Application
in Anomaly Detection," IEEE Transactions on Dependable and
Secure Computing, vol. 20, no. 2, pp. 1048-1059, March-April 2023,
doi: 10.1109/TDSC.2022.3148331.
3. R. Alenezi and S. A. Ludwig, "Explainability of Cybersecurity
Threats Data Using SHAP," 2021 IEEE Symposium Series on
Computational Intelligence (SSCI), Orlando, FL, USA, 2021, pp.
01-10, doi: 10.1109/SSCI50451.2021.9659888.
4. A. Alqaraawi, M. Schuessler, P. Weiß, E. Costanza, and N.
Berthouze, "Evaluating saliency map explanations for convolutional
neural networks: a user study," in Proceedings of the 25th
International Conference on Intelligent User Interfaces (IUI '20), 2020.
References
5. C. J. Anders, P. Pasliev, A.-K. Dombrowski, K.-R. Müller, and
P. Kessel, "Fairwashing explanations with off-manifold detergent,"
in Proceedings of the 37th International Conference on Machine
Learning (ICML '20), JMLR.org, Article 30, pp. 314-323, 2020.
6. L. Antwarg, R. M. Miller, B. Shapira, and L. Rokach, "Explaining
anomalies detected by autoencoders using Shapley Additive
Explanations," Expert Systems with Applications, vol. 186, 115736,
2021, doi: 10.1016/j.eswa.2021.115736.
7. R. Kumar, Z. Xiaosong, R. U. Khan, J. Kumar, and I. Ahad,
"Effective and Explainable Detection of Android Malware Based on
Machine Learning Algorithms," in Proceedings of the 2018
International Conference on Computing and Artificial Intelligence
(ICCAI 2018), ACM, New York, NY, USA, pp. 35-40, 2018,
doi: 10.1145/3194452.3194465.
8. D. V. Lindberg and H. K. H. Lee, "Optimization under constraints
by applying an asymmetric entropy measure," J. Comput. Graph.
Thank You
