Secure Federated Learning Framework

The document presents a framework for detecting data poisoning in machine learning systems using a Federated Learning approach combined with an AE-LSTM hybrid model and a Generalized Robust Loss Function. It highlights the limitations of existing models and demonstrates improved accuracy and precision in detection, achieving up to 93.75% accuracy and 98% precision. The proposed method ensures privacy while effectively addressing complex poisoning attacks with low computational overhead.


Secure Federated Learning Framework for Data Poisoning Attack Detection
Based on Autoencoder, LSTM, and Generalized Robust Loss Function
Presented by: [Your Name]
Introduction

• Increasing cybersecurity threats in ML systems.
• Data poisoning compromises training datasets.
• Need for decentralized, privacy-preserving, robust detection.
Objectives

• Develop a Federated Learning (FL) framework for data poisoning detection.
• Propose an AE-LSTM hybrid model.
• Integrate a Generalized Robust Loss Function for improved detection (see the loss sketch below).
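
The slides do not give the closed form of the Generalized Robust Loss Function. A minimal TensorFlow sketch, assuming the general robust loss of Barron (2019), whose shape parameter alpha interpolates between quadratic, Charbonnier, and heavier-tailed behavior; the function name and default parameters are illustrative, not taken from the slides:

import tensorflow as tf

def general_robust_loss(residual, alpha=1.0, scale=1.0):
    # Assumed Barron-style general robust loss; alpha controls how heavily
    # large residuals are penalized, scale sets the knee point. The limits
    # alpha -> 0 and alpha == 2 need closed-form special cases, omitted here.
    sq = tf.square(residual / scale)
    b = tf.abs(alpha - 2.0)
    return (b / alpha) * (tf.pow(sq / b + 1.0, alpha / 2.0) - 1.0)

# Dummy residuals: large errors grow sub-quadratically, so outlying
# (possibly poisoned) samples dominate training less than under MSE.
residuals = tf.constant([-3.0, -0.5, 0.0, 0.5, 3.0])
print(general_robust_loss(residuals, alpha=1.0).numpy())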
Key Contributions

• Federated Learning-based framework to detect and prevent poisoning.
• Data cleaning operations to improve dataset quality.
• AE-LSTM to extract spatial and temporal features (see the model sketch below).
• Robust Loss Function to improve resilience against adversarial samples.
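
As a concrete reading of the AE-LSTM contribution, a minimal Keras sketch of the two branches follows. All layer sizes, the window length, and the names N_FEATURES, WINDOW, build_autoencoder, and build_lstm_scorer are illustrative assumptions, not specifications from the slides:

import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES = 115   # per-sample feature count (assumed; adjust to the dataset)
WINDOW = 20        # consecutive samples per LSTM sequence (assumed)

def build_autoencoder():
    # Spatial branch: compress each feature vector and reconstruct it;
    # large reconstruction error marks a sample as suspicious.
    inp = layers.Input(shape=(N_FEATURES,))
    h = layers.Dense(64, activation="relu")(inp)
    z = layers.Dense(16, activation="relu")(h)           # bottleneck
    h = layers.Dense(64, activation="relu")(z)
    out = layers.Dense(N_FEATURES, activation="linear")(h)
    return models.Model(inp, out, name="autoencoder")

def build_lstm_scorer():
    # Temporal branch: score a window of consecutive samples, catching
    # gradual poisoning patterns that single-sample models miss.
    inp = layers.Input(shape=(WINDOW, N_FEATURES))
    h = layers.LSTM(64)(inp)
    out = layers.Dense(1, activation="sigmoid")(h)       # poisoning score
    return models.Model(inp, out, name="lstm_scorer")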
Literature Review Highlights

• FLDA (Yin and Zeng) for gradient detection.
• FLDetector (Zhang et al.) for client-side poisoning detection.
• Limitations: high overhead, low accuracy, poor handling of complex attacks.
Problem Statement

• Existing models lack robustness and scalability.
• Conventional methods struggle with small nodes and complex patterns.
• Need for privacy-preserving, high-accuracy detection.
Proposed Model Architecture

• Data collection and cleaning.
• Federated learning using FedNet (see the aggregation sketch below).
• AE-LSTM hybrid model for detection.
• Generalized Robust Loss Function for enhanced precision.
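
The slides name FedNet but do not spell out its aggregation rule. A minimal sketch assuming standard FedAvg-style, sample-weighted averaging of client weights; fed_average and the weighting scheme are assumptions, not FedNet's documented behavior:

import numpy as np

def fed_average(client_weights, client_sizes):
    # client_weights: one list of per-layer np.ndarrays per client.
    # client_sizes: local training-set size per client, used as weights.
    total = float(sum(client_sizes))
    aggregated = []
    for layer in range(len(client_weights[0])):
        aggregated.append(sum(w[layer] * (n / total)
                              for w, n in zip(client_weights, client_sizes)))
    return aggregated

# One round (sketch): clients train locally on private data and send only
# weights; the server averages and redistributes the global model, e.g.
# global_model.set_weights(fed_average(updates, sizes)).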
Model Workflow

• Preprocessed data → Autoencoder → Prediction Score 1.
• Preprocessed data → LSTM → Prediction Score 2.
• Combine scores → apply threshold → detect attack (see the sketch below).
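
A minimal sketch of the combine-and-threshold step. The equal weighting, the 0.5 threshold, and the helper names are assumptions; the slides only state that the two scores are combined and thresholded:

import numpy as np

def reconstruction_score(autoencoder, x):
    # Prediction Score 1: per-sample AE reconstruction error,
    # min-max scaled to [0, 1] to be comparable with the LSTM score.
    err = np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)
    return (err - err.min()) / (err.max() - err.min() + 1e-12)

def detect(autoencoder, lstm_scorer, x, x_seq, w=0.5, threshold=0.5):
    # Assumes one window in x_seq per row of x (e.g. the window that
    # ends at that sample), so the two score vectors align.
    s1 = reconstruction_score(autoencoder, x)            # Prediction Score 1
    s2 = lstm_scorer.predict(x_seq, verbose=0).ravel()   # Prediction Score 2
    combined = w * s1 + (1.0 - w) * s2                   # weighted combination
    return combined > threshold                          # True = attack flagged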
Experimental Setup

• Dataset: Kitsune Network Attack Dataset (Kaggle; see the loading sketch below).
• Frameworks: Python, TensorFlow.
• Baselines compared: DNN, AE, LSTM, SVM.
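
A minimal loading and cleaning sketch for a CSV export of the Kitsune dataset; the file name kitsune.csv and the label column are hypothetical placeholders to adjust to the actual Kaggle files:

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("kitsune.csv")              # hypothetical file name
df = df.dropna().drop_duplicates()           # the slides' data-cleaning step

X = MinMaxScaler().fit_transform(df.drop(columns=["label"]).to_numpy())
y = df["label"].to_numpy()                   # hypothetical label column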
Results Summary

• Accuracy higher than baseline models (up to 93.75%).
• Precision ~98%.
• High sensitivity and specificity.
• Superior ROC (AUC) compared to traditional models (metric definitions sketched below).
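
For reference, the reported metrics follow from the confusion matrix and ROC curve; a minimal scikit-learn sketch with stand-in labels and scores:

import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, roc_auc_score, confusion_matrix)

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])            # stand-in test labels
y_score = np.array([.1, .4, .8, .9, .6, .2, .7, .3])   # stand-in model scores
y_pred = (y_score > 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy:   ", accuracy_score(y_true, y_pred))
print("Precision:  ", precision_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))    # true positive rate
print("Specificity:", tn / (tn + fp))                  # true negative rate
print("ROC AUC:    ", roc_auc_score(y_true, y_score))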
Advantages of Proposed Approach

• Protects sensitive data without sharing it.
• Detects complex, gradual poisoning attacks.
• Low computational burden on edge devices.
• Resilient against evolving adversarial patterns.
Conclusion

• AE-LSTM + Federated Learning + Robust Loss = effective detection.
• High reliability and low false positives.
• Future scope: ensemble methods, healthcare, adaptive thresholds.
Thank You

• Questions?
• Contact: [Your email/contact info]
