Secure Federated Learning Framework for Data Poisoning Attack Detection
Based on Autoencoder, LSTM, and Generalized Robust Loss Function
Presented by: [Your Name]
Introduction
• Increasing cybersecurity threats in ML systems.
• Data poisoning compromises training datasets.
• Need for decentralized, privacy-preserving, robust detection.
Objectives
• Develop a Federated Learning (FL) framework for data poisoning detection.
• Propose an AE-LSTM hybrid model.
• Integrate a Generalized Robust Loss Function for improved detection.
Key Contributions
• Federated Learning-based framework to detect and prevent poisoning (aggregation sketch below).
• Data Cleaning operations to improve dataset quality.
• AE-LSTM hybrid model to extract spatial and temporal features.
• Generalized Robust Loss Function to improve resilience against adversarial samples.
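A minimal sketch of the server-side aggregation behind the FL framework, in the style of FedAvg weighted averaging; the function name and weighting rule are illustrative assumptions, not the exact FedNet aggregation used in the paper.

```python
# Sketch of server-side federated aggregation (FedAvg-style weighted average).
# Names and weighting are illustrative assumptions.
import numpy as np

def aggregate_weights(client_weights, client_sizes):
    """Average per-client weight lists, weighted by each client's local data size."""
    total = float(sum(client_sizes))
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        layer_avg = sum(
            w[layer] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
        aggregated.append(layer_avg)
    return aggregated

# Example: two clients, each holding two weight arrays of matching shapes.
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [np.full((2, 2), 3.0), np.ones(2)]
global_weights = aggregate_weights([client_a, client_b], client_sizes=[100, 300])
```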
Literature Review Highlights
• FLDA (Yin and Zeng) for gradient detection.
• FLDetector (Zhang et al.) for detecting clients that submit poisoned updates.
• Limitations: High overhead, low accuracy, poor handling of complex attacks.
Problem Statement
• Existing models lack robustness and scalability.
• Conventional methods struggle with small nodes and complex attack patterns.
• Need for privacy-preserving, high-accuracy detection.
Proposed Model Architecture
• Data Collection and Cleaning.
• Federated learning using FedNet.
• AE-LSTM Hybrid Model for detection.
• Generalized Robust Loss Function for enhanced precision.
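One common realization of a generalized robust loss is Barron's general robust function, ρ(x; α, c) = (|α−2|/α)·[((x/c)²/|α−2| + 1)^(α/2) − 1], which down-weights large residuals; whether the framework uses exactly this parameterization is an assumption, so the sketch below is illustrative only.

```python
# Sketch of a generalized robust loss in the style of Barron's general robust
# function. With alpha = 1 it behaves like a Charbonnier (smooth L1) penalty;
# smaller alpha gives heavier-tailed, more outlier-tolerant losses.
# Assumed form, not necessarily the paper's exact loss.
import tensorflow as tf

def generalized_robust_loss(residual, alpha=1.0, scale=1.0, eps=1e-6):
    """Element-wise robust penalty on residuals (e.g., reconstruction errors)."""
    alpha = tf.constant(alpha, dtype=residual.dtype)
    x2 = tf.square(residual / scale)
    abs_a_minus_2 = tf.maximum(tf.abs(alpha - 2.0), eps)  # guard the alpha = 2 limit case
    return (abs_a_minus_2 / tf.maximum(tf.abs(alpha), eps)) * (
        tf.pow(x2 / abs_a_minus_2 + 1.0, alpha / 2.0) - 1.0
    )

# Example: large residuals (potentially poisoned samples) are penalized
# sub-quadratically, so they dominate training far less than under MSE.
residuals = tf.constant([0.1, 0.5, 5.0])
loss = tf.reduce_mean(generalized_robust_loss(residuals, alpha=1.0, scale=0.5))
```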
Model Workflow
• Preprocessed data → Autoencoder → Prediction Score 1.
• Preprocessed data → LSTM → Prediction Score 2.
• Combine Scores → Apply Threshold → Detect Attack.
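A minimal sketch of this score-fusion step, assuming Prediction Score 1 is per-sample reconstruction error and Prediction Score 2 is an LSTM anomaly probability; the equal-weight average and the default threshold are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of the workflow's fusion step: two branch scores are combined
# and thresholded to flag poisoned samples. Weighting and threshold are
# illustrative assumptions.
import numpy as np

def detect_poisoning(x_ae, x_seq, autoencoder, lstm_model, threshold=0.5):
    """Flag samples whose fused anomaly score exceeds the decision threshold.

    x_ae  : preprocessed features shaped for the autoencoder, (n, n_features)
    x_seq : the same preprocessed data windowed for the LSTM, (n, seq_len, n_features)
    """
    # Prediction Score 1: per-sample reconstruction error from the autoencoder.
    recon = autoencoder.predict(x_ae, verbose=0)
    score_ae = np.mean(np.square(x_ae - recon), axis=1)
    score_ae = (score_ae - score_ae.min()) / (score_ae.max() - score_ae.min() + 1e-8)

    # Prediction Score 2: anomaly probability from the LSTM branch.
    score_lstm = lstm_model.predict(x_seq, verbose=0).reshape(-1)

    # Combine the two scores and apply the detection threshold.
    fused = 0.5 * score_ae + 0.5 * score_lstm
    return fused > threshold
```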
Experimental Setup
• Dataset: Kitsune Network Attack Dataset (Kaggle).
• Frameworks: Python, TensorFlow.
• Baselines Compared: DNN, AE, LSTM, SVM.
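Minimal TensorFlow/Keras sketches of the two branches used in the experiments; layer widths, window length, and the feature dimension are assumptions rather than the exact experimental configuration.

```python
# Illustrative Keras definitions of the autoencoder and LSTM branches.
# Layer sizes, sequence length, and feature count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

N_FEATURES = 115   # assumed Kitsune feature dimension; adjust to the dataset
SEQ_LEN = 20       # assumed window length for the LSTM branch

def build_autoencoder(n_features=N_FEATURES):
    inp = layers.Input(shape=(n_features,))
    encoded = layers.Dense(64, activation="relu")(inp)
    encoded = layers.Dense(16, activation="relu")(encoded)
    decoded = layers.Dense(64, activation="relu")(encoded)
    out = layers.Dense(n_features, activation="linear")(decoded)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

def build_lstm(seq_len=SEQ_LEN, n_features=N_FEATURES):
    inp = layers.Input(shape=(seq_len, n_features))
    x = layers.LSTM(64)(inp)
    out = layers.Dense(1, activation="sigmoid")(x)  # poisoned vs. clean score
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```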
Results Summary
• Accuracy higher than baseline models (up to 93.75%).
• Precision ~98%.
• High Sensitivity and Specificity.
• Superior ROC (AUC) compared to traditional models.
Advantages of Proposed Approach
• Protects sensitive data without sharing.
• Detects complex, gradual poisoning attacks.
• Low computational burden on edge devices.
• Resilient against evolving adversarial patterns.
Conclusion
• AE-LSTM + Federated Learning + Robust Loss = Effective detection.
• High reliability and low false positives.
• Future Scope: Ensemble methods, healthcare applications, adaptive thresholds.
Thank You
• Questions?
• Contact: [Your email/contact info]