Group Members
Akthar Azif (B180739EC)
Shamil Shihab (B190775EC)
Understanding FGSM
The Fast Gradient Sign Method (FGSM) is a popular method for generating adversarial
examples in machine learning. It works by adding a small perturbation to the input in the
direction of the sign of the gradient of the loss with respect to that input. The perturbation
can cause a machine learning model to misclassify the input even though it appears
unchanged to a human observer. Understanding FGSM is critical to developing defenses
against adversarial attacks.
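A minimal sketch of the FGSM step, assuming a PyTorch classifier with inputs scaled to [0, 1]; the function name, the epsilon value, and the cross-entropy loss are illustrative choices, not details from the original slides:

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # One-step FGSM: perturb x along the sign of the input gradient of the loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()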
Understanding iFGSM
The Iterative Fast Gradient Sign Method (iFGSM) is an extension of FGSM that generates
multiple perturbations to input data. It is more effective than FGSM and can produce stronger
adversarial examples. Understanding iFGSM is important for developing robust defenses
against adversarial attacks.
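As a sketch, iFGSM repeats the gradient-sign step with a small step size alpha and projects back into an epsilon ball around the original input after each step (reusing the imports above; all hyperparameter values are illustrative assumptions):

def ifgsm_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10):
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Small signed step, then project back into the epsilon ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)).clamp(0, 1)
    return x_adv.detach()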
Understanding MI-FGSM
The Momentum Iterative Fast Gradient Sign Method (MI-FGSM) is a state-of-the-art method for
generating adversarial examples in machine learning. It adds a momentum term that
accumulates gradients across iterations, stabilizing the update direction and yielding even
stronger, more transferable attacks. Understanding MI-FGSM is crucial for developing effective
defenses against adversarial attacks.
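A sketch of the momentum variant, assuming 4-D image batches of shape (N, C, H, W); the decay factor mu and the L1 normalization of the gradient follow the usual MI-FGSM formulation, but the specific values here are illustrative:

def mifgsm_attack(model, x, y, epsilon=0.03, alpha=0.007, steps=10, mu=1.0):
    x_orig = x.clone().detach()
    x_adv = x_orig.clone()
    g = torch.zeros_like(x)  # momentum buffer
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Accumulate the L1-normalized gradient so direction, not magnitude, dominates.
        g = mu * g + grad / (grad.abs().sum(dim=(1, 2, 3), keepdim=True) + 1e-12)
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = (x_orig + (x_adv - x_orig).clamp(-epsilon, epsilon)).clamp(0, 1)
    return x_adv.detach()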
Approaches to Designing a Resilient Machine Learning System
1 Adversarial Training (a minimal training-step sketch follows this list)
- Limited generalization
- Increased computational and training costs
- Overfitting
2 Ensemble Techniques
- Difficulty in interpretation
- Limited generalization
3 Defensive Distillation
- Improved robustness
- Simpler implementation
- Reduced overfitting
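As referenced in the list above, one adversarial-training step might look like the following; it reuses the fgsm_attack sketch, and the even clean/adversarial loss weighting is an assumption, not a prescription from the slides:

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    x_adv = fgsm_attack(model, x, y, epsilon)  # craft adversarial examples on the fly
    optimizer.zero_grad()
    # Train on an even mix of clean and adversarial losses (illustrative weighting).
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()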
Training Data and Defensive Distillation
1. Train a "teacher" model on the original training data. This model is typically a large,
complex model that is trained to perform well on the training data.
2. Use the teacher model to relabel the original training data with its predicted (soft)
outputs. This relabeled data set is referred to as the "distilled" data.
3. Train a "student" model on the distilled data. This model is typically a smaller, simpler
model that is easier and faster to train than the teacher model.
4. Use the student model for inference, i.e., to make predictions on new data.
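A sketch of one training step covering steps 2 and 3, assuming PyTorch teacher and student models; the temperature T softens the teacher's probabilities, and its value here is an illustrative assumption:

def distillation_step(teacher, student, optimizer, x, T=20.0):
    with torch.no_grad():
        # Step 2: relabel the batch with the teacher's temperature-softened probabilities.
        soft_labels = F.softmax(teacher(x) / T, dim=1)
    optimizer.zero_grad()
    log_probs = F.log_softmax(student(x) / T, dim=1)
    # Step 3: cross-entropy between the teacher's soft labels and the student's predictions.
    loss = -(soft_labels * log_probs).sum(dim=1).mean()
    loss.backward()
    optimizer.step()
    return loss.item()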
Results
Conclusion and Future Directions
Challenges
1. Robustness is difficult to achieve when algorithmic decisions depend on features of
adversarial distributions.
Future
1. Future research must address adversarial attacks in the medical, financial, legal, and
other domains.
Reference
Hinton, G., Vinyals, O., & Dean, J. (2015). Distilling the Knowledge in a Neural Network.
arXiv. https://arxiv.org/abs/1503.02531v1