SusAI HiParis Poster 090721-F
Figure captions:

- Winning Tickets method: pruned subnetworks can reach test accuracy comparable to that of the original network in the same number of iterations [5].
- On the left: the original convolutional layer. The figure on the right shows the application of low-rank constraints to the convolutional layers, with rank K. This figure is extracted from Y. Wang et al., « Pruning from Scratch », 2019, http://arxiv.org/abs/1909.12579.
- Going from FP32 to FP16 can increase inference throughput (fps) by up to 70%. The efficiency of a quantization method depends strongly on the nature of the algorithm.

References:

1. E. Strubell et al., « Energy and Policy Considerations for Deep Learning in NLP », 2019, http://arxiv.org/abs/1906.02243
2. J. Devlin et al., « BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding », 2019, https://arxiv.org/pdf/1810.04805.pdf
3. T. Brown et al., « Language Models are Few-Shot Learners », 2020, https://arxiv.org/abs/2005.14165
4. L. F. Wolff Anthony et al., « CarbonTracker: Tracking and Predicting the Carbon Footprint of Training Deep Learning Models », 2020, https://arxiv.org/abs/2007.03051
5. J. Frankle and M. Carbin, « The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks », 2018, http://arxiv.org/abs/1803.03635
6. Y. Cheng et al., « A Survey of Model Compression and Acceleration for Deep Neural Networks », 2020, http://arxiv.org/abs/1710.09282
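The Winning Tickets idea [5] rests on pruning: zeroing out the smallest-magnitude weights and keeping only a sparse subnetwork. A minimal numpy sketch of magnitude pruning (the layer shape, sparsity level, and function names are illustrative assumptions, not taken from the poster's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense layer weights (shape chosen for illustration only).
weights = rng.normal(size=(256, 256)).astype(np.float32)

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude weights, keeping a (1 - sparsity) fraction."""
    threshold = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) >= threshold
    return w * mask, mask

pruned, mask = magnitude_prune(weights, sparsity=0.8)
print(f"kept weights: {mask.mean():.0%}")  # ~20% of weights survive
```

In the Lottery Ticket procedure, the surviving weights would then be rewound to their initial values and retrained; here only the masking step is shown.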
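The low-rank caption refers to replacing a full-rank weight matrix with a rank-K factorization, which shrinks the parameter count. A sketch using truncated SVD (matrix size and rank are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 128x128 weight matrix (illustrative size).
W = rng.normal(size=(128, 128)).astype(np.float32)

def low_rank_approx(w, k):
    """Rank-k approximation via truncated SVD: w ~ U_k @ diag(s_k) @ Vt_k."""
    U, s, Vt = np.linalg.svd(w, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

k = 16
W_k = low_rank_approx(W, k)

# Storing the two factors costs k*(m + n) parameters instead of m*n.
orig_params = W.size
lr_params = k * (W.shape[0] + W.shape[1])
print(orig_params, lr_params)  # 16384 4096
```

For a convolutional layer, the same idea is applied to the (reshaped) kernel tensor, as in the figure described above.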
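The quantization caption (FP32 to FP16) amounts to storing each weight in half the bytes, at the cost of some precision. A minimal numpy sketch (array size is an illustrative assumption; real deployments would use a framework's mixed-precision tooling):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical weight vector in full precision (illustrative size).
w32 = rng.normal(size=(1024,)).astype(np.float32)

# Cast to half precision: memory footprint halves.
w16 = w32.astype(np.float16)

print(w32.nbytes, w16.nbytes)  # 4096 2048

# Rounding error introduced by the cast.
max_err = float(np.max(np.abs(w32 - w16.astype(np.float32))))
```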