Abstract
The attention mechanism is the crucial component of the transformer architecture. Recent research shows that most attention heads are not confident in their decisions and can be pruned. However, removing them before training a model results in lower quality. In this paper, we apply the lottery ticket hypothesis to prune heads in the early stages of training. Our experiments on machine translation show that it is possible to remove up to three-quarters of attention heads from transformer-big during early training with an average -0.1 change in BLEU for Turkish→English. The pruned model is 1.5 times as fast at inference, albeit at the cost of longer training. Our method is complementary to other approaches, such as teacher-student, with an English→German student model gaining an additional 10% speed-up with 75% of encoder attention removed and a 0.2 BLEU loss.
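The approach applies lottery-ticket-style pruning to whole attention heads rather than individual weights: train briefly, score each head, mask out the low-scoring ones, and continue training only the survivors. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the confidence proxy in `head_importance` and the 25% keep ratio are assumptions for illustration, not the authors' Marian implementation or exact pruning criterion.

```python
# Minimal sketch of early-training attention-head pruning (hypothetical,
# not the paper's implementation).
import torch

def head_importance(attn_weights: torch.Tensor) -> torch.Tensor:
    """Toy per-head confidence score: mean of the maximum attention
    probability each head assigns (an assumed proxy for 'confidence').
    attn_weights: [batch, heads, query_len, key_len]
    Returns a [heads] tensor of scores."""
    return attn_weights.max(dim=-1).values.mean(dim=(0, 2))

def prune_mask(scores: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """Keep only the top `keep_ratio` fraction of heads; the rest are
    zero-masked and excluded from further training."""
    k = max(1, int(keep_ratio * scores.numel()))
    keep = torch.zeros_like(scores)
    keep[scores.topk(k).indices] = 1.0
    return keep  # multiply each head's output by this [heads] mask

# Usage: after an early phase of training, score heads on a held-out batch,
# build the mask, then resume training with the masked heads removed.
scores = head_importance(torch.rand(8, 16, 32, 32))  # dummy attention tensor
mask = prune_mask(scores, keep_ratio=0.25)           # keep 4 of 16 heads
print(mask)
```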
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 2664–2674 |
| Number of pages | 11 |
| ISBN (Print) | 978-1-952148-60-6 |
| Publication status | Published - 16 Nov 2020 |
| Event | The 2020 Conference on Empirical Methods in Natural Language Processing - Online; Duration: 16 Nov 2020 → 20 Nov 2020; https://2020.emnlp.org/ |
Conference
| Conference | The 2020 Conference on Empirical Methods in Natural Language Processing |
| --- | --- |
| Abbreviated title | EMNLP 2020 |
| Period | 16/11/20 → 20/11/20 |
| Internet address | https://2020.emnlp.org/ |