Independence assumptions during sequence generation can speed up inference, but parallel generation of highly inter-dependent tokens comes at a cost in quality. Instead of assuming independence between neighbouring tokens (semi-autoregressive decoding, SA), we take inspiration from bidirectional sequence generation and introduce a decoder that generates target words from the left-to-right and right-to-left directions simultaneously. We show that we can easily convert a standard architecture for unidirectional decoding into a bidirectional decoder by simply interleaving the two directions and adapting the word positions and self-attention masks. Our interleaved bidirectional decoder (IBDecoder) retains the model simplicity and training efficiency of the standard Transformer, and on five machine translation tasks and two document summarization tasks, achieves a decoding speedup of ∼2× compared to autoregressive decoding with comparable quality. Notably, it outperforms left-to-right SA because the independence assumptions in IBDecoder are more felicitous. To achieve even higher speedups, we explore hybrid models where we either simultaneously predict multiple neighbouring tokens per direction, or perform multi-directional decoding by partitioning the target sequence. These methods achieve speedups of 4×–11× across different tasks at the cost of <1 BLEU or <0.5 ROUGE (on average).
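The abstract's core idea — interleaving the two decoding directions and adjusting the self-attention mask so each step emits one token from each end — can be illustrated with a minimal sketch. This is an assumption-laden reconstruction from the abstract alone, not the paper's implementation; the helper names `interleave` and `pair_causal_mask` are hypothetical.

```python
def interleave(tokens):
    # Interleave a target sequence from both ends, pairing the
    # left-to-right and right-to-left directions:
    # [y1, y2, ..., yn] -> [y1, yn, y2, y(n-1), ...]
    left, right = 0, len(tokens) - 1
    out = []
    while left <= right:
        out.append(tokens[left])
        if right != left:  # odd-length sequences meet in the middle
            out.append(tokens[right])
        left += 1
        right -= 1
    return out

def pair_causal_mask(n):
    # Self-attention mask over the interleaved sequence: position i
    # may attend to itself and to all tokens from earlier pairs.
    # The two tokens of the same pair are generated simultaneously
    # (the independence assumption), so they cannot attend to each
    # other. True means "may attend".
    return [[(j // 2 < i // 2) or (j == i) for j in range(n)]
            for i in range(n)]
```

With this pairing, decoding a length-n sequence takes roughly n/2 steps, which matches the ∼2× speedup the abstract reports for IBDecoder.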
Title of host publication: Proceedings of the Fifth Conference on Machine Translation
Place of publication: Online
Publisher: Association for Computational Linguistics
Number of pages: 13
Publication status: Published - 19 Nov 2020
Event: Fifth Conference on Machine Translation - Online Conference
Duration: 19 Nov 2020 → 20 Nov 2020
Conference: Fifth Conference on Machine Translation
Abbreviated title: WMT 2020
Period: 19/11/20 → 20/11/20