Abstract
Recent work has shown that the encoder-decoder attention mechanisms in neural machine translation (NMT) differ from the word alignments of statistical machine translation. In this paper, we focus on analyzing encoder-decoder attention mechanisms in the context of word sense disambiguation (WSD) in NMT models. We hypothesize that attention mechanisms pay more attention to context tokens when translating ambiguous words, and we explore the attention distribution patterns when translating ambiguous nouns. Counterintuitively, we find that attention mechanisms tend to concentrate more attention on the ambiguous noun itself rather than on context tokens, in comparison to other nouns. We conclude that attention is not the main mechanism used by NMT models to incorporate contextual information for WSD. The experimental results suggest that NMT models learn to encode the contextual information necessary for WSD in the encoder hidden states. For the attention mechanism in Transformer models, we reveal that the first few layers gradually learn to “align” source and target tokens and the last few layers learn to extract features from the related but unaligned context tokens.
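The comparison underlying this finding can be illustrated with a minimal sketch. Assuming a cross-attention matrix extracted from an NMT model (e.g. one Transformer decoder layer, averaged over heads) and a known alignment between an ambiguous source noun and the target token translating it, the hypothetical functions below (illustrative names, not from the paper's code) compute the attention mass placed on the noun itself versus its context, and the entropy of the attention distribution:

```python
import numpy as np

def attention_mass(attn: np.ndarray, tgt_pos: int, src_pos: int):
    """Split one target token's attention between its aligned source
    token and the remaining (context) source tokens.

    attn:    (tgt_len, src_len) cross-attention weights, rows sum to 1
    tgt_pos: index of the target token being generated
    src_pos: index of the source token it is aligned to
    """
    row = attn[tgt_pos]
    self_mass = row[src_pos]              # mass on the aligned source noun
    context_mass = row.sum() - self_mass  # mass on all other source tokens
    return self_mass, context_mass

def attention_entropy(attn: np.ndarray, tgt_pos: int) -> float:
    """Entropy of one target token's attention distribution; lower
    entropy means attention is concentrated on fewer source tokens."""
    row = attn[tgt_pos] / attn[tgt_pos].sum()
    return float(-np.sum(row * np.log(row + 1e-12)))

# Toy example: the first target token attends mostly to source position 0.
attn = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1]])
print(attention_mass(attn, tgt_pos=0, src_pos=0))  # (0.7, 0.3)
print(attention_entropy(attn, tgt_pos=0))
```

Averaging such statistics over ambiguous nouns and comparing them with the averages for other nouns is one way to operationalize the comparison the abstract describes.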
Original language | English |
---|---|
Title of host publication | EMNLP 2018 Third Conference on Machine Translation (WMT18) |
Place of Publication | Brussels, Belgium |
Publisher | Association for Computational Linguistics |
Pages | 26-35 |
Number of pages | 10 |
Publication status | Published - Oct 2018 |
Event | EMNLP 2018 Third Conference on Machine Translation (WMT18), Brussels, Belgium. Duration: 31 Oct 2018 → 1 Nov 2018. http://www.statmt.org/wmt18/ |
Workshop

Workshop | EMNLP 2018 Third Conference on Machine Translation (WMT18) |
---|---|
Abbreviated title | WMT18 |
Country | Belgium |
City | Brussels |
Period | 31/10/18 → 1/11/18 |
Internet address | http://www.statmt.org/wmt18/ |