Abstract
The translation of pronouns presents a special challenge to machine translation to this day, since it often requires context outside the current sentence. Recent work on models that have access to information across sentence boundaries has seen only moderate improvements in terms of automatic evaluation metrics such as BLEU. However, metrics that quantify the overall translation quality are ill-equipped to measure gains from additional context. We argue that a different kind of evaluation is needed to assess how well models translate inter-sentential phenomena such as pronouns. This paper therefore presents a test suite of contrastive translations focused specifically on the translation of pronouns. Furthermore, we perform experiments with several context-aware models. We show that, while gains in BLEU are moderate for those systems, they outperform baselines by a large margin in terms of accuracy on our contrastive test set. Our experiments also show the effectiveness of parameter tying for multi-encoder architectures.
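As a rough illustration of the contrastive evaluation idea described above, the sketch below shows how accuracy on a contrastive test set could be computed: the model scores the reference translation and each minimally altered contrastive variant, and an instance counts as correct only if the reference receives the highest score. This is a minimal sketch, not the paper's released tooling; the `score` function, field names, and data layout are illustrative assumptions.

```python
# Minimal sketch of contrastive-accuracy evaluation (hypothetical interface).
# `score(source, translation)` is assumed to return the model's log-probability
# of `translation` given `source`; names and structure are illustrative only.

def contrastive_accuracy(test_set, score):
    """test_set: iterable of dicts with keys 'source', 'reference', and
    'contrastive' (a list of minimally altered, incorrect translations,
    e.g. with the wrong pronoun)."""
    correct = 0
    for example in test_set:
        ref_score = score(example["source"], example["reference"])
        # Correct only if the reference outscores every contrastive variant.
        if all(ref_score > score(example["source"], variant)
               for variant in example["contrastive"]):
            correct += 1
    return correct / len(test_set)
```

Under this setup, a model that ignores context beyond the current sentence cannot systematically prefer the correct pronoun, which is why accuracy on such a test set can separate context-aware systems from baselines even when BLEU differences are small.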
Original language | English
---|---
Title of host publication | EMNLP 2018 Third Conference on Machine Translation (WMT18)
Place of Publication | Brussels, Belgium
Publisher | Association for Computational Linguistics
Pages | 61-72
Number of pages | 12
Publication status | Published - Oct 2018
Event | EMNLP 2018 Third Conference on Machine Translation (WMT18) - Brussels, Belgium. Duration: 31 Oct 2018 → 1 Nov 2018. http://www.statmt.org/wmt18/
Workshop

Workshop | EMNLP 2018 Third Conference on Machine Translation (WMT18)
---|---
Abbreviated title | WMT18
Country/Territory | Belgium
City | Brussels
Period | 31/10/18 → 1/11/18
Internet address | http://www.statmt.org/wmt18/