Abstract
Word sense disambiguation is a well-known source of translation errors in NMT. We posit that some of the incorrect disambiguation choices are due to models’ over-reliance on dataset artifacts found in training data, specifically superficial word co-occurrences, rather than a deeper understanding of the source text. We introduce a method for the prediction of disambiguation errors based on statistical data properties, demonstrating its effectiveness across several domains and model types. Moreover, we develop a simple adversarial attack strategy that minimally perturbs sentences in order to elicit disambiguation errors to further probe the robustness of translation models. Our findings indicate that disambiguation robustness varies substantially between domains and that different models trained on the same data are vulnerable to different attacks.
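To make the idea of a minimal, sense-targeted perturbation concrete, here is a toy sketch. It is not the paper's actual attack: the lexicon, the single-word substitution rule, and the example cue words are all illustrative assumptions. The only point it demonstrates is how swapping one co-occurring context word can push a sentence toward the competing sense of an ambiguous term while leaving everything else intact.

```python
# Hypothetical sketch, not the method from the paper: probe word-sense
# robustness by minimally perturbing context words around an ambiguous term.

# Toy lexicon (assumed for illustration): for an ambiguous word, map each
# sense-indicating context cue to a cue associated with the competing sense.
DISTRACTORS = {
    "bank": {"river": "loan", "loan": "river"},
}

def perturb(sentence, ambiguous_word, lexicon=DISTRACTORS):
    """Replace one sense-indicating context word with a cue for the
    competing sense, leaving the rest of the sentence unchanged."""
    cues = lexicon.get(ambiguous_word, {})
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower() in cues:
            return " ".join(tokens[:i] + [cues[tok.lower()]] + tokens[i + 1:])
    return sentence  # no known cue found: sentence left unperturbed

# Usage: a single-word swap that may flip the translated sense of "bank".
print(perturb("They sat by the bank of the river", "bank"))
# → They sat by the bank of the loan
```

Feeding both the original and the perturbed sentence through a translation model and comparing the translation of the ambiguous word would then reveal whether the model's disambiguation relies on that single co-occurrence.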
Original language | English |
---|---|
Title of host publication | Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP) |
Publisher | Association for Computational Linguistics |
Pages | 7635–7653 |
Number of pages | 19 |
ISBN (Print) | 978-1-952148-60-6 |
Publication status | Published - 16 Nov 2020 |
Event | The 2020 Conference on Empirical Methods in Natural Language Processing (online), 16 Nov 2020 → 20 Nov 2020, https://2020.emnlp.org/ |
Conference
Conference | The 2020 Conference on Empirical Methods in Natural Language Processing |
---|---|
Abbreviated title | EMNLP 2020 |
Period | 16/11/20 → 20/11/20 |