Detecting Word Sense Disambiguation Biases in Machine Translation for Model-Agnostic Adversarial Attacks

Denis Emelin, Ivan Titov, Rico Sennrich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Word sense disambiguation is a well-known source of translation errors in NMT. We posit that some of the incorrect disambiguation choices are due to models’ over-reliance on dataset artifacts found in training data, specifically superficial word co-occurrences, rather than a deeper understanding of the source text. We introduce a method for the prediction of disambiguation errors based on statistical data properties, demonstrating its effectiveness across several domains and model types. Moreover, we develop a simple adversarial attack strategy that minimally perturbs sentences in order to elicit disambiguation errors to further probe the robustness of translation models. Our findings indicate that disambiguation robustness varies substantially between domains and that different models trained on the same data are vulnerable to different attacks.
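To make the two ideas in the abstract concrete, here is a minimal, hypothetical Python sketch: scoring a homograph's senses by superficial word co-occurrence statistics from training data, and minimally perturbing a sentence with an "attractor" word that favours a competing sense. The function names, toy data, and substitution heuristic are illustrative assumptions, not the paper's actual detection or attack procedure.

```python
from collections import Counter, defaultdict

def build_cooccurrence(training_pairs, homograph):
    """training_pairs: iterable of (source_tokens, sense) for sentences
    containing the homograph. Returns sense -> Counter of context words."""
    counts = defaultdict(Counter)
    for tokens, sense in training_pairs:
        counts[sense].update(w for w in tokens if w != homograph)
    return counts

def sense_scores(tokens, homograph, counts):
    """Score each sense by the summed training co-occurrence counts of the
    sentence's context words: a crude proxy for the co-occurrence bias a
    model may absorb from its training data."""
    context = [w for w in tokens if w != homograph]
    return {sense: sum(ctr[w] for w in context) for sense, ctr in counts.items()}

def adversarial_substitute(tokens, homograph, counts, wrong_sense):
    """Minimally perturb the sentence: swap one context word for the word
    most strongly associated with a competing sense of the homograph."""
    attractor, _ = counts[wrong_sense].most_common(1)[0]
    perturbed = list(tokens)
    for i, w in enumerate(perturbed):
        if w != homograph:
            perturbed[i] = attractor
            break
    return perturbed

# Toy demonstration with the ambiguous English noun "bank".
train = [
    (["the", "bank", "approved", "the", "loan"], "finance"),
    (["fishing", "from", "the", "river", "bank"], "river"),
]
cooc = build_cooccurrence(train, "bank")
test = ["the", "bank", "approved", "a", "loan", "today"]
print(sense_scores(test, "bank", cooc))          # context favours "finance"
print(adversarial_substitute(test, "bank", cooc, wrong_sense="river"))
```

In this sketch the perturbation replaces the first available context word, whereas the paper's attack selects perturbations so as to remain minimal while reliably eliciting disambiguation errors; the sketch only conveys the underlying intuition about co-occurrence artifacts.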
Original language: English
Title of host publication: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Publisher: Association for Computational Linguistics
Pages: 7635–7653
Number of pages: 19
ISBN (Print): 978-1-952148-60-6
DOIs
Publication status: Published - 16 Nov 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - Online
Duration: 16 Nov 2020 – 20 Nov 2020
https://2020.emnlp.org/

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2020
Period: 16/11/20 – 20/11/20
Internet address: https://2020.emnlp.org/
