Why Self-Attention? A Targeted Evaluation of Neural Machine Translation Architectures

Gongbo Tang, Mathias Müller, Annette Rios, Rico Sennrich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Recently, non-recurrent architectures (convolutional, self-attentional) have outperformed RNNs in neural machine translation. CNNs and self-attentional networks can connect distant words via shorter network paths than RNNs, and it has been speculated that this improves their ability to model long-range dependencies. However, this theoretical argument has not been tested empirically, nor have alternative explanations for their strong performance been explored in depth. We hypothesize that the strong performance of CNNs and self-attentional networks could also be due to their ability to extract semantic features from the source text, and we evaluate RNNs, CNNs and self-attentional networks on two tasks: subject-verb agreement (where capturing long-range dependencies is required) and word sense disambiguation (where semantic feature extraction is required). Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; and 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation.
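The "targeted evaluation" in the title refers to probing specific linguistic phenomena rather than relying on aggregate translation quality alone; a common way to do this is contrastive scoring, where a trained model must assign a higher score to a correct translation than to a minimally altered incorrect one. The sketch below illustrates that idea only; the `score_translation` interface and the example sentences are hypothetical placeholders, not the paper's actual code or test data.

```python
# Minimal sketch of a contrastive (targeted) evaluation for subject-verb agreement.
# Assumption: the NMT model exposes some way to score a target sentence given a
# source sentence (e.g. its log-probability). The function body and examples
# below are illustrative placeholders.

def score_translation(model, source: str, target: str) -> float:
    """Return the model's score (e.g. log-probability) of `target` given `source`."""
    raise NotImplementedError("plug in an RNN / CNN / self-attentional NMT model here")

def contrastive_accuracy(model, examples) -> float:
    """Fraction of examples where the correct translation outscores the contrastive one."""
    correct = 0
    for source, reference, contrastive in examples:
        if score_translation(model, source, reference) > score_translation(model, source, contrastive):
            correct += 1
    return correct / len(examples)

# Hypothetical example: the contrastive variant differs only in the verb's number,
# so scoring it correctly requires tracking the long-distance subject-verb dependency.
examples = [
    ("The keys to the cabinet are on the table.",
     "Die Schlüssel zum Schrank liegen auf dem Tisch.",   # correct: plural verb
     "Die Schlüssel zum Schrank liegt auf dem Tisch."),   # contrastive: singular verb
]
```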
Original language: English
Title of host publication: 2018 Conference on Empirical Methods in Natural Language Processing
Place of Publication: Brussels, Belgium
Publisher: Association for Computational Linguistics
Number of pages: 10
Publication status: Published - Nov 2018
Event: 2018 Conference on Empirical Methods in Natural Language Processing - Square Meeting Center, Brussels, Belgium
Duration: 31 Oct 2018 - 4 Nov 2018
http://emnlp2018.org/

Conference

Conference: 2018 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2018
Country/Territory: Belgium
City: Brussels
Period: 31/10/18 - 4/11/18
Internet address: http://emnlp2018.org/
