Abstract
Recently, non-recurrent architectures (convolutional, self-attentional) have outperformed RNNs in neural machine translation. CNNs and self-attentional networks can connect distant words via shorter network paths than RNNs, and it has been speculated that this improves their ability to model long-range dependencies. However, this theoretical argument has not been tested empirically, nor have alternative explanations for their strong performance been explored in depth. We hypothesize that the strong performance of CNNs and self-attentional networks could also be due to their ability to extract semantic features from the source text, and we evaluate RNNs, CNNs and self-attentional networks on two tasks: subject-verb agreement (where capturing long-range dependencies is required) and word sense disambiguation (where semantic feature extraction is required). Our experimental results show that: 1) self-attentional networks and CNNs do not outperform RNNs in modeling subject-verb agreement over long distances; 2) self-attentional networks perform distinctly better than RNNs and CNNs on word sense disambiguation.
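The path-length argument referenced in the abstract can be made concrete with a minimal sketch (this is not the authors' code; the `toy_self_attention` function and the path-length helpers below are hypothetical illustrations). In a single self-attention layer every position attends directly to every other position, so any two tokens are connected in one computation step, whereas a unidirectional RNN must pass information through every intermediate hidden state, so the path length grows with the token distance.

```python
# Illustrative sketch only: contrasts the number of computation steps
# connecting two distant tokens in an RNN versus one self-attention layer.
import numpy as np


def rnn_path_length(src_pos: int, tgt_pos: int) -> int:
    """In a unidirectional RNN, information from src_pos reaches tgt_pos
    only by flowing through every intermediate hidden state, so the path
    length grows linearly with the token distance."""
    return abs(tgt_pos - src_pos)


def self_attention_path_length(src_pos: int, tgt_pos: int) -> int:
    """In one self-attention layer, every position attends directly to
    every other position, so any pair of tokens is connected in one step."""
    return 1


def toy_self_attention(x: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over a (length, dim)
    matrix, reusing the input as queries, keys, and values."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                     # (length, length)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ x                                # each output mixes all positions


if __name__ == "__main__":
    distance = 40  # e.g. a subject and its verb 40 tokens apart
    print("RNN path length:           ", rnn_path_length(0, distance))
    print("Self-attention path length:", self_attention_path_length(0, distance))
    print("Toy attention output shape:", toy_self_attention(np.random.randn(8, 16)).shape)
```

The paper's point is that this shorter path is only a theoretical advantage; whether it translates into better modeling of long-range dependencies is exactly what the subject-verb agreement experiments test.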
Original language | English |
---|---|
Title of host publication | 2018 Conference on Empirical Methods in Natural Language Processing |
Place of Publication | Brussels, Belgium |
Publisher | Association for Computational Linguistics |
Number of pages | 10 |
Publication status | Published - Nov 2018 |
Event | 2018 Conference on Empirical Methods in Natural Language Processing, Square Meeting Center, Brussels, Belgium, 31 Oct 2018 → 4 Nov 2018 (http://emnlp2018.org/) |
Conference
Conference | 2018 Conference on Empirical Methods in Natural Language Processing |
---|---|
Abbreviated title | EMNLP 2018 |
Country/Territory | Belgium |
City | Brussels |
Period | 31/10/18 → 4/11/18 |
Internet address | http://emnlp2018.org/ |