Abstract
Neural dependency parsing models that compose word representations from characters can presumably exploit morphosyntax when making attachment decisions. How much do they know about morphology? We investigate how well they handle morphological case, which is important for parsing. Our experiments on Czech, German and Russian suggest that adding explicit morphological case—either oracle or predicted—improves neural dependency parsing, indicating that the learned representations in these models do not fully encode the morphological knowledge that they need, and can still benefit from targeted forms of explicit linguistic modeling.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP |
| Place of Publication | Brussels, Belgium |
| Publisher | ACL Anthology |
| Pages | 356-358 |
| Number of pages | 3 |
| Publication status | Published - Nov 2018 |
| Event | 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Brussels, Belgium, 1 Nov 2018 → 1 Nov 2018 |
Conference

| Conference | 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP |
|---|---|
| Country/Territory | Belgium |
| City | Brussels |
| Period | 1/11/18 → 1/11/18 |
| Internet address | https://blackboxnlp.github.io/ |