Edinburgh Research Explorer

Explicitly modeling case improves neural dependency parsing

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Documents

Final published version, 172 KB, PDF document

Licence: Creative Commons: Attribution (CC-BY)

http://aclweb.org/anthology/W18-5447
https://aclanthology.coli.uni-saarland.de/papers/W18-5447/w18-5447
Original language: English
Title of host publication: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Place of Publication: Brussels, Belgium
Publisher: ACL Anthology
Pages: 356-358
Number of pages: 3
Publication status: Published - Nov 2018
Event: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP - Brussels, Belgium
Duration: 1 Nov 2018 – 1 Nov 2018
https://blackboxnlp.github.io/

Conference

Conference: 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Country: Belgium
City: Brussels
Period: 1/11/18 – 1/11/18
Internet address: https://blackboxnlp.github.io/

Abstract

Neural dependency parsing models that compose word representations from characters can presumably exploit morphosyntax when making attachment decisions. How much do they know about morphology? We investigate how well they handle morphological case, which is important for parsing. Our experiments on Czech, German and Russian suggest that adding explicit morphological case—either oracle or predicted—improves neural dependency parsing, indicating that the learned representations in these models do not fully encode the morphological knowledge that they need, and can still benefit from targeted forms of explicit linguistic modeling.

ID: 76908989