Edinburgh Research Explorer

What do character-level models learn about morphology? The case of dependency parsing

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Documents

http://aclweb.org/anthology/D18-1278
https://aclanthology.coli.uni-saarland.de/papers/D18-1278/d18-1278
Original language: English
Title of host publication: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Place of Publication: Brussels, Belgium
Publisher: Association for Computational Linguistics
Pages: 2573-2583
Number of pages: 11
Publication status: Published - 1 Nov 2018
Event: 2018 Conference on Empirical Methods in Natural Language Processing - Square Meeting Center, Brussels, Belgium
Duration: 31 Oct 2018 - 4 Nov 2018
http://emnlp2018.org/

Conference

Conference: 2018 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2018
Country: Belgium
City: Brussels
Period: 31/10/18 - 4/11/18
Internet address: http://emnlp2018.org/

Abstract

When parsing morphologically-rich languages with neural models, it is beneficial to model input at the character level, and it has been claimed that this is because character-level models learn morphology. We test these claims by comparing character-level models to an oracle with access to explicit morphological analysis on twelve languages with varying morphological typologies. Our results highlight many strengths of character-level models, but also show that they are poor at disambiguating some words, particularly in the face of case syncretism. We then demonstrate that explicitly modeling morphological case improves our best model, showing that character-level models can benefit from targeted forms of explicit morphological modeling.



ID: 75323626