Edinburgh Research Explorer

A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Documents

  • Accepted author manuscript, 367 KB, PDF document (Licence: Creative Commons: Attribution CC-BY)
  • Final published version, 528 KB, PDF document (Licence: Creative Commons: Attribution CC-BY)

https://www.aclweb.org/anthology/D19-1102/
Original language: English
Title of host publication: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing
Publisher: Association for Computational Linguistics
Pages: 1105–1116
Number of pages: 13
ISBN (Print): 978-1-950737-90-1
Publication status: Published - 4 Nov 2019
Event: 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing - Hong Kong, Hong Kong
Duration: 3 Nov 2019 – 7 Nov 2019
https://www.emnlp-ijcnlp2019.org/

Conference

Conference: 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing
Abbreviated title: EMNLP-IJCNLP 2019
Country: Hong Kong
City: Hong Kong
Period: 3/11/19 – 7/11/19

Abstract

Parsers are available for only a handful of the world’s languages, since they require lots of training data. How far can we get with just a small amount of training data? We systematically compare a set of simple strategies for improving low-resource parsers: data augmentation, which has not been tested before; cross-lingual training; and transliteration. Experimenting on three typologically diverse low-resource languages (North Sámi, Galician, and Kazakh), we find that (1) when only the low-resource treebank is available, data augmentation is very helpful; (2) when a related high-resource treebank is available, cross-lingual training is helpful and complements data augmentation; and (3) when the high-resource treebank uses a different writing system, transliteration into a shared orthographic space is also very helpful.
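The data augmentation mentioned above can be illustrated with a minimal sketch. The snippet below shows dependency-tree "cropping", one common tree-morphing augmentation for low-resource parsing: a shorter but still well-formed training sentence is carved out of an existing tree. The simplified head-index representation and the helper names are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of dependency-tree "cropping" augmentation.
# A sentence is a token list plus 1-based head indices (0 = syntactic
# ROOT). Cropping keeps the ROOT-attached predicate and the subtree of
# one of its dependents, yielding a new, shorter training example.

def subtree(heads, root):
    """Return the set of 1-based token indices in the subtree at `root`."""
    keep = {root}
    changed = True
    while changed:
        changed = False
        for i, h in enumerate(heads, start=1):
            if h in keep and i not in keep:
                keep.add(i)
                changed = True
    return keep

def crop(tokens, heads, dep):
    """Keep the ROOT-attached token plus the subtree of dependent `dep`,
    reindexing heads so the result is a valid (shorter) tree."""
    root = heads.index(0) + 1                     # token attached to ROOT
    kept = sorted({root} | subtree(heads, dep))
    remap = {old: new for new, old in enumerate(kept, start=1)}
    new_tokens = [tokens[i - 1] for i in kept]
    new_heads = [0 if heads[i - 1] == 0 else remap[heads[i - 1]] for i in kept]
    return new_tokens, new_heads

# "The dog chased the cat" -> keep only the subject subtree:
print(crop(["The", "dog", "chased", "the", "cat"], [2, 3, 0, 5, 3], dep=2))
# → (['The', 'dog', 'chased'], [2, 3, 0])
```

Applied across a small treebank with different choices of `dep`, cropping multiplies the number of training trees without requiring any new annotation.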


ID: 115876775