Training Data Augmentation for Low-Resource Morphological Inflection

Toms Bergmanis, Katharina Kann, Hinrich Schütze, Sharon Goldwater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This work describes the UoE-LMU submission for the CoNLL-SIGMORPHON 2017 Shared Task on Universal Morphological Reinflection, Subtask 1: given a lemma and target morphological tags, generate the target inflected form. We evaluate several ways to improve performance in the 1000-example setting: three methods to augment the training data with identical input-output pairs (i.e., autoencoding), a heuristic approach to identify likely pairs of inflectional variants from an unlabelled corpus, and a method for cross-lingual knowledge transfer. We find that autoencoding random strings works surprisingly well, outperformed only slightly by autoencoding words from an unlabelled corpus. The random string method also works well in the 10,000-example setting despite not being tuned for it. Among 18 submissions, our system takes 1st and 6th place in the 10k and 1k settings, respectively.
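The random-string autoencoding augmentation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the alphabet, length range, and the placeholder "COPY" tag are assumptions, and the actual tag format in the shared task data differs.

```python
import random
import string

def make_autoencoding_pairs(n_pairs, alphabet=string.ascii_lowercase,
                            min_len=3, max_len=10, seed=0):
    """Generate identical input-output (autoencoding) training pairs
    from random strings. Each pair is (lemma, tag, target) with
    target == lemma; "COPY" is a hypothetical placeholder tag."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        length = rng.randint(min_len, max_len)
        s = "".join(rng.choice(alphabet) for _ in range(length))
        pairs.append((s, "COPY", s))
    return pairs

# Mix these pairs into the real inflection training data before training.
augmented = make_autoencoding_pairs(1000)
```

The intuition is that copying teaches the sequence-to-sequence model a strong character-level identity bias, which helps in low-resource settings because most inflections preserve large portions of the lemma.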
Original language: English
Title of host publication: Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection
Publisher: Association for Computational Linguistics
Number of pages: 9
Publication status: Published - 4 Aug 2017
Event: CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection - Vancouver, Canada
Duration: 3 Aug 2017 - 4 Aug 2017


Conference: CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection
Abbreviated title: CoNLL SIGMORPHON 2017