Expand and Filter: CUNI and LMU Systems for the WNGT 2020 Duolingo Shared Task

Jindřich Libovický, Zdeněk Kasner, Jindřich Helcl, Ondřej Dušek

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present our submission to the Simultaneous Translation And Paraphrase for Language Education (STAPLE) challenge. We used a standard Transformer model for translation, with a crosslingual classifier predicting correct translations on the output n-best list. To increase the diversity of the outputs, we used additional data to train the translation model, and we trained a paraphrasing model based on the Levenshtein Transformer architecture to generate further synonymous translations. The paraphrasing results were again filtered using our classifier. While the use of additional data and our classifier filter were able to improve results, the paraphrasing model produced too many invalid outputs to further improve the output quality. Our model without the paraphrasing component finished in the middle of the field for the shared task, improving over the best baseline by a margin of 10–22% weighted F1 absolute.
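The expand-and-filter pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: `translator`, `paraphraser`, `classifier`, and the 0.5 acceptance threshold are hypothetical stand-ins for the Transformer MT model, the Levenshtein-Transformer paraphraser, and the cross-lingual filtering classifier.

```python
def expand_and_filter(source, translator, paraphraser, classifier, threshold=0.5):
    """Expand an n-best translation list with paraphrases, then filter
    the candidates with a classifier (hypothetical sketch of the pipeline).

    translator(source)        -> list of candidate translations (n-best list)
    paraphraser(candidate)    -> list of synonymous variants of a candidate
    classifier(source, cand)  -> acceptance score in [0, 1]
    """
    candidates = translator(source)          # n-best list from the MT model
    expanded = list(candidates)
    for cand in candidates:
        expanded.extend(paraphraser(cand))   # add synonymous translations

    # Deduplicate while preserving the original ranking order.
    seen, unique = set(), []
    for cand in expanded:
        if cand not in seen:
            seen.add(cand)
            unique.append(cand)

    # Keep only candidates the classifier scores above the threshold.
    return [c for c in unique if classifier(source, c) >= threshold]
```

With toy stand-in models, a low-scoring paraphrase is dropped while the accepted translations survive, mirroring how the classifier filter discards invalid paraphraser outputs.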
Original language: English
Title of host publication: Proceedings of the Fourth Workshop on Neural Generation and Translation
Place of publication: Online
Publisher: Association for Computational Linguistics
Pages: 153–160
Number of pages: 8
ISBN (electronic): 978-1-952148-17-0
Publication status: Published - 10 Jul 2020
Event: The 4th Workshop on Neural Generation and Translation - Online workshop, Seattle, United States
Duration: 10 Jul 2020 – 10 Jul 2020
https://sites.google.com/view/wngt20

Workshop

Workshop: The 4th Workshop on Neural Generation and Translation
Abbreviated title: WNGT 2020
Country: United States
City: Seattle
Period: 10/07/20 – 10/07/20

