Abstract / Description of output
We present our submission to the Simultaneous Translation And Paraphrase for Language Education (STAPLE) challenge. We used a standard Transformer model for translation, with a cross-lingual classifier predicting correct translations on the output n-best list. To increase the diversity of the outputs, we used additional data to train the translation model, and we trained a paraphrasing model based on the Levenshtein Transformer architecture to generate further synonymous translations. The paraphrasing results were again filtered using our classifier. While the use of additional data and the classifier filter improved results, the paraphrasing model produced too many invalid outputs to further improve output quality. Our model without the paraphrasing component finished in the middle of the field for the shared task, improving over the best baseline by a margin of 10-22% absolute in weighted F1.
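As a rough illustration of the n-best filtering step described above, the sketch below keeps only those candidate translations that a cross-lingual classifier accepts for a given source sentence. This is a minimal sketch, not the authors' implementation: the `score_pair` callable and the 0.5 acceptance threshold are hypothetical stand-ins for their trained classifier and its tuned decision boundary.

```python
# Minimal sketch of n-best filtering with a (source, candidate) classifier.
# `score_pair` and the threshold are hypothetical placeholders, not the
# actual model or tuning from the paper.

from typing import Callable, List


def filter_nbest(
    source: str,
    nbest: List[str],
    score_pair: Callable[[str, str], float],
    threshold: float = 0.5,
) -> List[str]:
    """Keep only candidate translations the classifier accepts."""
    kept = []
    seen = set()
    for candidate in nbest:
        if candidate in seen:  # drop duplicate hypotheses from the n-best list
            continue
        seen.add(candidate)
        if score_pair(source, candidate) >= threshold:
            kept.append(candidate)
    return kept


# Toy usage with a stand-in scorer; a real system would score pairs with a
# trained cross-lingual classifier rather than a keyword check.
if __name__ == "__main__":
    toy_scorer = lambda src, cand: 0.9 if "gosto" in cand else 0.1
    print(filter_nbest("I like it", ["eu gosto", "eu gosto", "banana"], toy_scorer))
```

The same filter can be reapplied to paraphrased outputs, which matches the paper's design of passing the paraphrasing results through the classifier a second time.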
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Fourth Workshop on Neural Generation and Translation |
| Place of Publication | Online |
| Publisher | Association for Computational Linguistics |
| Pages | 153-160 |
| Number of pages | 8 |
| ISBN (Electronic) | 978-1-952148-17-0 |
| Publication status | Published - 10 Jul 2020 |
| Event | The 4th Workshop on Neural Generation and Translation, online workshop, Seattle, United States, 10 Jul 2020 → 10 Jul 2020. https://sites.google.com/view/wngt20 |
Workshop
| Workshop | The 4th Workshop on Neural Generation and Translation |
| --- | --- |
| Abbreviated title | WNGT 2020 |
| Country/Territory | United States |
| City | Seattle |
| Period | 10/07/20 → 10/07/20 |
| Internet address | https://sites.google.com/view/wngt20 |