Pre-training on high-resource speech recognition improves low-resource speech-to-text translation

Sameer Bansal, Herman Kamper, Karen Livescu, Adam Lopez, Sharon Goldwater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We present a simple approach to improve direct speech-to-text translation (ST) when the source language is low-resource: we pre-train the model on a high-resource automatic speech recognition (ASR) task, and then fine-tune its parameters for ST. We demonstrate that our approach is effective by pre-training on 300 hours of English ASR data to improve Spanish-English ST from 10.8 to 20.2 BLEU when only 20 hours of Spanish-English ST training data are available. Through an ablation study, we find that the pre-trained encoder (acoustic model) accounts for most of the improvement, despite the fact that the shared language in these tasks is the target language text, not the source language audio. Applying this insight, we show that pre-training on ASR helps ST even when the ASR language differs from both source and target ST languages: pre-training on French ASR also improves Spanish-English ST. Finally, we show that the approach improves performance on a true low-resource task: pre-training on a combination of English ASR and French ASR improves Mboshi-French ST, where only 4 hours of data are available, from 3.5 to 7.1 BLEU.
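The ablation finding above, that the pre-trained encoder carries most of the benefit, amounts to a simple parameter-transfer recipe: initialise the ST model's encoder from the ASR model and leave the decoder fresh before fine-tuning. The following is a schematic sketch of that transfer step, not the authors' code; the toy dict-of-weights "model" and the `transfer_encoder` helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_model(n_enc=2, n_dec=2, dim=4):
    """Toy encoder-decoder: parameters stored as a dict of weight matrices.
    (Illustrative stand-in for a real seq2seq ASR/ST model.)"""
    return {
        **{f"encoder.{i}": rng.standard_normal((dim, dim)) for i in range(n_enc)},
        **{f"decoder.{i}": rng.standard_normal((dim, dim)) for i in range(n_dec)},
    }

def transfer_encoder(asr_params, st_params):
    """Initialise the ST model from the ASR model's encoder (acoustic model),
    keeping the ST decoder's own (fresh) parameters, per the ablation insight."""
    return {
        name: (asr_params[name] if name.startswith("encoder.") else w)
        for name, w in st_params.items()
    }

asr_model = init_model()   # stands in for a model trained on 300h English ASR
st_model = init_model()    # fresh Spanish-English ST model
st_model = transfer_encoder(asr_model, st_model)

# Encoder weights now match the pre-trained ASR model; decoder weights stay fresh.
assert all(np.array_equal(st_model[k], asr_model[k])
           for k in st_model if k.startswith("encoder."))
```

After this initialisation, all parameters would be fine-tuned on the available ST data; in a framework like PyTorch the same idea is typically done by loading a filtered `state_dict`.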
Original language: English
Title of host publication: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics
Place of Publication: Minneapolis, Minnesota
Publisher: Association for Computational Linguistics
Pages: 58–68
Number of pages: 11
Volume: 1
DOIs
Publication status: Published - 7 Jun 2019
Event: 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics - Minneapolis, United States
Duration: 2 Jun 2019 – 7 Jun 2019
https://naacl2019.org/

Conference

Conference: 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Abbreviated title: NAACL-HLT 2019
Country: United States
City: Minneapolis
Period: 2/06/19 – 7/06/19
Internet address: https://naacl2019.org/

