Analyzing ASR Pretraining for Low-Resource Speech-to-Text Translation

Mihaela C. Stoian, Sameer Bansal, Sharon Goldwater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Previous work has shown that for low-resource source languages, automatic speech-to-text translation (AST) can be improved by pretraining an end-to-end model on automatic speech recognition (ASR) data from a high-resource language. However, it is not clear what factors—e.g., language relatedness or size of the pretraining data—yield the biggest improvements, or whether pretraining can be effectively combined with other methods such as data augmentation. Here, we experiment with pretraining on datasets of varying sizes, including languages related and unrelated to the AST source language. We find that the best predictor of final AST performance is the word error rate of the pretrained ASR model, and that differences in ASR/AST performance correlate with how phonetic information is encoded in the later RNN layers of our model. We also show that pretraining and data augmentation yield complementary benefits for AST.
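
A minimal sketch of the pretrain-then-transfer setup the abstract describes, assuming a PyTorch encoder-decoder: an encoder-decoder is first trained on high-resource ASR data, and its encoder weights are then used to initialize the AST model before fine-tuning on the low-resource translation data. All class names, layer sizes, and vocabulary sizes below are illustrative, not the authors' implementation, and attention is omitted for brevity.

    import torch
    import torch.nn as nn

    class SpeechEncoder(nn.Module):
        """Stacked LSTM over speech features (dimensions are illustrative)."""
        def __init__(self, feat_dim=80, hidden=256, layers=3):
            super().__init__()
            self.rnn = nn.LSTM(feat_dim, hidden, num_layers=layers, batch_first=True)

        def forward(self, feats):          # feats: (batch, time, feat_dim)
            out, _ = self.rnn(feats)
            return out                     # (batch, time, hidden)

    class Seq2Seq(nn.Module):
        """Encoder-decoder usable for both ASR (transcripts) and AST (translations)."""
        def __init__(self, vocab_size, feat_dim=80, hidden=256):
            super().__init__()
            self.encoder = SpeechEncoder(feat_dim, hidden)
            self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab_size)

        def forward(self, feats):
            enc = self.encoder(feats)
            dec, _ = self.decoder(enc)
            return self.out(dec)           # per-frame logits over the target vocab

    # 1) Pretrain on high-resource ASR data (training loop omitted).
    asr_model = Seq2Seq(vocab_size=5000)

    # 2) Build the AST model and copy over the pretrained encoder weights.
    ast_model = Seq2Seq(vocab_size=8000)
    ast_model.encoder.load_state_dict(asr_model.encoder.state_dict())

    # 3) Fine-tune ast_model on the low-resource speech-translation data.

Sharing only the encoder is the point of the transfer: the pretrained layers carry speech (phonetic) representations that are useful across tasks, while the decoder and output layer are re-initialized because the target vocabularies differ between transcription and translation.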
Original language: English
Title of host publication: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: Institute of Electrical and Electronics Engineers
Pages: 7909-7913
Number of pages: 5
ISBN (Electronic): 978-1-5090-6631-5
ISBN (Print): 978-1-5090-6632-2
DOIs
Publication status: Published - 14 May 2020
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing - Barcelona, Spain
Duration: 4 May 2020 - 8 May 2020
Conference number: 45

Publication series

Name
Publisher: IEEE
ISSN (Print): 1520-6149
ISSN (Electronic): 2379-190X

Conference

Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing
Abbreviated title: ICASSP 2020
Country/Territory: Spain
City: Barcelona
Period: 4/05/20 - 8/05/20

Keywords / Materials (for Non-textual outputs)

  • speech-to-text translation
  • transfer learning
  • pre-training
  • speech recognition
  • data augmentation
