Cross-Lingual Topic Prediction for Speech Using Translations

Sameer Bansal, Herman Kamper, Adam Lopez, Sharon Goldwater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Given a large amount of unannotated speech in a low-resource language, can we classify the speech utterances by topic? We consider this question in the setting where a small amount of speech in the low-resource language is paired with text translations in a high-resource language. We develop an effective cross-lingual topic classifier by training on just 20 hours of translated speech, using a recent model for direct speech-to-text translation. While the translations are poor, they are still good enough to correctly classify the topic of 1-minute speech segments over 70% of the time—a 20% improvement over a majority-class baseline. Such a system could be useful for humanitarian applications like crisis response, where incoming speech in a foreign low-resource language must be quickly assessed for further action.
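The approach described above is a two-stage pipeline: a direct speech-to-text translation model produces noisy English text, and a text classifier then predicts the topic of each segment. As a minimal sketch of the second stage only, the following trains a bag-of-words Naive Bayes classifier on hypothetical translated segments (the paper's actual classifier, topics, and data differ; everything below is illustrative).

```python
from collections import Counter, defaultdict
import math

# Hypothetical English translations of speech segments, paired with topic
# labels. In the paper these would come from a speech translation model
# trained on ~20 hours of translated speech; these toy examples stand in.
train = [
    ("the flood destroyed homes and roads", "disaster"),
    ("heavy rain caused the river to overflow", "disaster"),
    ("the clinic needs medicine and doctors", "health"),
    ("many people are sick and need treatment", "health"),
    ("crops failed and food is scarce", "food"),
    ("families have no grain left to eat", "food"),
]

def train_nb(data):
    """Multinomial Naive Bayes: per-topic word counts plus topic priors."""
    word_counts = defaultdict(Counter)
    topic_counts = Counter()
    vocab = set()
    for text, topic in data:
        topic_counts[topic] += 1
        for w in text.split():
            word_counts[topic][w] += 1
            vocab.add(w)
    return word_counts, topic_counts, vocab

def predict(model, text):
    """Return the topic with the highest smoothed log-probability."""
    word_counts, topic_counts, vocab = model
    total = sum(topic_counts.values())
    best_topic, best_lp = None, float("-inf")
    for topic in topic_counts:
        lp = math.log(topic_counts[topic] / total)
        # Add-one smoothing so unseen words do not zero out a topic.
        denom = sum(word_counts[topic].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[topic][w] + 1) / denom)
        if lp > best_lp:
            best_topic, best_lp = topic, lp
    return best_topic

model = train_nb(train)
print(predict(model, "rain flooded the village"))  # → disaster
```

Because the classifier only counts content words, it can tolerate the kind of noisy, partially wrong translations the paper reports, as long as topical keywords survive translation.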
Original language: English
Title of host publication: ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Number of pages: 5
ISBN (Electronic): 978-1-5090-6631-5
ISBN (Print): 978-1-5090-6632-2
Publication status: Published - 14 May 2020
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing - Barcelona, Spain
Duration: 4 May 2020 – 8 May 2020
Conference number: 45

Publication series

ISSN (Print): 1520-6149
ISSN (Electronic): 2379-190X


Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing
Abbreviated title: ICASSP 2020

Keywords

  • speech translation
  • low-resource speech processing
  • speech classification
  • unwritten languages


