Adaptive Feature Selection for End-to-End Speech Translation

Biao Zhang, Ivan Titov, Barry Haddow, Rico Sennrich

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Information in speech signals is not evenly distributed, making it an additional challenge for end-to-end (E2E) speech translation (ST) to learn to focus on informative features. In this paper, we propose adaptive feature selection (AFS) for encoder-decoder based E2E ST. We first pre-train an ASR encoder and apply AFS to dynamically estimate the importance of each encoded speech feature to ASR. An ST encoder, stacked on top of the ASR encoder, then receives the filtered features from the (frozen) ASR encoder. We take L0DROP (Zhang et al., 2020) as the backbone for AFS and adapt it to sparsify speech features along both the temporal and feature dimensions. Results on the LibriSpeech En-Fr and MuST-C benchmarks show that AFS facilitates learning of ST by pruning out ∼84% of temporal features, yielding an average translation gain of ∼1.3–1.6 BLEU and a decoding speedup of ∼1.4×. In particular, AFS narrows the performance gap to the cascade baseline, and outperforms it on LibriSpeech En-Fr with a BLEU score of 18.56 (without data augmentation).
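For readers unfamiliar with L0DROP, the sketch below illustrates the kind of mechanism the abstract describes: a Hard-Concrete gate (Louizos et al., 2018), the stochastic relaxation that L0DROP-style sparsification builds on, used to gate frozen ASR encoder states before they reach the ST encoder. This is a minimal, hypothetical PyTorch sketch, not the authors' actual code; all names (HardConcreteGate, score_net) and hyperparameter values are illustrative assumptions.

```python
import math

import torch
import torch.nn as nn


class HardConcreteGate(nn.Module):
    """Hard-Concrete gate (Louizos et al., 2018). Gates take values in
    [0, 1] and are exactly 0 or 1 with non-zero probability, so a
    differentiable expected-L0 penalty can push most gates to hard zeros.
    Hyperparameter values below are illustrative, not the paper's."""

    def __init__(self, beta: float = 0.5, gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        # `logits` are importance scores predicted from the frozen ASR
        # encoder states: one per time step for temporal sparsification,
        # or one per channel for the feature dimension.
        if self.training:
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + logits) / self.beta)
        else:
            s = torch.sigmoid(logits)
        # Stretch to (gamma, zeta), then clamp: yields exact zeros/ones.
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def expected_l0(self, logits: torch.Tensor) -> torch.Tensor:
        # Expected number of open gates: the differentiable L0 penalty.
        shift = self.beta * math.log(-self.gamma / self.zeta)
        return torch.sigmoid(logits - shift).sum()


# How the pieces might fit together (module names are assumptions):
# asr_states = frozen_asr_encoder(speech)   # (batch, time, dim)
# logits = score_net(asr_states)            # (batch, time, 1)
# z = HardConcreteGate()(logits)            # gates in [0, 1]
# st_input = asr_states * z                 # steps with z == 0 can be
#                                           # dropped before the ST encoder
# loss = translation_loss + lam * gate.expected_l0(logits)
```

At test time the gates are deterministic, so positions whose gate is exactly zero can be removed outright, which is where the reported ∼1.4× decoding speedup would come from.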
Original language: English
Title of host publication: Findings of the Association for Computational Linguistics: EMNLP 2020
Publisher: Association for Computational Linguistics (ACL)
Pages: 2533-2544
Number of pages: 12
ISBN (Print): 978-1-952148-90-3
Publication status: Published - 16 Nov 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - Virtual conference
Duration: 16 Nov 2020 – 20 Nov 2020
Internet address: https://2020.emnlp.org/

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2020
City: Virtual conference
Period: 16/11/20 – 20/11/20
Internet address: https://2020.emnlp.org/
