Edinburgh Research Explorer

Adaptive Feature Selection for End-to-End Speech Translation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

Original language: English
Title of host publication: Findings of EMNLP
Number of pages: 12
Publication status: Accepted/In press - 15 Sep 2020
Event: The 2020 Conference on Empirical Methods in Natural Language Processing - Virtual conference
Duration: 16 Nov 2020 - 20 Nov 2020
https://2020.emnlp.org/

Conference

Conference: The 2020 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2020
City: Virtual conference
Period: 16/11/20 - 20/11/20
Internet address: https://2020.emnlp.org/

Abstract

Information in speech signals is not evenly distributed, making it an additional challenge for end-to-end (E2E) speech translation (ST) to learn to focus on informative features. In this paper, we propose adaptive feature selection (AFS) for encoder-decoder based E2E ST. We first pre-train an ASR encoder and apply AFS to dynamically estimate the importance of each encoded speech feature to ASR. An ST encoder, stacked on top of the ASR encoder, then receives the filtered features from the (frozen) ASR encoder. We take L0DROP (Zhang et al., 2020) as the backbone for AFS, and adapt it to sparsify speech features with respect to both temporal and feature dimensions. Results on LibriSpeech En-Fr and MuST-C benchmarks show that AFS facilitates the learning of ST by pruning out ∼84% of temporal features, yielding an average translation gain of ∼1.3–1.6 BLEU and a decoding speedup of ∼1.4×. In particular, AFS reduces the performance gap compared to the cascade baseline, and outperforms it on LibriSpeech En-Fr with a BLEU score of 18.56 (without data augmentation).
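The sketch below illustrates the temporal gating idea described in the abstract, assuming a PyTorch setup: hard-concrete (L0Drop-style) gates score each frame emitted by the frozen, pre-trained ASR encoder, and an expected-L0 penalty pushes most gates to exactly zero so that only the surviving frames are passed to the ST encoder. The class name AdaptiveFeatureSelection and all hyperparameters are illustrative and are not the authors' released code; the paper additionally sparsifies along the feature dimension, which this sketch omits.

import math
import torch
import torch.nn as nn


class AdaptiveFeatureSelection(nn.Module):
    """Hard-concrete gates that prune uninformative frames from a frozen ASR encoder."""

    def __init__(self, d_model, beta=2.0 / 3.0, gamma=-0.1, zeta=1.1):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)  # per-frame gate logits (log alpha)
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def forward(self, asr_states):
        # asr_states: (batch, time, d_model) produced by the frozen ASR encoder
        log_alpha = self.scorer(asr_states).squeeze(-1)  # (batch, time)
        if self.training:
            # reparameterised hard-concrete sample (Louizos et al., 2018)
            u = torch.rand_like(log_alpha).clamp(1e-6, 1.0 - 1e-6)
            s = torch.sigmoid((u.log() - (1.0 - u).log() + log_alpha) / self.beta)
        else:
            s = torch.sigmoid(log_alpha / self.beta)
        # stretch to (gamma, zeta) and clip, so gates reach exact zeros and ones
        gates = torch.clamp(s * (self.zeta - self.gamma) + self.gamma, 0.0, 1.0)
        # expected L0 norm; adding this term to the training loss drives most
        # temporal positions to zero (the ~84% pruning reported in the abstract)
        expected_l0 = torch.sigmoid(
            log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum(dim=-1).mean()
        return asr_states * gates.unsqueeze(-1), gates, expected_l0

At inference time, frames whose gates are exactly zero can be dropped before the ST encoder runs, which is where a decoding speedup of the kind reported in the abstract would come from.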

ID: 172589823