Differentiable pooling for unsupervised speaker adaptation

Pawel Swietojanski, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This paper proposes a differentiable pooling mechanism to perform model-based neural network speaker adaptation. The proposed technique learns a speaker-dependent combination of activations within pools of hidden units; it was shown to work well in unsupervised adaptation and does not require speaker-adaptive training. We conducted a set of experiments on the TED talks data, as used in the IWSLT evaluations. Our results indicate that the approach can reduce word error rates (WERs) on standard IWSLT test sets by about 5–11% relative compared to speaker-independent systems, and it was found to be complementary to the recently proposed learning hidden unit contributions (LHUC) approach, reducing WER by 6–13% relative. Both methods were also found to work well when adapting with small amounts of unsupervised data: as little as 10 seconds of adaptation data can decrease the WER by 5% relative compared to the baseline speaker-independent system.
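The abstract describes learning a speaker-dependent combination of activations within pools of hidden units. As a minimal sketch of that idea (not the paper's exact parameterisation), the following assumes non-overlapping pools and a softmax over each pool's learnable weights, so the speaker-dependent combination stays convex:

```python
import numpy as np

def diff_pool(h, w, pool_size=3):
    """Combine activations within non-overlapping pools of hidden units
    using learned, speaker-dependent weights.

    h: (batch, n_units) hidden-layer activations
    w: (n_pools, pool_size) raw pooling weights, adapted per speaker
    (hypothetical parameterisation: softmax-normalised per pool)
    """
    n_pools = h.shape[-1] // pool_size
    hp = h.reshape(-1, n_pools, pool_size)             # (batch, pools, units)
    a = np.exp(w) / np.exp(w).sum(-1, keepdims=True)   # softmax per pool
    return (hp * a).sum(-1)                            # (batch, pools)

rng = np.random.default_rng(0)
h = rng.standard_normal((2, 12))   # activations: 12 units -> 4 pools of 3
w = np.zeros((4, 3))               # zero init: uniform weights per pool
y = diff_pool(h, w)                # at init this is just mean-pooling
```

Because the pooling weights enter the output differentiably, they can be updated by backpropagation on a small amount of (possibly unsupervised) adaptation data while the speaker-independent network weights stay fixed.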
Original language: English
Title of host publication: Proceedings IEEE International Conference on Acoustics, Speech and Signal Processing
Number of pages: 5
Publication status: Published - 2015

