Learning Hidden Unit Contributions for Unsupervised Acoustic Model Adaptation

Pawel Swietojanski, Jinyu Li, Steve Renals

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

This work presents a broad study of the adaptation of neural network acoustic models by means of learning hidden unit contributions (LHUC), a method that linearly re-combines hidden units in a speaker- or environment-dependent manner using small amounts of unsupervised adaptation data. We also extend LHUC to a speaker adaptive training (SAT) framework, which leads to a more adaptable DNN acoustic model that works in both speaker-dependent and speaker-independent modes, without the need to maintain auxiliary speaker-dependent feature extractors or to introduce significant speaker-dependent changes to the DNN structure. Through a series of experiments on four different speech recognition benchmarks (TED talks, Switchboard, AMI meetings, and Aurora4) covering over 270 test speakers, we show that LHUC, in both its test-only and SAT variants, yields consistent word error rate reductions of 5% to 23% relative, depending on the task and the degree of mismatch between training and test data. In addition, we investigate the effect of the amount of adaptation data per speaker, the quality of unsupervised adaptation targets, complementarity with other adaptation techniques, one-shot adaptation, and an extension to DNNs trained in a sequence-discriminative manner.
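To make the adaptation mechanism concrete: LHUC attaches a small set of speaker-dependent amplitude parameters that rescale each hidden unit's output, and only those amplitudes are updated at test time while the speaker-independent weights stay frozen. The sketch below is a minimal illustration, not the authors' implementation; it assumes a single feed-forward layer and the commonly used amplitude function a(r) = 2·sigmoid(r), which constrains each scale to (0, 2) so that r = 0 recovers the unadapted network. Names such as `lhuc_forward`, and the squared-error loss standing in for the cross-entropy against first-pass decoding targets, are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lhuc_forward(W, b, r, x):
    """Forward pass through one hidden layer with LHUC scaling.

    Each hidden unit's activation is multiplied by a speaker-dependent
    amplitude a(r_j) = 2*sigmoid(r_j) in (0, 2); r = 0 gives a(r) = 1,
    i.e. the unadapted speaker-independent network.
    """
    h = sigmoid(W @ x + b)           # speaker-independent hidden activations
    return 2.0 * sigmoid(r) * h      # element-wise LHUC re-scaling

# Toy adaptation loop (hypothetical setup): W and b are frozen,
# only the per-speaker LHUC parameters r are updated.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))      # frozen speaker-independent weights
b = np.zeros(8)
r = np.zeros(8)                      # per-speaker LHUC parameters

x = rng.standard_normal(4)           # one adaptation frame
target = rng.standard_normal(8)      # stand-in for unsupervised targets

lr = 0.1
for _ in range(100):
    h = sigmoid(W @ x + b)
    y = 2.0 * sigmoid(r) * h
    err = y - target
    # Gradient of 0.5*||y - target||^2 w.r.t. r only:
    # dy_j/dr_j = 2*sigmoid(r_j)*(1 - sigmoid(r_j))*h_j
    grad_r = err * h * 2.0 * sigmoid(r) * (1.0 - sigmoid(r))
    r -= lr * grad_r

print("adapted amplitudes:", np.round(2.0 * sigmoid(r), 3))
```

In the test-only variant described in the abstract, only r would be estimated per speaker from unsupervised adaptation data; in the SAT variant, per-speaker amplitudes are also learned jointly with the shared weights during training.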
Original language: English
Pages (from-to): 1450-1463
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech and Language Processing
Volume: 24
Issue number: 8
Early online date: 28 Apr 2016
DOIs
Publication status: Published - 1 Aug 2016
