Unsupervised Adaptation of Recurrent Neural Network Language Models
Recurrent neural network language models (RNNLMs) have been shown to consistently improve the Word Error Rates (WERs) of large vocabulary speech recognition systems employing n-gram LMs. In this paper we investigate supervised and unsupervised discriminative adaptation of RNNLMs in a broadcast transcription task to target domains defined by either genre or show. We explore two approaches: (1) scaling forward-propagated hidden activations using the Learning Hidden Unit Contributions (LHUC) technique, and (2) direct fine-tuning of the parameters of the whole RNNLM. To investigate the effectiveness of the proposed methods, we carry out experiments on multi-genre broadcast (MGB) data following the MGB-2015 challenge protocol. We observe small but significant improvements in WER compared to a strong unadapted RNNLM.
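The first adaptation approach can be illustrated informally. Below is a minimal sketch (not from the paper) of LHUC-style scaling applied to a single recurrent hidden layer, assuming a plain tanh RNN cell; the names `a_lhuc`, `rnn_step_lhuc`, and the toy dimensions are illustrative only. The idea is that only the per-unit scaling parameters would be estimated on the target-domain (genre or show) adaptation data, while the original RNNLM weights stay fixed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_step_lhuc(x_t, h_prev, W_xh, W_hh, b_h, a_lhuc):
    """One recurrent step with LHUC-style scaling of the hidden activations.

    a_lhuc holds one adaptation parameter per hidden unit; in adaptation,
    only this vector is updated, while W_xh, W_hh and b_h are frozen.
    """
    h = np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)  # standard hidden activation
    r = 2.0 * sigmoid(a_lhuc)                      # per-unit scale in (0, 2)
    return r * h                                   # re-weighted hidden state

# Toy usage for illustration only (hypothetical dimensions).
H, E = 4, 3
rng = np.random.default_rng(0)
W_xh, W_hh = rng.standard_normal((H, E)), rng.standard_normal((H, H))
b_h, a_lhuc = np.zeros(H), np.zeros(H)  # a_lhuc = 0 gives scale 1, i.e. the unadapted model
h = rnn_step_lhuc(rng.standard_normal(E), np.zeros(H), W_xh, W_hh, b_h, a_lhuc)
```

The second approach in the abstract, direct fine-tuning, would instead update all RNNLM parameters on the adaptation data rather than only a per-unit scaling vector.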
|Title of host publication||Interspeech 2016|
|Publisher||International Speech Communication Association|
|Number of pages||5|
|Publication status||Published - 8 Sep 2016|
|Event||Interspeech 2016 - San Francisco, United States|
|Event duration||8 Sep 2016 → 12 Sep 2016|