The recurrent neural network language model (RNNLM) has been demonstrated to consistently reduce perplexities and automatic speech recognition (ASR) word error rates across a variety of domains. In this paper we propose a pre-training method for the RNNLM, by sharing the output weights of the feed forward neural network language model (NNLM) with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. We have carried out text-based experiments on the Penn Treebank Wall Street Journal data, and ASR experiments on the TED talks data used in the International Workshop on Spoken Language Translation (IWSLT) evaluation campaigns. Across the experiments, we observe small improvements in perplexity and ASR word error rate.
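The initialisation step described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical parameter names and sizes (not the authors' code): a trained feed-forward NNLM's hidden-to-output weight matrix is copied into an RNNLM that has the same number of hidden units, while the recurrent weights remain randomly initialised.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; in practice these come from the trained NNLM.
vocab_size, hidden_units = 10_000, 256

# Feed-forward NNLM parameters (random here; trained in practice).
nnlm = {
    "embedding": rng.normal(scale=0.01, size=(vocab_size, hidden_units)),
    "output_weights": rng.normal(scale=0.01, size=(hidden_units, vocab_size)),
}

# RNNLM with the same number of hidden units. Its hidden-to-output weights
# are initialised from the NNLM's output weights (the pre-training step);
# the recurrent weights are still randomly initialised and trained from scratch.
rnnlm = {
    "embedding": rng.normal(scale=0.01, size=(vocab_size, hidden_units)),
    "recurrent": rng.normal(scale=0.01, size=(hidden_units, hidden_units)),
    "output_weights": nnlm["output_weights"].copy(),
}

print(rnnlm["output_weights"].shape)  # (256, 10000)
print(np.array_equal(rnnlm["output_weights"], nnlm["output_weights"]))  # True
```

After this initialisation, the RNNLM is trained as usual; only the starting point of its output layer differs from a randomly initialised model.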
Title of host publication: INTERSPEECH 2014 — 15th Annual Conference of the International Speech Communication Association
Publisher: International Speech Communication Association
Number of pages: 5
Publication status: Published - 2014