Edinburgh Research Explorer

Unsupervised Adaptation of Recurrent Neural Network Language Models

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: Interspeech 2016
Number of pages: 5
Publication status: Published - 8 Sep 2016
Event: Interspeech 2016 - San Francisco, United States
Duration: 8 Sep 2016 - 12 Sep 2016

Publication series

Publisher: International Speech Communication Association
ISSN (Print): 1990-9772


Conference: Interspeech 2016
Country: United States
City: San Francisco


Abstract

Recurrent neural network language models (RNNLMs) have been shown to consistently improve the word error rates (WERs) of large-vocabulary speech recognition systems employing n-gram LMs. In this paper we investigate supervised and unsupervised discriminative adaptation of RNNLMs in a broadcast transcription task to target domains defined by either genre or show. We explore two approaches based on (1) scaling forward-propagated hidden activations using the Learning Hidden Unit Contributions (LHUC) technique and (2) direct fine-tuning of the parameters of the whole RNNLM. To assess the effectiveness of the proposed methods we carry out experiments on multi-genre broadcast (MGB) data following the MGB-2015 challenge protocol. We observe small but significant improvements in WER compared to a strong unadapted RNNLM baseline.
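For illustration, the sketch below shows one way the LHUC idea described in the abstract can be applied to an RNNLM hidden layer: a per-unit parameter vector, passed through a 2·sigmoid amplitude function as in the original LHUC formulation, rescales the forward-propagated hidden activations, and only this vector is trained on adaptation data while the rest of the network stays frozen. This is a minimal sketch under those assumptions; the class and parameter names are illustrative and not the authors' code.

```python
# Minimal LHUC-style adaptation of an RNN LM hidden layer (PyTorch).
# All names here are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class LHUCRNNLM(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)
        # One LHUC parameter per hidden unit; since 2*sigmoid(0) = 1,
        # the unadapted network is recovered exactly at initialisation.
        self.lhuc = nn.Parameter(torch.zeros(hidden_dim))

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        # Rescale the forward-propagated hidden activations per unit.
        h = 2.0 * torch.sigmoid(self.lhuc) * h
        return self.out(h)  # next-word logits, shape (batch, time, vocab)

# Adaptation: freeze everything except the LHUC scalers, then train on
# target-domain text (in the unsupervised case, presumably first-pass
# recogniser output rather than manual transcripts).
model = LHUCRNNLM(vocab_size=10000)
for name, p in model.named_parameters():
    p.requires_grad = (name == "lhuc")
optimizer = torch.optim.SGD([model.lhuc], lr=0.1)
```

The paper's second approach, fine-tuning the whole RNNLM, would correspond to the same training loop with all parameters left trainable rather than just the LHUC vector.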



