Unsupervised Adaptation of Recurrent Neural Network Language Models

Siva Reddy Gangireddy, Pawel Swietojanski, Peter Bell, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recurrent neural network language models (RNNLMs) have been shown to consistently reduce the word error rates (WERs) of large-vocabulary speech recognition systems employing n-gram LMs. In this paper we investigate supervised and unsupervised discriminative adaptation of RNNLMs in a broadcast transcription task to target domains defined by either genre or show. We explore two approaches, based on (1) scaling forward-propagated hidden activations (the Learning Hidden Unit Contributions (LHUC) technique) and (2) direct fine-tuning of the parameters of the whole RNNLM. To investigate the effectiveness of the proposed methods we carry out experiments on multi-genre broadcast (MGB) data following the MGB-2015 challenge protocol. We observe small but significant improvements in WER compared to a strong unadapted RNNLM.
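
The sketch below is a minimal illustration, in PyTorch, of the two adaptation routes named in the abstract: LHUC-style scaling of forward-propagated hidden activations and fine-tuning of the whole RNNLM. It is not the authors' implementation; the class and function names (LHUCRNNLM, adapt), the Elman-style RNN, the layer sizes, the learning rate and the 2*sigmoid parametrisation of the per-unit scalers are assumptions made only for illustration.

```python
# Minimal sketch of LHUC adaptation for an RNN language model (PyTorch).
# Assumed names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn


class LHUCRNNLM(nn.Module):
    """Elman-style RNN language model with per-unit LHUC scalers."""

    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.RNN(embed_dim, hidden_dim, batch_first=True)
        # One LHUC parameter per hidden unit; 2*sigmoid(0) = 1, so the
        # unadapted model is recovered exactly at initialisation.
        self.lhuc = nn.Parameter(torch.zeros(hidden_dim))
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        hidden, state = self.rnn(self.embed(tokens), state)
        # Scale the forward-propagated hidden activations, unit by unit.
        hidden = hidden * (2.0 * torch.sigmoid(self.lhuc))
        return self.out(hidden), state


def adapt(model, batches, lhuc_only=True, lr=0.1, epochs=3):
    """Adapt on target-domain word sequences.

    lhuc_only=True updates only the LHUC scalers (approach 1);
    lhuc_only=False fine-tunes all parameters of the RNNLM (approach 2).
    """
    params = [model.lhuc] if lhuc_only else list(model.parameters())
    optimiser = torch.optim.SGD(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for inputs, targets in batches:  # targets = inputs shifted by one
            logits, _ = model(inputs)
            loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                           targets.reshape(-1))
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
```

In the unsupervised setting described in the paper, the adaptation batches would be built from first-pass recognition hypotheses for the target genre or show rather than from reference transcripts.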
Original language: English
Title of host publication: Interspeech 2016
Pages: 2333-2337
Number of pages: 5
DOIs
Publication status: Published - 8 Sep 2016
Event: Interspeech 2016 - San Francisco, United States
Duration: 8 Sep 2016 - 12 Sep 2016
http://www.interspeech2016.org/

Publication series

Name: Interspeech
Publisher: International Speech Communication Association
ISSN (Print): 1990-9772

Conference

Conference: Interspeech 2016
Country: United States
City: San Francisco
Period: 8/09/16 - 12/09/16
Internet address: http://www.interspeech2016.org/
