Edinburgh Research Explorer

Unsupervised cross-lingual knowledge transfer in DNN-based LVCSR

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: Spoken Language Technology Workshop (SLT), 2012 IEEE
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 246-251
Number of pages: 6
DOIs
Publication status: Published - 2012

Abstract

We investigate the use of cross-lingual acoustic data to initialise deep neural network (DNN) acoustic models by means of unsupervised restricted Boltzmann machine (RBM) pretraining. DNNs for German are pretrained using data from one of German, Portuguese, Spanish and Swedish, or from all four languages. The DNNs are evaluated in a tandem configuration, where the network outputs serve as features for a hidden Markov model (HMM) whose emission densities are modelled by Gaussian mixture models (GMMs), and in a hybrid configuration, where the network outputs are used directly as the HMM state likelihoods. The experiments show that unsupervised pretraining is more crucial for the hybrid setups, particularly with limited amounts of transcribed training data. More importantly, unsupervised pretraining is shown to be language-independent.
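The difference between the two configurations can be made concrete with a small sketch. The numpy code below is illustrative only, not the paper's pipeline: the function names, the uniform state prior, the 39-dimensional projection, and the use of PCA as the decorrelation step are all assumptions for the sketch. In the hybrid setup, DNN state posteriors p(s|x) are converted to scaled likelihoods by dividing by the state priors (a subtraction in the log domain); in the tandem setup, decorrelated log posteriors become input features for a conventional GMM-HMM.

```python
import numpy as np

def hybrid_log_likelihoods(log_posteriors, log_state_priors):
    """Hybrid setup: turn DNN state posteriors p(s|x) into scaled
    likelihoods proportional to p(s|x)/p(s), used as HMM emission scores."""
    return log_posteriors - log_state_priors  # division in the log domain

def tandem_features(log_posteriors, n_components=39):
    """Tandem setup: decorrelate log posteriors and use them as features
    for a GMM-HMM. PCA stands in for the decorrelation step here."""
    x = log_posteriors - log_posteriors.mean(axis=0)  # centre each dimension
    _, _, vt = np.linalg.svd(x, full_matrices=False)  # PCA via SVD
    return x @ vt[:n_components].T                    # project onto top components

# Toy stand-ins for DNN outputs over T frames and S tied HMM states.
rng = np.random.default_rng(0)
T, S = 100, 500
posteriors = rng.dirichlet(np.ones(S), size=T)        # each row sums to 1
log_posts = np.log(posteriors + 1e-10)
log_priors = np.log(np.full(S, 1.0 / S))              # uniform prior, for the sketch
print(hybrid_log_likelihoods(log_posts, log_priors).shape)  # (100, 500)
print(tandem_features(log_posts).shape)                     # (100, 39)
```

In practice the state priors are estimated from the frame-level state alignments of the training data rather than assumed uniform, and the tandem features are typically concatenated with standard acoustic features before GMM-HMM training.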

Research areas

  • Gaussian processes, hidden Markov models, neural nets, speech recognition, unsupervised learning, DNN-based LVCSR, GMM, Gaussian mixture models, HMM state likelihoods, RBM pretraining, automatic speech recognition systems, cross-lingual ASR, cross-lingual acoustic data, deep neural network acoustic models, hybrid configuration, hybrid setups, restricted Boltzmann machine, tandem configuration, unsupervised cross-lingual knowledge transfer, unsupervised pretraining, Mel frequency cepstral coefficient, speech, training, training data, deep neural networks, GlobalPhone

ID: 12415144