Edinburgh Research Explorer

Deep neural networks employing multi-task learning and stacked bottleneck features for speech synthesis.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on
Place of publication: Brisbane, Australia
Pages: 4460-4464
Number of pages: 5
Publication status: Published - 1 Apr 2015

Abstract

Deep neural networks (DNNs) use a cascade of hidden representations to enable the learning of complex mappings from input to output features. They are able to learn the complex mapping from text-based linguistic features to speech acoustic features, and so perform text-to-speech synthesis. Recent results suggest that DNNs can produce more natural synthetic speech than conventional HMM-based statistical parametric systems. In this paper, we show that the hidden representation used within a DNN can be improved through the use of Multi-Task Learning, and that stacking multiple frames of hidden-layer activations (stacked bottleneck features) also leads to improvements. Experimental results confirm the effectiveness of the proposed methods, and in listening tests we find that stacked bottleneck features in particular offer a significant improvement over both a baseline DNN and a benchmark HMM system.
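The abstract describes two architectural ideas: a shared hidden stack trained jointly against a primary acoustic target and a secondary target (multi-task learning), and the reuse of a narrow hidden layer's activations, stacked across neighbouring frames, as input to a second network (stacked bottleneck features). The sketch below illustrates both ideas in PyTorch under stated assumptions: all layer sizes, feature dimensions, the tanh activations, the +/-2 frame context, the choice of secondary target, and the 0.5 secondary-loss weight are illustrative guesses, not the paper's configuration.

```python
# Hedged sketch of multi-task training with a bottleneck layer, plus
# frame-stacking of bottleneck activations for a second-pass network.
# All dimensions and hyperparameters below are assumptions for illustration.
import torch
import torch.nn as nn

LING_DIM = 592      # text-derived linguistic input features (assumed size)
ACOUSTIC_DIM = 187  # primary target: vocoder acoustic features (assumed size)
SECONDARY_DIM = 41  # secondary target for multi-task learning (assumed)
BN_DIM = 32         # bottleneck layer width (assumed)
CONTEXT = 2         # stack +/-2 neighbouring frames of bottleneck features

class MultiTaskBottleneckDNN(nn.Module):
    """First pass: shared hidden stack with a narrow bottleneck layer,
    trained jointly on a primary and a secondary target."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(LING_DIM, 1024), nn.Tanh(),
            nn.Linear(1024, 1024), nn.Tanh(),
            nn.Linear(1024, BN_DIM), nn.Tanh(),   # bottleneck layer
        )
        self.post = nn.Sequential(nn.Linear(BN_DIM, 1024), nn.Tanh())
        self.primary_head = nn.Linear(1024, ACOUSTIC_DIM)
        self.secondary_head = nn.Linear(1024, SECONDARY_DIM)

    def forward(self, x):
        bn = self.shared(x)                       # bottleneck activations
        h = self.post(bn)
        return self.primary_head(h), self.secondary_head(h), bn

def stack_bottlenecks(bn, context=CONTEXT):
    """Concatenate each frame's bottleneck vector with its +/-context
    neighbours; edge frames are padded by repetition."""
    padded = torch.cat([bn[:1].repeat(context, 1), bn,
                        bn[-1:].repeat(context, 1)], dim=0)
    frames = [padded[i:i + bn.size(0)] for i in range(2 * context + 1)]
    return torch.cat(frames, dim=1)               # (T, (2*context+1)*BN_DIM)

class SecondPassDNN(nn.Module):
    """Second pass: predicts acoustic features from linguistic features
    concatenated with stacked bottleneck features from the first pass."""
    def __init__(self):
        super().__init__()
        in_dim = LING_DIM + (2 * CONTEXT + 1) * BN_DIM
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.Tanh(),
            nn.Linear(1024, 1024), nn.Tanh(),
            nn.Linear(1024, ACOUSTIC_DIM),
        )

    def forward(self, x):
        return self.net(x)

# One multi-task training step on a dummy 100-frame utterance:
# the joint loss is a weighted sum of primary and secondary MSE losses.
model = MultiTaskBottleneckDNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(100, LING_DIM)
y_ac = torch.randn(100, ACOUSTIC_DIM)
y_sec = torch.randn(100, SECONDARY_DIM)
opt.zero_grad()
pred_ac, pred_sec, bn = model(x)
loss = nn.functional.mse_loss(pred_ac, y_ac) \
     + 0.5 * nn.functional.mse_loss(pred_sec, y_sec)
loss.backward()
opt.step()

# After first-pass training, stacked bottlenecks feed the second pass:
second = SecondPassDNN()
acoustic = second(torch.cat([x, stack_bottlenecks(bn.detach())], dim=1))
```

The secondary head matters only during training: it shapes the shared hidden representation, and at synthesis time only the acoustic prediction (here, from the second-pass network) is used.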

Research areas

  • Speech synthesis, acoustic model, bottleneck feature, deep neural network, multi-task learning
