Edinburgh Research Explorer

A Hierarchical Encoder-Decoder Model for Statistical Parametric Speech Synthesis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: Proceedings Interspeech 2017
Pages: 1133-1137
Number of pages: 5
DOIs
Publication status: Published - 24 Aug 2017
Event: Interspeech 2017 - Stockholm, Sweden
Duration: 20 Aug 2017 - 24 Aug 2017
http://www.interspeech2017.org/

Conference

Conference: Interspeech 2017
Country: Sweden
City: Stockholm
Period: 20/08/17 - 24/08/17
Internet address: http://www.interspeech2017.org/

Abstract

Current approaches to statistical parametric speech synthesis using neural networks generally require input at the same temporal resolution as the output, typically a frame every 5 ms, or in some cases at waveform sampling rate. It is therefore necessary to fabricate highly-redundant frame-level (or sample-level) linguistic features at the input. This paper proposes the use of a hierarchical encoder-decoder model to perform the sequence-to-sequence regression in a way that takes the input linguistic features at their original timescales, and preserves the relationships between words, syllables and phones. The proposed model is designed to make more effective use of suprasegmental features than conventional architectures, as well as being computationally efficient. Experiments were conducted on prosodically-varied audiobook material because the use of suprasegmental features is thought to be particularly important in this case. Both objective measures and results from subjective listening tests, which asked listeners to focus on prosody, show that the proposed method performs significantly better than a conventional architecture that requires the linguistic input to be at the acoustic frame rate.

We provide code and a recipe to enable our system to be reproduced using the Merlin toolkit.
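The core idea in the abstract, keeping linguistic features at their natural timescales and conditioning each level on its parent rather than fabricating redundant frame-level inputs, can be illustrated with a minimal numpy sketch. All feature dimensions, unit counts, and the repetition-based upsampling below are illustrative assumptions for this sketch, not the paper's actual model or the Merlin recipe.

```python
import numpy as np

def upsample(parent_feats, child_counts):
    """Repeat each parent-level feature vector once per child unit,
    e.g. broadcast a word's features down to each of its syllables."""
    return np.repeat(parent_feats, child_counts, axis=0)

# Hypothetical linguistic features at their original timescales:
# 2 words -> 3 syllables -> 5 phones (all counts are illustrative).
word_feats = np.random.randn(2, 4)   # 2 words, 4-dim features
syll_feats = np.random.randn(3, 3)   # 3 syllables, 3-dim features
phone_feats = np.random.randn(5, 2)  # 5 phones, 2-dim features

sylls_per_word = [2, 1]              # word 0 has 2 syllables, word 1 has 1
phones_per_syll = [2, 1, 2]          # phones under each syllable

# Hierarchical conditioning: each level's input is its own features
# concatenated with its parent's, preserving word/syllable/phone links.
syll_in = np.concatenate(
    [syll_feats, upsample(word_feats, sylls_per_word)], axis=1)
phone_in = np.concatenate(
    [phone_feats, upsample(syll_in, phones_per_syll)], axis=1)

print(phone_in.shape)  # (5, 9): 2 phone + 3 syllable + 4 word dims
```

In the paper's model the levels are encoded and decoded by recurrent networks; here plain repetition stands in for that machinery, purely to show how input at three timescales can be combined without ever constructing frame-rate linguistic features.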


