Using Text and Acoustic Features in Predicting Glottal Excitation Waveforms for Parametric Speech Synthesis with Recurrent Neural Networks

Lauri Juvela, Xin Wang, Shinji Takaki, Manu Airaksinen, Junichi Yamagishi, Paavo Alku

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This work studies the use of deep learning methods to model glottal excitation waveforms directly from context-dependent text features in a text-to-speech synthesis system. Glottal vocoding is integrated into a deep neural network-based text-to-speech framework in which text and acoustic features can be used flexibly as either network inputs or outputs. Long short-term memory recurrent neural networks are utilised in two stages: first, to map text features to acoustic features, and second, to predict glottal waveforms from the text and/or acoustic features. Results show that predicting the excitation directly from text features yields quality similar to predicting it from acoustic features, and both approaches outperform a baseline system that uses a fixed glottal pulse for excitation generation.
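The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation; all module names, layer sizes, and feature dimensions (text_dim, acoustic_dim, hidden_dim, the per-frame waveform length) are assumptions chosen for the example.

```python
# Minimal sketch of the two-stage LSTM setup: stage 1 maps text features
# to acoustic features, stage 2 predicts a glottal excitation waveform
# segment per frame. All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class TextToAcoustic(nn.Module):
    """Stage 1: context-dependent text features -> acoustic features."""
    def __init__(self, text_dim=300, hidden_dim=256, acoustic_dim=60):
        super().__init__()
        self.lstm = nn.LSTM(text_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, acoustic_dim)

    def forward(self, text_feats):           # (batch, frames, text_dim)
        h, _ = self.lstm(text_feats)
        return self.out(h)                   # (batch, frames, acoustic_dim)

class ExcitationPredictor(nn.Module):
    """Stage 2: text and/or acoustic features -> per-frame glottal
    excitation waveform segment (here the concatenation of both)."""
    def __init__(self, in_dim=360, hidden_dim=256, waveform_len=400):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, waveform_len)

    def forward(self, feats):                # (batch, frames, in_dim)
        h, _ = self.lstm(feats)
        return self.out(h)                   # (batch, frames, waveform_len)

# Usage on dummy data: 2 utterances, 100 frames each
text_feats = torch.randn(2, 100, 300)
stage1 = TextToAcoustic()
stage2 = ExcitationPredictor()
acoustic = stage1(text_feats)
excitation = stage2(torch.cat([text_feats, acoustic], dim=-1))
print(excitation.shape)                      # torch.Size([2, 100, 400])
```

Here the second stage consumes the concatenation of text and acoustic features; per the abstract, either feature set can also be used alone by changing in_dim accordingly.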
Original language: English
Title of host publication: Interspeech 2016
Publisher: International Speech Communication Association
Pages: 2283-2287
Number of pages: 5
DOIs
Publication status: Published, 8 Sep 2016
Event: Interspeech 2016, San Francisco, United States
Duration: 8 Sep 2016 – 12 Sep 2016
http://www.interspeech2016.org/

Publication series

Name: International Speech Communication Association
ISSN (Print): 1990-9772

Conference

Conference: Interspeech 2016
Country: United States
City: San Francisco
Period: 8/09/16 – 12/09/16

