Edinburgh Research Explorer

Speech Enhancement for a Noise-Robust Text-to-Speech Synthesis System using Deep Recurrent Neural Networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Open Access permissions: Open

Original language: English
Title of host publication: Proceedings of Interspeech 2016
Place of Publication: San Francisco, United States
Pages: 352-356
Number of pages: 5
DOIs
State: Published - Sep 2016
Event: Interspeech 2016 - San Francisco, United States
Duration: 8 Sep 2016 - 12 Sep 2016
http://www.interspeech2016.org/

Conference

Conference: Interspeech 2016
Country: United States
City: San Francisco
Period: 8/09/16 - 12/09/16
Internet address: http://www.interspeech2016.org/

Abstract

The quality of text-to-speech voices built from noisy recordings is diminished. To improve it, we propose using a recurrent neural network to enhance acoustic parameters prior to training. We trained a deep recurrent neural network on a parallel database of noisy and clean acoustic parameters, used as the input and output of the network. The database consisted of multiple speakers and diverse noise conditions. We also investigated using text-derived features as an additional input to the network. We processed a noisy database of two other speakers with this network and used its output to train an HMM-based acoustic model for text-to-speech synthesis for each voice. Listening experiment results showed that the voice built with enhanced parameters was ranked significantly higher than those trained with noisy speech and with speech enhanced using a conventional enhancement system. The text-derived features improved results only for the female voice, for which the voice was ranked as highly as one trained with clean speech.
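
The abstract describes the core idea: a deep recurrent network trained on parallel noisy/clean acoustic parameters, optionally taking text-derived features as an extra input, whose output is then used to train a TTS acoustic model. The sketch below illustrates that mapping in PyTorch; the class name RNNEnhancer, feature dimensions, layer sizes, and training loop are illustrative assumptions and do not reproduce the paper's exact configuration.

```python
# Hypothetical sketch of the enhancement idea: a recurrent network maps noisy
# acoustic parameters (optionally concatenated with text-derived features) to
# clean acoustic parameters, trained on a parallel noisy/clean corpus.
import torch
import torch.nn as nn

class RNNEnhancer(nn.Module):
    def __init__(self, acoustic_dim=60, text_dim=0, hidden_dim=256, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(acoustic_dim + text_dim, hidden_dim,
                            num_layers=num_layers, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, acoustic_dim)

    def forward(self, noisy_feats, text_feats=None):
        # noisy_feats: (batch, frames, acoustic_dim)
        # text_feats:  (batch, frames, text_dim) or None
        x = noisy_feats if text_feats is None else torch.cat([noisy_feats, text_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # predicted clean acoustic parameters

# Training loop sketch: minimise MSE between predicted and clean parameters.
model = RNNEnhancer(acoustic_dim=60, text_dim=0)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

noisy = torch.randn(8, 200, 60)   # placeholder parallel data (batch, frames, dim)
clean = torch.randn(8, 200, 60)
for _ in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimiser.step()
```

In this sketch the enhanced parameter trajectories produced by the network would then serve as training data for the downstream TTS acoustic model, mirroring the pipeline the abstract describes.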



ID: 26377238