Multimodal Speech Synthesis Architecture for Unsupervised Speaker Adaptation

Hieu-Thi Luong, Junichi Yamagishi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper proposes a new architecture for speaker adaptation of multi-speaker neural-network speech synthesis systems, in which an unseen speaker's voice can be built using a relatively small amount of speech data without transcriptions. This is sometimes called "unsupervised speaker adaptation". More specifically, we concatenate the speaker-dependent layers to the audio inputs when performing unsupervised speaker adaptation, and to the text inputs when synthesizing speech from text. Two new training schemes for the new architecture are also proposed in this paper. These training schemes are not limited to speech synthesis; other applications are suggested. Experimental results show that the proposed model not only enables adaptation to unseen speakers using untranscribed speech but also improves the performance of multi-speaker modeling and speaker adaptation using transcribed audio files.
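The core idea sketched below is a multimodal model with a text encoder and a speech encoder mapping into a shared latent space, and a single decoder conditioned on a speaker representation; text-to-speech uses the text branch, while unsupervised adaptation reconstructs untranscribed speech through the speech branch. This is only an illustrative sketch of that idea, not the authors' exact architecture; all layer sizes, module names, and the choice of a simple speaker-embedding table are assumptions made for the example.

# Minimal sketch (assumed layer sizes and names, PyTorch) of the multimodal
# text/speech encoder idea with a shared decoder conditioned on a speaker
# embedding; not the paper's exact architecture.
import torch
import torch.nn as nn

TEXT_DIM, MEL_DIM, LATENT_DIM, SPK_DIM, N_SPEAKERS = 256, 80, 128, 64, 100

class MultimodalTTS(nn.Module):
    def __init__(self):
        super().__init__()
        # Two encoders that project their modality into one shared latent space.
        self.text_encoder = nn.Sequential(nn.Linear(TEXT_DIM, LATENT_DIM), nn.Tanh())
        self.speech_encoder = nn.Sequential(nn.Linear(MEL_DIM, LATENT_DIM), nn.Tanh())
        # Speaker embedding is concatenated with the latent features before decoding.
        self.speaker_emb = nn.Embedding(N_SPEAKERS, SPK_DIM)
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM + SPK_DIM, 256), nn.ReLU(), nn.Linear(256, MEL_DIM)
        )

    def forward(self, x, speaker_id, modality="text"):
        # Pick the encoder that matches the input modality.
        z = self.text_encoder(x) if modality == "text" else self.speech_encoder(x)
        spk = self.speaker_emb(speaker_id).unsqueeze(1).expand(-1, z.size(1), -1)
        return self.decoder(torch.cat([z, spk], dim=-1))

model = MultimodalTTS()
batch, frames = 2, 50
spk = torch.tensor([3, 7])

# Synthesis / supervised training: text features in, acoustic features out.
mel_from_text = model(torch.randn(batch, frames, TEXT_DIM), spk, modality="text")

# Unsupervised adaptation: reconstruct untranscribed speech through the speech
# branch; in practice only the speaker components would be updated for a new speaker.
mel = torch.randn(batch, frames, MEL_DIM)
loss = nn.functional.mse_loss(model(mel, spk, modality="speech"), mel)
loss.backward()
print(mel_from_text.shape, loss.item())

A usage note: because both encoders target the same latent space, the decoder and speaker components trained on transcribed multi-speaker data can be reused unchanged when adapting to a new speaker from audio alone.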
Original language: English
Title of host publication: Proc. Interspeech 2018
Place of publication: Hyderabad, India
Publisher: ISCA
Pages: 2494-2498
Number of pages: 5
DOIs
Publication status: Published - 6 Sep 2018
Event: Interspeech 2018 - Hyderabad International Convention Centre, Hyderabad, India
Duration: 2 Sep 2018 - 6 Sep 2018
http://interspeech2018.org/

Publication series

Name
Publisher: ISCA
ISSN (Electronic): 1990-9772

Conference

Conference: Interspeech 2018
Country: India
City: Hyderabad
Period: 2/09/18 - 6/09/18
Internet address: http://interspeech2018.org/

