Exemplar-based Speech Waveform Generation

Oliver Watts, Cassia Valentini-Botinhao, Felipe Espic Calderón, Simon King

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

This paper presents a simple but effective method for generating speech waveforms by selecting small units of stored speech to match a low-dimensional target representation. The method is designed as a drop-in replacement for the vocoder in a deep neural network-based text-to-speech system. Most previous work on hybrid unit selection waveform generation relies on phonetic annotation for determining unit boundaries, or for specifying target cost, or for candidate preselection. In contrast, our waveform generator requires no phonetic information, annotation, or alignment. Unit boundaries are determined by epochs, and spectral analysis provides representations which are compared directly with target features at runtime. As in unit selection, we minimise a combination of target cost and join cost, but find that greedy left-to-right nearest-neighbour search gives similar results to dynamic programming. The method is fast and can generate the waveform incrementally. We use publicly available data and provide a permissively-licensed open source toolkit for reproducing our results.
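
The core of the method described above is a greedy left-to-right nearest-neighbour search over epoch-sized candidate units, scored by a combination of target cost and join cost. The Python sketch below illustrates that selection loop under stated assumptions only: the array layouts, the helper name greedy_select, the Euclidean distances used for both costs, and the weight w_join are illustrative choices, not the paper's reference implementation or the released toolkit's code.

import numpy as np

def greedy_select(targets, cand_feats, cand_joins, w_join=1.0):
    """Greedy left-to-right unit selection (illustrative sketch).

    targets    : (T, D) low-dimensional target features, one row per unit to generate
    cand_feats : (N, D) spectral features of the stored epoch-sized candidate units
    cand_joins : (N, J) join features describing each candidate's edges
    w_join     : assumed weight balancing join cost against target cost
    Returns a list of indices into the candidate database, one per target row.
    """
    selected = []
    prev = None
    for t in range(targets.shape[0]):
        # Target cost: distance from every candidate to the current target frame.
        target_cost = np.linalg.norm(cand_feats - targets[t], axis=1)
        if prev is None:
            cost = target_cost
        else:
            # Join cost: mismatch between each candidate's join features and
            # those of the unit already chosen for the previous position.
            join_cost = np.linalg.norm(cand_joins - cand_joins[prev], axis=1)
            cost = target_cost + w_join * join_cost
        prev = int(np.argmin(cost))  # commit greedily; no backtracking
        selected.append(prev)
    return selected

Because each unit is chosen by committing to the locally cheapest candidate given the previous choice, the search never revisits earlier decisions, which is what allows the waveform to be generated incrementally; the abstract reports that this greedy search gives results similar to full dynamic programming.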
Original language: English
Title of host publication: Interspeech 2018
Place of Publication: Hyderabad, India
Publisher: ISCA
Pages: 2022–2026
Number of pages: 5
DOIs
Publication status: Published - 6 Sept 2018
Event: Interspeech 2018 - Hyderabad International Convention Centre, Hyderabad, India
Duration: 2 Sept 2018 – 6 Sept 2018
http://interspeech2018.org/

Publication series

Name
Publisher: ISCA
ISSN (Electronic): 1990-9772

Conference

Conference: Interspeech 2018
Country/Territory: India
City: Hyderabad
Period: 2/09/18 – 6/09/18
Internet address: http://interspeech2018.org/
