Exemplar-based Speech Waveform Generation

Oliver Watts, Cassia Valentini-Botinhao, Felipe Espic Calderón, Simon King

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

This paper presents a simple but effective method for generating speech waveforms by selecting small units of stored speech to match a low-dimensional target representation. The method is designed as a drop-in replacement for the vocoder in a deep neural network-based text-to-speech system. Most previous work on hybrid unit selection waveform generation relies on phonetic annotation for determining unit boundaries, or for specifying target cost, or for candidate preselection. In contrast, our waveform generator requires no phonetic information, annotation, or alignment. Unit boundaries are determined by epochs, and spectral analysis provides representations which are compared directly with target features at runtime. As in unit selection, we minimise a combination of target cost and join cost, but find that greedy left-to-right nearest-neighbour search gives similar results to dynamic programming. The method is fast and can generate the waveform incrementally. We use publicly available data and provide a permissively-licensed open source toolkit for reproducing our results.
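The greedy left-to-right search mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' toolkit: the function name `greedy_select`, the Euclidean distances, and the `join_weight` parameter are assumptions, and real systems would use the epoch-bounded units and spectral features the paper describes.

```python
import numpy as np

def greedy_select(targets, unit_feats, join_feats, join_weight=0.5):
    """Greedy left-to-right exemplar selection (illustrative sketch).

    targets:    (T, D) target features, one row per unit slot to fill
    unit_feats: (N, D) features of the stored units, compared against the
                target row (target cost)
    join_feats: (N, D) features at unit edges, compared between consecutive
                chosen units (join cost)
    Returns a list of chosen unit indices, one per target row.
    """
    chosen = []
    prev = None
    for t in targets:
        # Target cost: distance from every stored unit to this target.
        target_cost = np.linalg.norm(unit_feats - t, axis=1)
        if prev is None:
            cost = target_cost
        else:
            # Join cost: distance between each candidate's edge features
            # and those of the previously selected unit.
            join_cost = np.linalg.norm(join_feats - join_feats[prev], axis=1)
            cost = target_cost + join_weight * join_cost
        # Greedy step: commit to the cheapest unit, no backtracking.
        prev = int(np.argmin(cost))
        chosen.append(prev)
    return chosen
```

Because each slot is decided as soon as its target arrives, this search supports the incremental waveform generation the abstract claims, at the price of never revisiting an earlier choice as dynamic programming would.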
Original language: English
Title of host publication: Interspeech 2018
Place of publication: Hyderabad, India
Number of pages: 5
Publication status: Published - 6 Sept 2018
Event: Interspeech 2018 - Hyderabad International Convention Centre, Hyderabad, India
Duration: 2 Sept 2018 - 6 Sept 2018

Publication series

ISSN (electronic): 1990-9772


Conference: Interspeech 2018


