Neural Source-Filter Waveform Models for Statistical Parametric Speech Synthesis

Xin Wang, Shinji Takaki, Junichi Yamagishi

Research output: Contribution to journal › Article › peer-review

Abstract

Neural waveform models have demonstrated better performance than conventional vocoders for statistical parametric speech synthesis. One of the best models, called WaveNet, uses an autoregressive (AR) approach to model the distribution of waveform sampling points, but it has to generate a waveform in a time-consuming sequential manner. Some new models that use inverse-autoregressive flow (IAF) can generate a whole waveform in a one-shot manner, but they require either a large amount of training time or a complicated model architecture plus a blend of training criteria. As an alternative to AR- and IAF-based frameworks, we propose a neural source-filter (NSF) waveform modeling framework that is straightforward to train and fast to generate waveforms. This framework requires three components to generate waveforms: a source module that generates a sine-based signal as excitation, a non-AR dilated-convolution-based filter module that transforms the excitation into a waveform, and a conditional module that pre-processes the input acoustic features for the source and filter modules. This framework minimizes spectral-amplitude distances for model training, which can be efficiently implemented using short-time Fourier transform routines. As an initial NSF study, we designed three NSF models under the proposed framework and compared them with WaveNet using our deep learning toolkit. It was demonstrated that the NSF models generated waveforms at least 100 times faster than our WaveNet-vocoder, and the quality of the synthetic speech from the best NSF model was comparable to that from WaveNet on a large single-speaker Japanese speech corpus.
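To make the two key ingredients of the abstract concrete, the following is a minimal NumPy sketch, not the paper's implementation: a sine-based excitation built from a per-sample F0 contour, and a spectral-amplitude (log STFT magnitude) distance of the kind that can be computed with standard STFT routines. Function names, window choices, frame sizes, and the noise floor for unvoiced regions are illustrative assumptions, not values from the paper.

```python
import numpy as np

def sine_excitation(f0, fs=16000, noise_std=0.003):
    """Sine-based excitation: a sinusoid following the running phase of the
    per-sample F0 contour (Hz) in voiced regions (f0 > 0), plus a small
    Gaussian noise component. Parameter values are illustrative."""
    phase = 2.0 * np.pi * np.cumsum(f0 / fs)
    voiced = np.where(f0 > 0, np.sin(phase), 0.0)
    return voiced + noise_std * np.random.randn(len(f0))

def stft_magnitude(x, frame_len=512, hop=128):
    """STFT magnitudes of a 1-D signal using a Hann window."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=-1))

def spectral_amplitude_distance(gen, ref, frame_len=512, hop=128, eps=1e-7):
    """Mean squared log-spectral-amplitude distance between two waveforms,
    one hedged instance of the family of STFT-based losses the abstract
    refers to (the paper uses multiple STFT configurations)."""
    g = stft_magnitude(gen, frame_len, hop)
    r = stft_magnitude(ref, frame_len, hop)
    return float(np.mean((np.log(g + eps) - np.log(r + eps)) ** 2))
```

In training, the distance would be evaluated between the filter module's output and the natural waveform and backpropagated through the network; here it is shown as a standalone measure only. The loss is zero for identical waveforms and grows as their spectral envelopes diverge.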
Original language: English
Pages (from-to): 402-415
Number of pages: 14
Journal: IEEE/ACM Transactions on Audio, Speech and Language Processing
Volume: 28
Early online date: 28 Nov 2019
DOIs
Publication status: Published - 2020

Keywords

  • Speech synthesis
  • neural network
  • waveform model
  • short-time Fourier transform

