Abstract
The dataset contains the testing stimuli and listeners' MUSHRA test responses for the Interspeech 2016 paper "Waveform generation based on signal reshaping for statistical parametric speech synthesis". In this paper, we propose a new paradigm of waveform generation for Statistical Parametric Speech Synthesis that is based on neither source-filter separation nor sinusoidal modelling. We suggest that one of the main problems of current vocoding techniques is that they perform an extreme decomposition of the speech signal into source and filter, which is an underlying cause of "buzziness", "musical artifacts", or "muffled sound" in the synthetic speech. The proposed method avoids unnecessary assumptions and decompositions as far as possible, and uses only the spectral envelope and F0 as parameters. Pre-recorded speech is used as a base signal, which is "reshaped" to match the acoustic specification predicted by the statistical model, without any source-filter decomposition. A detailed description of the method is presented, including implementation details and adjustments. Subjective listening test evaluations of complete DNN-based text-to-speech systems were conducted for two voices: one female and one male. The results show that the proposed method tends to outperform the state-of-the-art standard vocoder STRAIGHT, whilst using fewer acoustic parameters.
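To make the "reshaping" idea concrete, the sketch below illustrates one way a base waveform's spectral envelope could be pushed towards a model-predicted target envelope without any source-filter decomposition, operating frame by frame on the STFT. This is only a minimal illustration under assumed settings; the function name `reshape_to_target_envelope`, the frame parameters, and the envelope representation are hypothetical and do not correspond to the paper's released implementation or to this dataset.

```python
# Minimal sketch: reshape a base waveform so each STFT frame's magnitude
# approaches a target spectral envelope, keeping the base signal's phase.
# All names and parameter choices here are illustrative assumptions.
import numpy as np
from scipy.signal import stft, istft

def reshape_to_target_envelope(base_wave, target_env, fs=16000,
                               nperseg=1024, noverlap=768, eps=1e-8):
    """Scale each STFT frame of `base_wave` by the ratio of the target
    envelope to the base magnitude; `target_env` has shape
    [n_frames, nperseg // 2 + 1]."""
    _, _, Z = stft(base_wave, fs=fs, nperseg=nperseg, noverlap=noverlap)
    Z = Z.T  # -> [n_frames, n_bins]
    n = min(len(Z), len(target_env))
    base_env = np.abs(Z[:n]) + eps
    # Frame-wise spectral correction; the phase of the base signal is untouched.
    Z[:n] *= target_env[:n] / base_env
    _, y = istft(Z.T, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return y
```

In this toy version the correction is a simple per-bin magnitude ratio; the paper's actual method additionally handles F0 modification and other adjustments described in the text.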
Data Citation
Espic, Felipe; Valentini-Botinhao, Cassia; Wu, Zhizheng; King, Simon. (2016). Listening test materials for "Waveform generation based on signal reshaping for statistical parametric speech synthesis", [dataset]. University of Edinburgh. The Centre for Speech Technology Research (CSTR). http://dx.doi.org/10.7488/ds/1433.
Date made available | 24 Jun 2016
---|---
Publisher | Edinburgh DataShare
Date of data production | 2016