Large multi-speaker datasets for TTS typically contain diverse speakers, recording conditions, speaking styles, and data quality. Although one might generally presume that more data is better, in this paper we show that a model trained on a carefully chosen subset of speakers from LibriTTS produces significantly higher-quality synthetic speech than a model trained on a larger set. We propose an unsupervised methodology to find this subset by clustering per-speaker acoustic representations.
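The abstract does not spell out the clustering procedure, but the idea can be sketched as follows: embed each speaker as a single acoustic vector, cluster those vectors, and keep the speakers from the preferred cluster. The sketch below is a minimal illustration, assuming mean per-speaker embeddings (e.g., averaged utterance-level d-vectors) and k-means; the paper's actual representation, clustering algorithm, and cluster-selection rule may differ.

```python
# Illustrative sketch of unsupervised speaker-subset selection by clustering
# per-speaker acoustic representations. The embedding source and k-means are
# assumptions for illustration, not the paper's confirmed method.
import numpy as np
from sklearn.cluster import KMeans


def select_speaker_subset(speaker_embeddings: dict, n_clusters: int = 5) -> list:
    """Cluster speakers in embedding space and keep the largest cluster,
    treating it as the most acoustically consistent group of speakers."""
    speakers = list(speaker_embeddings)
    X = np.stack([speaker_embeddings[s] for s in speakers])

    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    # Keep speakers from the most populous cluster (a simple selection rule;
    # other criteria, e.g. cluster compactness, could be used instead).
    counts = np.bincount(labels, minlength=n_clusters)
    keep = counts.argmax()
    return [s for s, lab in zip(speakers, labels) if lab == keep]


# Usage with hypothetical data: embeddings might come from averaging
# utterance-level speaker embeddings per LibriTTS speaker.
rng = np.random.default_rng(0)
embeddings = {f"spk{i}": rng.normal(size=256) for i in range(100)}
subset = select_speaker_subset(embeddings, n_clusters=5)
```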
| | |
|---|---|
| Name | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| Conference | 21st Annual Conference of the International Speech Communication Association, INTERSPEECH 2020 |
| Period | 25/10/20 → 29/10/20 |
- sequence-to-sequence models
- speaker representation
- speech synthesis