Abstract
A robust voice conversion function relies on a large amount of parallel training data, which is difficult to collect in practice. To tackle the sparse parallel training data problem in voice conversion, this paper describes a mixture of factor analyzers method that integrates prior knowledge from nonparallel speech into the training of the conversion function. Experiments on the CMU ARCTIC corpus show that the proposed method improves the quality and similarity of the converted speech.
With both objective and subjective evaluations, we show the proposed method outperforms the baseline GMM method.
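The abstract compares against a conventional GMM conversion baseline. As a point of reference only, below is a minimal sketch of the standard joint-density GMM mapping commonly used as such a baseline, written in Python with NumPy/SciPy/scikit-learn on synthetic placeholder features; the feature dimension, component count, and variable names are illustrative assumptions, and the paper's mixture-of-factor-analyzers method with its nonparallel prior is not reproduced here.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Illustrative stand-in data: time-aligned source/target spectral frames
# (e.g., MFCC vectors). A real system would use DTW-aligned parallel speech.
rng = np.random.default_rng(0)
dim, n_frames = 24, 2000
X_src = rng.normal(size=(n_frames, dim))
X_tgt = X_src @ rng.normal(scale=0.2, size=(dim, dim)) + 0.05 * rng.normal(size=(n_frames, dim))

# Joint-density GMM over stacked vectors z = [x; y].
Z = np.hstack([X_src, X_tgt])
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0).fit(Z)

def gmm_convert(x, gmm, dim):
    """MMSE mapping E[y | x] under a joint-density GMM."""
    mu_x, mu_y = gmm.means_[:, :dim], gmm.means_[:, dim:]
    S = gmm.covariances_
    S_xx, S_yx = S[:, :dim, :dim], S[:, dim:, :dim]

    # Component responsibilities p(m | x) from the marginal GMM over x.
    log_p = np.stack(
        [multivariate_normal.logpdf(x, mu_x[m], S_xx[m]) for m in range(gmm.n_components)],
        axis=1,
    ) + np.log(gmm.weights_)
    post = np.exp(log_p - logsumexp(log_p, axis=1, keepdims=True))

    # Per-component regression E[y | x, m] = mu_y_m + S_yx_m S_xx_m^{-1} (x - mu_x_m).
    y = np.zeros_like(x)
    for m in range(gmm.n_components):
        A = S_yx[m] @ np.linalg.inv(S_xx[m])
        y += post[:, [m]] * (mu_y[m] + (x - mu_x[m]) @ A.T)
    return y

converted = gmm_convert(X_src, gmm, dim)
print(converted.shape)  # (2000, 24)
```

The proposed method replaces the full-covariance GMM components above with factor analyzers, which reduces the number of parameters to estimate and allows prior knowledge from nonparallel speech to be incorporated when parallel data are scarce.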
| Original language | English |
|---|---|
| Pages (from-to) | 914-917 |
| Number of pages | 4 |
| Journal | IEEE Signal Processing Letters |
| Volume | 19 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 2012 |