Combining Vocal Tract Length Normalization With Hierarchical Linear Transformations

L. Saheer, J. Yamagishi, P.N. Garner, J. Dines

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Recent research has demonstrated the effectiveness of vocal tract length normalization (VTLN) as a rapid adaptation technique for statistical parametric speech synthesis. VTLN produces speech with naturalness preferable to that of MLLR-based adaptation techniques, being much closer in quality to that generated by the original average voice model. However, with only a single parameter, VTLN captures very few speaker-specific characteristics when compared to linear-transform-based adaptation techniques. This paper shows that the merits of VTLN can be combined with those of linear-transform-based adaptation in a hierarchical Bayesian framework, where VTLN is used as the prior information. A novel technique is presented for propagating the gender and age information captured by the VTLN transform into constrained structural maximum a posteriori linear regression (CSMAPLR) adaptation. The proposed technique is also compared to other combination techniques. Experiments are performed on both matched and mismatched training and test conditions, including gender, age, and recording environments. Text-to-speech (TTS) synthesis experiments show that the resulting transformation produces improved speech quality, with naturalness and intelligibility similar to VTLN, when compared to the CSMAPLR transformation, especially when the quantity of adaptation data is very limited. With more parameters to capture speaker characteristics, the proposed method achieves better speaker similarity than VTLN in mismatched conditions. Hence, the proposed combination unites the quality and intelligibility of VTLN with the speaker similarity of CSMAPLR, especially under mismatched training and test conditions. Experiments are also performed using an automatic speech recognition (ASR) system in the same unified framework as synthesis, demonstrating that the techniques developed for TTS can be plugged into ASR to improve its performance.
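The two ingredients the abstract combines can be sketched compactly. The snippet below is a minimal illustration, not the paper's implementation: it shows (a) the standard single-parameter all-pass (bilinear) frequency warping commonly used for VTLN, and (b) a generic MAP-style shrinkage of a maximum-likelihood linear-regression transform toward a prior transform, in the spirit of using VTLN as prior information for CSMAPLR. The function names, the scalar `tau`, and the simple interpolation form are illustrative assumptions.

```python
import numpy as np

def bilinear_warp(omega, alpha):
    """Single-parameter all-pass (bilinear) frequency warping used in VTLN.

    omega : angular frequency in [0, pi]
    alpha : warping factor; alpha = 0 gives the identity warp, and the
            sign of alpha compresses or stretches the frequency axis to
            compensate for vocal tract length differences.
    """
    return omega + 2.0 * np.arctan2(alpha * np.sin(omega),
                                    1.0 - alpha * np.cos(omega))

def map_transform(W_ml, W_prior, n_frames, tau=100.0):
    """Illustrative MAP-style combination of transforms (assumption).

    Interpolates between a prior transform W_prior (e.g. one derived
    from the VTLN warp) and the ML linear-regression transform W_ml.
    With little adaptation data (small n_frames) the prior dominates;
    with more data the estimate moves toward the ML transform, which is
    the qualitative behaviour of CSMAPLR-style adaptation.
    """
    return (tau * W_prior + n_frames * W_ml) / (tau + n_frames)
```

For example, with `alpha = 0` the warp is the identity, and with `n_frames = 0` the MAP estimate reduces to the prior transform, mirroring the abstract's claim that the combination is most useful when adaptation data is very limited.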
Original language: English
Pages (from-to): 262-272
Number of pages: 11
Journal: IEEE Journal of Selected Topics in Signal Processing
Issue number: 2
Publication status: Published - 1 Apr 2014

Keywords

  • Bayes methods
  • regression analysis
  • speaker recognition
  • speech synthesis
  • ASR system
  • CSMAPLR adaptation
  • MLLR-based adaptation techniques
  • TTS synthesis
  • VTLN
  • age information
  • automatic speech recognition system
  • combination techniques
  • constrained structural maximum a posteriori linear regression adaptation
  • gender information
  • hierarchical Bayesian framework
  • hierarchical linear transformations
  • mismatched conditions
  • speaker similarity
  • speaker specific characteristics
  • statistical parametric speech synthesis
  • text-to-speech synthesis
  • vocal tract length normalization
  • Adaptation models
  • Estimation
  • Hidden Markov models
  • Regression tree analysis
  • Speech
  • Speech synthesis
  • Transforms
  • speaker adaptation

