Formant-controlled HMM-based speech synthesis

Ming Lei, Junichi Yamagishi, Korin Richmond, Zhen-Hua Ling, Simon King, Li-Rong Dai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This paper proposes a novel framework that enables us to manipulate and control formants in HMM-based speech synthesis. In this framework, the dependency between formants and spectral features is modelled by piecewise linear transforms; formant parameters are effectively mapped by these to the means of Gaussian distributions over the spectral synthesis parameters. The spectral envelope features generated under the influence of formants in this way may then be passed to high-quality vocoders to generate the speech waveform. This provides two major advantages over conventional frameworks. First, we can achieve spectral modification by changing formants only in those parts where we want control, whereas the user must specify all formants manually in conventional formant synthesisers (e.g. Klatt). Second, the framework can produce high-quality speech. Our results show the proposed method can control vowels in the synthesized speech by manipulating F1 and F2 without any degradation in synthesis quality.
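The abstract describes mapping formant parameters through piecewise linear transforms onto the means of the spectral-feature Gaussians. The following is a minimal sketch of that idea under stated assumptions: the function name, the region-based piece selection, and the transform shapes are all illustrative, not the paper's actual formulation.

```python
import numpy as np

def formant_adjusted_mean(mu, formants, pieces):
    """Shift a Gaussian mean by a piecewise linear transform of formants.

    Hypothetical illustration: each "piece" covers a region of formant
    space and carries a linear transform (A, b) that maps the formant
    vector to an offset on the spectral-feature mean.

    mu       : (D,) baseline mean of the spectral-feature Gaussian
    formants : (F,) formant parameters, e.g. [F1, F2] in Hz
    pieces   : list of (lo, hi, A, b); (A, b) applies when
               lo <= formants < hi element-wise (assumed regions)
    """
    for lo, hi, A, b in pieces:
        if np.all(formants >= lo) and np.all(formants < hi):
            return mu + A @ formants + b
    return mu  # outside all regions: leave the mean unchanged

# Toy usage: two regions over (F1, F2), a 3-dimensional spectral mean.
mu = np.zeros(3)
pieces = [
    (np.array([200.0, 800.0]), np.array([500.0, 1500.0]),
     np.full((3, 2), 0.001), np.zeros(3)),
    (np.array([500.0, 800.0]), np.array([900.0, 1500.0]),
     np.full((3, 2), -0.001), np.zeros(3)),
]
adjusted = formant_adjusted_mean(mu, np.array([300.0, 1200.0]), pieces)
```

In the paper's framework the adjusted means would then drive parameter generation and a high-quality vocoder; the sketch only illustrates the mean-mapping step.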
Original language: English
Title of host publication: Interspeech 2011
Subtitle of host publication: 12th Annual Conference of the International Speech Communication Association
Publisher: International Speech Communication Association
Number of pages: 4
ISBN (Print): 1990-9772
Publication status: Published - 1 Aug 2011
