Mage-HMM-based speech synthesis reactively controlled by the articulators

Maria Astrinaki, Alexis Moinet, Junichi Yamagishi, Korin Richmond, Zhen-Hua Ling, Simon King, Thierry Dutoit

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we present recent progress in the MAGE project. MAGE is a library for real-time, interactive (reactive) parametric speech synthesis using hidden Markov models (HMMs). Here, it is extended to support not only the standard acoustic features (spectrum and f0) for modeling and synthesizing speech, but also combinations of acoustic and articulatory features, such as tongue, lip and jaw positions. This integration gives the user a straightforward and meaningful control space in which to intuitively modify the synthesized phones in real time, simply by configuring the positions of the articulators.
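To illustrate the idea of a reactive, articulator-driven control space, the following is a minimal sketch in Python. It is not MAGE's actual API or the paper's trained acoustic-articulatory models; the linear articulator-to-formant mapping and the crude two-sinusoid "vowel" generator are purely illustrative assumptions, showing how each new articulator reading can immediately change the synthesized output.

```python
import math

# Hypothetical linear mapping from articulator positions (all in [0, 1])
# to formant targets. Coefficients are illustrative, not learned.
def articulators_to_formants(jaw_open, tongue_height, lip_round):
    # More jaw opening raises F1; a higher/fronter tongue raises F2;
    # lip rounding lowers both formants.
    f1 = 300.0 + 500.0 * jaw_open - 100.0 * lip_round
    f2 = 900.0 + 1300.0 * tongue_height - 300.0 * lip_round
    return f1, f2

def synthesize_frame(f1, f2, duration=0.05, rate=16000):
    """Crude two-formant 'vowel' frame as a sum of sinusoids (toy only)."""
    n = int(duration * rate)
    return [0.5 * math.sin(2 * math.pi * f1 * t / rate)
            + 0.5 * math.sin(2 * math.pi * f2 * t / rate)
            for t in range(n)]

# Reactive loop: each articulator reading reconfigures the next frame.
for jaw, tongue, lips in [(0.9, 0.2, 0.0),   # open, low tongue: /a/-like
                          (0.2, 0.9, 0.0),   # closed, high front: /i/-like
                          (0.3, 0.1, 0.9)]:  # rounded: /u/-like
    f1, f2 = articulators_to_formants(jaw, tongue, lips)
    frame = synthesize_frame(f1, f2)
    print(f"F1={f1:.0f} Hz, F2={f2:.0f} Hz, {len(frame)} samples")
```

In the actual system, the articulatory stream would instead condition HMM parameter generation, but the control flow is the same: articulator positions arrive continuously and steer synthesis frame by frame.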
Original language: English
Title of host publication: 8th ISCA Speech Synthesis Workshop
Number of pages: 5
Publication status: Published - Sep 2013

Keywords

  • speech synthesis
  • reactive
  • articulators

