Real-time control of expressive speech synthesis using kinect body tracking

Christophe Veaux, Maria Astrinaki, Keiichiro Oura, Robert A. J. Clark, Junichi Yamagishi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The flexibility of statistical parametric speech synthesis has recently led to the development of interactive speech synthesis systems in which different aspects of the voice output can be continuously controlled. The demonstration presented in this paper is based on MAGE/pHTS, a real-time synthesis system developed at the University of Mons. This system enhances the controllability and reactivity of HTS by generating the speech parameters on the fly. The demonstration illustrates the new possibilities for interaction offered by this approach. A Kinect sensor tracks the gestures and body posture of the user, and these physical parameters are mapped to the prosodic parameters of an HMM-based singing voice model. In this way, the user can directly control various aspects of the singing voice, such as the vibrato, the fundamental frequency, or the duration. An avatar is used to encourage and facilitate the user interaction.
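To make the kind of gesture-to-prosody mapping described above concrete, the following is a minimal Python sketch. The skeleton features (hand_height, hand_spread, lean_forward), their normalised ranges, and all mapping constants are illustrative assumptions; they do not reflect the actual MAGE/pHTS interface or the Kinect SDK.

    # Hypothetical sketch of a gesture-to-prosody mapping.
    # Feature names, ranges, and constants are assumptions for illustration,
    # not the MAGE/pHTS or Kinect APIs.

    def clamp(x, lo=0.0, hi=1.0):
        """Clip a normalised tracking value into [lo, hi]."""
        return max(lo, min(hi, x))

    def map_gesture_to_prosody(hand_height, hand_spread, lean_forward):
        """Convert normalised skeleton features (each in [0, 1]) into
        prosodic controls for a singing-voice synthesiser."""
        # Hand height drives the fundamental frequency: one octave of
        # range above a 220 Hz base pitch (illustrative choice).
        f0_hz = 220.0 * 2.0 ** clamp(hand_height)

        # Distance between the hands drives vibrato depth,
        # up to roughly two semitones.
        vibrato_semitones = 2.0 * clamp(hand_spread)

        # Leaning forward speeds the singing up: a duration scaling
        # factor in [0.5, 1.5], where smaller means faster.
        duration_scale = 1.5 - clamp(lean_forward)

        return {"f0_hz": f0_hz,
                "vibrato_semitones": vibrato_semitones,
                "duration_scale": duration_scale}

    if __name__ == "__main__":
        # Example frame: hands at mid height, moderately apart, slight lean.
        print(map_gesture_to_prosody(0.5, 0.3, 0.2))

In a real-time system of this kind, such a mapping would be evaluated on every tracking frame and the resulting controls fed to the synthesiser as it generates parameters on the fly.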
Original language: English
Title of host publication: The Eighth ISCA Tutorial and Research Workshop on Speech Synthesis, Barcelona, Spain, August 31 - September 2, 2013
Pages: 247-248
Number of pages: 2
Publication status: Published - 2013
