Speech animation using electromagnetic articulography as motion capture data

Ingmar Steiner, Korin Richmond, Slim Ouni

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Electromagnetic articulography (EMA) captures the position and orientation of a number of markers, attached to the articulators, during speech. As such, it performs the same function for speech that conventional motion capture does for full-body movements acquired with optical modalities, a long-time staple technique of the animation industry. In this paper, EMA data is processed from a motion-capture perspective and applied to the visualization of an existing multimodal corpus of articulatory data, creating a kinematic 3D model of the tongue and teeth by adapting a conventional motion-capture-based animation paradigm. This is accomplished using off-the-shelf, open-source software. Such an animated model can then be easily integrated into multimedia applications as a digital asset, allowing the analysis of speech production in an intuitive and accessible manner. The processing of the EMA data, its co-registration with 3D data from vocal tract magnetic resonance imaging (MRI) and dental scans, and the modeling workflow are presented in detail, and several issues are discussed.
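As an illustration of the kind of co-registration step mentioned above (aligning EMA marker coordinates with the MRI or dental-scan coordinate frame), a rigid transform can be estimated from matched landmark points using the Kabsch algorithm. This is a minimal sketch under assumed data; the function name and landmark arrays are illustrative and not taken from the paper, which does not specify its alignment method at this level of detail.

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate a rotation R and translation t mapping the (N, 3) point
    set `src` onto `dst` in the least-squares sense (Kabsch algorithm).

    Returns (R, t) such that dst ≈ src @ R.T + t.
    """
    # Center both point sets on their centroids.
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    src_c = src - src_centroid
    dst_c = dst - dst_centroid

    # Cross-covariance and its SVD give the optimal rotation.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection (det = -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    t = dst_centroid - R @ src_centroid
    return R, t
```

In practice, the matched landmarks would be points identifiable in both modalities (e.g. EMA reference coils and corresponding anatomical landmarks in the MRI volume); once (R, t) is estimated, every EMA trajectory sample can be mapped into the 3D model's coordinate frame.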
Original language: English
Title of host publication: Proc. 12th International Conference on Auditory-Visual Speech Processing
Pages: 55-60
Number of pages: 6
Publication status: Published - 2013

Keywords

  • speech production, articulatory data, electromagnetic articulography, vocal tract, motion capture, visualization
