Abstract
Electromagnetic articulography (EMA) captures the position and orientation of a number of markers attached to the articulators during speech. As such, it performs the same function for speech that conventional motion capture does for full-body movements acquired with optical modalities, a long-time staple technique of the animation industry. In this paper, EMA data is processed from a motion-capture perspective and applied to the visualization of an existing multimodal corpus of articulatory data, creating a kinematic 3D model of the tongue and teeth by adapting a conventional motion-capture-based animation paradigm. This is accomplished using off-the-shelf, open-source software. Such an animated model can then be easily integrated into multimedia applications as a digital asset, allowing the analysis of speech production in an intuitive and accessible manner. The processing of the EMA data, its co-registration with 3D data from vocal tract magnetic resonance imaging (MRI) and dental scans, and the modeling workflow are presented in detail, and several issues are discussed.
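The co-registration step mentioned in the abstract amounts to finding a rigid transform that maps EMA coil positions into the MRI coordinate frame. As a minimal illustration (not the authors' actual pipeline), this can be sketched with the Kabsch algorithm, assuming a set of corresponding landmark points is available in both coordinate systems:

```python
import numpy as np

def kabsch_align(src, dst):
    """Estimate a rigid transform (rotation R, translation t) mapping
    landmark points src (N x 3, e.g. EMA coil positions) onto their
    correspondences dst (N x 3, e.g. the same landmarks in the MRI
    frame), so that dst ~= src @ R.T + t.

    Hypothetical helper for illustration; the paper's pipeline may
    differ in detail."""
    # Center both point sets on their centroids.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance and its SVD.
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation aligning the centroids under R.
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t
```

Once estimated from static landmarks, the same `(R, t)` would be applied to every frame of the EMA trajectory before retargeting onto the 3D model.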
| Original language | English |
|---|---|
| Title of host publication | Proc. 12th International Conference on Auditory-Visual Speech Processing |
| Pages | 55-60 |
| Number of pages | 6 |
| Publication status | Published - 2013 |
Keywords
- speech production, articulatory data, electromagnetic articulography, vocal tract, motion capture, visualization
Fingerprint
Dive into the research topics of 'Speech animation using electromagnetic articulography as motion capture data'.
Projects
- 1 Finished
- ULTRAX: Real-time tongue tracking for speech therapy using ultrasound
Richmond, K., Renals, S., Cleland, J. & Scobbie, J. M.
1/02/11 → 31/07/14
Project: Research