A multilinear tongue model derived from speech related MRI data of the human vocal tract

Alexander Hewer*, Stefanie Wuhrer, Ingmar Steiner, Korin Richmond

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

We present a multilinear statistical model of the human tongue that captures anatomical and tongue-pose-related shape variations separately. The model is derived from 3D magnetic resonance imaging data of 11 speakers sustaining speech-related vocal tract configurations. To extract model parameters, we use a minimally supervised method based on an image segmentation approach and a template fitting technique. Furthermore, we use image denoising to deal with possibly corrupted data, palate surface reconstruction to handle palatal tongue contacts, and a bootstrap strategy to refine the obtained shapes. Our evaluation shows that, by limiting the degrees of freedom for the anatomical and speech-related variations to 5 and 4, respectively, we obtain a model that can reliably register unknown data while avoiding overfitting effects. Finally, we show that the model can be used to generate plausible tongue animation by tracking sparse motion capture data.
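The abstract describes a multilinear model with separate anatomy and pose modes (5 and 4 degrees of freedom). A minimal sketch of how such a model is typically evaluated, assuming a Tucker-style core tensor contracted with per-speaker and per-pose weight vectors; the tensor here is random and the vertex count arbitrary, purely for illustration — the actual model is fit to MRI-derived tongue meshes:

```python
import numpy as np

# Hypothetical dimensions: n_vertices mesh vertices (x, y, z each),
# 5 anatomy modes and 4 pose modes, matching the degrees of freedom
# reported in the abstract. Core tensor and mean are placeholders.
n_vertices = 100
rng = np.random.default_rng(0)
core = rng.standard_normal((3 * n_vertices, 5, 4))
mean_shape = np.zeros(3 * n_vertices)

def reconstruct(anatomy_w, pose_w):
    """Evaluate the multilinear model: contract the core tensor with
    an anatomy (speaker) weight vector and a pose weight vector."""
    shape = mean_shape + np.einsum('vij,i,j->v', core, anatomy_w, pose_w)
    return shape.reshape(n_vertices, 3)

# Reconstruct a tongue shape for uniform anatomy and pose weights.
tongue = reconstruct(np.ones(5) / 5, np.ones(4) / 4)
print(tongue.shape)  # (100, 3): one 3D position per mesh vertex
```

Registering unknown data then amounts to optimizing the two weight vectors so the reconstructed mesh matches the observed surface; tracking sparse motion capture data can be phrased the same way, with the residual evaluated only at the marker vertices.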
Original language: English
Pages (from-to): 68-92
Journal: Computer Speech and Language
Volume: 51
Early online date: 21 Feb 2018
DOIs
Publication status: E-pub ahead of print - 21 Feb 2018

Keywords

  • MRI
  • shape analysis
  • statistical model
  • tongue
  • vocal tract
