Integration of Speech and Deictic Gesture in a Multimodal Grammar

Katya Alahverdzhieva, Alex Lascarides

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

In this paper we present a constraint-based analysis of the form-meaning mapping of deictic gesture and its synchronous speech signal. Based on an empirical study of multimodal corpora, we capture generalisations about well-formed multimodal utterances that support the preferred interpretations in the final context of use. More precisely, we articulate a multimodal grammar whose construction rules use the prosody, syntax and semantics of speech, the form and meaning of the deictic signal, and the temporal performance of speech relative to that of deixis to constrain the derivation of a single multimodal tree and to map it to a meaning representation. The contribution of our project is twofold: it augments existing NLP resources with annotated speech and gesture corpora, and it provides a theoretical grammar framework in which the semantic composition of an utterance results from its speech-and-deixis synchrony.
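The abstract's central constraint, that speech and deixis combine into one multimodal constituent only when their temporal performances align, can be illustrated with a minimal sketch. The Python snippet below is a hypothetical illustration, not the authors' formalism: it gates the combination of a speech constituent and a pointing gesture on temporal overlap, one simple proxy for synchrony. All names here (Signal, overlaps, attach) are invented for this sketch, and the paper's actual construction rules also consult prosody, syntax, semantics, and gesture form.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """A timed signal: a spoken phrase or a deictic gesture stroke."""
    label: str    # e.g. "this_chair" or "pointing_stroke"
    start: float  # onset time in seconds
    end: float    # offset time in seconds

def overlaps(speech: Signal, deixis: Signal) -> bool:
    """True iff the two temporal spans overlap (a crude synchrony test)."""
    return speech.start < deixis.end and deixis.start < speech.end

def attach(speech: Signal, deixis: Signal):
    """Licence a single multimodal node only when the temporal constraint
    holds; otherwise the derivation is blocked as ill-formed."""
    if not overlaps(speech, deixis):
        return None
    return ("MM", speech.label, deixis.label)  # schematic multimodal node

# The demonstrative NP overlaps the pointing stroke, so the combined
# node is licensed; shifting the stroke outside the NP would block it.
np = Signal("this_chair", start=1.20, end=1.85)
point = Signal("pointing_stroke", start=1.30, end=1.70)
print(attach(np, point))  # ('MM', 'this_chair', 'pointing_stroke')
```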
Original language: English
Title of host publication: Proceedings of Traitement Automatique des Langues Naturelles (TALN 2011)
Number of pages: 12
Publication status: Published - 2011
