Abstract
In this paper we present a constraint-based analysis of the form-meaning mapping of deictic gesture and its synchronous speech signal. Based on an empirical study of multimodal corpora, we capture generalisations about well-formed multimodal utterances that support the preferred interpretations in the final context of use. More precisely, we articulate a multimodal grammar whose construction rules use the prosody, syntax and semantics of speech, the form and meaning of the deictic signal, and the temporal performance of speech relative to that of deixis to constrain the derivation of a single multimodal tree and to map it to a meaning representation. The contribution of our project is twofold: it augments existing NLP resources with annotated speech and gesture corpora, and it provides a theoretical grammar framework in which the semantic composition of an utterance results from its speech-and-deixis synchrony.
Original language | English
---|---
Title of host publication | Proceedings of Traitement Automatique des Langues Naturelles (TALN 2011)
Number of pages | 12
Publication status | Published - 2011