Abstract
Meaning in everyday communication is conveyed by various signals, including spoken utterances and spontaneous hand gestures. The literature attests that gestures function in synchrony with speech to deliver an integrated message, or 'single thought'; that they exhibit language-specific properties; and that they are amenable to formal semantic modelling. One challenge in modelling synchrony is to use the form of the verbal signal, the form of the gesture, and their relative timing to produce an integrated meaning representation. We meet this challenge by exploiting well-established semantic composition rules to derive meaning from the form of the multimodal action. While existing grammars (HPSG, LFG, CCG) produce semantic representations for unimodal input, we argue that any formalisation of language should fit into a multimodal perspective that synchronises language and co-verbal gesture. We further show that any formalism that interfaces syntax/semantics with prosody is well suited to regimenting synchrony and its effects on multimodal meaning, regardless of whether surface syntactic structure is isomorphic to prosodic structure (e.g., CCG) or not (e.g., HPSG, LFG).
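To make the compositional idea concrete, here is a minimal, purely illustrative sketch (our notation here, not the abstract's own formalism): assume a lambda-calculus setting in which an iconic gesture G that temporally overlaps a verbal predicate contributes a manner modifier, and the two meanings are combined by ordinary functional application and conjunction.

```latex
% Illustrative sketch only: the overlap operator \oplus, the predicate
% lift, and the interpretation function depict are assumptions made for
% exposition, not the paper's actual notation.
\[
  [\![\, \textit{lift} \oplus G \,]\!]
  \;=\;
  \lambda x.\ \mathit{lift}(x) \wedge \mathit{manner}\bigl(x,\ \mathit{depict}(G)\bigr)
\]
```

On a sketch of this kind, the relative timing of the two signals determines which verbal predicate the gestural modifier attaches to, which is precisely the sort of effect a prosody-aware syntax/semantics interface can regiment.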
Original language | English
---|---
Title of host publication | Proceedings of the Fourth Conference of the International Society for Gesture Studies (ISGS)
Place of Publication | Frankfurt/Oder
Publication status | Published - 2010
Event | 4th Conference of the International Society for Gesture Studies (ISGS), European University Viadrina, Frankfurt/Oder, Germany. Duration: 25 Jul 2010 → 30 Jul 2010
Conference
Conference | 4th Conference of the International Society for Gesture Studies (ISGS)
---|---
Country/Territory | Germany
City | Frankfurt/Oder
Period | 25/07/10 → 30/07/10