In this paper we describe the gathering of a corpus of synchronised speech and text interaction over the network. The data collection scenarios characterise audio meetings with a significant textual component. Unlike existing meeting corpora, the corpus described in this paper emphasises temporal relationships between speech and text media streams. This is achieved through detailed logging and time-stamping of text editing operations, actions on shared user interface widgets and gesturing, as well as the generation of speech activity profiles. A set of tools has been developed specifically for these purposes, which can also serve as a data collection platform for the development of meeting browsers. The data gathered to date consists of nearly 30 hours of recorded audio together with time-stamped editing operations and gestures.
Number of pages: 6
Publication status: Published - 2006
Event: 5th International Conference on Language Resources and Evaluation (LREC 2006), Genoa, Italy
Duration: 22 May 2006 → 28 May 2006