Restructuring Multimodal Interaction Data for Browsing and Searching

Saturnino Luz*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

This chapter presents work on techniques for indexing and structuring recordings of interactive activities such as collaborative editing, computer-mediated and computer-assisted meetings, and presentations. Once restricted to denoting face-to-face interaction, the word “meeting” has, with the spread of information and communication technologies, taken on a much broader meaning. The chapter examines several examples of meeting recording and browsing activities in relation to the technologies that support them. It argues for a model that can minimally account for the recursive inter-media relations that arise at different levels of text and speech segmentation, and proposes one based on temporal links that induce a graph structure on multimedia records of multiparty communication and collaboration. The requirements and issues discussed suggest that, in addition to technologies capable of capturing and analyzing interaction data such as speech, gestures, facial expressions, and text editing, one needs to integrate these capabilities under a unified information structure.
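The central idea of the abstract, temporal links inducing a graph structure over differently segmented media, lends itself to a small illustration. The sketch below is not the chapter's actual data model: the Segment and InteractionGraph classes, their fields, and the overlap-based linking rule are all illustrative assumptions about how such a structure might be realized. Time-stamped segments of speech and text act as nodes, and temporal overlap between them induces the edges of the graph.

```python
# Minimal sketch, assuming a toy data model (names are illustrative,
# not taken from the chapter): media segments become graph nodes, and
# temporal overlap between their intervals induces the edges.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Segment:
    """A time-stamped unit of one medium (e.g. a speech turn or a text edit)."""
    medium: str    # e.g. "speech", "text", "gesture"
    start: float   # seconds from the start of the recording
    end: float
    content: str

@dataclass
class InteractionGraph:
    """Segments as nodes; temporal links as edges derived from overlap."""
    segments: list[Segment] = field(default_factory=list)

    def add(self, seg: Segment) -> None:
        self.segments.append(seg)

    def temporal_links(self) -> list[tuple[Segment, Segment]]:
        """Return all pairs of segments whose time intervals overlap."""
        links = []
        for i, a in enumerate(self.segments):
            for b in self.segments[i + 1:]:
                if a.start < b.end and b.start < a.end:
                    links.append((a, b))
        return links

# Example: a speech turn overlapping a concurrent text edit.
g = InteractionGraph()
g.add(Segment("speech", 10.0, 14.5, "let's change the title"))
g.add(Segment("text", 12.0, 13.0, "edit: document title"))
g.add(Segment("speech", 15.0, 18.0, "looks good"))
for a, b in g.temporal_links():
    print(f"{a.medium}[{a.start}-{a.end}] <-> {b.medium}[{b.start}-{b.end}]")
```

Overlap is only one possible linking criterion; proximity within a time threshold, or containment of one segment's interval within another's, would induce different graph structures over the same record.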

Original language: English
Title of host publication: Semantic Multimedia Analysis and Processing
Publisher: CRC Press
Pages: 87-109
Number of pages: 23
ISBN (Print): 9781466575509
ISBN (Electronic): 9781315215945
DOIs
Publication status: Published - 1 Jan 2017
