History-based visual mining of semi-structured audio and text

Matt Mouley Bouamrane*, Saturnino Luz, Masood Masoodian

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

Abstract

Accessing specific or salient parts of multimedia recordings remains a challenge, as there is no obvious way of structuring and representing a mix of space-based and time-based media. A number of approaches have been proposed, usually involving the translation of the continuous components of a multimedia recording into space-based representations, such as text derived from audio through automatic speech recognition, or keyframe images extracted from video. In this paper, we present a novel technique which defines retrieval units in terms of a log of actions performed on space-based artefacts, and exploits timing properties and extended concurrency to construct a visual presentation of text and speech data. This technique can be easily adapted to any mix of space-based artefacts and continuous media.
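The abstract's core idea of pairing logged actions on space-based artefacts with temporally overlapping speech can be illustrated with a minimal sketch. The data types, the `retrieval_units` function, and the fixed `window` parameter below are all hypothetical illustrations, not the paper's actual implementation: each logged action is matched with speech segments that overlap an extended time window around it, a simple reading of "extended concurrency".

```python
from dataclasses import dataclass

@dataclass
class Action:
    time: float      # timestamp of an action on a space-based artefact
    artefact: str    # e.g. the identifier of a text paragraph edited in a meeting

@dataclass
class SpeechSegment:
    start: float     # segment start time in the audio recording
    end: float       # segment end time
    text: str        # transcript text (e.g. from speech recognition)

def retrieval_units(actions, segments, window=5.0):
    """Pair each logged action with the speech segments that overlap an
    extended window around it (a simple form of extended concurrency).
    Returns a list of (action, concurrent_segments) retrieval units."""
    units = []
    for a in actions:
        lo, hi = a.time - window, a.time + window
        # A segment is "concurrent" if its interval intersects [lo, hi].
        concurrent = [s for s in segments if s.start < hi and s.end > lo]
        units.append((a, concurrent))
    return units
```

Widening `window` trades precision for recall: speech that merely precedes or follows an edit is still linked to it, which matches the intuition that discussion of an artefact rarely coincides exactly with the action on it.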

Original language: English
Title of host publication: MMM2006
Subtitle of host publication: 12th International Multi-Media Modelling Conference - Proceedings
Pages: 360-363
Number of pages: 4
DOIs
Publication status: Published - 2006
Event: MMM2006: 12th International Multi-Media Modelling Conference - Beijing, China
Duration: 4 Jan 2006 – 6 Jan 2006

Publication series

Name: MMM2006: 12th International Multi-Media Modelling Conference - Proceedings
Volume: 2006

Conference

Conference: MMM2006: 12th International Multi-Media Modelling Conference
Country/Territory: China
City: Beijing
Period: 4/01/06 – 6/01/06
