A Markov Clustering Topic Model for mining behaviour in video

T. Hospedales, S. Gong, T. Xiang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper addresses the problem of fully automated mining of public space video data. A novel Markov Clustering Topic Model (MCTM) is introduced which builds on existing Dynamic Bayesian Network models (e.g. HMMs) and Bayesian topic models (e.g. Latent Dirichlet Allocation), and overcomes their drawbacks on accuracy, robustness and computational efficiency. Specifically, our model profiles complex dynamic scenes by robustly clustering visual events into activities and these activities into global behaviours, and correlates behaviours over time. A collapsed Gibbs sampler is derived for offline learning with unlabeled training data, and significantly, a new approximation to online Bayesian inference is formulated to enable dynamic scene understanding and behaviour mining in new video data online in real-time. The strength of this model is demonstrated by unsupervised learning of dynamic scene models, mining behaviours and detecting salient events in three complex and crowded public scenes.
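The abstract outlines the inference machinery at a high level, including a collapsed Gibbs sampler for offline learning. As an illustration of that sampling step, the sketch below implements a plain LDA-style collapsed Gibbs sampler over bags of quantised visual events. It is a simplified stand-in for the full MCTM: it assumes standard Dirichlet-multinomial conjugacy and omits the Markov chain over clip-level behaviours that the paper adds. All function names and hyperparameter values are illustrative assumptions, not taken from the paper.

# Minimal collapsed Gibbs sampler for an LDA-style topic model, used here as a
# simplified stand-in for the Markov Clustering Topic Model (MCTM): it clusters
# visual "words" (quantised events) into topics (activities) but omits the
# Markov chain over per-clip behaviours described in the abstract.
import numpy as np

def collapsed_gibbs_lda(docs, vocab_size, n_topics, alpha=0.5, beta=0.1,
                        n_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n_docs = len(docs)
    # Sufficient statistics with the Dirichlet priors collapsed out.
    n_dk = np.zeros((n_docs, n_topics))      # topic counts per document (clip)
    n_kw = np.zeros((n_topics, vocab_size))  # word counts per topic (activity)
    n_k = np.zeros(n_topics)                 # total tokens per topic
    # Random initial topic assignment for every token.
    z = [rng.integers(n_topics, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                # Remove the token's current assignment from the counts.
                n_dk[d, k] -= 1; n_kw[k, w] -= 1; n_k[k] -= 1
                # Conditional p(z = k | rest) under collapsed Dirichlet-multinomials.
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + vocab_size * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k
                n_dk[d, k] += 1; n_kw[k, w] += 1; n_k[k] += 1

    # Posterior mean estimates of topic-word and document-topic distributions.
    phi = (n_kw + beta) / (n_kw.sum(axis=1, keepdims=True) + vocab_size * beta)
    theta = (n_dk + alpha) / (n_dk.sum(axis=1, keepdims=True) + n_topics * alpha)
    return phi, theta, z

if __name__ == "__main__":
    # Toy corpus: each "document" is one video clip's bag of quantised events.
    toy_docs = [[0, 1, 1, 2], [2, 3, 3, 4], [0, 1, 4, 4]]
    phi, theta, _ = collapsed_gibbs_lda(toy_docs, vocab_size=5, n_topics=2)
    print(theta)

The full model additionally correlates behaviours over time, which would require resampling a clip-level behaviour label with transition counts alongside these topic updates.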
Original language: English
Title of host publication: 2009 IEEE 12th International Conference on Computer Vision
Publisher: Institute of Electrical and Electronics Engineers
Pages: 1165-1172
Number of pages: 8
ISBN (Electronic): 978-1-4244-4419-9
ISBN (Print): 978-1-4244-4420-5
DOIs
Publication status: Published - Sept 2009
