A Unified Bayesian Framework for Adaptive Visual Tracking

Emanuel E. Zelniker, Timothy M. Hospedales, Shaogang Gong, Tao Xiang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

We propose a novel method for tracking objects in a video scene that undergo drastic changes in appearance. These changes may arise from out-of-plane rotation, abrupt or gradual illumination changes in outdoor scenarios, or changing position relative to nearby light sources indoors. The key problem with most existing models is that they are either non-adaptive (rendering them non-robust to object appearance change) or use a single tracker output to heuristically update the appearance model at each iteration (rendering them vulnerable to drift). In this paper we take a principled step toward general real-world tracking, proposing a unified generative model for Bayesian multi-feature, adaptive target tracking. We demonstrate the performance of our method on a wide variety of video data, with a focus on surveillance scenarios.
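The abstract's key argument is that updating the appearance model from a single tracker output invites drift, whereas a Bayesian treatment can update appearance from the full posterior over target states. The following is a minimal sketch of that idea using a generic particle filter with a Gaussian appearance likelihood; the likelihood form, adaptation rate, and all parameter values here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(patch, template, sigma=0.3):
    # Gaussian appearance likelihood on mean squared patch/template distance.
    d = np.mean((patch - template) ** 2)
    return np.exp(-d / (2 * sigma ** 2))

def track_step(frame, particles, template, motion_std=3.0, alpha=0.1):
    """One filtering step: propagate particles, weight by appearance,
    update the template from the POSTERIOR-weighted mean of patches
    (rather than from the single best particle), then resample."""
    h, w = template.shape
    # Random-walk motion model, clipped so patches stay inside the frame.
    particles = particles + rng.normal(0, motion_std, particles.shape)
    particles = np.clip(particles, 0, [frame.shape[0] - h, frame.shape[1] - w])
    weights = np.empty(len(particles))
    patches = []
    for i, (y, x) in enumerate(particles.astype(int)):
        patch = frame[y:y + h, x:x + w]
        patches.append(patch)
        weights[i] = likelihood(patch, template)
    weights /= weights.sum()
    # Posterior-weighted appearance update: averaging over all hypotheses
    # hedges against locking onto a single (possibly wrong) tracker output.
    post_mean = sum(wi * p for wi, p in zip(weights, patches))
    template = (1 - alpha) * template + alpha * post_mean
    # Multinomial resampling.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], template

# Synthetic usage: a bright 8x8 square drifting right through noise.
H, W, h, w = 60, 80, 8, 8
template = np.ones((h, w))
particles = np.column_stack([np.full(100, 26.0), np.full(100, 10.0)])
est_positions = []
for t in range(10):
    frame = rng.normal(0, 0.2, (H, W))
    y0, x0 = 26, 10 + 3 * t          # true target position at time t
    frame[y0:y0 + h, x0:x0 + w] += 1.0
    particles, template = track_step(frame, particles, template)
    est_positions.append(particles.mean(axis=0))
```

On this toy sequence the particle mean follows the moving square while the template adapts slowly (small `alpha`), illustrating the drift-robustness argument in miniature; the actual paper infers appearance jointly in a unified generative model rather than via this heuristic blend.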
Original language: English
Title of host publication: Proceedings of the British Machine Vision Conference
Publisher: BMVA Press
Number of pages: 11
ISBN (Print): 1-901725-39-1
Publication status: Published - 2009
