Semantic embedding space for zero-shot action recognition

X. Xu, T. Hospedales, S. Gong

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The number of categories for action recognition is growing rapidly. It is thus becoming increasingly hard to collect sufficient training data to learn conventional models for each category. This issue may be ameliorated by the increasingly popular "zero-shot learning" (ZSL) paradigm. In this framework a mapping is constructed between visual features and a human-interpretable semantic description of each category, allowing categories to be recognised in the absence of any training data. Existing ZSL studies focus primarily on image data and attribute-based semantic representations. In this paper, we address zero-shot recognition in contemporary video action recognition tasks, using a semantic word vector space as the common space in which to embed videos and category labels. This is more challenging because the mapping between the semantic space and the space-time features of videos containing complex actions is harder to learn. We demonstrate that a simple self-training and data augmentation strategy can significantly improve the efficacy of this mapping. Experiments on human action datasets including HMDB51 and UCF101 demonstrate that our approach achieves state-of-the-art zero-shot action recognition performance.
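The pipeline the abstract describes, embedding both video features and class labels into a word vector space and classifying unseen categories by nearest neighbour, can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: it assumes precomputed video descriptors and word2vec-style label embeddings, uses ridge regression as one simple choice of visual-to-semantic mapping, and implements self-training as re-estimating each unseen-class prototype from the projected test videos currently assigned to it.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical inputs (shapes only; not the paper's actual data):
#   X_train: (n_train, d_vis)  visual descriptors of seen-class videos
#   Z_train: (n_train, d_sem)  word vectors of their class labels
#   X_test:  (n_test,  d_vis)  descriptors of unseen-class videos
#   Z_proto: (n_unseen, d_sem) word vectors of the unseen class names

def fit_visual_to_semantic(X_train, Z_train, alpha=1.0):
    """Learn a regression from visual features to the word vector space."""
    return Ridge(alpha=alpha).fit(X_train, Z_train)

def nearest_prototype(P, proto):
    """Assign each projected video to the closest class word vector
    by cosine similarity."""
    P = P / (np.linalg.norm(P, axis=1, keepdims=True) + 1e-12)
    Q = proto / (np.linalg.norm(proto, axis=1, keepdims=True) + 1e-12)
    return (P @ Q.T).argmax(axis=1)

def zero_shot_predict(model, X_test, Z_proto):
    """Project unseen-class videos into semantic space and classify them."""
    return nearest_prototype(model.predict(X_test), Z_proto)

def self_train_prototypes(model, X_test, Z_proto, n_iter=5):
    """A sketch of the self-training strategy: iteratively replace each
    unseen-class prototype with the mean of the projected test videos
    nearest to it, adapting prototypes to the test distribution."""
    P = model.predict(X_test)
    proto = Z_proto.copy()
    for _ in range(n_iter):
        labels = nearest_prototype(P, proto)
        for k in range(proto.shape[0]):
            members = P[labels == k]
            if len(members):
                proto[k] = members.mean(axis=0)
    return proto
```

The self-training step addresses the domain shift between seen-class training data and unseen-class test videos: because the regressor is fit only on seen classes, its projections of unseen-class videos drift away from the fixed word vectors, and moving each prototype toward the projected data compensates for this.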
Original language: English
Title of host publication: 2015 IEEE International Conference on Image Processing (ICIP)
Publisher: Institute of Electrical and Electronics Engineers
Pages: 63-67
Number of pages: 5
ISBN (Electronic): 978-1-4799-8339-1
DOIs:
Publication status: Published - Sept 2015
