Learning Multimodal Latent Attributes

Y. Fu, T. M. Hospedales, T. Xiang, S. Gong

Research output: Contribution to journal › Article › peer-review


The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity by transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and their complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce the concept of a semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces the requirement for an exhaustive, accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches on a variety of realistic multimedia sparse-data learning tasks, including multitask learning, learning with label noise, N-shot transfer learning, and, importantly, zero-shot learning.
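To make the zero-shot setting concrete, the sketch below illustrates generic attribute-based zero-shot classification in the spirit of direct attribute prediction: per-attribute binary detectors are learned on seen classes, and an unseen class is recognized purely from its attribute signature. This is a minimal illustration of the paradigm, not the paper's semilatent topic model; the class names, attribute signatures, and detectors are all hypothetical.

```python
# Hypothetical attribute signatures per class (outdoor, crowd, music).
# Seen classes have training videos; "dancing" is unseen at training
# time and is described only by its attribute signature.
signatures = {
    "parade":  (1, 1, 0),
    "meeting": (0, 1, 0),
    "dancing": (0, 1, 1),   # unseen class
}

def predict_attributes(x, detectors):
    """Apply one binary attribute detector (weights, bias) per attribute.

    In a real system the detectors would be trained on seen-class data;
    here they are supplied directly for illustration.
    """
    return tuple(
        1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
        for w, b in detectors
    )

def zero_shot_classify(attrs, signatures):
    """Assign the class whose signature is nearest in Hamming distance."""
    return min(
        signatures,
        key=lambda c: sum(a != b for a, b in zip(signatures[c], attrs)),
    )
```

For example, toy identity detectors (one per feature dimension, threshold 0.5) applied to a feature vector resembling (no outdoor, crowd, music) yield the attribute tuple (0, 1, 1), which matches the unseen "dancing" signature even though no "dancing" videos were ever seen in training.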
Original language: English
Pages (from-to): 303-316
Number of pages: 14
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Issue number: 2
Publication status: Published - Feb 2014

