Gaussian Visual-Linguistic Embedding for Zero-Shot Recognition

Tanmoy Mukherjee, Timothy Hospedales

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

An exciting outcome of research at the intersection of language and vision is zero-shot learning (ZSL). ZSL promises to scale visual recognition by borrowing distributed semantic models learned from linguistic corpora and turning them into visual recognition models. However, the popular word-vector DSM embeddings are relatively impoverished in their expressivity, as they model each word as a single vector point. In this paper we explore word-distribution embeddings for ZSL. We present a visual-linguistic mapping for ZSL in the case where words and visual categories are both represented by distributions. Experiments show improved results on ZSL benchmarks due to better exploiting the intra-concept variability in each modality.
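To make the distribution-embedding idea concrete, here is a minimal, hypothetical sketch in Python/NumPy (not the authors' released implementation). It assumes each unseen class is a Gaussian word embedding with diagonal covariance, that an image has already been mapped to a Gaussian in the same space, and it scores classes with the closed-form expected likelihood kernel, one common overlap measure for Gaussian embeddings; the learned visual-to-linguistic mapping itself is abstracted away.

import numpy as np

def log_expected_likelihood(mu1, var1, mu2, var2):
    # Closed-form log of the expected likelihood kernel between two
    # diagonal-covariance Gaussians: log N(mu1; mu2, var1 + var2).
    var = var1 + var2
    diff = mu1 - mu2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + diff ** 2 / var)

def zero_shot_predict(img_mu, img_var, class_mus, class_vars):
    # Assign the image Gaussian to the unseen class whose word
    # Gaussian it overlaps with most.
    scores = [log_expected_likelihood(img_mu, img_var, mu, var)
              for mu, var in zip(class_mus, class_vars)]
    return int(np.argmax(scores))

# Toy usage: two hypothetical unseen classes with 4-d Gaussian embeddings.
rng = np.random.default_rng(0)
class_mus = [rng.normal(size=4), rng.normal(size=4)]
class_vars = [np.full(4, 0.5), np.full(4, 0.5)]
# An image embedding mapped into the word space, close to class 1.
img_mu = class_mus[1] + 0.1 * rng.normal(size=4)
img_var = np.full(4, 0.3)
print(zero_shot_predict(img_mu, img_var, class_mus, class_vars))  # expected: 1

Scoring with distributions rather than points is what lets the model express intra-concept variability: a class with high variance tolerates more spread in its images than a tightly peaked one.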
Original language: English
Title of host publication: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
Publisher: Association for Computational Linguistics (ACL)
Pages: 912-918
Number of pages: 7
ISBN (Print): 978-1-945626-25-8
Publication status: Published - 5 Nov 2016
Event: 2016 Conference on Empirical Methods in Natural Language Processing - Austin, United States
Duration: 1 Nov 2016 - 5 Nov 2016
https://www.aclweb.org/mirror/emnlp2016/

Conference

Conference: 2016 Conference on Empirical Methods in Natural Language Processing
Abbreviated title: EMNLP 2016
Country/Territory: United States
City: Austin
Period: 1/11/16 - 5/11/16
Internet address: https://www.aclweb.org/mirror/emnlp2016/
