Unsupervised Learning of Object Landmarks by Factorized Spatial Embeddings

James Thewlis, Hakan Bilen, Andrea Vedaldi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Automatically learning the structure of object categories remains an important open problem in computer vision. In this paper, we propose a novel unsupervised approach that can discover and learn landmarks in object categories, thus characterizing their structure. Our approach is based on factorizing image deformations, as induced by a viewpoint change or an object deformation, by learning a deep neural network that detects landmarks consistently with such visual effects. Furthermore, we show that the learned landmarks establish meaningful correspondences between different object instances in a category without having to impose this requirement explicitly. We assess the method qualitatively on a variety of object types, natural and man-made. We also show that our unsupervised landmarks are highly predictive of manually-annotated landmarks in face benchmark datasets, and can be used to regress these with a high degree of accuracy.
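The core idea in the abstract is that detected landmarks should move consistently with a known image deformation. The sketch below illustrates that constraint only; it is not the authors' implementation. The network LandmarkNet, the soft_argmax helper, and the use of a random affine warp as a stand-in for a generic viewpoint change or object deformation are all illustrative assumptions, written in PyTorch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LandmarkNet(nn.Module):
    """Tiny CNN that outputs K landmark heatmaps (illustrative stand-in)."""
    def __init__(self, num_landmarks=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_landmarks, 1),
        )

    def forward(self, x):
        return self.features(x)  # (B, K, H, W) unnormalised heatmaps


def soft_argmax(heatmaps):
    """Turn each heatmap into an (x, y) coordinate in [-1, 1] via a softmax-weighted mean."""
    b, k, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.view(b, k, -1), dim=-1).view(b, k, h, w)
    ys = torch.linspace(-1, 1, h, device=heatmaps.device)
    xs = torch.linspace(-1, 1, w, device=heatmaps.device)
    y = (probs.sum(dim=3) * ys).sum(dim=2)   # expected row coordinate, (B, K)
    x = (probs.sum(dim=2) * xs).sum(dim=2)   # expected column coordinate, (B, K)
    return torch.stack([x, y], dim=-1)       # (B, K, 2)


def equivariance_loss(net, images, theta):
    """Penalise landmarks that do not move consistently with a known warp.

    theta: (B, 2, 3) affine matrices, used here as a simple stand-in for the
    viewpoint changes / object deformations mentioned in the abstract.
    """
    grid = F.affine_grid(theta, list(images.shape), align_corners=False)
    warped = F.grid_sample(images, grid, align_corners=False)

    pts = soft_argmax(net(images))          # landmarks in the original image
    pts_warped = soft_argmax(net(warped))   # landmarks in the deformed image

    # grid_sample gives warped(p) = images(A p + t), so a landmark found at p in
    # the warped image corresponds to the point A p + t in the original image.
    A, t = theta[:, :, :2], theta[:, :, 2:]
    pts_mapped = (A @ pts_warped.transpose(1, 2) + t).transpose(1, 2)

    return F.mse_loss(pts_mapped, pts)


if __name__ == "__main__":
    net = LandmarkNet()
    images = torch.rand(4, 3, 64, 64)
    # Small random affine deformation around the identity.
    theta = torch.cat(
        [torch.eye(2).expand(4, 2, 2) + 0.05 * torch.randn(4, 2, 2),
         0.1 * torch.randn(4, 2, 1)],
        dim=2,
    )
    loss = equivariance_loss(net, images, theta)
    loss.backward()
    print("equivariance loss:", float(loss))
```

Minimising such a loss over many images and random warps encourages the detector to fire on the same physical points of an object, which is the consistency property the abstract describes.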
Original language: English
Title of host publication: 2017 IEEE International Conference on Computer Vision (ICCV)
Place of Publication: Venice, Italy
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Pages: 3229-3238
Number of pages: 10
ISBN (Electronic): 978-1-5386-1032-9
ISBN (Print): 978-1-5386-1033-6
DOIs
Publication status: Published - 25 Dec 2017
Event: 2017 IEEE International Conference on Computer Vision - Venice, Italy
Duration: 22 Oct 2017 - 29 Oct 2017
http://iccv2017.thecvf.com/

Publication series

Name
Publisher: IEEE
ISSN (Electronic): 2380-7504

Conference

Conference: 2017 IEEE International Conference on Computer Vision
Abbreviated title: ICCV 2017
Country: Italy
City: Venice
Period: 22/10/17 - 29/10/17
Internet address: http://iccv2017.thecvf.com/

