The Art of Detection

Elliot J. Crowley, Andrew Zisserman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

The objective of this work is to recognize object categories in paintings, such as cars, cows and cathedrals. We achieve this by training classifiers from natural images of the objects. We make the following contributions: (i) we measure the extent of the domain shift problem for image-level classifiers trained on natural images vs paintings, for a variety of CNN architectures; (ii) we demonstrate that classification-by-detection (i.e. learning classifiers for regions rather than the entire image) recognizes (and locates) a wide range of small objects in paintings that are not picked up by image-level classifiers, and combining these two methods improves performance; and (iii) we develop a system that learns a region-level classifier on-the-fly for an object category of a user’s choosing, which is then applied to over 60 million object regions across 210,000 paintings to retrieve localised instances of that category.
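As a rough illustration of the classification-by-detection idea described in contribution (ii), the sketch below scores an image by the maximum classifier response over candidate regions and blends it with a whole-image score. All names (`region_features`, `image_feature`, `alpha`) and the random feature stubs are hypothetical stand-ins; in the paper the features come from a CNN and the regions from a proposal method.

```python
# Hypothetical sketch of classification-by-detection: score an image by the
# best-scoring region, then combine with an image-level score. Feature
# extraction is stubbed with random vectors purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
DIM = 128

def region_features(image, n_regions=50):
    # Stand-in for CNN features of region proposals (one row per region).
    return rng.standard_normal((n_regions, DIM))

def image_feature(image):
    # Stand-in for a whole-image CNN feature.
    return rng.standard_normal(DIM)

def classify_by_detection(image, w_region, w_image, alpha=0.5):
    """Combined score: alpha * max region score + (1 - alpha) * image score.

    Returns the combined score and the index of the best region, which
    also serves as the localised instance of the category.
    """
    region_scores = region_features(image) @ w_region
    best_region = int(np.argmax(region_scores))
    combined = (alpha * region_scores[best_region]
                + (1 - alpha) * (image_feature(image) @ w_image))
    return combined, best_region

# Illustrative linear classifiers (would be learned from natural images).
w_region = rng.standard_normal(DIM)
w_image = rng.standard_normal(DIM)
score, region_idx = classify_by_detection(None, w_region, w_image)
print(score, region_idx)
```

The max-over-regions step is what lets small objects, which contribute little to a whole-image feature, still drive the final score.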
Original language: English
Title of host publication: Computer Vision -- ECCV 2016 Workshops
Subtitle of host publication: Amsterdam, The Netherlands, October 8-10 and 15-16, 2016, Proceedings, Part I
Editors: Gang Hua, Hervé Jégou
Place of Publication: Cham
Publisher: Springer
Pages: 721-737
Number of pages: 17
ISBN (Electronic): 978-3-319-46604-0
DOIs
Publication status: E-pub ahead of print - 18 Sept 2016
Event: European Conference on Computer Vision 2016 Workshops - Amsterdam, Netherlands
Duration: 8 Oct 2016 – 16 Oct 2016
http://www.eccv2016.org/workshops/

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer International Publishing
Volume: 9913
ISSN (Print): 0302-9743

Conference

Conference: European Conference on Computer Vision 2016 Workshops
Abbreviated title: ECCV 2016
Country/Territory: Netherlands
City: Amsterdam
Period: 8/10/16 – 16/10/16

