When Does Contrastive Visual Representation Learning Work?

Elijah Cole, Xuan Yang, Kimberly Wilber, Oisin Mac Aodha, Serge Belongie

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include observations such as: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks.
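For context, the contrastive pretraining examined in this line of work (e.g. SimCLR-style methods) typically pulls together two augmented views of each image and pushes apart all other images in the batch under a normalized temperature-scaled cross-entropy (NT-Xent / InfoNCE) objective. The sketch below is a minimal, illustrative PyTorch version of that generic loss, not the authors' implementation; the function name and the default temperature are placeholder choices.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Generic NT-Xent contrastive loss over two augmented views of a batch.

    z1, z2: (N, D) projection-head embeddings of the two views of the same N images.
    For each embedding, the other view of the same image is the positive;
    the remaining 2N - 2 embeddings in the batch act as negatives.
    """
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-length rows
    sim = z @ z.T / temperature                          # (2N, 2N) scaled cosine similarities
    # Mask self-similarity so an embedding cannot be its own positive or negative.
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # Row i's positive sits at index i + N (first view) or i - N (second view).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

In practice, z1 and z2 come from a shared encoder plus projection head applied to two random augmentations of the same minibatch; temperatures around 0.1 to 0.5 are common in SimCLR-style setups.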
Original language: English
Title of host publication: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: Institute of Electrical and Electronics Engineers
Pages: 14735-14744
Number of pages: 10
ISBN (Electronic): 978-1-6654-6946-3
ISBN (Print): 978-1-6654-6947-0
Publication status: Published - 27 Sept 2022
Event: IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022 - New Orleans, United States
Duration: 19 Jun 2022 - 24 Jun 2022
https://cvpr2022.thecvf.com/

Publication series

Name: Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: IEEE
ISSN (Print): 1063-6919
ISSN (Electronic): 2575-7075

Conference

Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022
Abbreviated title: CVPR 2022
Country/Territory: United States
City: New Orleans
Period: 19/06/22 - 24/06/22
Internet address: https://cvpr2022.thecvf.com/
