How Well Do Self-Supervised Models Transfer?

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Self-supervised visual representation learning has seen huge progress recently, but no large-scale evaluation has compared the many models now available. We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction. We compare their performance to a supervised baseline and show that on most tasks the best self-supervised models outperform supervision, confirming the recently observed trend in the literature. We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition, but increasingly less so for few-shot recognition, object detection, and dense prediction. No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved. Our analysis of features suggests that top self-supervised learners fail to preserve colour information as well as supervised alternatives, but tend to induce better classifier calibration and less attentive overfitting than supervised learners.
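The correlation analysis the abstract describes (ImageNet Top-1 accuracy versus downstream transfer accuracy across models) can be sketched as below. The model scores here are made-up placeholder numbers for illustration only, not results from the paper, and the paper's own choice of correlation statistic is not assumed:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical ImageNet Top-1 accuracies for four pre-trained models
imagenet = [71.3, 73.2, 75.3, 74.3]
# Hypothetical many-shot transfer accuracies for the same four models
transfer = [80.1, 82.0, 84.5, 83.2]

print(pearson(imagenet, transfer))
```

A high value would indicate that ImageNet accuracy is a good predictor of many-shot transfer; the paper reports that this predictiveness weakens for few-shot recognition, detection, and dense prediction.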
Original language: English
Title of host publication: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Publisher: Institute of Electrical and Electronics Engineers
Pages: 5414-5423
Number of pages: 19
ISBN (Electronic): 978-1-6654-4509-2
ISBN (Print): 978-1-6654-4510-8
DOIs
Publication status: Published - 2 Nov 2021
Event: IEEE Conference on Computer Vision and Pattern Recognition 2021 - Virtual
Duration: 19 Jun 2021 - 25 Jun 2021
http://cvpr2021.thecvf.com/

Publication series

Name
ISSN (Print): 1063-6919
ISSN (Electronic): 2575-7075

Conference

Conference: IEEE Conference on Computer Vision and Pattern Recognition 2021
Abbreviated title: CVPR 2021
Period: 19/06/21 - 25/06/21
Internet address
