How Well Do Self-Supervised Models Transfer?

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Self-supervised visual representation learning has seen huge progress recently, but no large-scale evaluation has compared the many models now available. We evaluate the transfer performance of 13 top self-supervised models on 40 downstream tasks, including many-shot and few-shot recognition, object detection, and dense prediction. We compare their performance to a supervised baseline and show that on most tasks the best self-supervised models outperform supervision, confirming the recently observed trend in the literature. We find ImageNet Top-1 accuracy to be highly correlated with transfer to many-shot recognition, but increasingly less so for few-shot recognition, object detection and dense prediction. No single self-supervised method dominates overall, suggesting that universal pre-training is still unsolved. Our analysis of features suggests that top self-supervised learners fail to preserve colour information as well as supervised alternatives, but tend to induce better classifier calibration and less attentive overfitting than supervised learners.
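To make the abstract's correlation claim concrete, here is a minimal pure-Python sketch of a Spearman rank correlation between upstream ImageNet Top-1 accuracy and downstream scores. The model scores below are invented for illustration only and are not taken from the paper; the function names (`rank`, `spearman`) are likewise hypothetical helpers, not code from the authors.

```python
# Hedged sketch: Spearman rank correlation, as used to relate ImageNet
# Top-1 accuracy to downstream transfer performance. All numbers are
# made up for demonstration and do not come from the paper.

def rank(values):
    """Return 1-based ranks of each value (average rank for ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # find the end of a block of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank over the tied block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented ImageNet Top-1 accuracies and downstream scores for 5 models.
imagenet_top1 = [71.3, 73.2, 69.1, 75.0, 70.5]
many_shot_acc = [80.1, 82.0, 78.5, 83.2, 79.0]  # tracks ImageNet closely
detection_ap  = [38.0, 36.5, 39.2, 37.0, 38.8]  # much weaker relationship

print(spearman(imagenet_top1, many_shot_acc))  # → 1.0 (same ordering)
print(spearman(imagenet_top1, detection_ap))   # → -0.9
```

A high correlation for many-shot recognition and a low (here, negative) one for detection mirrors the paper's qualitative finding that ImageNet accuracy predicts some transfer settings far better than others.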
Original language: English
Title of host publication: Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR 2021)
Number of pages: 19
Publication status: Accepted/In press - 3 Mar 2021
Event: IEEE Conference on Computer Vision and Pattern Recognition 2021 - Virtual
Duration: 19 Jun 2021 - 25 Jun 2021


Conference: IEEE Conference on Computer Vision and Pattern Recognition 2021
Abbreviated title: CVPR 2021

