Abstract
Self-supervised representation learning (SSRL) methods aim to provide powerful, deep feature learning without the requirement of large annotated data sets, thus alleviating the annotation bottleneck—one of the main barriers to the practical deployment of deep learning today. These techniques have advanced rapidly in recent years, with their efficacy approaching and sometimes surpassing fully supervised pretraining alternatives across a variety of data modalities, including image, video, sound, text, and graphs. This article introduces this vibrant area, including key concepts, the four main families of approaches and associated state-of-the-art techniques, and how self-supervised methods are applied to diverse modalities of data. We further discuss practical considerations including workflows, representation transferability, and computational cost. Finally, we survey major open challenges in the field that provide fertile ground for future work.
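To make the idea concrete, one widely used family the abstract alludes to is contrastive instance discrimination: two augmented views of the same input are pulled together in embedding space while views of different inputs are pushed apart. The sketch below is illustrative only, not a method from the article; it implements an InfoNCE/NT-Xent-style loss in plain NumPy, with random perturbations standing in for real data augmentations.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss between two batches of embeddings.

    z1, z2: (n, d) arrays holding embeddings of two views of the same n
    samples. Positive pairs are (z1[i], z2[i]); every other cross-pair in
    the batch serves as a negative.
    """
    # L2-normalise so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (n, n) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Softmax cross-entropy with the diagonal (matching views) as targets
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 16))
# Small perturbations play the role of two "augmentations" of each sample
view1 = anchor + 0.01 * rng.normal(size=(8, 16))
view2 = anchor + 0.01 * rng.normal(size=(8, 16))
loss_aligned = info_nce_loss(view1, view2)
loss_random = info_nce_loss(rng.normal(size=(8, 16)),
                            rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned views score a lower loss: True
```

In a full SSRL pipeline this loss would be minimised with respect to the parameters of an encoder network producing `z1` and `z2`; the pretrained encoder is then transferred to downstream tasks, which is the workflow the article discusses.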
| Original language | English |
| --- | --- |
| Pages (from-to) | 42-62 |
| Number of pages | 21 |
| Journal | IEEE Signal Processing Magazine |
| Volume | 39 |
| Issue number | 3 |
| Publication status | Published - 6 May 2022 |
Projects
2 finished projects
- Signal Processing in the Information Age
  Davies, M., Hopgood, J., Hospedales, T., Mulgrew, B., Thompson, J., Tsaftaris, S. & Yaghoobi Vaighan, M.
  1/07/18 → 31/03/24
  Project: Research
- UK Robotics and Artificial Intelligence Hub for Offshore Energy Asset Integrity Management (ORCA)
  Vijayakumar, S., Mistry, M., Ramamoorthy, R. & Williams, C.
  1/10/17 → 31/03/22
  Project: Research