MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations
Abstract
Contrastive self-supervised learning has gained attention for its ability to create high-quality representations from large unlabelled datasets. A key reason these powerful features enable data-efficient learning of downstream tasks is that they are augmentation-invariant, which is often a useful inductive bias. However, the amount and type of invariance preferred are not known a priori and vary across downstream tasks. We therefore propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner. Our multi-task representation provides a strong, flexible feature that benefits diverse downstream tasks. We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance on all of them.
| Original language | English |
|---|---|
| Title of host publication | Proc. INTERSPEECH 2023 |
| Publisher | International Speech Communication Association |
| Pages | 4399-4403 |
| Number of pages | 5 |
| DOIs | |
| Publication status | Published - 20 Aug 2023 |
| Event | Interspeech 2023 - Dublin, Ireland; Duration: 20 Aug 2023 → 24 Aug 2023; Conference number: 24; https://www.interspeech2023.org/ |
Publication series
| Name | Interspeech |
|---|---|
| ISSN (Electronic) | 1990-9772 |
Conference
| Conference | Interspeech 2023 |
|---|---|
| Country/Territory | Ireland |
| City | Dublin |
| Period | 20/08/23 → 24/08/23 |
| Internet address | https://www.interspeech2023.org/ |
Projects
- Signal Processing in the Information Age
Davies, M., Hopgood, J., Hospedales, T., Mulgrew, B., Thompson, J., Tsaftaris, S. & Yaghoobi Vaighan, M.
1/07/18 → 31/03/24
Project: Research