Abstract
Self-supervised speech models have developed rapidly over the past few years and have proven useful for a wide range of downstream tasks. Some recent work has begun to examine the characteristics of these models, yet many questions have not been fully addressed. In this work, we conduct a study on emotional corpora to explore a popular self-supervised model -- wav2vec 2.0. Through a set of quantitative analyses, we mainly demonstrate that: 1) wav2vec 2.0 appears to discard paralinguistic information that is less useful for word recognition; 2) for emotion recognition, representations from the middle layer alone perform as well as those derived from layer averaging, while the final layer yields the worst performance in some cases; 3) current self-supervised models may not be the optimal solution for downstream tasks that rely on non-lexical features. Our work provides novel findings that will aid future research in this area and a theoretical basis for the use of existing models.
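As a concrete illustration of the layer-wise comparison described in the abstract, the sketch below extracts hidden states from every layer of a wav2vec 2.0 checkpoint and builds two utterance-level representations: one from the middle layer alone and one from averaging across all layers. This is a minimal sketch assuming the HuggingFace `transformers` implementation of wav2vec 2.0; the checkpoint name (`facebook/wav2vec2-base`), the mean-pooling over time, and the choice of "middle" layer index are illustrative assumptions, not the authors' exact experimental setup.

```python
# Minimal sketch (not the paper's code): compare a middle-layer wav2vec 2.0
# representation with a layer-averaged one, using HuggingFace transformers.
# Checkpoint, pooling, and layer index are illustrative assumptions.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

ckpt = "facebook/wav2vec2-base"  # assumed base model (12 transformer layers)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(ckpt)
model = Wav2Vec2Model.from_pretrained(ckpt, output_hidden_states=True)
model.eval()

# Dummy 1-second, 16 kHz waveform standing in for an emotional-speech utterance.
waveform = torch.randn(16000)
inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: tuple of (num_layers + 1) tensors, each (batch, time, dim);
# index 0 is the CNN feature-encoder output, indices 1..12 are transformer layers.
hidden_states = torch.stack(outputs.hidden_states)        # (13, 1, T, 768)
pooled = hidden_states.mean(dim=2).squeeze(1)              # (13, 768), pooled over time

middle_layer_repr = pooled[len(pooled) // 2]               # single middle layer
layer_averaged_repr = pooled.mean(dim=0)                   # average over all layers

print(middle_layer_repr.shape, layer_averaged_repr.shape)  # torch.Size([768]) each
```

Either vector could then be fed to a simple emotion classifier; the paper's layer-wise findings concern which of these representations carries more paralinguistic information.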
Original language | English |
---|---|
Title of host publication | Proceedings of the 2022 IEEE Spoken Language Technology Workshop |
Publisher | Institute of Electrical and Electronics Engineers |
Pages | 868-875 |
Number of pages | 8 |
ISBN (Electronic) | 979-8-3503-9690-4, 979-8-3503-9689-8 |
ISBN (Print) | 979-8-3503-9691-1 |
Publication status | Published - 27 Jan 2023 |
Event | The IEEE Spoken Language Technology Workshop, 2022 - Doha, Qatar. Duration: 9 Jan 2023 → 12 Jan 2023. https://slt2022.org/ |
Workshop

Workshop | The IEEE Spoken Language Technology Workshop, 2022 |
---|---|
Abbreviated title | SLT 2022 |
Country/Territory | Qatar |
City | Doha |
Period | 9 Jan 2023 → 12 Jan 2023 |
Internet address | https://slt2022.org/ |
Keywords
- wav2vec 2.0
- self-supervised learning
- speech emotion
- speech recognition
- paralinguistics