Abstract
Self-supervised speech representations are known to encode both speaker and phonetic information, but how these two types of information are distributed in the high-dimensional space remains largely unexplored. We hypothesize that they are encoded in orthogonal subspaces, a property that lends itself to simple disentanglement. Applying principal component analysis to representations of two predictive coding models, we identify two subspaces that capture speaker and phonetic variances, and confirm that they are nearly orthogonal. Based on this property, we propose a new speaker normalization method which collapses the subspace that encodes speaker information, without requiring transcriptions. Probing experiments show that our method effectively eliminates speaker information and outperforms a previous baseline in phone discrimination tasks. Moreover, the approach generalizes and can be used to remove information about unseen speakers.
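The normalization the abstract describes (estimate a speaker subspace via PCA, then collapse it with a linear projection) can be sketched on toy data. Everything below is an illustrative assumption, not the paper's exact pipeline: the speaker subspace is estimated from per-speaker mean representations, its dimensionality `k` is fixed to 2, and the synthetic features place speaker and phonetic information in orthogonal coordinates by construction.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_speakers, frames = 16, 5, 200

# Toy representations: speaker info lives in dims 0-1, phonetic info in
# dims 2-4, so the two subspaces are orthogonal by construction.
speaker_means = np.zeros((n_speakers, dim))
speaker_means[:, :2] = 3.0 * rng.normal(size=(n_speakers, 2))

reps, spk_ids = [], []
for s in range(n_speakers):
    phone = np.zeros((frames, dim))
    phone[:, 2:5] = rng.normal(size=(frames, 3))
    reps.append(speaker_means[s] + phone + 0.05 * rng.normal(size=(frames, dim)))
    spk_ids.append(np.full(frames, s))
X = np.concatenate(reps)      # (n_speakers * frames, dim)
spk = np.concatenate(spk_ids)

# Estimate the speaker subspace by PCA (via SVD) over centered
# per-speaker mean representations -- a stand-in for the paper's
# analysis of speaker variance.
means = np.stack([X[spk == s].mean(axis=0) for s in range(n_speakers)])
means -= means.mean(axis=0)
_, _, Vt = np.linalg.svd(means, full_matrices=False)
k = 2                          # assumed speaker-subspace dimensionality
basis = Vt[:k]                 # (k, dim), orthonormal rows

# Speaker normalization: project out (collapse) the speaker subspace.
X_norm = X - (X @ basis.T) @ basis

# Variance across per-speaker means should shrink after normalization.
before = np.stack([X[spk == s].mean(0) for s in range(n_speakers)]).var(axis=0).sum()
after = np.stack([X_norm[spk == s].mean(0) for s in range(n_speakers)]).var(axis=0).sum()
```

Because the projection is a single linear map learned without any labels beyond speaker identity of the training utterances, it needs no transcriptions and can be applied unchanged to representations of unseen speakers, as the abstract notes.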
Original language | English |
---|---|
Title of host publication | Proc. INTERSPEECH 2023 |
Publisher | International Speech Communication Association |
Pages | 2968-2972 |
Number of pages | 5 |
DOIs | |
Publication status | Published - 20 Aug 2023 |
Event | Interspeech 2023 (conference number 24), Dublin, Ireland, 20 Aug 2023 → 24 Aug 2023, https://www.interspeech2023.org/ |
Publication series
Name | Interspeech |
---|---|
ISSN (Electronic) | 1990-9772 |
Conference
Conference | Interspeech 2023 |
---|---|
Country/Territory | Ireland |
City | Dublin |
Period | 20/08/23 → 24/08/23 |
Internet address | https://www.interspeech2023.org/ |
Keywords
- self-supervised learning
- unsupervised speech processing
- speaker normalization