Self-supervised Predictive Coding Models Encode Speaker and Phonetic Information in Orthogonal Subspaces

Oli Danyi Liu, Hao Tang, Sharon Goldwater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Self-supervised speech representations are known to encode both speaker and phonetic information, but how they are distributed in the high-dimensional space remains largely unexplored. We hypothesize that they are encoded in orthogonal subspaces, a property that lends itself to simple disentanglement. Applying principal component analysis to representations of two predictive coding models, we identify two subspaces that capture speaker and phonetic variances, and confirm that they are nearly orthogonal. Based on this property, we propose a new speaker normalization method which collapses the subspace that encodes speaker information, without requiring transcriptions. Probing experiments show that our method effectively eliminates speaker information and outperforms a previous baseline in phone discrimination tasks. Moreover, the approach generalizes and can be used to remove information of unseen speakers.
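The normalization idea in the abstract can be pictured with a minimal sketch: estimate a speaker subspace by running PCA (here via SVD) over per-speaker mean representations, then project every frame onto the orthogonal complement to collapse that subspace. This is an illustrative assumption about the approach, not the authors' exact method; the names speaker_normalize, reps_by_speaker, and n_speaker_dims are hypothetical.

```python
import numpy as np

def speaker_normalize(reps_by_speaker, n_speaker_dims=8):
    """reps_by_speaker: list of (num_frames, dim) arrays, one array per speaker."""
    # Per-speaker mean vectors mostly carry speaker variance, since phonetic
    # content averages out over many frames.
    means = np.stack([r.mean(axis=0) for r in reps_by_speaker])  # (S, dim)
    means -= means.mean(axis=0, keepdims=True)
    # The top principal directions of the centered means span an estimated
    # speaker subspace.
    _, _, vt = np.linalg.svd(means, full_matrices=False)
    basis = vt[:n_speaker_dims]                                  # (k, dim)
    # Project each frame onto the orthogonal complement, collapsing the
    # speaker subspace while leaving a nearly orthogonal phonetic subspace
    # largely intact.
    proj = np.eye(basis.shape[1]) - basis.T @ basis              # (dim, dim)
    return [r @ proj for r in reps_by_speaker]

# Toy usage with random stand-ins for model representations (10 speakers):
normalized = speaker_normalize([np.random.randn(200, 256) for _ in range(10)])
```

Projecting onto the orthogonal complement is the natural way to "collapse" a subspace when, as the paper reports, speaker and phonetic variation lie in nearly orthogonal subspaces, and it requires no transcriptions.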
Original language: English
Title of host publication: Proc. INTERSPEECH 2023
Publisher: International Speech Communication Association
Pages: 2968-2972
Number of pages: 5
DOIs
Publication status: Published - 20 Aug 2023
Event: Interspeech 2023 - Dublin, Ireland
Duration: 20 Aug 2023 - 24 Aug 2023
Conference number: 24
https://www.interspeech2023.org/

Publication series

Name: Interspeech
ISSN (Electronic): 1990-9772

Conference

Conference: Interspeech 2023
Country/Territory: Ireland
City: Dublin
Period: 20/08/23 - 24/08/23
Internet address: https://www.interspeech2023.org/

Keywords

  • self-supervised learning
  • unsupervised speech processing
  • speaker normalization
