A predictive learning model can simulate temporal dynamics and context effects found in neural representations of continuous speech

Oli Danyi Liu, Hao Tang, Naomi H. Feldman, Sharon Goldwater

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Speech perception involves storing and integrating sequentially presented items. Recent work in cognitive neuroscience has identified temporal and contextual characteristics in humans' neural encoding of speech that may facilitate this temporal processing. In this study, we simulated similar analyses using representations extracted from a computational model trained on unlabelled speech with the objective of predicting upcoming acoustics. Our simulations revealed temporal dynamics similar to those in brain signals, implying that these properties can arise without linguistic knowledge. Another property shared between brains and the model is that the encoding patterns of phonemes support some degree of cross-context generalization. However, we found evidence that the effectiveness of these generalizations depends on the specific contexts, suggesting that this analysis alone is insufficient to establish context-invariant encoding.
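The model described in the abstract is a predictive learner trained on unlabelled speech. As a rough illustration of that kind of objective, the sketch below trains a small recurrent network to predict acoustic frames a few steps ahead; the architecture, feature dimensions, prediction horizon, and all names are assumptions chosen for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): a model trained on unlabelled
# speech to predict upcoming acoustics. The LSTM architecture, feature sizes,
# and prediction horizon below are assumptions made for this example.
import torch
import torch.nn as nn

class PredictiveSpeechModel(nn.Module):
    def __init__(self, n_mels=80, hidden=512, layers=3, shift=3):
        super().__init__()
        self.shift = shift                      # predict `shift` frames ahead
        self.encoder = nn.LSTM(n_mels, hidden, layers, batch_first=True)
        self.head = nn.Linear(hidden, n_mels)   # map hidden state to a frame

    def forward(self, frames):
        # frames: (batch, time, n_mels) unlabelled acoustic features
        hidden, _ = self.encoder(frames)        # contextual representations
        return self.head(hidden), hidden        # predictions + representations

model = PredictiveSpeechModel()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

frames = torch.randn(8, 200, 80)                # stand-in for log-mel features
pred, reps = model(frames)

# Training objective: distance between the prediction at time t and the actual
# frame at time t + shift; no labels or linguistic supervision are involved.
loss = nn.functional.l1_loss(pred[:, :-model.shift], frames[:, model.shift:])
loss.backward()
optim.step()

# `reps` (batch, time, hidden) are the learned representations that analyses of
# temporal dynamics and cross-context phoneme generalization would operate on.
```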
Original language: English
Title of host publication: Proceedings of the Annual Meeting of the Cognitive Science Society
Publisher: eScholarship University of California
DOIs:
Publication status: Accepted/In press - 5 Apr 2024
Event: CogSci 2024: Dynamics of Cognition - Rotterdam, Netherlands
Duration: 24 Jul 2024 – 27 Jul 2024
https://cognitivesciencesociety.org/cogsci-2024/

Conference

Conference: CogSci 2024
Abbreviated title: CogSci 2024
Country/Territory: Netherlands
City: Rotterdam
Period: 24/07/24 – 27/07/24
Internet address: https://cognitivesciencesociety.org/cogsci-2024/

Keywords / Materials (for Non-textual outputs)

  • speech processing
  • speech representations
  • computational model
