Abstract
Recent studies have introduced methods for learning acoustic word embeddings (AWEs)—fixed-size vector representations of words which encode their acoustic features. Despite the widespread use of AWEs in speech processing research, they have only been evaluated quantitatively in their ability to discriminate between whole word tokens. To better understand the applications of AWEs in various downstream tasks and in cognitive modeling, we need to analyze the representation spaces of AWEs. Here we analyze basic properties of AWE spaces learned by a sequence-to-sequence encoder-decoder model in six typologically diverse languages. We first show that these AWEs preserve some information about words’ absolute duration and speaker. At the same time, the representation space of these AWEs is organized such that the distance between words’ embeddings increases with those words’ phonetic dissimilarity. Finally, the AWEs exhibit a word onset bias, similar to patterns reported in various studies on human speech processing and lexical access. We argue this is a promising result and encourage further evaluation of AWEs as a potentially useful tool in cognitive science, which could provide a link between speech processing and lexical memory.
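To make the abstract's setup concrete, below is a minimal sketch of the kind of model it describes: an encoder that maps a variable-length sequence of acoustic frames (e.g., MFCCs) to a fixed-size acoustic word embedding, whose pairwise distances can then be compared against phonetic dissimilarity. This is an illustrative assumption-laden example, not the authors' actual architecture; the framework (PyTorch), layer choice (a single GRU), and all dimensions are placeholders.

```python
# Sketch of an AWE encoder: variable-length acoustic input -> fixed-size vector.
# All hyperparameters below are illustrative, not the paper's configuration.
import torch
import torch.nn as nn


class AWEEncoder(nn.Module):
    def __init__(self, n_mfcc: int = 13, embed_dim: int = 128):
        super().__init__()
        # Single-layer GRU; its final hidden state serves as the embedding.
        self.rnn = nn.GRU(input_size=n_mfcc, hidden_size=embed_dim,
                          batch_first=True)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, n_mfcc) -> AWE of shape (batch, embed_dim),
        # regardless of how many frames (i.e., how long the word token) is.
        _, h_n = self.rnn(frames)
        return h_n[-1]


encoder = AWEEncoder()
# Two word tokens of different durations (20 vs. 35 frames of random MFCCs).
word_a = torch.randn(1, 20, 13)
word_b = torch.randn(1, 35, 13)
emb_a, emb_b = encoder(word_a), encoder(word_b)

# In a trained model, this distance should increase with the words'
# phonetic dissimilarity -- the spatial property the paper analyzes.
distance = 1 - nn.functional.cosine_similarity(emb_a, emb_b).item()
print(f"embedding size: {emb_a.shape[-1]}, cosine distance: {distance:.3f}")
```

In the paper's setting, such an encoder is trained as part of a sequence-to-sequence encoder-decoder (the decoder reconstructs the acoustic input), and the analyses probe what the resulting embedding space retains: duration, speaker identity, phonetic structure, and a word onset bias.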
Original language | English |
---|---|
Pages | 1-6 |
Number of pages | 6 |
Publication status | Published - 26 Apr 2020 |
Event | Bridging AI and Cognitive Science Workshop @ ICLR 2020 - Virtual Workshop, Addis Ababa, Ethiopia |
Duration | 26 Apr 2020 → 26 Apr 2020 |
Internet address | https://baicsworkshop.github.io/ |
Conference
Conference | Bridging AI and Cognitive Science Workshop @ ICLR 2020 |
---|---|
Abbreviated title | BAICS 2020 |
Country/Territory | Ethiopia |
City | Addis Ababa |
Period | 26/04/20 → 26/04/20 |
Internet address | https://baicsworkshop.github.io/ |