Edinburgh Research Explorer

Synchronising audio and ultrasound by learning cross-modal embeddings

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Original language: English
Title of host publication: INTERSPEECH 2019: Proceedings of the 20th Annual Conference of the International Speech Communication Association (ISCA)
Place of Publication: Graz, Austria
Number of pages: 5
Publication status: Accepted/In press - 17 Jun 2019
Event: Interspeech 2019 - Graz, Austria
Duration: 15 Sep 2019 – 19 Sep 2019
https://www.interspeech2019.org/

Conference

Conference: Interspeech 2019
Country: Austria
City: Graz
Period: 15/09/19 – 19/09/19
Internet address: https://www.interspeech2019.org/

Abstract

Audiovisual synchronisation is the task of determining the time offset between speech audio and a video recording of the articulators. In child speech therapy, audio and ultrasound videos of the tongue are captured using instruments which rely on hardware to synchronise the two modalities at recording time. Hardware synchronisation can fail in practice, and no mechanism exists to synchronise the signals post hoc. To address this problem, we employ a two-stream neural network which exploits the correlation between the two modalities to find the offset. We train our model on recordings from 69 speakers, and show that it correctly synchronises 82.9% of test utterances from unseen therapy sessions and unseen speakers, thus considerably reducing the number of utterances to be manually synchronised. An analysis of model performance on the test utterances shows that directed phone articulations are more difficult to synchronise automatically than utterances containing natural variation in speech such as words, sentences, or conversations.
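The abstract does not give implementation details, but the approach it describes (a two-stream network that learns cross-modal embeddings with self-supervision and uses them to find the offset) can be sketched as follows. This is an illustrative sketch, not the authors' code; the layer sizes, window shapes, and the particular contrastive objective are assumptions made for the example.

# Illustrative two-stream model: one stream embeds a short window of audio
# features, the other embeds a short stack of ultrasound frames, and both map
# into a shared space where synchronised pairs lie close together.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamSync(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Audio stream: input is (batch, 1, time, n_mfcc), sizes are illustrative.
        self.audio_net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, embed_dim),
        )
        # Ultrasound stream: input is (batch, 5, height, width), i.e. 5 stacked frames.
        self.ultra_net = nn.Sequential(
            nn.Conv2d(5, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
            nn.Linear(32 * 4 * 4, embed_dim),
        )

    def forward(self, audio, ultra):
        # L2-normalised embeddings so Euclidean distance reflects cross-modal agreement.
        a = F.normalize(self.audio_net(audio), dim=-1)
        u = F.normalize(self.ultra_net(ultra), dim=-1)
        return a, u

def contrastive_loss(a, u, is_synced, margin=1.0):
    # Self-supervised objective: truly synchronised audio/ultrasound windows are
    # pulled together, artificially offset windows are pushed at least `margin` apart.
    d = F.pairwise_distance(a, u)
    return (is_synced * d.pow(2) +
            (1 - is_synced) * F.relu(margin - d).pow(2)).mean()

Under this kind of setup, the offset for an utterance can be estimated at test time by scoring a set of candidate offsets and choosing the one that minimises the average embedding distance across the utterance; how the paper performs this search is not specified in the abstract.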

Research areas

  • Audiovisual synchronisation, audio and ultrasound data, machine learning, neural networks, self-supervision


ID: 97955818