Synchronising audio and ultrasound by learning cross-modal embeddings

Aciel Eshky, Manuel Ribeiro, Korin Richmond, Steve Renals

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Audiovisual synchronisation is the task of determining the time offset between speech audio and a video recording of the articulators. In child speech therapy, audio and ultrasound videos of the tongue are captured using instruments which rely on hardware to synchronise the two modalities at recording time. Hardware synchronisation can fail in practice, and no mechanism exists to synchronise the signals post hoc. To address this problem, we employ a two-stream neural network which exploits the correlation between the two modalities to find the offset. We train our model on recordings from 69 speakers, and show that it correctly synchronises 82.9% of test utterances from unseen therapy sessions and unseen speakers, thus considerably reducing the number of utterances to be manually synchronised. An analysis of model performance on the test utterances shows that directed phone articulations are more difficult to automatically synchronise compared to utterances containing natural variation in speech such as words, sentences, or conversations.
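To make the approach concrete, the sketch below illustrates the general idea of a two-stream model in PyTorch: each modality is embedded into a shared space, and candidate offsets are scored by cross-modal embedding distance. This is a minimal sketch under assumed design choices; the layer sizes, feature choices (MFCC channels, stacked ultrasound frames), and the best_offset helper are illustrative assumptions, not the authors' published architecture.

```python
# Minimal two-stream synchronisation sketch (PyTorch).
# All shapes and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioStream(nn.Module):
    """Embeds a short window of audio features (e.g. MFCCs)."""
    def __init__(self, n_mfcc=20, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
    def forward(self, x):  # x: (batch, n_mfcc, time)
        return F.normalize(self.net(x), dim=-1)

class UltrasoundStream(nn.Module):
    """Embeds a short window of stacked ultrasound frames."""
    def __init__(self, n_frames=5, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_frames, 32, kernel_size=5, stride=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
    def forward(self, x):  # x: (batch, n_frames, height, width)
        return F.normalize(self.net(x), dim=-1)

def best_offset(audio_windows, ultra_windows, audio_net, ultra_net, max_shift=10):
    """Score candidate offsets by mean embedding distance; return the best.

    audio_windows / ultra_windows are paired windows cut from the same
    utterance; sliding one sequence against the other simulates a time
    offset between the modalities. Assumes n_windows > max_shift.
    """
    with torch.no_grad():
        a = audio_net(audio_windows)   # (n_windows, embed_dim)
        u = ultra_net(ultra_windows)   # (n_windows, embed_dim)
    scores = {}
    for shift in range(-max_shift, max_shift + 1):
        if shift >= 0:
            d = F.pairwise_distance(a[shift:], u[:len(u) - shift])
        else:
            d = F.pairwise_distance(a[:shift], u[-shift:])
        scores[shift] = d.mean().item()
    return min(scores, key=scores.get)  # offset with smallest mean distance

# Usage (random tensors stand in for real features):
# audio_net, ultra_net = AudioStream(), UltrasoundStream()
# aw = torch.randn(50, 20, 15)        # 50 audio windows
# uw = torch.randn(50, 5, 64, 128)    # 50 ultrasound windows
# print(best_offset(aw, uw, audio_net, ultra_net))
```

In a self-supervised setup of this kind, the two encoders would typically be trained with a contrastive loss on matching versus shifted audio-ultrasound window pairs, so no manually synchronised labels are required beyond the hardware-synchronised training recordings.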
Original language: English
Title of host publication: INTERSPEECH 2019: Proceedings of the 20th Annual Conference of the International Speech Communication Association (ISCA)
Place of Publication: Graz, Austria
Publisher: International Speech Communication Association
Pages: 4100-4104
Number of pages: 5
Publication status: Published - 19 Sept 2019
Event: Interspeech 2019 - Graz, Austria
Duration: 15 Sept 2019 - 19 Sept 2019
https://www.interspeech2019.org/

Publication series

Publisher: International Speech Communication Association
ISSN (Electronic): 1990-9772

Conference

Conference: Interspeech 2019
Country/Territory: Austria
City: Graz
Period: 15/09/19 - 19/09/19
Internet address: https://www.interspeech2019.org/

Keywords

  • Audiovisual synchronisation
  • audio and ultrasound data
  • machine learning
  • neural networks
  • self-supervision
