Acquiring pronunciation knowledge from transcribed speech audio via multi-task learning

Siqi Sun, Korin Richmond

Research output: Working paper › Preprint

Abstract / Description of output

Recent work has shown the feasibility and benefit of bootstrapping an integrated sequence-to-sequence (Seq2Seq) linguistic frontend from a traditional pipeline-based frontend for text-to-speech (TTS). To overcome the fixed lexical coverage of the bootstrapping training data, previous work proposed leveraging easily accessible transcribed speech audio as an additional training source for acquiring novel pronunciation knowledge for uncovered words; however, that approach relies on an auxiliary ASR model and a cumbersome implementation flow. In this work, we propose an alternative method, based on multi-task learning (MTL), to leverage transcribed speech audio as an additional training source. Experiments show that, compared to a baseline Seq2Seq frontend, the proposed MTL-based method reduces the phone error rate (PER) from 2.5% to 1.6% for word types covered exclusively in the transcribed speech audio, matching the performance of the previous method with a much simpler implementation flow.
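
To illustrate the general shape of such a multi-task setup, below is a minimal PyTorch sketch of a shared text encoder feeding two task heads, with an auxiliary loss weighted against the primary grapheme-to-phoneme loss. The module structure, the audio-derived auxiliary objective, and the loss weight are all illustrative assumptions; the abstract does not specify the paper's actual architecture or training details.

    # Hypothetical sketch of multi-task training with a shared text encoder.
    # Module names, the auxiliary objective, and the loss weight below are
    # illustrative assumptions, not the architecture described in the paper.
    import torch
    import torch.nn as nn

    class SharedEncoder(nn.Module):
        def __init__(self, vocab_size, d_model=256):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            self.rnn = nn.GRU(d_model, d_model, batch_first=True)

        def forward(self, token_ids):
            x = self.embed(token_ids)
            out, _ = self.rnn(x)
            return out  # shared representation consumed by both task heads

    class MultiTaskFrontend(nn.Module):
        def __init__(self, vocab_size, n_phones, n_acoustic=80):
            super().__init__()
            self.encoder = SharedEncoder(vocab_size)
            self.phone_head = nn.Linear(256, n_phones)    # primary G2P task
            self.audio_head = nn.Linear(256, n_acoustic)  # auxiliary audio task

        def forward(self, token_ids):
            h = self.encoder(token_ids)
            return self.phone_head(h), self.audio_head(h)

    model = MultiTaskFrontend(vocab_size=100, n_phones=60)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    ce = nn.CrossEntropyLoss()
    mse = nn.MSELoss()
    aux_weight = 0.5  # assumed task weight; not reported in the abstract

    # Dummy batch: token ids, gold phone labels, audio-derived targets.
    tokens = torch.randint(0, 100, (8, 12))
    phones = torch.randint(0, 60, (8, 12))
    acoustic = torch.randn(8, 12, 80)

    phone_logits, audio_pred = model(tokens)
    loss = ce(phone_logits.reshape(-1, 60), phones.reshape(-1)) \
           + aux_weight * mse(audio_pred, acoustic)
    loss.backward()
    opt.step()

The point of the shared encoder is that gradients from the audio-supervised auxiliary task can shape pronunciation-relevant representations for words that never appear in the bootstrapping lexicon, without requiring a separate ASR model in the pipeline.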
Original language: English
Publisher: ArXiv
Pages: 1-5
Number of pages: 5
DOIs
Publication status: Published - 15 Sept 2024

Keywords

  • computation and language
  • sound
  • audio and speech processing
