Edinburgh Research Explorer

ALISA: An automatic lightly supervised speech segmentation and alignment tool

Research output: Contribution to journal › Article

Original language: English
Pages (from-to): 116-133
Number of pages: 18
Journal: Computer Speech and Language
Volume: 35
Early online date: 3 Jul 2015
DOIs
Publication status: Published - Jan 2016

Abstract

This paper describes the ALISA tool, which implements a lightly supervised method for sentence-level alignment of speech with imperfect transcripts. Its intended use is to enable the creation of new speech corpora from a multitude of resources in a language-independent fashion, thus avoiding the need to record or transcribe speech data. The method is designed to require minimal user intervention and expert knowledge, and it is able to align data in languages which employ alphabetic scripts. It comprises a GMM-based voice activity detector and a highly constrained grapheme-based speech aligner. The method is evaluated objectively against a gold-standard segmentation and transcription, as well as subjectively by building and testing speech synthesis systems from the retrieved data. Results show that on average, 70% of the original data is correctly aligned, with a word error rate of less than 0.5%. In one case, subjective listening tests show a small but statistically significant preference for voices built on the gold transcript; in all other tests, no statistically significant differences are found between systems built from fully supervised training data and those built with the proposed method.
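The abstract mentions a GMM-based voice activity detector as the first stage of the pipeline. As a rough illustration of that general technique (not the paper's implementation), a two-component Gaussian mixture can be fitted to frame log-energies with EM, and the higher-energy component taken as speech; the function name, initialisation, and 0.5 decision threshold below are all illustrative assumptions:

```python
import numpy as np

def vad_gmm(log_energy, n_iter=50):
    """Label frames as speech/non-speech by fitting a 2-component 1-D GMM
    to frame log-energies with EM. Illustrative sketch only; ALISA's
    detector is more elaborate than this."""
    x = np.asarray(log_energy, dtype=float)
    # Initialise the two components at the low/high energy quartiles.
    mu = np.percentile(x, [25, 75]).astype(float)
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per frame.
        d = x[:, None] - mu[None, :]
        ll = -0.5 * (d ** 2 / var + np.log(2 * np.pi * var))
        r = w * np.exp(ll)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / nk.sum()
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * d ** 2).sum(axis=0) / nk + 1e-6
    speech = int(np.argmax(mu))  # higher-mean component = speech
    return r[:, speech] > 0.5

# Toy usage: low-energy "silence" frames followed by high-energy "speech".
rng = np.random.default_rng(0)
frames = np.concatenate([rng.normal(-8, 1, 200), rng.normal(-2, 1, 200)])
mask = vad_gmm(frames)
```

On such well-separated data the mask marks the second half of the frames as speech; real recordings would use per-frame energies computed from windowed audio.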

Research areas

  • Imperfect transcripts


ID: 20048372