ALISA: An automatic lightly supervised speech segmentation and alignment tool

A. Stan, Y. Mamiya, J. Yamagishi, P. Bell, O. Watts, R.A.J. Clark, S. King

Research output: Contribution to journal › Article › peer-review


This paper describes the ALISA tool, which implements a lightly supervised method for sentence-level alignment of speech with imperfect transcripts. Its intended use is to enable the creation of new speech corpora from a multitude of resources in a language-independent fashion, thus avoiding the need to record or transcribe speech data. The method is designed to require minimal user intervention and expert knowledge, and it is able to align data in languages which employ alphabetic scripts. It comprises a GMM-based voice activity detector and a highly constrained grapheme-based speech aligner. The method is evaluated objectively against a gold-standard segmentation and transcription, as well as subjectively by building and testing speech synthesis systems from the retrieved data. Results show that, on average, 70% of the original data is correctly aligned, with a word error rate of less than 0.5%. In one case, subjective listening tests show a statistically significant preference for voices built on the gold transcript, but the effect is small; in the other tests, no statistically significant differences are found between systems built from the fully supervised training data and those built with the proposed method.
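To illustrate the first stage of the pipeline described above, the following is a minimal sketch (not the authors' code) of a GMM-based voice activity detector: frame log-energies are modelled as a two-component one-dimensional Gaussian mixture fitted with EM, and frames assigned to the higher-mean component are labelled as speech. All function and variable names here are illustrative assumptions, not part of ALISA.

```python
# Hypothetical sketch of a GMM-based voice activity detector:
# fit a 2-component 1-D Gaussian mixture to frame log-energies
# via EM, then label frames from the higher-mean (speech) component.
import numpy as np

def gmm_vad(log_energy, n_iter=50):
    x = np.asarray(log_energy, dtype=float)
    # Initialise the two component means from the energy quartiles.
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])
    var = np.array([x.var() + 1e-6] * 2)
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per frame.
        d = x[:, None] - mu[None, :]
        p = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances.
        n = r.sum(axis=0)
        w = n / n.sum()
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / n + 1e-6
    speech = int(np.argmax(mu))  # component with the higher mean energy
    return np.argmax(r, axis=1) == speech

# Toy example: quiet frames around -10 dB, louder (speech) frames around 0 dB.
rng = np.random.default_rng(0)
frames = np.concatenate([rng.normal(-10, 1, 200), rng.normal(0, 1, 200)])
labels = gmm_vad(frames)
```

In practice ALISA operates on MFCC-style acoustic features rather than raw frame energy alone, but the same two-class GMM decision principle applies.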
Original language: English
Pages (from-to): 116-133
Number of pages: 18
Journal: Computer Speech and Language
Early online date: 3 Jul 2015
Publication status: Published - 31 Jan 2016


Keywords:
  • Imperfect transcripts

