A Comparison of Manual and Automatic Voice Repair for Individual with Vocal Disabilities

Christophe Veaux, Junichi Yamagishi, Simon King

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


When individuals lose the ability to produce their own speech, due to degenerative diseases such as motor neurone disease (MND) or Parkinson's, they lose not only a functional means of communication but also a display of their individual and group identity. In order to build personalized synthetic voices, attempts have been made to capture the voice before it is lost, using a process known as voice banking. For some patients, however, speech deterioration coincides with or quickly follows diagnosis. Using HMM-based speech synthesis, it is now possible to build personalized synthetic voices from minimal recordings, even of disordered speech. The power of this approach is that the patient's recordings can be used to adapt existing voice models pre-trained on many speakers. When the speech has begun to deteriorate, the adapted voice model can be further modified to compensate for the disordered characteristics of the patient's speech; we call this process "voice repair". In this paper we compare two methods of voice repair. The first follows a trial-and-error approach and requires the expertise of a speech therapist. The second is entirely automatic and based on a priori statistical knowledge. A subjective evaluation shows that the automatic method achieves results similar to those of the manually controlled method.
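To make the idea of "voice repair" concrete, the following is a minimal sketch (not the authors' code; all names and the interpolation scheme are illustrative assumptions): the adapted model keeps the parameters that carry the patient's vocal identity, while statistics that are typically degraded in disordered speech, such as F0 and duration, are pulled back toward a healthy average-voice model.

```python
# Hypothetical illustration of "voice repair" as parameter substitution.
# Assumption: spectral parameters carry speaker identity and are kept,
# while prosodic statistics (F0, duration) are interpolated toward a
# healthy average-voice model. This is a sketch, not the paper's method.

from dataclasses import dataclass
from typing import List


@dataclass
class VoiceModel:
    spectrum: List[float]   # identity-carrying parameters (preserved)
    f0_mean: float          # fundamental-frequency statistic (repaired)
    duration_mean: float    # state-duration statistic (repaired)


def repair(adapted: VoiceModel, average: VoiceModel,
           alpha: float = 1.0) -> VoiceModel:
    """Interpolate prosodic statistics toward the average voice.

    alpha=1.0 fully substitutes the average-voice statistics;
    alpha=0.0 leaves the adapted (possibly disordered) model untouched.
    """
    mix = lambda a, b: (1.0 - alpha) * a + alpha * b
    return VoiceModel(
        spectrum=adapted.spectrum,  # keep the patient's vocal identity
        f0_mean=mix(adapted.f0_mean, average.f0_mean),
        duration_mean=mix(adapted.duration_mean, average.duration_mean),
    )
```

In a manual workflow, a speech therapist would tune weights like `alpha` per parameter stream by ear; an automatic method would instead derive them from prior statistics over healthy speakers.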
Original language: English
Title of host publication: SLPAT 2015, 6th Workshop on Speech and Language Processing for Assistive Technologies
Publisher: Association for Computational Linguistics (ACL)
Number of pages: 4
ISBN (Print): 978-1-941643-79-2
Publication status: Published - Sep 2015


