Abstract / Description of output
Text-to-Speech synthesis in Indian languages has seen a lot of progress over the past decade, partly due to the annual Blizzard Challenges. These systems assume that the text is written in Devanagari or Dravidian scripts, which have nearly phonemic orthographies. However, the most common form of computer interaction among Indians is transliterated text written in ASCII. Such text is generally noisy, with many spelling variations for the same word. In this paper we evaluate three approaches to synthesizing speech from such noisy ASCII text: a naive UniGrapheme approach, a Multi-Grapheme approach, and a supervised Grapheme-to-Phoneme (G2P) approach. These methods first convert the ASCII text to a phonetic script, and then train a Deep Neural Network to synthesize speech from it. We train and test our models on Blizzard Challenge datasets that were transliterated to ASCII using crowdsourcing. Our experiments on Hindi, Tamil and Telugu demonstrate that our models generate speech of competitive quality from ASCII text compared to speech synthesized from the native scripts. All the accompanying transliterated datasets are released for public access.
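The abstract does not spell out the authors' exact mappings, so the sketch below only illustrates, under assumed grapheme inventories, how a naive UniGrapheme front end (one ASCII character per unit) differs from a Multi-Grapheme front end (frequent character sequences merged into single units before mapping). The `MULTI_UNITS` set and the example words are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the authors' code) of two front ends for turning
# noisy ASCII transliterations into symbol sequences prior to synthesis.

# UniGrapheme: every single ASCII letter is treated as one unit.
def unigrapheme(text: str) -> list[str]:
    return [ch for ch in text.lower() if ch.isalpha()]

# Multi-Grapheme: frequent two-character sequences are merged into single
# units via a left-to-right longest-match scan. The inventory is assumed.
MULTI_UNITS = {"aa", "ee", "th", "sh", "ch", "ng"}  # hypothetical inventory

def multigrapheme(text: str) -> list[str]:
    text = "".join(ch for ch in text.lower() if ch.isalpha())
    units, i = [], 0
    while i < len(text):
        pair = text[i:i + 2]
        if pair in MULTI_UNITS:
            units.append(pair)
            i += 2
        else:
            units.append(text[i])
            i += 1
    return units

print(unigrapheme("namaste"))   # ['n', 'a', 'm', 'a', 's', 't', 'e']
print(multigrapheme("sheela"))  # ['sh', 'ee', 'l', 'a']
print(multigrapheme("shila"))   # ['sh', 'i', 'l', 'a'] - a spelling variant
                                # of the same word yields different units
```

The supervised G2P approach mentioned in the abstract would instead learn the ASCII-to-phone mapping from labelled data rather than from a fixed inventory like the one assumed here.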
Original language | English
---|---
Title of host publication | 9th ISCA Speech Synthesis Workshop
Pages | 74-79
Number of pages | 6
DOIs |
Publication status | Published - 15 Sept 2016
Event | 9th ISCA Speech Synthesis Workshop - Sunnyvale, United States. Duration: 13 Sept 2016 → 15 Sept 2016. http://ssw9.talp.cat/
Conference
Conference | 9th ISCA Speech Synthesis Workshop
---|---
Abbreviated title | ISCA 2016
Country/Territory | United States
City | Sunnyvale
Period | 13/09/16 → 15/09/16
Internet address | http://ssw9.talp.cat/