Expressive speech synthesis: synthesising ambiguity

Matthew P. Aylett, Blaise Potard, Christopher J. Pidcock

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Previous work in HCI has shown that ambiguity, normally avoided in interaction design, can contribute to a user’s engagement by increasing interest and uncertainty. In this work, we create and evaluate synthetic utterances in which the text content conflicts with the emotion conveyed by the voice. We show that: 1) text content measurably alters the negative/positive perception of a spoken utterance; 2) changes in voice quality also produce this effect; 3) when voice quality and text content conflict, the result is a synthesised ambiguous utterance. Results were analysed in an evaluation/activation space. Whereas the effect of text content was restricted to the negative/positive dimension (valence), voice quality also had a significant effect on how active or passive the utterance was perceived to be (activation).

Index Terms: speech synthesis, unit selection, expressive speech synthesis, emotion, prosody
Original language: English
Title of host publication: The Eighth ISCA Tutorial and Research Workshop on Speech Synthesis, Barcelona, Spain, August 31-September 2, 2013
Pages: 217-221
Number of pages: 5
Publication status: Published - 2013

