Listening test materials for "Evaluating comprehension of natural and synthetic conversational speech"

Dataset

Description

The key contents are:

* Stimuli used in the comprehension test reported in the paper
* Response data from the comprehension test reported in the paper

The following files are included in this release:

audio data:
- DW_{M,N,S}.wav
- SC_{M,N,S}.wav
- VW_{M,N,S}.wav

question data:
- questions.txt
- questionnaire.pdf

results:
- comprehension_results.tsv
- questionnaire_results.csv

analysis scripts:
- holmbonferroni.m
- load_csv.m
- simple_fisher.m
- wester2016evaluating_analysis.m
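
The response data above can be pulled into MATLAB directly; a minimal loading sketch follows. The column layout of the files is not documented in this description, so the sketch only reads and summarises the tables; the bundled load_csv.m may follow its own conventions.

    % Read the response data into tables (column layout assumed unknown here).
    comp = readtable('comprehension_results.tsv', ...
        'FileType', 'text', 'Delimiter', '\t');
    quest = readtable('questionnaire_results.csv');
    summary(comp)   % per-column overview of the comprehension responses
    summary(quest)  % per-column overview of the questionnaire responses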

Abstract

Current speech synthesis methods typically operate on isolated sentences and lack convincing prosody when generating longer segments of speech. Similarly, prevailing TTS evaluation paradigms, such as intelligibility (transcription word error rate) or mean opinion score (MOS), only score sentences in isolation, even though overall comprehension is arguably more important for speech-based communication. In an effort to develop more ecologically relevant evaluation techniques that go beyond isolated sentences, we investigated comprehension of natural and synthetic speech dialogues. Specifically, we tested listener comprehension on long segments of spontaneous and engaging conversational speech (three 10-minute radio interviews of comedians). Interviews were reproduced either as natural speech, synthesised from carefully prepared transcripts, or synthesised using durations from forced alignment against the natural speech, all in a balanced design. Comprehension was measured using multiple-choice questions. A significant difference was measured between the comprehension/retention of natural speech (74% correct responses) and synthetic speech with forced-aligned durations (61% correct responses). However, no significant difference was observed between natural and regular synthetic speech (70% correct responses). Effective evaluation of comprehension remains elusive.
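
To illustrate the kind of analysis the bundled scripts support, the sketch below runs pairwise Fisher's exact tests on correct/incorrect counts per condition and applies a Holm-Bonferroni correction. It uses MATLAB's built-in fishertest (Statistics and Machine Learning Toolbox) rather than the dataset's own simple_fisher.m and holmbonferroni.m, whose interfaces may differ. The 100-trials-per-condition counts are hypothetical, chosen only to match the reported percentages, not the paper's actual trial numbers.

    n = 100;               % hypothetical trials per condition (illustration only)
    correct = [74 70 61];  % natural, regular synthetic, forced-aligned durations
    pairs = [1 2; 1 3; 2 3];
    p = zeros(size(pairs, 1), 1);
    for i = 1:size(pairs, 1)
        a = pairs(i, 1); b = pairs(i, 2);
        % 2x2 contingency table: correct vs. incorrect for the two conditions
        tbl = [correct(a), n - correct(a); correct(b), n - correct(b)];
        [~, p(i)] = fishertest(tbl);
    end
    % Holm-Bonferroni: test sorted p-values against alpha / (m - k + 1)
    alpha = 0.05;
    [ps, order] = sort(p);
    m = numel(ps);
    reject = false(m, 1);
    for k = 1:m
        if ps(k) <= alpha / (m - k + 1)
            reject(order(k)) = true;
        else
            break  % first non-rejection stops the procedure
        end
    end
    disp([pairs, p, reject])  % pair indices, raw p-values, Holm decisions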

Data Citation

Wester, Mirjam; Watts, Oliver; Henter, Gustav Eje. (2016). Listening test materials for "Evaluating comprehension of natural and synthetic conversational speech", [dataset]. University of Edinburgh, School of Informatics, Centre for Speech Technology Research. http://dx.doi.org/10.7488/ds/1352.
Date made available: 3 Mar 2016
Publisher: Edinburgh DataShare
