Predicting pairwise preferences between TTS audio stimuli using parallel ratings data and anti-symmetric twin neural networks

Cassia Valentini-Botinhao, Manuel Sam Ribeiro, Oliver Watts, Korin Richmond, Gustav Eje Henter

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Automatically predicting the outcome of subjective listening tests is a challenging task: ratings may vary from person to person even when preferences are consistent across listeners. While previous work has focused on predicting listeners' ratings (mean opinion scores) of individual stimuli, we focus on the simpler task of predicting subjective preference given two speech stimuli for the same text. We propose a model based on anti-symmetric twin neural networks, trained on pairs of waveforms and their corresponding preference scores. We explore both attention and recurrent neural networks to account for the fact that the stimuli in a pair are not time aligned. To obtain a large training set, we convert listeners' ratings from MUSHRA tests into values that reflect how often one stimulus in a pair was rated higher than the other. We evaluate performance on data from twelve MUSHRA evaluations conducted over five years, covering different TTS systems built from the data of different speakers. Our results compare favourably to those of a state-of-the-art model trained to predict MOS scores.
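The two ideas in the abstract, converting parallel MUSHRA ratings into pairwise preference targets and scoring a pair with an anti-symmetric twin network, can be made concrete with a short sketch. The PyTorch code below is illustrative only, not the authors' implementation: the GRU encoder, the feature dimensions, the bias-free difference head, and counting tied ratings as 0.5 are all assumptions made for the example.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def mushra_pair_preference(ratings_a, ratings_b):
    """Convert parallel MUSHRA ratings of two stimuli into a preference
    target: the fraction of listeners who rated A above B. Counting ties
    as 0.5 is an assumption, not necessarily the paper's choice."""
    a, b = np.asarray(ratings_a), np.asarray(ratings_b)
    return float((a > b).mean() + 0.5 * (a == b).mean())


class AntiSymmetricTwin(nn.Module):
    """Twin (shared-weight) encoder with an anti-symmetric comparison:
    passing the difference of the two embeddings through a bias-free
    linear head guarantees score(a, b) == -score(b, a)."""

    def __init__(self, feat_dim=80, hidden=128):
        super().__init__()
        # A recurrent encoder stands in for the paper's recurrent variant;
        # it maps a variable-length feature sequence to a fixed vector,
        # so the two stimuli in a pair need not be time aligned.
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1, bias=False)  # a bias would break anti-symmetry

    def embed(self, x):
        # x: (batch, frames, feat_dim) -> (batch, hidden), final GRU state
        _, h_n = self.encoder(x)
        return h_n[-1]

    def forward(self, x_a, x_b):
        # Positive score: stimulus A is preferred over stimulus B.
        return self.head(self.embed(x_a) - self.embed(x_b)).squeeze(-1)


# Training-step sketch: the sigmoid of the score models P(A preferred),
# and anti-symmetry makes P(B preferred) = 1 - P(A preferred) automatic.
model = AntiSymmetricTwin()
x_a = torch.randn(4, 200, 80)  # batch of feature sequences for stimulus A
x_b = torch.randn(4, 180, 80)  # B sequences may have a different length
target = torch.tensor([mushra_pair_preference([80, 72, 65], [70, 75, 65])] * 4)
loss = F.binary_cross_entropy_with_logits(model(x_a, x_b), target)
loss.backward()
```

An attention-based encoder could replace the GRU without changing the rest: the anti-symmetry depends only on the bias-free head applied to the embedding difference.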
Original language: English
Title of host publication: Proceedings of Interspeech 2022
Editors: Hanseok Ko, John H. L. Hansen
Number of pages: 5
Publication status: Published - 22 Sep 2022
Event: Interspeech 2022 - Incheon, Korea, Republic of
Duration: 18 Sep 2022 – 22 Sep 2022
Conference number: 23


Conference: Interspeech 2022
Country/Territory: Korea, Republic of


Keywords
  • Preference prediction
  • Text-to-speech
  • Twin neural networks


