A person’s mental health can be reflected in their speech. Recent studies have used machine learning to detect signs of mental illness from hundreds of acoustic features. However, the resulting algorithms can be hard to interpret clinically. The Geneva Minimalistic Acoustic Parameter Set for voice research
and affective computing (GeMAPS) partly addresses this issue by providing a standard, restricted set of 88 acoustic variables.
In practice, though, many of these variables covary and can therefore be summarised into a smaller number of more easily interpretable meta-variables.
Using principal component analysis, we derive a set of eight such meta-variables for a data set of 17,949 speech samples from non-psychiatric adults and adults with a diagnosis of mania, depression, psychosis, or schizophrenia. We show that (a) there are significant differences between healthy controls and people with a mental illness on these meta-variables; and (b) the patterns of these differences (in size and significance) vary depending on the condition.
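The dimensionality reduction described above can be sketched as follows. This is a minimal illustration of deriving a small set of meta-variables from correlated acoustic features via PCA; the data here are synthetic stand-ins for the 88 GeMAPS variables, and the choice of eight components mirrors the abstract (the original study's exact preprocessing is not shown here).

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features, n_meta = 500, 88, 8  # GeMAPS provides 88 acoustic variables

# Simulate covarying features: a few latent factors plus noise
latent = rng.normal(size=(n_samples, n_meta))
loadings = rng.normal(size=(n_meta, n_features))
X = latent @ loadings + 0.5 * rng.normal(size=(n_samples, n_features))

# Standardise each feature, then run PCA via the singular value decomposition
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
U, S, Vt = np.linalg.svd(X_std, full_matrices=False)

# Project onto the first 8 principal components: the "meta-variables"
meta = X_std @ Vt[:n_meta].T

# Fraction of total variance captured by the 8 meta-variables
explained = (S[:n_meta] ** 2).sum() / (S ** 2).sum()

print(meta.shape)  # (500, 8): one 8-dimensional meta-variable vector per sample
```

Group differences could then be tested per meta-variable (e.g. controls vs. each diagnostic group) with far fewer comparisons than the original 88 features would require.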
Original language: English
Publication status: Published - 17 Nov 2017


Title: 'Clinically Interpretable Acoustic Meta-Features for Characterising the Effect of Mental Illness on Speech and Voice'