Recognizing Emotions in Spoken Dialogue with Acoustic and Lexical Cues

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Emotions play a vital role in human communication. Therefore, it is desirable for virtual agent dialogue systems to recognize and react to the user's emotions. However, current automatic emotion recognizers perform substantially worse than humans. Our work attempts to improve the recognition of emotions in spoken dialogue by identifying dialogue cues predictive of emotions, and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that including prior knowledge about emotions in dialogue, in either the feature representation or the model structure, is beneficial for automatic emotion recognition.
Original language: English
Title of host publication: ICMI 2017 Satellite Workshop Investigating Social Interactions with Artificial Agents
Publisher: ACM
Pages: 45-46
Number of pages: 2
ISBN (Electronic): 9781450355582
Publication status: Published - 13 Nov 2017
Event: ICMI 2017 Satellite Workshop Investigating Social Interactions with Artificial Agents - Glasgow, United Kingdom
Duration: 13 Nov 2017 - 13 Nov 2017
Internet address: http://icmi.acm.org/2018/

Conference

Conference: ICMI 2017 Satellite Workshop Investigating Social Interactions with Artificial Agents
Abbreviated title: ISIAA 2017
Country/Territory: United Kingdom
City: Glasgow
Period: 13/11/17 - 13/11/17
