Abstract
Emotions play a vital role in human communication. It is therefore desirable for virtual-agent dialogue systems to recognize and react to users' emotions. However, current automatic emotion recognizers perform poorly compared to humans. Our work attempts to improve the recognition of emotions in spoken dialogue by identifying dialogue cues predictive of emotions and by building multimodal recognition models with a knowledge-inspired hierarchy. We conduct experiments on both spontaneous and acted dialogue data to study the efficacy of the proposed approaches. Our results show that incorporating prior knowledge about emotions in dialogue, in either the feature representation or the model structure, benefits automatic emotion recognition.
Original language | English |
---|---|
Title of host publication | ICMI 2017 Satellite Workshop Investigating Social Interactions with Artificial Agents |
Publisher | ACM |
Pages | 45-46 |
Number of pages | 2 |
ISBN (Electronic) | 9781450355582 |
Publication status | Published - 13 Nov 2017 |
Event | ICMI 2017 Satellite Workshop Investigating Social Interactions with Artificial Agents, Glasgow, United Kingdom, 13 Nov 2017 (http://icmi.acm.org/2018/) |
Conference
Conference | ICMI 2017 Satellite Workshop Investigating Social Interactions with Artificial Agents |
---|---|
Abbreviated title | ISIAA 2017 |
Country/Territory | United Kingdom |
City | Glasgow |
Period | 13/11/17 → 13/11/17 |
Internet address | http://icmi.acm.org/2018/ |