Abstract
Current Spoken Dialogue Systems (SDSs) often serve as passive listeners that respond only after receiving user speech. To achieve human-like dialogue, we propose a novel future-prediction architecture that allows an SDS to anticipate the user's future affective reactions, based on the system's current behaviors, before the user speaks. We investigate two scenarios: speech and laughter. For speech, we propose to predict the user's future emotion from its temporal relationship with the system's current emotion and its causal relationship with the system's current Dialogue Act (DA). For laughter, we propose to predict the occurrence and type of the user's laughter from the system's laughter behaviors in the current turn. A preliminary analysis of human-robot dialogue demonstrated synchrony between the emotions and laughter displayed by the human and the robot, as well as DA-emotion causality in their dialogue. These findings indicate that our architecture can contribute to the development of an anticipatory SDS.
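The abstract does not include an implementation, but as a minimal illustrative sketch the two prediction tasks it describes can be framed as classifiers over the system's current-turn behaviors. Everything below is an assumption for illustration: the feature names, label sets, and toy training examples are hypothetical placeholders, not taken from the paper.

```python
# Illustrative sketch only: framing the paper's two anticipation tasks as
# supervised classifiers over the system's current-turn behaviors.
# All feature names, labels, and toy data here are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Task 1: predict the user's future emotion from the system's current
# emotion (temporal link) and current Dialogue Act (causal link).
emotion_model = make_pipeline(DictVectorizer(), LogisticRegression())
X_emotion = [
    {"sys_emotion": "joy", "sys_da": "praise"},
    {"sys_emotion": "neutral", "sys_da": "question"},
    {"sys_emotion": "joy", "sys_da": "inform"},
    {"sys_emotion": "sadness", "sys_da": "apology"},
]
y_emotion = ["joy", "neutral", "joy", "sadness"]  # user's next-turn emotion
emotion_model.fit(X_emotion, y_emotion)

# Task 2: predict the occurrence and type of the user's laughter from the
# system's laughter behavior in the current turn (labels are placeholders).
laughter_model = make_pipeline(DictVectorizer(), LogisticRegression())
X_laughter = [{"sys_laughter": s} for s in ["none", "social", "mirthful", "none"]]
y_laughter = ["none", "social", "mirthful", "none"]  # user's laughter response
laughter_model.fit(X_laughter, y_laughter)

# Anticipate the user's reaction before they speak, given the system's turn.
print(emotion_model.predict([{"sys_emotion": "joy", "sys_da": "praise"}]))
print(laughter_model.predict([{"sys_laughter": "social"}]))
```

In a deployed anticipatory SDS, such predictors would run before the user's response so the system can adjust its next action; the toy examples above merely stand in for annotated human-robot dialogue data.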
Original language | English |
---|---|
Title of host publication | The ACM CHI Conference on Human Factors in Computing Systems |
Publisher | Association for Computing Machinery (ACM) |
Pages | 1-7 |
Number of pages | 7 |
ISBN (Electronic) | 9781450394222 |
Publication status | Published - 19 Apr 2023 |
Event | Computer Human Interaction (CHI) 2023 - Hamburg, Germany. Duration: 23 Apr 2023 → 28 Apr 2023 |
Conference
Conference | Computer Human Interaction (CHI) 2023 |
---|---|
Country/Territory | Germany |
City | Hamburg |
Period | 23/04/23 → 28/04/23 |
Keywords
- emotion
- dialogue act
- interaction
- laughter
- spoken dialogue system