I Know Your Feelings Before You Do: Predicting Future Affective Reactions in Human-Computer Dialogue

Yuanchao Li, Koji Inoue, Leimin Tian, Changzeng Fu, Carlos Ishi, Hiroshi Ishiguro, Tatsuya Kawahara, Catherine Lai

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Current Spoken Dialogue Systems (SDSs) often act as passive listeners that respond only after receiving user speech. To achieve human-like dialogue, we propose a novel future prediction architecture that allows an SDS to anticipate the user's future affective reactions to its current behaviors before the user speaks. We investigate two scenarios: speech and laughter. For speech, we propose to predict the user's future emotion from its temporal relationship with the system's current emotion and its causal relationship with the system's current Dialogue Act (DA). For laughter, we propose to predict the occurrence and type of the user's laughter from the system's laughter behaviors in the current turn. Preliminary analysis of human-robot dialogue demonstrated synchronicity in the emotions and laughter displayed by the human and robot, as well as DA-emotion causality in their dialogue, verifying that our architecture can contribute to the development of an anticipatory SDS.
Original language: English
Title of host publication: The ACM CHI Conference on Human Factors in Computing Systems
Publisher: ACM (Association for Computing Machinery)
Number of pages: 8
ISBN (Electronic): 9781450394222
Publication status: Published - 19 Apr 2023
Event: Computer Human Interaction (CHI) 2023 - Hamburg, Germany
Duration: 23 Apr 2023 - 28 Apr 2023


Conference: Computer Human Interaction (CHI) 2023

Keywords

  • emotion
  • dialogue act
  • interaction
  • laughter
  • spoken dialogue system


