Synthesizing cooperative conversation

Catherine Pelachaud, Justine Cassell, Norman I. Badler, Mark Steedman, Scott Prevost, Matthew Stone

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators.
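The abstract outlines a pipeline: a dialogue planner produces the text and intonation of each utterance, and those outputs, together with the speaker/listener relationship, drive separate generators for facial expression, lip motion, eye gaze, head motion, and arm gesture. As a loose illustration of that data flow only (the paper's actual modules, rules, and representations are not reproduced here), the following Python sketch uses hypothetical names and toy rules throughout:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    """Output of a hypothetical dialogue planner: text plus per-word intonation."""
    speaker: str
    listener: str
    words: list      # tokenized text of the utterance
    accents: list    # coarse intonation mark per word, e.g. "H*" or None

def plan_dialogue():
    """Hypothetical planner: produces the utterances, their text, and intonation."""
    return [
        Utterance("A", "B",
                  ["Can", "you", "help", "me", "with", "this", "account?"],
                  [None, None, "H*", None, None, None, "H*"]),
    ]

def facial_expressions(utt):
    # Illustrative rule only: raise the eyebrows on accented words.
    return [("eyebrow_raise", i) for i, accent in enumerate(utt.accents) if accent]

def eye_gaze(utt):
    # Illustrative rule only: the speaker looks toward the listener at the end
    # of the utterance, a common turn-giving cue.
    return [("gaze_at", utt.listener, len(utt.words) - 1)]

def arm_gestures(utt):
    # Illustrative rule only: a beat gesture aligned with each accented word.
    return [("beat", i) for i, accent in enumerate(utt.accents) if accent]

def animate(utterances):
    # The speaker/listener relationship, the text, and the intonation drive
    # the individual generators; here they just emit printable schedules.
    for utt in utterances:
        print(f"{utt.speaker} -> {utt.listener}: {' '.join(utt.words)}")
        print("  face:", facial_expressions(utt))
        print("  gaze:", eye_gaze(utt))
        print("  arms:", arm_gestures(utt))

if __name__ == "__main__":
    animate(plan_dialogue())
```

Running the sketch prints one schedule per channel for each planned utterance; in the system the abstract describes, such schedules would instead drive synchronized speech synthesis and character animation.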
Original language: English
Title of host publication: Multimodal Human-Computer Communication
Subtitle of host publication: Systems, Techniques, and Experiments
Publisher: Springer
Pages: 68-88
Number of pages: 21
ISBN (Electronic): 978-3-540-69764-0
ISBN (Print): 978-3-540-64380-7
DOIs
Publication status: Published - 1995

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer Berlin Heidelberg
Volume: 1374
ISSN (Print): 0302-9743
