Animated conversation: rule-based generation of facial expression, gesture and spoken intonation for multiple conversational agents

Justine Cassell, Catherine Pelachaud, Norman I. Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, Matthew Stone

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We describe an implemented system that automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversation is created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive generators for facial expression, lip motion, eye gaze, head motion, and arm gestures. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout, we use examples from an actual synthesized, fully animated conversation.
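
The abstract outlines a pipeline in which a dialogue planner produces utterance text plus intonation, and those outputs drive separate generators for each nonverbal modality on a shared timeline. Below is a minimal sketch of that idea, assuming hypothetical interfaces: Utterance, plan_dialogue, and the generator functions are illustrative stand-ins, not the authors' actual system.

```python
# Illustrative sketch only: names and interfaces below are hypothetical stand-ins
# for the pipeline described in the abstract, not the authors' actual code.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Utterance:
    speaker: str
    listener: str
    text: str
    # Indices of words carrying pitch accents, as decided by the dialogue planner.
    accented_words: List[int] = field(default_factory=list)


def plan_dialogue() -> List[Utterance]:
    """Stand-in dialogue planner: returns utterance text plus intonation marks."""
    return [
        Utterance("AgentA", "AgentB", "Can you help me with this form?",
                  accented_words=[2, 6]),
        Utterance("AgentB", "AgentA", "Yes, I can help you.",
                  accented_words=[2]),
    ]


def gesture_events(utt: Utterance) -> List[Tuple[int, str]]:
    """Hand/arm gesture generator: place gesture strokes on accented words."""
    words = utt.text.split()
    return [(i, f"gesture stroke on '{words[i]}'") for i in utt.accented_words]


def gaze_events(utt: Utterance) -> List[Tuple[int, str]]:
    """Eye-gaze generator driven by the speaker/listener relationship."""
    last = len(utt.text.split()) - 1
    return [(0, f"{utt.speaker} looks away while planning the utterance"),
            (last, f"{utt.speaker} looks at {utt.listener} at end of turn")]


def animate(dialogue: List[Utterance]) -> None:
    """Merge per-modality events into one word-aligned schedule per utterance."""
    for utt in dialogue:
        schedule = sorted(gesture_events(utt) + gaze_events(utt))
        print(f"{utt.speaker}: {utt.text}")
        for word_index, action in schedule:
            print(f"  word {word_index}: {action}")


if __name__ == "__main__":
    animate(plan_dialogue())
```

The point mirrored from the abstract is that every modality generator is scheduled off the same utterance-level timeline produced by the planner, which is what keeps speech, intonation, gaze, and gesture synchronized.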
Original language: English
Title of host publication: SIGGRAPH '94 Proceedings of the 21st annual conference on Computer graphics and interactive techniques
Publisher: ACM
Pages: 413-420
Number of pages: 8
ISBN (Print): 0-89791-667-0
DOIs
Publication status: Published - 1994
