Corpus-based generation of head and eyebrow motion for an embodied conversational agent

MaryEllen Foster, Jon Oberlander

Research output: Contribution to journal › Article › peer-review

Abstract / Description of output

Humans are known to use a wide range of non-verbal behaviour while speaking. Generating naturalistic embodied speech for an artificial agent is therefore an application where techniques that draw directly on recorded human motions can be helpful. We present a system that uses corpus-based selection strategies to specify the head and eyebrow motion of an animated talking head. We first describe how a domain-specific corpus of facial displays was recorded and annotated, and outline the regularities that were found in the data. We then present two different methods of selecting motions for the talking head based on the corpus data: one that chooses the majority option in all cases, and one that makes a weighted choice among all of the options. We compare these methods to each other in two ways: through cross-validation against the corpus, and by asking human judges to rate the output. The results of the two evaluation studies differ: the cross-validation study favoured the majority strategy, while the human judges preferred schedules generated using weighted choice. The judges in the second study also showed a preference for the original corpus data over the output of either of the generation strategies.
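The two selection strategies described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the display labels and counts below are hypothetical, standing in for the per-context frequencies of head and eyebrow displays found in the annotated corpus.

```python
import random

# Hypothetical frequency counts of facial displays observed in the corpus
# for one generation context (the paper's actual contexts and annotation
# scheme are described in the article itself).
counts = {"nod": 12, "raise_brows": 5, "none": 3}

def majority_choice(counts):
    # Majority strategy: always output the single most frequent display.
    return max(counts, key=counts.get)

def weighted_choice(counts, rng=random):
    # Weighted strategy: sample a display with probability proportional
    # to its corpus frequency, so minority options also appear sometimes.
    displays = list(counts)
    weights = [counts[d] for d in displays]
    return rng.choices(displays, weights=weights, k=1)[0]
```

Under this sketch, the majority strategy is deterministic (here it would always produce `"nod"`), while the weighted strategy reproduces the corpus distribution in the long run, which may explain why human judges found its output more natural.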
Original language: English
Pages (from-to): 305-323
Number of pages: 19
Journal: Language Resources and Evaluation
Volume: 41
Issue number: 3-4
DOIs
Publication status: Published - 1 Dec 2007

Keywords / Materials (for Non-textual outputs)

  • Data-driven generation
  • Embodied conversational agents
  • Evaluation of generated output
  • Multimodal corpora
