Discourse Generation for Instructional Applications: Identifying and Exploiting Relevant Prior Explanations

Johanna Moore, Benoît Lemaire, James A Rosenblum

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract / Description of output

To reap the benefits of natural language interaction, tutorial systems must be endowed with the properties that make human natural-language interaction so effective. One striking feature of naturally occurring interactions is that human tutors and students freely refer to the context created by prior explanations. In contrast, computer-generated utterances that do not draw on the previous discourse often seem awkward and unnatural and may even be incoherent. The explanations produced by such systems are frustrating to students because they repeat the same information over and over again. Perhaps more critical is that, by not referring to prior explanations, computer-based tutors are not pointing out similarities between problem-solving situations and therefore may be missing out on opportunities to help students form generalizations. In this article, we discuss several observations from an analysis of human-human tutorial interactions and provide examples of the ways in which tutors and students refer to previous explanations. We describe how we have used a case-based reasoning algorithm to enable a computational system to identify prior explanations that may be relevant to the explanation currently being generated. We then describe two computational systems that can exploit this knowledge about relevant prior explanations in constructing their subsequent explanations.
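The abstract describes a case-based reasoning step that retrieves prior explanations relevant to the one being generated. As a rough illustration only (the names, feature representation, and Jaccard similarity below are assumptions for this sketch, not the system described in the article), such retrieval can be framed as scoring stored explanation cases against the current situation's features:

```python
# Hypothetical sketch of case-based retrieval of prior explanations.
# Explanation, features, and the similarity measure are illustrative
# assumptions, not the representation used in the article.
from dataclasses import dataclass


@dataclass
class Explanation:
    topic: str
    features: frozenset  # situation features the explanation covered


def similarity(a: frozenset, b: frozenset) -> float:
    """Jaccard overlap between two feature sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def retrieve_relevant(prior, current_features, threshold=0.5):
    """Return prior explanations whose situation sufficiently
    overlaps the current one, most similar first."""
    scored = [(similarity(e.features, current_features), e) for e in prior]
    scored.sort(key=lambda pair: -pair[0])
    return [e for score, e in scored if score >= threshold]


prior = [
    Explanation("loop-invariant", frozenset({"while-loop", "termination"})),
    Explanation("recursion", frozenset({"base-case", "recursive-call"})),
]
hits = retrieve_relevant(prior, frozenset({"while-loop", "termination", "invariant"}))
```

Here `hits` would contain only the loop-invariant case, which a tutor could then reference ("recall the earlier example about loop termination...") rather than re-explaining from scratch.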
Original language: English
Title of host publication: The Journal of the Learning Sciences
Place of Publication: Mahwah, New Jersey
Number of pages: 45
Publication status: Published - 1996


