We present an empirical approach to adaptively selecting a tutoring system's remediation strategy based on an annotated corpus of human-human tutorial dialogues. We are interested in the remediation selection problem: generating the best remediation strategy given a diagnosis for an incorrect answer and the current problem-solving context. By relating the use of individual remediation strategies to their success in varying contexts, we can empirically extract and implement tutoring rules for the content planner of an intelligent tutoring system. We describe a methodology for analyzing a tutoring corpus and using the resulting data to inform a content planning model.
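The core idea of relating strategy use to success per context can be sketched as follows. This is a minimal illustration, not the paper's actual method: the corpus records, context labels, and strategy names below are all hypothetical, and the selection rule (highest empirical success rate per context) is one simple instantiation of "empirically extracting tutoring rules".

```python
from collections import defaultdict

# Hypothetical annotated corpus: (diagnosis/context, strategy tried, outcome).
corpus = [
    ("wrong-sign", "hint", True),
    ("wrong-sign", "hint", False),
    ("wrong-sign", "bottom-out", True),
    ("wrong-sign", "bottom-out", True),
    ("misread-problem", "restate", True),
    ("misread-problem", "hint", False),
]

def extract_rules(records):
    """Map each context to the strategy with the highest empirical success rate."""
    # context -> strategy -> [successes, trials]
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for context, strategy, success in records:
        cell = stats[context][strategy]
        cell[0] += int(success)
        cell[1] += 1
    rules = {}
    for context, by_strategy in stats.items():
        rules[context] = max(
            by_strategy, key=lambda s: by_strategy[s][0] / by_strategy[s][1]
        )
    return rules

rules = extract_rules(corpus)
print(rules)  # -> {'wrong-sign': 'bottom-out', 'misread-problem': 'restate'}
```

A content planner could then consult such a rule table at remediation time, falling back to a default strategy for contexts unseen in the corpus.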
Title of host publication: Proceedings of the 11th European Workshop on Natural Language Generation (ENLG), Schloss Dagstuhl, Germany
Number of pages: 8
Publication status: Published - 2007