Generating Subsequent Reference in Shared Visual Scenes: Computation vs Re-Use

Jette Viethen, Robert Dale, Markus Guhe

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Traditional computational approaches to referring expression generation operate in a deliberate manner, choosing the attributes to be included on the basis of their ability to distinguish the intended referent from its distractors. However, work in psycholinguistics suggests that speakers align their referring expressions with those used previously in the discourse, implying less deliberate choice and more subconscious re-use. This raises the question of which is the more accurate characterisation of what people do. Using a corpus of dialogues containing 16,358 referring expressions, we explore this question via the generation of subsequent references in shared visual scenes. We use a machine learning approach to referring expression generation and demonstrate that incorporating features that correspond to the computational tradition does not match human referring behaviour as well as using features corresponding to the process of alignment. The results support the view that the traditional model of referring expression generation widely assumed in work on natural language generation may not in fact be correct; our analysis may also help explain the oft-observed redundancy found in human-produced referring expressions.
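
The abstract contrasts two models of reference production. The Python sketch below illustrates that contrast under stated assumptions: a distractor-elimination routine in the spirit of Dale and Reiter's Incremental Algorithm, set against a naive alignment routine that simply re-uses a referent's most recent prior description. The scene objects, attribute names, and preference order are hypothetical illustrations, not drawn from the paper's corpus or model; note how the re-used description retains a now-redundant attribute, the kind of redundancy the abstract suggests alignment may explain.

# Minimal sketch of the two views of reference generation the abstract
# describes. All names, scene objects, and the preference order are
# illustrative assumptions, not taken from the paper.

PREFERENCE_ORDER = ["type", "colour", "size"]  # assumed attribute preference

def incremental_description(referent, distractors):
    # The "deliberate" computational view, in the spirit of Dale & Reiter's
    # Incremental Algorithm: greedily include attributes that rule out
    # distractors until the referent is uniquely distinguished. (The full
    # algorithm also always includes the head noun; omitted for brevity.)
    description = {}
    remaining = list(distractors)
    for attr in PREFERENCE_ORDER:
        value = referent.get(attr)
        if value is None:
            continue
        survivors = [d for d in remaining if d.get(attr) == value]
        if len(survivors) < len(remaining):  # attribute does distinguishing work
            description[attr] = value
            remaining = survivors
        if not remaining:
            break
    return description

def aligned_description(referent, prior_mentions):
    # The "alignment" view: re-use the attributes of the most recent prior
    # mention of this referent, even if some are now redundant.
    for ref_id, attrs in reversed(prior_mentions):
        if ref_id == referent["id"]:
            return dict(attrs)
    return {}

# Example scene: refer to a small red ball among two distractors.
referent = {"id": "b1", "type": "ball", "colour": "red", "size": "small"}
distractors = [
    {"type": "cube", "colour": "red", "size": "small"},
    {"type": "ball", "colour": "blue", "size": "large"},
]
prior_mentions = [("b1", {"type": "ball", "colour": "red", "size": "small"})]

print(incremental_description(referent, distractors))
# {'type': 'ball', 'colour': 'red'} -- 'small' is never needed
print(aligned_description(referent, prior_mentions))
# {'type': 'ball', 'colour': 'red', 'size': 'small'} -- redundant 'small' re-used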
Original language: English
Title of host publication: Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), 27-31 July 2011, John McIntyre Conference Centre, Edinburgh, UK; a meeting of SIGDAT, a Special Interest Group of the ACL
Publisher: Association for Computational Linguistics (ACL)
Pages: 1158-1167
Number of pages: 10
Publication status: Published - 2011
