Scan patterns on visual scenes predict sentence production

Moreno Coco, Frank Keller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Most everyday tasks involve multiple modalities, which raises the question of how
the processing of these modalities is coordinated by the cognitive system. In this
paper, we focus on the coordination of visual attention and linguistic processing
during speaking. Previous research has shown that objects in a visual scene are fixated before they are mentioned, leading us to hypothesize that the scan pattern of a participant can be used to predict what they will say. We test this hypothesis using a data set of cued scene descriptions of photo-realistic scenes. We demonstrate that similar scan patterns are correlated with similar sentences, within and between visual scenes; and that this correlation holds for three phases of the language production process (target identification, sentence planning, and speaking). We also present a simple algorithm that uses scan patterns to accurately predict associated sentences by utilizing similarity-based retrieval.
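The abstract only sketches the prediction algorithm. The snippet below is a minimal illustration of similarity-based retrieval, not the authors' implementation: it assumes scan patterns are represented as sequences of fixated object labels, uses Python's difflib.SequenceMatcher as a placeholder similarity measure, and the data and names are hypothetical.

```python
from difflib import SequenceMatcher


def scan_pattern_similarity(pattern_a, pattern_b):
    """Similarity between two scan patterns, each given as a sequence of
    fixated object labels (e.g. ["man", "suitcase", "clock"]). The ratio
    of matching subsequences stands in for the paper's own measure."""
    return SequenceMatcher(None, pattern_a, pattern_b).ratio()


def predict_sentence(query_pattern, training_data):
    """Similarity-based retrieval: return the sentence paired with the
    training scan pattern most similar to the query pattern.
    training_data is a list of (scan_pattern, sentence) pairs."""
    _, best_sentence = max(
        training_data,
        key=lambda item: scan_pattern_similarity(query_pattern, item[0]),
    )
    return best_sentence


# Hypothetical usage with invented scene descriptions:
training = [
    (["man", "suitcase", "clock"], "the man is carrying a suitcase"),
    (["woman", "dog", "bench"], "a woman is walking her dog"),
]
print(predict_sentence(["man", "clock", "suitcase"], training))
```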
Original language: English
Title of host publication: Proceedings of the 32nd Annual Conference of the Cognitive Science Society
Pages: 1204–1223
Number of pages: 19
Publication status: Published - 2010
