The influence of image and object features on fixation selection in scene viewing: A generalized linear mixed model approach

Antje Nuthmann, Wolfgang Einhäuser

Research output: Contribution to conference › Abstract

Abstract

Which image characteristics predict where people fixate when encoding natural images? To answer this question, we introduce a new analysis approach utilizing generalized linear mixed models (GLMMs). Our method allows for directly describing the relationship between continuous feature values and fixation probability, and for assessing each feature's unique contribution to fixation selection. Key to our approach is an a priori parcellation of the scene. First, we use a grid to obtain homogeneous and exhaustive image coverage. In addition to image features, the GLMMs included a predictor that captured the viewing bias towards the centre of the scene. Edge density, clutter, and the number of homogeneous segments in a grid cell independently predicted whether image patches were fixated or not; luminance and contrast had no independent effect. Second, we extend the approach to an object-based parcellation to investigate the relative prioritization of objects by their features. Objects that were fixated exactly once during first-pass viewing were not only larger and closer to the scene centre, but also visually more salient. Whether or not an object was refixated depended on its size and salience, but not on its distance to the image centre. Such prioritization among objects provides evidence for an alternative role of salience.
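As a rough illustration of the grid-based analysis, the sketch below fits a mixed logistic regression predicting whether a grid cell was fixated from standardized feature values plus a central-bias predictor, with a random intercept per subject. This is not the authors' original pipeline; the column names (fixated, edge_density, clutter, n_segments, luminance, contrast, dist_center, subject), the simulated data, and the choice of statsmodels' variational-Bayes mixed GLM are all assumptions made for the example.

```python
# Minimal sketch of a grid-based GLMM analysis (hypothetical column names;
# not the authors' code, which may well have used R's lme4::glmer).
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Simulated data: one row per subject x grid cell.
# 'fixated' is 1 if the cell received at least one fixation.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "fixated": rng.integers(0, 2, n),
    "edge_density": rng.normal(size=n),   # standardized feature values
    "clutter": rng.normal(size=n),
    "n_segments": rng.normal(size=n),
    "luminance": rng.normal(size=n),
    "contrast": rng.normal(size=n),
    "dist_center": rng.normal(size=n),    # central-bias predictor
    "subject": rng.integers(1, 21, n).astype(str),
})

# Fixed effects: image features plus central bias, entered jointly so each
# coefficient reflects a feature's unique contribution.
# Variance component: random intercept per subject.
model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ edge_density + clutter + n_segments"
    " + luminance + contrast + dist_center",
    {"subject": "0 + C(subject)"},
    df,
)
result = model.fit_vb()    # variational Bayes fit
print(result.summary())
```

In practice one would also consider crossed random effects for scenes and a maximum-likelihood fit; the variational approximation is used here only to keep the sketch self-contained in Python.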
Original language: English
Pages: 61
Publication status: Published - Aug 2015
Event: 18th European Conference on Eye Movements - Vienna, Austria
Duration: 16 Aug 2015 - 21 Aug 2015

Conference

Conference: 18th European Conference on Eye Movements
Country/Territory: Austria
City: Vienna
Period: 16/08/15 - 21/08/15

