Inattentional Blindness in Visual Search

Matt Chapman-Rounds, Christopher Lucas, Frank Keller

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract / Description of output

Models of visual saliency normally belong to one of two camps: models such as Experience Guided Search (E-GS), which emphasize top-down guidance based on task features, and models such as Attention as Information Maximisation (AIM), which emphasize the role of bottom-up saliency. In this paper, we show that E-GS and AIM are structurally similar and can be unified to create a general model of visual search which includes a generic prior over potential non-task related objects. We demonstrate that this model displays inattentional blindness, and that blindness can be modulated by adjusting the relative precisions of several terms within the model. At the same time, our model correctly accounts for a series of classical visual search results.
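The abstract's central mechanism, combining top-down task guidance and bottom-up saliency under precision weights, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's actual equations: `saliency`, the `tau_*` precision weights, and the log-evidence inputs are all assumptions made for illustration. Lowering the precision on the non-task prior down-weights evidence for unexpected objects, which is the flavour of inattentional blindness the abstract describes.

```python
import math

def saliency(bottom_up, top_down, prior_unexpected,
             tau_bu=1.0, tau_td=1.0, tau_prior=1.0):
    """Illustrative precision-weighted combination of log-evidence terms.

    Each list holds per-location log-evidence: bottom-up saliency,
    top-down task relevance, and a prior over non-task objects.
    The tau_* arguments are precision weights; shrinking tau_prior
    suppresses attention to unexpected (non-task) objects.
    (A sketch only -- not the model from the paper.)
    """
    combined = [tau_bu * b + tau_td * t + tau_prior * p
                for b, t, p in zip(bottom_up, top_down, prior_unexpected)]
    # Softmax turns combined log-evidence into an attention distribution.
    m = max(combined)
    exps = [math.exp(c - m) for c in combined]
    z = sum(exps)
    return [e / z for e in exps]

# Location 1 holds a salient but unexpected object; with tau_prior = 0
# it draws less attention than with tau_prior = 1.
bu = [0.0, 2.0, 0.0]
td = [2.0, 0.0, 0.0]
pr = [0.0, 2.0, 0.0]
attn_full = saliency(bu, td, pr, tau_prior=1.0)
attn_blind = saliency(bu, td, pr, tau_prior=0.0)
```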
Original language: English
Title of host publication: Proceedings of the 41st Annual Conference of the Cognitive Science Society
Subtitle of host publication: Montreal 2019
Editors: Ashok Goel, Colleen Seifert, Christian Freksa
Publisher: Cognitive Science Society
Number of pages: 7
ISBN (Print): 0-9911967-7-5
Publication status: Published - 24 Jul 2019
Event: 41st Annual Meeting of the Cognitive Science Society - Palais des Congrès de Montréal, Montréal, Canada
Duration: 24 Jul 2019 – 27 Jul 2019
Conference number: 41


Conference: 41st Annual Meeting of the Cognitive Science Society
Abbreviated title: COGSCI 2019

Keywords / Materials (for Non-textual outputs)

  • Inattentional Blindness
  • Conjunction Search
  • Visual Attention
  • Bayesian Modelling
  • Predictive Processing

