Multilevel Auditory Displays for Mobile Eyes-free Location-based Interaction

Yolanda Vazquez-Alvarez, Matthew P. Aylett, Stephen A. Brewster, Rocio von Jungenfeld, Antti Virolainen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper explores the use of multilevel auditory displays to enable eyes-free mobile interaction with location-based information in a conceptual art exhibition space. Multilevel auditory displays enable user interaction with concentrated areas of information. However, it is necessary to consider how to present the auditory streams without overloading the user. We present an initial study in which a top-level exocentric sonification layer was used to advertise information present in a gallery-like space. Then, in a secondary interactive layer, three conditions were evaluated that varied in the presentation (sequential versus simultaneous) and spatialisation (non-spatialised versus egocentric) of multiple auditory sources. Results show that 1) participants spent significantly more time interacting with spatialised displays, 2) there was no evidence that a switch from an exocentric to an egocentric display increased workload or lowered satisfaction, and 3) there was no evidence that simultaneous presentation of spatialised Earcons in the secondary display increased workload.
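The abstract describes egocentric spatialisation only at a high level. As an illustration of the core step such a display relies on, the minimal Python sketch below computes each audio source's azimuth relative to the listener's heading and maps it to constant-power stereo gains. All names, the planar coordinate frame, and the stereo-panning simplification are assumptions made here for clarity; this is not the authors' implementation, and a deployed system would more plausibly use HRTF-based rendering.

```python
import math

def relative_azimuth(user_heading_deg, user_pos, source_pos):
    """Bearing of a source relative to the listener's facing direction.

    user_pos and source_pos are (x, y) tuples in the same planar frame;
    user_heading_deg is the heading the listener faces (0 = frame "north").
    Returns degrees in (-180, 180]; negative means the source is to the left.
    """
    dx = source_pos[0] - user_pos[0]
    dy = source_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dx, dy))  # absolute bearing in the frame
    return (bearing - user_heading_deg + 180.0) % 360.0 - 180.0

def stereo_gains(rel_azimuth_deg):
    """Constant-power stereo panning from a relative azimuth.

    Maps [-90, 90] degrees onto a left/right pan; sources behind the
    listener are folded to the nearest side (a crude stand-in for HRTFs).
    """
    az = max(-90.0, min(90.0, rel_azimuth_deg))
    pan = (az + 90.0) / 180.0                 # 0 = hard left, 1 = hard right
    theta = pan * math.pi / 2.0
    return math.cos(theta), math.sin(theta)   # (left_gain, right_gain)

# Example: a listener at the origin facing "north", with an Earcon
# source about 2 m away at 45 degrees to the right.
rel = relative_azimuth(0.0, (0.0, 0.0), (1.414, 1.414))
left, right = stereo_gains(rel)
print(f"relative azimuth: {rel:.1f} deg, gains L={left:.2f} R={right:.2f}")
```

Constant-power panning (gains on a quarter circle, so left² + right² = 1) keeps the perceived loudness of an Earcon roughly stable as the user turns, which matters when several spatialised sources play simultaneously.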
Original language: English
Title of host publication: CHI '14 Extended Abstracts on Human Factors in Computing Systems
Place of publication: New York, NY, USA
Publisher: ACM
Pages: 1567-1572
Number of pages: 6
ISBN (Print): 978-1-4503-2474-8
Publication status: Published - 2014

Keywords

  • auditory displays, eyes-free interaction, spatial audio
