Model-Based Target Sonification in Small Screen Devices: Perception and Action

Parisa Eslambolchilar, Andrew Crossan, Roderick Murray-Smith, Sara Dalzel-Job, Frank Pollick

Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed)

Abstract

In this work, we investigate the use of audio and haptic feedback to augment the display of a mobile device controlled by tilt input. We address three questions: How do people begin searching an unfamiliar space? What patterns do users follow, and which techniques do they employ to accomplish the experimental task? What effect does a prediction of the future state in the audio space, based on a model of the human operator, have on subjects’ behaviour? In a pilot study, we examined subjects’ navigation of a state space containing seven randomly placed audio sources, displayed via audio and vibrotactile modalities. In the main study, we compared the efficiency of different forms of audio feedback alone. Both experiments ran on a Pocket PC instrumented with an accelerometer and a headset, and we measured selection accuracy, exploration density, and the orientation of approach to each target. The results quantify the changes brought by predictive, or “quickened”, sonified displays in mobile, gestural interaction, and highlight subjects’ search patterns and the effects of the independent variables, both individually and in combination, on navigation patterns.
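To make the “quickened” display idea concrete: quickening, in the manual-control sense, means presenting the user with an extrapolated future state rather than the current one, so the audio cue leads the tilt-driven motion. The sketch below is a minimal illustration of that idea, not the chapter’s implementation; the look-ahead time `tau`, the function names, and the stereo pan/volume mapping are all illustrative assumptions.

```python
import math

def quickened_estimate(position, velocity, tau=0.5):
    """First-order quickening: predict the future state by linear
    extrapolation, x_hat(t + tau) = x(t) + tau * dx/dt.
    `tau` is a hypothetical look-ahead time in seconds."""
    return position + tau * velocity

def pan_and_volume(listener, target, max_dist=1.0):
    """Map the listener-to-target offset onto simple stereo cues:
    pan from the horizontal offset, volume from distance.
    These mappings are illustrative, not taken from the chapter."""
    dx = target[0] - listener[0]
    dy = target[1] - listener[1]
    dist = math.hypot(dx, dy)
    pan = max(-1.0, min(1.0, dx / max_dist))    # -1 = hard left, +1 = hard right
    volume = max(0.0, 1.0 - dist / max_dist)    # louder as the target nears
    return pan, volume

# Sonify the *predicted* position rather than the current one, so the
# audio feedback anticipates where the tilt gesture is heading.
pos, vel = (0.2, 0.1), (0.4, -0.1)   # made-up state from tilt input
predicted = tuple(quickened_estimate(p, v) for p, v in zip(pos, vel))
print(pan_and_volume(predicted, target=(0.5, 0.0)))
```

Under these assumptions, a user tilting toward a source hears the cue sharpen slightly before arrival, which is the behavioural change the experiments quantify.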
Original language: English
Title of host publication: Handbook of Research on User Interface Design and Evaluation for Mobile Technology
Publisher: IGI Global
Chapter: 29
Pages: 478–506
Number of pages: 29
ISBN (Electronic): 9781599048727
ISBN (Print): 9781599048710
DOIs
Publication status: Published - 2008

