w[i]nd: An interactive and generative audiovisual composition/installation in procedural and orchestrated space

Research output: Non-textual form › Software

Abstract

w[i]nd is an interactive and generative audio-visual composition/installation in procedural and orchestrated space. The project aims to be a rich and immersive visual and sound art experience in a first-person Virtual Environment (VE). The VE consists of an array of audio-visual exhibits of images of wind instruments created with generative image synthesis techniques, coupled with samples of physically modelled synthesis and processed acoustic recordings that exhibit a range of characteristics of augmentation and abstraction. In addition to the displayed exhibits, there are a number of very large-scale three-dimensional models of conventional wind instruments emitting untreated sustained tones typical of these instruments, which reciprocally contrast with and comment on the synthesised sounds and images. First-person motion, underlying algorithmic parameters and triggers on exhibits are used as a mechanic to influence the sonic density and character of the experience. Among other things, the work explores open sonic form, timbre and character in non-linear interactive experiences and contributes to research into the use of sound and sounding objects as an approach to presence and immersion, orientation, navigation, interaction and “ergodic musicking” (Oliva 2019) in VEs.
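
The abstract does not describe the implementation of this mechanic, but the kind of mapping it outlines, in which listener motion and exhibit triggers modulate sonic density, could be sketched roughly as follows. This is a minimal, hypothetical illustration in Python; every name and value here (Exhibit, sonic_density, falloff, trigger_boost) is an assumption for illustration only and does not come from the work itself.

```python
# Hypothetical sketch: listener proximity and exhibit triggers mapped to a sonic density value.
# None of these names or constants come from w[i]nd; they only illustrate the kind of
# first-person-motion-to-sound mapping the abstract describes.

from dataclasses import dataclass
import math


@dataclass
class Exhibit:
    position: tuple[float, float, float]  # exhibit location in the VE
    triggered: bool = False               # set when the visitor activates the exhibit


def sonic_density(listener_pos, exhibits, falloff=20.0, trigger_boost=0.5) -> float:
    """Combine proximity to each exhibit with its trigger state into a 0..1 density value."""
    density = 0.0
    for ex in exhibits:
        proximity = max(0.0, 1.0 - math.dist(listener_pos, ex.position) / falloff)
        density += proximity * (1.0 + (trigger_boost if ex.triggered else 0.0))
    return min(1.0, density)


# Example: a listener close to one triggered exhibit and far from an untriggered one.
exhibits = [Exhibit((0.0, 0.0, 0.0), triggered=True), Exhibit((50.0, 0.0, 0.0))]
print(sonic_density((2.0, 0.0, 1.0), exhibits))
```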
Original language: English
Media of output: Online
Publication status: Published - 20 Apr 2020
