The Sound Of The Wind (2015)
Multimedia Installation
This is an audiovisual composition about the sound of the wind, presented to the public as an interactive installation. It uses live internet feeds to drive a non-repeating audiovisual experience based on weather sonification and visualisation.
How it works:
Weather data is accessed online via a native Processing library (onformative 2015), which reads live internet feeds directly from an online weather service. This data is routed internally within Processing for dynamic visualisation and sent via the OSC protocol to a group of Max/MSP patches for data scaling and sonification. Data is later fed back from Max into Processing to reinforce a symbiotic generation of audiovisuals.
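The scaling stage can be illustrated with a minimal sketch (in Python, for legibility; the installation itself performs this inside Max). The value ranges and the wind-speed-to-cutoff mapping below are hypothetical, chosen only to show the idea of clamping a raw feed value and mapping it linearly into a synthesis parameter range:

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a raw weather value into a synthesis parameter range."""
    # Clamp first, so an out-of-range feed value cannot push a
    # synthesis parameter outside its safe bounds.
    value = max(in_min, min(in_max, value))
    span = (value - in_min) / (in_max - in_min)
    return out_min + span * (out_max - out_min)

# Hypothetical mapping: wind speed in km/h -> low-pass cutoff in Hz.
cutoff = scale(25.0, 0.0, 120.0, 200.0, 4000.0)
```

In Max this kind of mapping is typically done with objects such as `scale` or `zmap`; the point here is only the clamp-then-interpolate logic.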
OSC data packages carry information about a particular location determined by a WOEID value (e.g. a city anywhere in the world), including geographical latitude and longitude, weather condition, wind direction, wind speed, visibility, temperature, local pressure and so on. These packages are sent and received via the Jamoma (2015) and oscP5 (2015) libraries in Max and Processing respectively. Interactivity is supported by a graphical user interface through which the audience can select a city, filtering the data to that location, wait, and hear how its weather sounds.
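The wire format of such a package follows the OSC specification: a null-terminated address pattern padded to four bytes, a type-tag string, then big-endian arguments. A minimal stdlib sketch (the address `/weather/wind` and the latitude/longitude/wind-speed payload are hypothetical, not the installation's actual address space, which is handled by Jamoma and oscP5):

```python
import struct

def osc_string(s: str) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes, per the OSC spec."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message whose arguments are all float32."""
    tags = "," + "f" * len(floats)          # e.g. ",fff" for three floats
    body = osc_string(address) + osc_string(tags)
    for f in floats:
        body += struct.pack(">f", f)        # big-endian float32
    return body

# Hypothetical packet for one city's feed: latitude, longitude, wind speed.
packet = osc_message("/weather/wind", 53.48, -2.24, 18.0)
```

In practice oscP5 and Jamoma do this encoding and decoding; the sketch only shows what travels over UDP between Processing and Max.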
The sonic output is based on two different implementations. The first follows Andy Farnell’s (2010, pp. 471-481) model for a DSP-based re-creation of the sound of the wind: a dynamic wind sound generator whose pattern constantly changes according to a local wind speed factor, using noise, filters and surround spatialisation. The patch emulates several types of wind sound according to their cause, for example the sound of wind passing through resonant spaces such as pipes and doorways, the sound caused by large irregular objects with rough surfaces, the sound of whistling wires and so on.
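The core of that family of models is filtered noise whose brightness tracks wind speed. A very rough sketch under stated assumptions (not Farnell's actual patch, which runs as DSP in Max and adds spatialisation and several resonant variants): white noise through a one-pole low-pass whose cutoff follows a hypothetical wind-speed mapping, slowly modulated to suggest gusting.

```python
import math
import random

def wind_block(wind_speed_kmh, n=4096, sr=44100, seed=1):
    """Sketch of a wind texture: noise -> one-pole low-pass, with the
    cutoff driven by wind speed and a slow sinusoidal 'gust' modulation."""
    rng = random.Random(seed)
    # Hypothetical mapping: calm air -> dark noise, strong wind -> brighter.
    base_cutoff = 100.0 + wind_speed_kmh * 30.0
    y = 0.0
    out = []
    for i in range(n):
        gust = 1.0 + 0.5 * math.sin(2 * math.pi * 0.5 * i / sr)  # 0.5 Hz gusting
        cutoff = base_cutoff * gust
        a = math.exp(-2 * math.pi * cutoff / sr)  # one-pole smoothing coefficient
        x = rng.uniform(-1.0, 1.0)                # white noise source
        y = (1 - a) * x + a * y                   # low-pass filter step
        out.append(y)
    return out
```

Raising the wind speed raises the cutoff, so the output gets audibly (and measurably) rougher, which is the behaviour the installation drives from the live feed.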
The second implementation is a six-voice polyphonic synthesiser featuring wavetable synthesis and frequency modulation; it is used to sonify a mix of the data generated by the first implementation and the data received directly from the internet feeds. The synthesis engine is based on the following: wavetable lookup synthesis (Floss 2012, Chapter 20; Puckette 2007, Chapter 2; Rolfe 2014), multiple-carrier frequency modulation (FM) (Puckette 2007, Chapter 5; Truax 2014) and exponential envelope generators (McCulloch 2014). In addition, two further DSP-based sound-generation processes are applied: (1) vocoding of the wind sound and the synthesiser output via Tomczak’s (2015) 128 Band Vocoder M4L device; (2) reverberation via Christian Kleine’s Verbotron reverb (Bloom 2015).
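The three named techniques combine as follows; a single-voice sketch, assuming a sine wavetable, a 2:1 modulator ratio, a modulation index of 3 and a 100 ms decay (all illustrative values, not the installation's actual voice parameters):

```python
import math

TABLE_SIZE = 1024
SINE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def lookup(table, phase):
    """Linear-interpolated wavetable lookup; phase wraps in [0, 1)."""
    pos = (phase % 1.0) * len(table)
    i = int(pos)
    frac = pos - i
    j = (i + 1) % len(table)
    return table[i] * (1 - frac) + table[j] * frac

def fm_note(freq, ratio=2.0, index=3.0, dur=0.1, sr=44100):
    """One FM voice: a modulator oscillator offsets the carrier frequency,
    and an exponential decay envelope shapes the amplitude."""
    n = int(dur * sr)
    car_phase = mod_phase = 0.0
    out = []
    for i in range(n):
        env = math.exp(-5.0 * i / n)                 # exponential envelope
        mod = lookup(SINE, mod_phase) * index * freq  # frequency deviation
        out.append(env * lookup(SINE, car_phase))
        car_phase += (freq + mod) / sr
        mod_phase += (freq * ratio) / sr
    return out
```

Six such voices running in parallel, with `freq`, `ratio` and `index` driven by the scaled weather data, approximate the structure described above.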
Data visualisation is performed within Processing via arrays of particles (Poulson 2015): the particles’ direction is fed directly from the weather service, whilst their size and colour are driven by data fed back from the sonification process. Additionally, a textual description of the internet feed is presented at the bottom of the installation’s main screen.
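One update step of such a system might look like the following sketch (plain Python rather than a Processing sketch; the particular size and hue couplings to the audio amplitude are hypothetical, and positions live in a wrapped unit square for simplicity):

```python
import math

def step_particles(particles, wind_dir_deg, wind_speed, amp, dt=1.0 / 60):
    """Advance particles along the reported wind direction; size and
    colour follow the sonification amplitude (illustrative coupling)."""
    vx = wind_speed * math.cos(math.radians(wind_dir_deg))
    vy = wind_speed * math.sin(math.radians(wind_dir_deg))
    for p in particles:
        p["x"] = (p["x"] + vx * dt) % 1.0   # wrap around unit screen space
        p["y"] = (p["y"] + vy * dt) % 1.0
        p["size"] = 2.0 + 10.0 * amp        # audio-driven size
        p["hue"] = (amp * 360.0) % 360.0    # audio-driven colour
    return particles
```

In the installation this runs per frame in Processing's `draw()` loop, with `wind_dir_deg` and `wind_speed` taken from the feed and `amp` from the OSC data returned by Max.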
A production diary can be found here.