Glastonbury Pi (2008)

 

The first psychedelic fluid+particles interaction prototype

 

Camera to OSC + MIDI interaction tests


(Please see Webcam Piano 2.0 for the evolution of this)

 

The two concepts strung together

 

Just a reminder: these aren't random trippy visuals set to a piano soundtrack… I'm playing a virtual piano by waving my hands and fingers around in the air (which also moves the fluid and particles around).

 

Pi is an interactive audio-visual installation commissioned by Trash City of the Glastonbury Festival, shown at the festival in June 2008.

Working with fellow techheads at Seeper, we set out to take a 50ft tent and convert it into a giant audio/visual instrument – all of the music, audio and visuals inside the tent are generated and controlled purely by the movements of its occupants.

The space was divided into six zones. Two of the zones were purely visual – this was the waiting area, where people could dance, chill, run about and do whatever they pleased. Two cameras tracked their movement and applied it to the fluid/particle visuals, so people could 'throw' plasma balls at each other, or send colourful waves propagating around the space. The other four zones had the same visual interactions, but were also connected to an audio system. Each of these four zones was allocated an instrument type (drums/beats/percussion, pads, bass, strings etc.), and movement within a zone would also trigger notes or beats, depending on precisely where in the zone the movement occurred. A lot of effort went into designing the sounds and the notes they trigger, to make sure the end result would almost always sound pleasant rather than a complete cacophony.
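The installation's actual sample sets and scales aren't documented here, so the following is only a rough sketch of this kind of mapping: a zone divided into a grid of imaginary pads, with each pad snapped to a pentatonic scale so that overlapping triggers stay consonant. The grid size, base note and scale choice are all assumptions.

```cpp
// Sketch: map motion inside a zone to 'safe' notes (hypothetical values).
// Each zone is split into an imaginary grid of pads; a pad triggers a note
// from a pentatonic scale, so simultaneous triggers still sound pleasant.
#include <algorithm>
#include <cstdio>

struct Zone {
    int gridCols = 8, gridRows = 4;   // assumed pad layout per zone
    int baseNote = 48;                // assumed MIDI base note (C3)
};

// C minor pentatonic offsets - an assumption, chosen because any combination
// of these notes avoids harsh dissonance.
static const int SCALE[] = { 0, 3, 5, 7, 10 };

int padToNote(const Zone& z, float x, float y) {   // x, y normalised 0..1
    int col = std::min(int(x * z.gridCols), z.gridCols - 1);
    int row = std::min(int(y * z.gridRows), z.gridRows - 1);
    int padIndex = row * z.gridCols + col;
    int degree   = padIndex % 5;                   // pick a scale degree
    int octave   = (padIndex / 5) % 3;             // spread over 3 octaves
    return z.baseNote + octave * 12 + SCALE[degree];
}

int main() {
    Zone pads;
    // e.g. motion detected at the centre-left of the zone:
    std::printf("trigger note %d\n", padToNote(pads, 0.2f, 0.5f));
}
```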

Technical information:

The visuals start with the camera analysis – without motion there are no visuals. Six cameras are fed into an 8-core Mac Pro and are all analysed in parallel with optical flow motion estimation, the analysis for each camera feed running in its own thread. Once the camera analysis is complete for all cameras, the velocity vectors are stitched together and fed into a fluid simulation. Any movement the user makes causes 'coloured dye' to be injected into the fluid simulation, with the speed and direction of the movement inherited by the dye. These movements also create 'currents' in the fluid, allowing the user to create swirls and vortices with circular movements (e.g. waving arms around), or simply send waves of coloured plasma rippling across the room with a push of the body. These currents also allow the user to control particles, as described below.
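A rough sketch of the per-camera step might look like this in OpenCV terms. The FluidSolver interface below is a hypothetical stand-in for the actual solver used, and the sampling step, noise threshold and dye colour are assumptions.

```cpp
// Sketch: one camera's optical flow feeding a shared fluid simulation.
#include <opencv2/opencv.hpp>
#include <cmath>

struct FluidSolver {    // hypothetical stand-in API, not the actual solver
    void addForce(float /*x*/, float /*y*/, float /*vx*/, float /*vy*/) {}
    void addDye  (float /*x*/, float /*y*/, float /*r*/, float /*g*/, float /*b*/) {}
};

// Analyse one camera frame and inject its motion into the fluid sim.
void processCamera(const cv::Mat& prevGray, const cv::Mat& gray,
                   FluidSolver& fluid) {
    cv::Mat flow;   // per-pixel (vx, vy) velocity vectors
    cv::calcOpticalFlowFarneback(prevGray, gray, flow,
                                 0.5, 3, 15, 3, 5, 1.2, 0);

    const int step = 8;   // sample the flow field sparsely
    for (int y = 0; y < flow.rows; y += step) {
        for (int x = 0; x < flow.cols; x += step) {
            cv::Point2f v = flow.at<cv::Point2f>(y, x);
            if (std::hypot(v.x, v.y) < 1.0f) continue;   // ignore noise
            float nx = float(x) / flow.cols;             // normalise position
            float ny = float(y) / flow.rows;
            fluid.addForce(nx, ny, v.x, v.y);            // movement -> current
            fluid.addDye(nx, ny, 0.2f, 0.6f, 1.0f);      // movement -> dye
        }
    }
}
```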

Any movement faster than a certain threshold also creates 'glitter'. The speed of the movement controls how much glitter is created (larger and faster movements create more), and thousands of glitter particles can be swimming around at any one time. Once created, each glitter particle is independent and swims around with a basic AI – but it is always overpowered by the currents of the fluid, so the user can herd the swarms of glitter using their arms, legs, body etc. and push them towards one another.
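A minimal sketch of that spawning rule, with the threshold and the speed-to-count scaling as assumptions:

```cpp
// Sketch: spawn glitter from motion above a speed threshold.
#include <vector>
#include <algorithm>

struct Spawn { float x, y; };   // normalised position of a new glitter particle

std::vector<Spawn> spawnGlitter(float x, float y, float motionSpeed) {
    const float kThreshold = 2.0f;   // minimum speed before glitter appears
    const float kPerSpeed  = 5.0f;   // particles spawned per unit of speed
    const int   kMaxBurst  = 200;    // cap a single burst

    std::vector<Spawn> burst;
    if (motionSpeed < kThreshold) return burst;   // too slow: no glitter

    int count = std::min(int((motionSpeed - kThreshold) * kPerSpeed), kMaxBurst);
    for (int i = 0; i < count; ++i)
        burst.push_back({x, y});     // all spawned at the motion location
    return burst;
}
```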

Even larger movements – e.g. swiftly swinging hands or head around – create 'energy orbs'. These orbs cut through the fluid quite quickly, but can still be affected by the currents, allowing the user to control their direction and speed. This lets users bounce the orbs back and forth between each other, playing imaginary tennis or football.
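Both particle types can be sketched as a single update rule that blends each particle's own velocity with the fluid velocity sampled at its position. The blend weights (glitter almost fully dominated by the currents, orbs only nudged by them) and the wander behaviour are assumptions, not the installation's actual particle code.

```cpp
// Sketch: glitter and energy orbs as particles steered by the fluid field.
#include <cstdlib>

struct Vec2 { float x = 0, y = 0; };

// Stand-in for sampling the fluid velocity field at a normalised position.
Vec2 sampleFluidVelocity(const Vec2&) { return {0.05f, 0.0f}; }  // constant drift

struct Particle {
    Vec2  pos, vel;
    float fluidInfluence;   // 0..1: how strongly the currents take over

    void update(float dt) {
        // Basic 'swim' AI: a small random wander.
        vel.x += 0.1f * ((rand() / float(RAND_MAX)) - 0.5f);
        vel.y += 0.1f * ((rand() / float(RAND_MAX)) - 0.5f);

        // Blend own velocity with the fluid current at this position.
        Vec2 f = sampleFluidVelocity(pos);
        vel.x = (1 - fluidInfluence) * vel.x + fluidInfluence * f.x;
        vel.y = (1 - fluidInfluence) * vel.y + fluidInfluence * f.y;

        pos.x += vel.x * dt;
        pos.y += vel.y * dt;
    }
};

Particle makeGlitter(Vec2 p)        { return { p, {}, 0.9f }; } // herded by currents
Particle makeOrb(Vec2 p, Vec2 v)    { return { p, v, 0.2f }; }  // cuts through, still nudged
```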

The biggest challenge in creating an application of this scale was structuring and optimising it so that it could analyse up to six camera feeds and run at a large enough resolution to cover the entire tent. A multiple-computer approach was out of the question due to the complications of synchronising a fluid simulation across multiple machines, so the decision was made to go with a multi-threaded app running on an 8-core Mac Pro. The motion estimation was split into six threads (one per camera), the fluid solver ran in its own thread, and the particles (glitter and orbs) ran in another thread – all of these threads ran in parallel. Once all threads had finished processing their data for one frame, they exchanged their results ready for the next frame (camera motion fed into the fluid solver ready for the next frame, fluid currents fed into the particles ready for the next frame, etc.). This approach allowed everything to run in parallel at a smooth 30fps.
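A sketch of that frame-synchronised pipeline is below. The original app predates C++20, so std::barrier and std::jthread here are modern stand-ins for whatever threading primitives were actually used; the work functions are placeholders.

```cpp
// Sketch: 6 camera threads + fluid thread + particle thread, meeting at a
// per-frame barrier to exchange results before the next frame starts.
#include <barrier>
#include <thread>
#include <vector>

constexpr int kCameras = 6;
constexpr int kFrames  = 300;   // run a fixed number of frames in this sketch

void analyseCamera(int /*cameraId*/) { /* optical flow for one camera (placeholder) */ }
void stepFluid()                     { /* advance fluid sim using last frame's flow (placeholder) */ }
void stepParticles()                 { /* advance glitter/orbs using last frame's currents (placeholder) */ }

int main() {
    std::barrier<> frameSync(kCameras + 2);   // all workers meet once per frame

    std::vector<std::jthread> workers;
    for (int i = 0; i < kCameras; ++i)
        workers.emplace_back([i, &frameSync] {
            for (int f = 0; f < kFrames; ++f) {
                analyseCamera(i);
                frameSync.arrive_and_wait();
            }
        });
    workers.emplace_back([&frameSync] {
        for (int f = 0; f < kFrames; ++f) { stepFluid(); frameSync.arrive_and_wait(); }
    });
    workers.emplace_back([&frameSync] {
        for (int f = 0; f < kFrames; ++f) { stepParticles(); frameSync.arrive_and_wait(); }
    });
    // std::jthread joins automatically when `workers` goes out of scope.
}
```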

In addition to controlling the visuals, motion is also what triggers the sound. The users' motion is analysed and broken down into a spatial grid, with imaginary pads corresponding to different notes. This information is sent over a local network to another computer running Ableton Live, where all the samples and loops are stored and triggered. The trigger information is sent using OSC and is mapped from OSC to MIDI on the audio computer using OSCulator – which wasn't originally designed to handle such complex polyphony, but working closely with its author we received a version that suited our needs perfectly.
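In openFrameworks terms, the sending side can be sketched with the ofxOsc addon. The host, port, address pattern and note values below are assumptions; OSCulator then maps whatever OSC messages arrive into MIDI for Ableton Live.

```cpp
// Sketch: sending a note trigger from the vision computer to the audio
// computer over OSC, using the ofxOsc addon. Host, port, address pattern
// and note/velocity values are illustrative assumptions.
#include "ofxOsc.h"

class AudioLink {
public:
    void setup() {
        // Audio computer running OSCulator + Ableton Live on the local network.
        sender.setup("192.168.0.10", 9000);
    }

    // Called when motion is detected on a pad of one of the instrument zones.
    void triggerNote(int zone, int note, float velocity) {
        ofxOscMessage m;
        m.setAddress("/zone/" + ofToString(zone) + "/note");
        m.addIntArg(note);         // e.g. a MIDI note number from the pad grid
        m.addFloatArg(velocity);   // e.g. derived from the amount of motion
        sender.sendMessage(m, false);   // OSCulator maps this to MIDI
    }

private:
    ofxOscSender sender;
};
```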

Acknowledgements

Made with openFrameworks.

Related Dates

2008 Jun 25-29,
Exhibition, Glastonbury Pi,
Glastonbury Festival, UK

Related keywords

c++, computer vision, cylindrical projection, fluid simulation, generative music, generative visuals, glastonbury festival, glsl, infrared, installation, interactive, midi, motion tracking, open source, opencv, openframeworks, optical flow, osc, particles, processing, quartz composer