Meet your Creator
A live theatrical performance with 16 moving-head spotlights and 16 flying robots, each equipped with LEDs and motorized mirrors.
Exploring UAVs (Unmanned Aerial Vehicles) as a means to deflect and divert light and create floating light sculptures dancing to music. Pushing the anthropomorphism of abstract forms to its limits, with the aim of creating an emotional connection between the audience and quad-rotor flying vehicles.
Now in its 22nd year, the Saatchi & Saatchi New Directors’ Showcase hit Cannes again, unveiling another presentation of new directorial talent. Marshmallow Laser Feast were the creative and technical directors of the production, which included a theatrical performance by 16 flying robots reflecting light beams on the stage.
A Marshmallow Laser Feast project.
Event concept created by
Marshmallow Laser Feast and Saatchi & Saatchi creatives Jonathan Santana & Xander Smith
Marshmallow Laser Feast
Memo Akten, Robin McNicholas, Barney Steel
Quadrotor Design & Development
Oneohtrix Point Never
Spiritualized “Shine a Light”
Typography & Design
Sam & Arthur
Thanks to Vicon for the tracking system.
(partly) made with openFrameworks
1ch HD projection, 1ch HD LCD, 2ch sound
Forms won the prestigious Golden Nica (first prize) in the Prix Ars Electronica 2013 Animation category.
Forms is an ongoing collaboration between visual artists Memo Akten and Quayola, a series of studies on human motion and its reverberations through space and time. It is inspired by the works of Eadweard Muybridge, Harold Edgerton and Étienne-Jules Marey, as well as similarly inspired modernist cubist works such as Marcel Duchamp’s “Nude Descending a Staircase, No. 2”. Rather than focusing on observable trajectories, it explores techniques of extrapolation to sculpt abstract forms, visualizing unseen relationships – power, balance, grace and conflict – between the body and its surroundings.
The project investigates athletes pushing their bodies to their extreme capabilities, their movements shaped by an evolutionary process targeting a winning performance. Traditionally a form of entertainment that in today’s society carries an overpowering competitive edge, the disciplines are deconstructed and interrogated from an exclusively mechanical and aesthetic point of view, concentrating on the invisible forces generated by, and influencing, the movement.
The source for the study is footage from the Commonwealth Games. The process of transformation from live footage to abstract forms is exposed as part of the interactive multi-screen artwork, to provide insight into the evolution of the specially crafted world in which the athletes were placed.
The video installation was commissioned by and exhibited at the National Media Museum’s ‘In The Blink of an Eye’ exhibition, 9th March – 2nd September 2012, alongside classic images by photographers such as Harold Edgerton, Eadweard Muybridge, Roger Fenton, Richard Billingham and Oscar Rejlander, as well as historic items of equipment, films and interactive displays.
Quayola and Memo Akten - Artists
Nexus Interactive Arts - Production Company
Beccy McCray - Producer
Jo Bierton - Production Manager
Matthias Kispert - Sound design
Maxime Causeret - Houdini Developer
Raffael F J Ziegler (AKA Moco) – 3D Animator
Katie Parnell - 3D Tracker
Eoin Coughlan - 3D Tracker
Mark Davies - 3D Tracking Supervisor
Commissioned by the National Media Museum for the ‘In The Blink of an Eye’ Exhibition 2012; with the support of imove, part of the Cultural Olympiad programme.
With thanks to BBC Motion Gallery and Commonwealth Games Federation
A 20m-tall abstract, interactive digital waterfall that responds to people’s movements and sounds. An artwork commissioned by Coca-Cola Latin America.
Cascada consists of realistic 3D fluid simulations – modelled with physical accuracy but rendered in a highly abstract style – pouring from five storeys high, mimicking a giant waterfall. The waterfall reacts to visitors’ movements and responds to audio input when live bands perform in the venue. Visitors are tracked using a stereoscopic camera, and they can play with the virtual flow as if it were real.
Read more on fastcompany.com
Production company: Nexus Interactive Arts
Director: Davide Quayola
Technical Director: Memo Akten
Liquid Simulation: Matt Swoboda
Executive Producer: Cedric Gairard
Creative Director: Chris O’Reilly
Lead Interactive Producer: Ulla Winkler
Documentation/Digital Producer: Tim Dillon
Account Manager: Carolina Vallejo
Event / AV Production Co: Induvallas
Editor: David Slade / Steve McInerney
On site filming: David Holguin / Paul Gallegos
System Support: Jelani John
Production Assistant: Fernanda Garcia Lopez
Nexus IT Support: Patrick Hearn
Interactive Sound Design: MOST / Vauxlab
Composers: Ivo Witteveen & Diederik Idenburg
Vauxlab Audio software: Thijs Koerselman
Coca-Cola Latin America:
CE Director: Guido Rosales
Marketing Director: Miguel Moreno Toscano
Senior Design Manager: Raphael Abreu
Senior Marketing Manager: Pierangela Sierra
Brand Manager: Emilia Villamarin
(partly) made with openFrameworks
Wombats – Techno Fan
Music video for the Wombats’ latest single “Techno Fan”. I designed and developed software (using C++ / openFrameworks) to process live footage of the band. The software analyzes the footage and, in realtime, outputs visualizations based on motion, with numerous user-controlled parameters to adjust the behaviour, look and feel, different modes etc. All images are generated from this software, then edited, composited and finished in a traditional post-production workflow.
The raw video was broken down shot by shot, and various layers were rotoscoped, separated (e.g. foreground, background, singer, drummer etc.) and rendered out as QuickTime files. (This was all done in the traditional way with AfterEffects.) Each of these shots and layers was then individually fed into my custom software. The software analyzes the video and, based on dozens of parameters, outputs a new sequence (as a sequence of PNGs). The analysis runs in near-realtime (depending on input video size), and the user can play with the parameters in realtime while the app is running, even while it is rendering the processed images to disk. So all the animations you see in the video were ‘performed’ in realtime – no keyframes were used. Lots of different ‘looks’ were created (presets) and applied to the different shots and layers. Each of these processed sequences was rendered to disk and re-composited and edited back together in Final Cut and AfterEffects to produce the final video.
This isn’t meant as a tutorial, but a quick, high-level overview of the techniques used. There are a few main phases in the processing of the footage:
- analyze the footage and find some interesting points
- create triangles from those interesting points
- display those triangles
- save image sequence to disk
Phase #1 is where all the computer vision (OpenCV) work happens. I used a variety of techniques. As you can see from the GUI screenshots, the first step is a bit of pre-processing: blur (cvSmooth), bottom threshold (clamp anything under a certain brightness to black – cvThreshold), top threshold (clamp anything above a certain brightness to white – cvThreshold), adaptive threshold (apply a localized binary threshold, clamping to white or black depending on neighbours only – cvAdaptiveThreshold), erode (shrink or ‘thin’ bright pixels – cvErode), and dilate (expand or ‘thicken’ bright pixels – cvDilate). Not all of these are always used; different shots and looks require different pre-processing.
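As a rough illustration of that pre-processing chain, here is a minimal sketch using OpenCV’s modern C++ API (the project itself used the legacy cv* functions named above); the parameter values are illustrative, not the ones used for the video:

```cpp
#include <opencv2/imgproc.hpp>

// Sketch of the pre-processing chain; 'frameGray' is an 8-bit grayscale frame.
cv::Mat preprocess(const cv::Mat& frameGray) {
    cv::Mat img;
    cv::GaussianBlur(frameGray, img, cv::Size(5, 5), 0);   // blur
    cv::threshold(img, img, 40, 255, cv::THRESH_TOZERO);   // bottom threshold: anything under 40 -> black
    img.setTo(255, img > 200);                              // top threshold: anything over 200 -> white
    cv::adaptiveThreshold(img, img, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY, 11, 2);        // localized binary threshold
    cv::erode(img, img, cv::Mat());                         // shrink / 'thin' bright pixels
    cv::dilate(img, img, cv::Mat());                        // expand / 'thicken' bright pixels
    return img;
}
```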
Next, the first method of finding interesting points was ‘finding contours’ (cvFindContours) – or ‘finding blobs’, as it is also sometimes known. This procedure basically allows you to find the ‘edges’ in the image and return them as a sequence of points – as opposed to applying, say, just a Canny or Laplacian edge detector, which will also find the edges but will return a B&W image with a black background and white edges. The latter (Canny, Laplacian etc.) finds the edges *visually*, while cvFindContours goes one step further and returns the edge *data* in a computer-readable way, i.e. an array of points, so you can parse through this array in your code and see where the edges are. (cvFindContours also returns other information about the ‘blobs’, like area, centroid etc., but that is irrelevant for this application.) Now that we have the edge data, can we triangulate it? Not yet, because it’s way too dense – a coordinate for every pixel – so some simplification is in order. Again I used a number of techniques for this. A very crude method is to simply omit every n’th point. Another method is to omit a point if the dot product of the (normalized) vector leading up to that point from the previous point, and the (normalized) vector leading away from that point to the next point, is greater than a certain threshold (that threshold being the cosine of the minimum angle you desire). In English: omit a point if it lies on a relatively straight line. Or: if we have points A, B and C, omit point B if normalize(B−A) · normalize(C−B) > cos(angle threshold). Another method is to resample along the edges at fixed distance intervals. For this I use my own MSA::Interpolator class ( http://msavisuals.com/msainterpolator ). (I think there may have been a few more techniques, but I cannot remember, as it’s been a while since I wrote this app!)
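For reference, a minimal sketch of that dot-product simplification (the names and thresholds are illustrative, not taken from the actual app):

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Drop contour points that lie on a relatively straight line: keep a point only
// if the direction change at that point exceeds the given angle.
std::vector<cv::Point2f> simplifyContour(const std::vector<cv::Point2f>& pts,
                                         float minAngleDegrees) {
    if (pts.size() < 3) return pts;
    const float cosThresh = std::cos(minAngleDegrees * CV_PI / 180.0f);
    std::vector<cv::Point2f> out;
    out.push_back(pts.front());
    for (size_t i = 1; i + 1 < pts.size(); ++i) {
        cv::Point2f a = pts[i] - pts[i - 1];   // vector leading up to the point
        cv::Point2f b = pts[i + 1] - pts[i];   // vector leading away from it
        float lenA = std::hypot(a.x, a.y), lenB = std::hypot(b.x, b.y);
        if (lenA == 0 || lenB == 0) continue;
        float cosAngle = a.dot(b) / (lenA * lenB);       // ~1 means nearly straight
        if (cosAngle < cosThresh) out.push_back(pts[i]); // keep only the 'bends'
    }
    out.push_back(pts.back());
    return out;
}
```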
Independently of the cvFindContours point-finding method, I also looked at using ‘corner detection’ (feature detection / extraction). For this I looked into three algorithms: Shi-Tomasi and Harris (both of which are implemented in OpenCV’s cvGoodFeaturesToTrack function) and SURF (using the OpenSURF library). Of these three, Shi-Tomasi gave the best visual results. I wanted a relatively large set of points that would not flicker too much (given a relatively low ‘tracking quality’). Harris was painfully slow, whereas SURF would return too few features, and adjusting its parameters to return more features just made the tracking too unstable. Once I had a set of points returned by Shi-Tomasi (cvGoodFeaturesToTrack), I tracked them with sparse Lucas-Kanade optical flow (cvCalcOpticalFlowPyrLK) and omitted any stray points. Again, a few parameters to simplify, set thresholds etc.
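A compressed sketch of that detect-then-track step, again using the modern OpenCV C++ equivalents of the functions named above (parameter values are only indicative):

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

// Detect Shi-Tomasi corners on the previous frame (if needed), then track them
// into the current frame with pyramidal Lucas-Kanade and drop stray points.
void trackFeatures(const cv::Mat& prevGray, const cv::Mat& currGray,
                   std::vector<cv::Point2f>& points) {
    if (points.empty()) {
        cv::goodFeaturesToTrack(prevGray, points,
                                400,     // max corners: a relatively large set
                                0.005,   // low quality level -> more points
                                8);      // min distance between corners
    }
    std::vector<cv::Point2f> next;
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, points, next, status, err);

    std::vector<cv::Point2f> kept;
    for (size_t i = 0; i < next.size(); ++i)
        if (status[i]) kept.push_back(next[i]);   // omit points LK lost
    points = kept;
}
```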
Phase #2 is quite straightforward. I used Delaunay triangulation (as many people have pointed out on Twitter, Flickr and Vimeo). This is a process for creating triangles from a set of arbitrary points on a plane (see http://en.wikipedia.org/wiki/Delaunay_triangulation for more info). For this I used the ‘Triangle’ library by Jonathan Shewchuk: I just feed it the set of points obtained in Phase #1, and it outputs a set of triangle data.
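As a self-contained stand-in for the Triangle library, the same points-in / triangles-out step can be sketched with OpenCV’s Subdiv2D (this is not the library the project used):

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Delaunay triangulation of a point set within a bounding rectangle.
// Each returned Vec6f holds the three vertices: x1,y1,x2,y2,x3,y3.
std::vector<cv::Vec6f> triangulate(const std::vector<cv::Point2f>& points,
                                   const cv::Rect& bounds) {
    cv::Subdiv2D subdiv(bounds);
    for (const cv::Point2f& p : points) {
        if (bounds.contains(cv::Point(cvRound(p.x), cvRound(p.y))))
            subdiv.insert(p);               // Subdiv2D rejects points outside its rect
    }
    std::vector<cv::Vec6f> triangles;
    subdiv.getTriangleList(triangles);      // may include triangles touching the
                                            // outer virtual vertices; filter if needed
    return triangles;
}
```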
Phase #3 is also quite straightforward. As you can see from the GUI shots below, a set of options for triangle outline (wireframe) thickness and transparency, triangle fill transparency, original footage transparency etc. allowed the final look to be customized. (Colors for the triangles were picked as the average color of the original footage underneath each triangle.) There were also a few more display options for how to join the triangulation together, pin it to the corners, etc.
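A rough openFrameworks-flavoured sketch of the display step; the centroid colour sample here is a cheap stand-in for the true average colour under each triangle, and the parameter names simply mirror the GUI options mentioned above:

```cpp
#include "ofMain.h"

struct Tri { ofVec2f a, b, c; };

// Draw filled triangles coloured from the footage, with an optional wireframe pass.
void drawTriangles(const std::vector<Tri>& tris, const ofPixels& footage,
                   float fillAlpha, float wireAlpha, float wireThickness) {
    for (const Tri& t : tris) {
        ofVec2f centroid = (t.a + t.b + t.c) / 3.0f;
        ofColor col = footage.getColor((int)centroid.x, (int)centroid.y);

        ofFill();
        ofSetColor(col, fillAlpha * 255);
        ofDrawTriangle(t.a.x, t.a.y, t.b.x, t.b.y, t.c.x, t.c.y);

        ofNoFill();
        ofSetLineWidth(wireThickness);
        ofSetColor(col, wireAlpha * 255);
        ofDrawTriangle(t.a.x, t.a.y, t.b.x, t.b.y, t.c.x, t.c.y);
    }
}
```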
Phase #4: the app allowed scrubbing, pausing and playback of the video while processing in (almost) realtime (it could have been realtime if optimizations were pushed, but it didn’t need to be, so I didn’t bother). The processed images were always output to the screen (so you can see what you’re doing), but could also optionally be written to disk as the video was playing and new frames were processed. This allowed us to play with and adjust the parameters while the video was playing and being saved to disk – i.e. animate the parameters in realtime and play the app like a visual instrument.
Director: Barney Steel (Found Collective)
Compositing/Post: Raoul Paulet, Barney Steel & James Medcraft
Software development: Memo Akten
Webcam Piano 2.0
Interactive sound installation
1ch HD LCD, 2ch sound, infrared camera, custom software
Webcam Piano is an ongoing project researching the interpretation of movement into sound – primarily classically influenced (though not necessarily Western), traditional-sounding music, as opposed to abstract electronic audio or soundscapes. My main aim is to create an instrument that sounds conventional – and is governed by conventional musical rules established over centuries of musical and social evolution – but is performed via a highly unconventional, yet hopefully very intuitive, approach, allowing users to get deeply involved within minutes, if not seconds, of starting to play with it.
The first incarnation of Webcam Piano came about as a little experiment in 2008, as an open-source Quartz Composer patch. This was shortly followed by an open-source Java / Processing implementation, and again quickly followed by an open-source C++ / openFrameworks implementation, which also evolved into a large-scale multi-instrument installation at the Glastonbury Festival.
Webcam Piano 2.0 uses much smarter algorithms (it was developed before affordable depth cameras like the Kinect were available) to try to understand motion and interpret how the person (or people) are moving, and what kind of sounds they might want and expect to create with their movements. It also offers a simple gestural interface to switch between three (potentially more) different musical modes and color schemes, each representing a different emotion and feel.
made with openFrameworks
Science Museum – Who am I?
I was commissioned by All of Us to provide consultancy on the design and development of new exhibitions for the Science Museum’s relaunch of their “Who Am I?” gallery, as well as taking on the role of designing and developing one of the installations. I developed the ‘Threshold’ installation, situated at the entrance of the gallery: a playful, interactive environment inviting visitors to engage with the installation whilst learning about the gallery and its key messages.
made with openFrameworks
A workshop and residency at the Mapping Festival 2010 in Geneva, Switzerland, led by 1024 Architecture. Many collaborators – artists, musicians, performers, visualists – took over various spaces at La Parfumerie to create audio-visual performances and installations. We constructed a large scaffold structure in the main hall, armed with DMX-controlled lights, microphones, cameras, sensors and projectors, converting the space into a giant audio-visual-light instrument for the audience to explore, play with and be part of, as they experienced a non-linear narrative performance. The project involved live projection mapping, motion tracking, audio-reactive visuals, piezo-reactive audio and visuals, DMX-controlled lights, rope gymnasts, acrobats and much more!
Blaze – The Streetdance Show
MSA Visuals produced and directed the visuals for the West End street dance show Blaze, directed by Anthony Van Laast (Mamma Mia!, Joseph and the Amazing Technicolor Dreamcoat, Jesus Christ Superstar).
Previews started 11th March 2009; after a two-week run at the Peacock / Sadler’s Wells in Holborn, London, UK, the show went on a tour of Holland and the UK. More info on the official site.
The project involves camera-tracking breakers and projection mapping onto an extremely intricate set designed by Es Devlin (Kanye West, Pet Shop Boys, Lady Gaga) – and it’s a touring show, no one from the MSA Visuals team tours with it, some shows are one-night stands, and the set is projected on by two projectors at very close distance, with extremely short throws and huge amounts of lens distortion. So the challenge was not only to create content for the show, but also to devise a solution that was tourable, quick and easy to set up and calibrate, yet still gave us the power to map the fine detail we desired. All powered by custom software, of course, made with openFrameworks, including a custom media server mapping the 3D content in realtime, controllable from front of house over wifi as well as synced to MIDI Time Code to be driven by the existing show control system, QLab.
We used two very short-throw (i.e. high lens distortion) projectors covering different parts of the set. The show toured many venues, with different projector positions at each venue – often arriving at a venue in the morning, setting up and playing that evening, and loading out the same night – with no time for laborious content adaptation or calibration. So all 3D mapping, calibration and geometry adjustments were done in realtime (even adjustable during live playback if need be). Each projector was fed dynamically rendered perspectives, mapped extremely precisely onto the set using the realtime 3D mapping capabilities of the Mega Super Awesome Interactive Media Server Engine.
Visuals created in collaboration with Robin McNicholas with additional design and illustrations by Jane Laurie. Huge thanks to the amazing production crew, super talented breakers and dancers, all the other creatives, and special thanks to Theo Watson and François Wunschel.
Depeche Mode – Fragile Tension
Music video for Depeche Mode’s latest single “Fragile Tension”. I designed and developed software (using C++ / openFrameworks) to process live footage of the band and dancers. The software analyzes the footage and, in realtime, outputs visualizations based on motion, with numerous user-controlled parameters to adjust the behaviour, look and feel, different modes etc. All images are generated from this software, then edited, composited and finished in a traditional post-production workflow.
The raw video was broken down shot by shot, and various layers were rotoscoped, separated and rendered out as QuickTime files. (This was all done in the traditional way with AfterEffects.) Each of these shots and layers was then individually fed into my custom software. The software analyzes the video and, based on dozens of parameters, outputs a new sequence (as a sequence of PNGs). The analysis runs in near-realtime (depending on input video size), and the user can play with the parameters in realtime while the app is running, even while it is rendering the processed images to disk. So all the animations you see in the video were ‘performed’ in realtime – no keyframes were used. Lots of different ‘looks’ were created (presets) and applied to the different shots and layers. Each of these processed sequences was rendered to disk and re-composited and edited back together in Final Cut and AfterEffects to produce the final video.
Video Commissioner: John Moule
Directors: Barney Steel (Found Collective) & Rob Chandler
Producer: Sam Brown
Software development: Memo Akten
(partly) made with openFrameworks
Interactive audio-visual installation
1ch HD LCD, 2ch sound, infrared camera, custom software
Gold is an interactive installation which explores our obsession with super-stardom, and the extravagance that accompanies it. Through a ‘magic mirror’, revel in a world of excess where you are the super-star. Shower in glittery gold, experience almost omnipotent powers as you materialize, morph and dematerialize into pure sparkling gold dust. Immortalize yourself as a shimmering golden statue, before you collapse and fade away.
The installation consists of a custom-designed and built cabinet housing a plasma screen, infrared lights, an infrared camera and a computer running custom software written in C++ with openFrameworks, OpenGL and OpenCV.
made with openFrameworks
1ch HD projection, infrared camera, custom software
Dimensions (variable): [3m x 2m] – [6m x 4m]
Body Paint by Mehmet Akten is an interactive installation – a visual instrument – allowing users to paint on a virtual canvas with their body, interpreting movement, gestures and dance into evolving compositions. Its purpose is not to create a new interface for making static paintings, but rather a natural way of creating, directing and performing moving images in realtime, with a focus on the interaction experience. What matters is not the painting created at the end, but the sensation one experiences while using it, reacting in realtime to one’s own creation as it evolves – analogous to a musical instrument: while one often plays a piano to compose and record, it is quite common to just play and improvise without any concern for recording. Every note is just for the moment, a realtime reaction coming from within, in response to your journey so far. Hence when you stop moving, the painting fades away to white, leaving only the memory, like the song you just played on the piano.
Our body is a vessel for emotional expression. When we talk, we move with our whole body. As we get excited, and more involved and passionate about what we are saying, we get more animated. Body Paint taps into this, our natural instinct to express ourselves with full body movement and dance, and combines it with our subconscious desire to create – even more so, our desire to create something beautiful.
The installation has been shown in various events, galleries and festivals across the world including the Decode exhibition at the Victoria & Albert Museum, London UK; Holon Museum in Tel Aviv, and Garage Center for Contemporary Culture, Moscow, Russia.
made with openFrameworks
1ch HD video, 2ch sound
Created with custom software that tracks the motion of the dancers and generates the visuals: abstract layers containing subtle hints of human forms and motion.
When the clip starts, you probably won’t recognize a human shape at first, but your eyes and mind will be searching, seeking mental connections between abstract shapes and recognizable patterns, like looking for shapes in clouds. You’ll be questioning what you see: is that it? Is it sitting? Is it crouching? Is it kneeling? Then all of a sudden, it’ll be crystal clear. Then you’ll try to keep it in focus, following it as it moves around, tracking each limb, using the motion to construct an image of the parts you can’t see. It’ll fade in and out of clarity. At times you’ll be clinging onto just the tip of its hand swinging round, trying to identify any other recognizable parts. You might see another arm or leg and grab onto it, fighting not to lose it. Then it’ll be crystal clear again, and then all of a sudden vanish, literally in a puff of smoke, and your eyes will start searching again.
Credits & Acknowledgements
“Reincarnation” is an off-shoot film born while creating visuals for The Rambert Dance Company and flat-e’s “Iatrogenesis” performance at the Queen Elizabeth Hall, South Bank, London, UK. This film is not representative of flat-e & Rambert’s “Iatrogenesis”; it is a standalone piece that grew out of working on that project.
made with openFrameworks
My Secret Heart
1ch 6K (or 6ch XGA) projection, 8ch sound
Dimensions: 6m (diameter) x 3m (height)
My Secret Heart is a music and film installation and performance commissioned by Streetwise Opera, with music composed by Mira Calix and sound design by David Sheppard. The piece is inspired by Allegri’s 17th-century choral work Miserere Mei, a piece so protected by the Vatican that they put an embargo on it. Working with video artists Flat-e, we created a film to accompany the 48-minute performance, as well as versions for an installation and a short film.
Streetwise Opera are a charity who use music as a tool to help people who have experienced homelessness move forward in their lives. They run a weekly music programme, resident in 10 homeless centres around the country, and also stage an annual production which gives their performers the chance to star in quality shows with high expectations, no compromise and no patronising. The voices you hear in the music, and the people you see in the film, are from Streetwise workshops around the UK. Over 100 Streetwise performers also sang at the My Secret Heart premiere at the Royal Festival Hall in December 2008. My Secret Heart is about their story.
The film has an abstract narrative derived from individual conversations with each of the Streetwise performers. It is a direct emotional response to their stories combined with the haunting beauty of Mira Calix’s composition. Instead of focusing on a specific plot, the film embarks on a complex journey through various states of emotion, starting from pre-birth through birth, curiosity, exploration, excitement, playfulness; through to fear, anxiety and isolation. While it maintains a relatively dark and eerie mood overall, intertwined with the feelings of desperation are strong elements of hope.
The process – digital puppetry
The visuals were designed and created primarily with custom software written in C++/openFrameworks, with some Quartz Composer elements, rendered AfterEffects sequences and live action footage. The custom C++ app is audio-reactive and user-interactive, allowing the visuals to be ‘performed’ live with full control over the behaviour of the virtual inhabitants of the cylindrical aquarium-like rig.
Over the course of a few months, and after many conversations with Mira Calix and listening to the soundtrack over and over and over again, we decided roughly what the visuals should do and what kind of behaviours we wanted them to perform at specific points in the piece. After a lengthy coding period, I had an application that, when you ran it, did… nothing – but it had the potential to do everything I wanted. The application was a live performance tool with full control over its environment, as well as audio playback and control, and an input recording / playback system.
Once the application was complete, I sat down with Robin from flat-e and pressed ‘play’ on the app – this started the music playback and the physics recorder. While the music was playing we could control the inhabitants of the virtual world with many sliders, knobs, touchpads, the mouse etc. As the music played, we would respond in realtime by sending messages to make them move gracefully or erratically, flock together, swim apart, get excited, slow down, speed up; telling them to die, slowly start twitching, come alive, swim to the surface, sink to the bottom and so on. Since our actions were being recorded, we could later go back, scrub to certain positions in the song, and overdub and mix in new behaviours we might have missed in the first round. In the end we found that we had to do little to no editing: the best overall performance was the one we recorded in a single 50-minute take.
The sensation of performing and recording the visuals was that of actually directing a film with thousands of virtual actors, commanding an army, digital puppetry – an approach I’m sure I will be revisiting in the very near future.
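The input recorder at the heart of this workflow can be sketched very simply: every control change is stored with its song timestamp, so a take can be scrubbed, replayed and overdubbed later. This is only an illustrative reconstruction, not the actual code:

```cpp
#include <map>
#include <string>
#include <vector>

struct InputEvent {
    float songTime;        // seconds into the soundtrack
    std::string parameter; // e.g. "flock/speed" (illustrative name)
    float value;
};

class InputRecorder {
public:
    // Store a control change as it happens (events arrive in time order).
    void record(float songTime, const std::string& parameter, float value) {
        events.push_back({songTime, parameter, value});
    }

    // Last recorded value of every parameter at (or before) songTime, so playback
    // can scrub to any point in the song and overdub from there.
    std::map<std::string, float> stateAt(float songTime) const {
        std::map<std::string, float> state;
        for (const InputEvent& e : events) {
            if (e.songTime > songTime) break;
            state[e.parameter] = e.value;
        }
        return state;
    }

private:
    std::vector<InputEvent> events;
};
```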
Made with openFrameworks.
Pi is an interactive audio visual installation commissioned by Trash City of the Glastonbury Festival to be shown at the festival in June 2008.
Working with fellow tech-heads at Seeper, we set out to take a 50ft tent and convert it into a giant audio/visual instrument – all of the music, audio and visuals inside the tent are generated and controlled purely by the movements of the occupants.
The space was divided into six zones. Two of the zones were purely visual; this was the waiting area, where people could dance, chill, run about and do what they pleased. Two cameras tracked their movement and applied it to the fluid/particle visuals, so people could ‘throw’ plasma balls at each other, or send colorful waves propagating around the space. The other four zones had the same visual interactions, but were in addition connected to an audio system. Each of these four zones was allocated an instrument type (drums/beats/percussion, pads, bass, strings etc.), and movement within these zones would also trigger notes or beats, depending on precisely where in the zone the movement occurred. A lot of effort went into designing the sounds and notes triggered, to make sure the end result would almost always sound pleasant and not descend into complete cacophony.
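One simple way to picture that “almost always pleasant” constraint is to quantize every pad in a zone’s grid to a fixed scale, so whatever combination of pads fires, the notes stay consonant. The scale and mapping below are purely illustrative; the real sound design was done by hand per zone:

```cpp
#include <vector>

// C minor pentatonic degrees within one octave (semitones above the root).
const std::vector<int> kScale = {0, 3, 5, 7, 10};

// Map a pad index in a zone's grid to a MIDI note quantized to the scale.
int padToMidiNote(int padIndex, int rootNote /* e.g. 48 = C3 */) {
    int octave = padIndex / (int)kScale.size();
    int degree = padIndex % (int)kScale.size();
    return rootNote + octave * 12 + kScale[degree];
}
```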
The visuals start with the camera analysis – without motion there are no visuals. Six cameras are fed into an 8-core Mac Pro and are all analysed in parallel with optical flow motion estimation, the analysis for each camera feed running in its own thread. Once the camera analysis is complete for all cameras, the velocity vectors are stitched together and fed into a fluid simulation. Any movement the user makes causes ‘colored dye’ to be injected into the fluid simulation, with the speed and direction of the movement inherited by the dye. These movements also create ‘currents’ in the fluid, allowing the user to create swirls and vortices with circular movements (e.g. waving arms around), or to send waves of coloured plasma rippling across the room with a simple push of the body. These currents also allow the user to control particles, as described below.
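A compressed sketch of that pipeline, assuming a hypothetical FluidSolver interface and using OpenCV’s Farneback dense optical flow as a stand-in for whichever motion estimation the original app used:

```cpp
#include <opencv2/video/tracking.hpp>
#include <cmath>
#include <thread>
#include <vector>

struct FluidSolver {
    // Hypothetical stand-in: inject dye and a force into the simulation at (x, y).
    void addDyeAndForce(float x, float y, float vx, float vy, float amount) {
        (void)x; (void)y; (void)vx; (void)vy; (void)amount; // real solver goes here
    }
};

// Dense optical flow for one camera (runs in its own thread).
void analyseCamera(const cv::Mat& prevGray, const cv::Mat& currGray, cv::Mat& flowOut) {
    cv::calcOpticalFlowFarneback(prevGray, currGray, flowOut,
                                 0.5, 3, 15, 3, 5, 1.2, 0);
}

void updateFluid(const std::vector<cv::Mat>& prevFrames,
                 const std::vector<cv::Mat>& currFrames,
                 FluidSolver& fluid) {
    std::vector<cv::Mat> flows(currFrames.size());
    std::vector<std::thread> workers;
    for (size_t i = 0; i < currFrames.size(); ++i)          // one thread per camera
        workers.emplace_back(analyseCamera, std::cref(prevFrames[i]),
                             std::cref(currFrames[i]), std::ref(flows[i]));
    for (auto& w : workers) w.join();

    // Stitch the per-camera flow fields side by side and inject into the fluid.
    for (size_t cam = 0; cam < flows.size(); ++cam) {
        const cv::Mat& flow = flows[cam];
        for (int y = 0; y < flow.rows; y += 4) {
            for (int x = 0; x < flow.cols; x += 4) {
                cv::Point2f v = flow.at<cv::Point2f>(y, x);
                float speed = std::hypot(v.x, v.y);
                if (speed > 1.0f)                            // only inject real motion
                    fluid.addDyeAndForce(cam * flow.cols + x, y, v.x, v.y, speed);
            }
        }
    }
}
```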
Any movement faster than a certain threshold also creates ‘glitter’. The speed of the movement controls the number of glitter particles created (larger and faster movements create more), and thousands of glitter particles can be swimming around at any one time. Once created, each glitter particle is independent and swims around with a basic AI – but is always overpowered by the current of the fluid, so the user can herd the swarms of glitter using their arms, legs, body etc. and push them towards one another.
Even larger movements – e.g. swiftly swinging hands or head around – create ‘energy orbs’. These orbs cut through the fluid quite quickly, but can still be affected by the currents allowing the user to control the direction and speed of the energy orbs. This allows users to bounce the orbs back and forth between each other playing imaginary tennis or football.
The biggest challenge in creating an application of this scale was structuring and optimizing it so that it could analyze up to six camera feeds and run at a large enough resolution to cover the entire tent. A multiple-computer approach was out of the question due to the complications of synchronising a fluid simulation across multiple PCs, so the decision was made to go with a multi-threaded app running on an 8-core Mac Pro. The motion estimation was split into six threads (one for each camera), the fluid solver ran in its own thread, and the particles (glitter and orbs) ran in another thread – all of these threads ran in parallel. Once all threads had finished processing their data for one frame, they exchanged their results ready for the next frame (camera motion fed into the fluid solver ready for the next frame, fluid currents fed into the particles ready for the next frame etc.). This approach allowed everything to run in parallel at smooth framerates of 30fps.
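The frame-boundary hand-off can be sketched like this, with hypothetical placeholder types standing in for the real camera, fluid and particle subsystems: each one runs in parallel on the previous frame’s results, then everything is exchanged before the next frame starts.

```cpp
#include <future>

struct MotionField {};   // stitched optical-flow vectors (placeholder)
struct FluidState {};    // fluid solver output (placeholder)
struct ParticleState {}; // glitter & orb state (placeholder)

MotionField   analyseCameras()                    { return {}; } // camera threads would live here
FluidState    solveFluid(const MotionField&)      { return {}; }
ParticleState updateParticles(const FluidState&)  { return {}; }

void runFrame(MotionField& motion, FluidState& fluid, ParticleState& particles) {
    // All subsystems run in parallel, each consuming the *previous* frame's results.
    auto fMotion    = std::async(std::launch::async, analyseCameras);
    auto fFluid     = std::async(std::launch::async, solveFluid, std::cref(motion));
    auto fParticles = std::async(std::launch::async, updateParticles, std::cref(fluid));

    // Frame boundary: wait for everything, then exchange results for the next frame.
    MotionField   newMotion    = fMotion.get();
    FluidState    newFluid     = fFluid.get();
    ParticleState newParticles = fParticles.get();
    motion = newMotion;  fluid = newFluid;  particles = newParticles;
}
```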
In addition to controlling the visuals, motion is also what triggers the sound. The users’ motion is analyzed and broken down into a spatial grid, with imaginary pads corresponding to different notes. This information is sent over a local network to another computer running Ableton Live, where all the samples and loops are stored and triggered. The trigger information is sent using OSC and is mapped on the audio computer from OSC to MIDI using OSCulator – which originally wasn’t designed to handle such complex polyphony, but working closely with the author we received a version that suited our needs perfectly.
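The OSC leg of that chain is straightforward with openFrameworks’ ofxOsc addon; a minimal sketch (the address pattern, IP and port here are made up for illustration):

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

ofxOscSender sender;

void setupOsc() {
    sender.setup("192.168.1.10", 8000);   // audio machine running OSCulator + Ableton Live
}

// Send one pad trigger: which zone, which pad within its grid, and how strong.
void sendPadTrigger(int zone, int pad, float velocity) {
    ofxOscMessage m;
    m.setAddress("/zone/" + ofToString(zone) + "/pad");
    m.addIntArg(pad);
    m.addFloatArg(velocity);
    sender.sendMessage(m, false);
}
```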
Made with openFrameworks.