Computer Vision

Dec 02 15:46

Dynamic projection mapping with camera / head tracking - Sony PlayStation Video Store

Three videos I co-directed with my fellow Marshmallow Laser Feast members Barney Steel and Robin McNicholas have just been released.

For the launch of the Sony PlayStation Video Store, our job was to bring a living room alive with hints at various Hollywood blockbuster franchises - so we decided to push projection mapping to a new level. We projection mapped a living room space with camera (or head) tracking and dynamic perspective. All content is realtime 3D; the camera (or head) is tracked so the 3D perspective updates in realtime to match the viewer's point of view. Add to this real props, live puppetry, interaction between the virtual and physical worlds, a mixture of hi-tech and lo-tech live special effects, a little bit of pyrotechnics and a lot of late nights.

The project is driven by custom software based on the Unity Engine, in which all the realtime 3D content was animated, triggered and mapped onto the geometry. Camera (and head) tracking was done on the PlayStation 3 using PlayStation Move controllers and PlayStation Eye cameras, and communicated to our Unity application via ethernet. Six Optoma EW610ST projectors were used to cover the area, and the open-source Syphon Framework was used to communicate between our Unity-based 3D and mapping software and our custom native Cocoa application that outputs to all the projectors.
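At its core, the head-tracked perspective is just an off-axis view frustum recomputed every frame from the tracked head position relative to the projection surface. As a rough illustration only (the production system did this inside Unity, fed by the PS3 tracking data), here is a minimal C++ sketch of that maths in the spirit of Kooima's generalized perspective projection; the screen corners and head position are made-up numbers.

#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// glFrustum-style near-plane extents for an eye at 'pe' looking at a screen with
// corners pa (lower-left), pb (lower-right), pc (upper-left), all in metres.
void offAxisFrustum(Vec3 pa, Vec3 pb, Vec3 pc, Vec3 pe, float nearZ,
                    float& l, float& r, float& b, float& t) {
    Vec3 vr = normalize(pb - pa);          // screen right axis
    Vec3 vu = normalize(pc - pa);          // screen up axis
    Vec3 vn = normalize(cross(vr, vu));    // screen normal, pointing towards the eye
    Vec3 va = pa - pe, vb = pb - pe, vc = pc - pe;   // eye -> corner vectors
    float d = -dot(va, vn);                // eye distance to the screen plane
    l = dot(vr, va) * nearZ / d;
    r = dot(vr, vb) * nearZ / d;
    b = dot(vu, va) * nearZ / d;
    t = dot(vu, vc) * nearZ / d;
}

int main() {
    // hypothetical 4m x 2.5m projection wall, head tracked ~2m away, slightly right of centre
    Vec3 pa{-2, 0, 0}, pb{2, 0, 0}, pc{-2, 2.5f, 0}, pe{0.3f, 1.6f, 2};
    float l, r, b, t;
    offAxisFrustum(pa, pb, pc, pe, 0.1f, l, r, b, t);
    printf("frustum: l=%.3f r=%.3f b=%.3f t=%.3f near=0.1\n", l, r, b, t);
    // feed these into glFrustum / your engine's projection matrix every frame
    return 0;
}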

Full credits of the amazingly talented team, more information and full write-up coming soon, in the meantime I hope you enjoy the videos!



Credits

Agency: Studio Output
Ian Hambleton - Producer

Client: Sony PlayStation

Produced and Directed by MarshmallowLaserFeast

Mehmet Akten (MSA Visuals) - Director
Robin McNicholas (Flat-E) - Director
Barney Steel (The Found Collective) - Director / Producer

Ian Walker (The Found Collective) - Post Production Producer
Nadine Allen - Production Assistant
Kavya Ramalu - Production Secretary
Thomas English - Camera Man / AR
Richard Bradbury - Focus Puller
Celia Clare-Moodie - Camera Assistant
Jonathan Stow - Assistant Director
Philip Davies - Digital Imaging Technician
Jools Peacock - 3D Artist
Dirk Van Dijk - 3D Artist
Tobias Barendt - Programmer
Raffael Ziegler - 3D Artist
Alex Trattler - 3D Artist
Neil Lambeth - Art Department Director
Elise Colledge - Art Department Assistant
Oli van der Vijver - Set Construction
Robert Pybus - Assistant Director / Puppeteer
Gareth Cooper - Actor
Kimberly Morrison - Puppeteer
Rhimes Lecointe - Puppeteer
Jen Bailey-Rae - Puppeteer
Tashan - Puppeteer
Ralph Fuller - Puppeteer
Frederick Fuller - Puppeteer
Jane Laurie - Puppeteer
Sandra Ciampone – Photography / Puppeteer
Daniel McNicholas White – Runner
Maddie McNicholas - Runner
Nick White - Runner
Rosa Rooney - Runner

Jul 20 11:02

The Wombats "Techno Fan"

I worked with the Found Collective on this Wombats music video. I designed and developed software (using C++ / openFrameworks) to process live footage of the band. All images seen below are generated from this software.

Background 

In 2010 the label had originally commissioned someone else for the video (I'm not sure who), who filmed and edited a live performance of the band. The label (or band, or commissioner) then got in touch with Barney Steel from the Found Collective (www.thefoundcollective.com) to "spice up the footage", having seen the Depeche Mode "Fragile Tension" video which we worked on together ( http://www.msavisuals.com/depeche_mode_fragile_tension ). Barney in turn got in touch with me to create an app / system / workflow which could "spice up the footage". In short, we received a locked-down edit of band footage, which we were tasked with "applying a process and making it pretty".

 

Workflow 

We received a locked edit of the band performing the song live. This was then broken down shot by shot, and various layers were rotoscoped, separated (e.g. foreground, background, singer, drummer etc.) and rendered out as QuickTime files. (This was all done in the traditional way with AfterEffects, no custom software yet.) Then each of these shots & layers was individually fed into my custom software. The software analyzes the video file and, based on dozens of parameters, outputs a new sequence (as a sequence of PNGs). The analysis runs in near-realtime (depending on input video size) and the user can play with the dozens of parameters in realtime, while the app is running and even while it is rendering the processed images to disk. So all the animations you see in the video were 'performed' in realtime - no keyframes used. Lots of different 'looks' were created (presets) and applied to the different shots & layers. Each of these processed sequences was rendered to disk and re-composited and edited back together with Final Cut and AfterEffects to produce the final video.

 

Processing

This isn't meant as a tutorial, but a quick, high-level overview of all the techniques used in processing the footage. There are a few main phases:

  1. analyze the footage and find some interesting points 
  2. create triangles from those interesting points 
  3. display those triangles
  4. save image sequence to disk
  5. profit

Phase #1 is where all the computer vision (OpenCV) stuff happens. I used a variety of techniques. As you can see from the GUI screenshots, the first step is a bit of pre-processing: blur (cvSmooth), bottom threshold (clamp anything under a certain brightness to black - cvThreshold), top threshold (clamp anything above a certain brightness to white - cvThreshold), adaptive threshold (apply a localized binary threshold, clamping to white or black depending on neighbours only - cvAdaptiveThreshold), erode (shrink or 'thin' bright pixels - cvErode), dilate (expand or 'thicken' bright pixels - cvDilate). Not all of these are always used; different shots and looks require different pre-processing.
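As a rough illustration of that pre-processing chain (the original app used the cv* C API through openFrameworks, with every value on a GUI slider; here the modern OpenCV C++ API stands in, and all the constants are placeholders):

#include <opencv2/opencv.hpp>

cv::Mat preprocess(const cv::Mat& frameBGR) {
    cv::Mat gray;
    cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY);

    cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);   // blur (cvSmooth)
    gray.setTo(0,   gray < 40);    // bottom threshold: anything darker than 40 -> black
    gray.setTo(255, gray > 200);   // top threshold: anything brighter than 200 -> white

    cv::Mat bin;
    // adaptive threshold: binarize each pixel based on its local neighbourhood only
    cv::adaptiveThreshold(gray, bin, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY, 15, 5);

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(bin, bin, kernel);    // shrink / thin bright regions
    cv::dilate(bin, bin, kernel);   // expand / thicken bright regions
    return bin;                     // in practice each stage is toggled per shot / look
}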

Next, the first method of finding interesting points was 'finding contours' (cvFindContours) - or 'finding blobs' as it is also sometimes known. This procedure basically allows you to find the 'edges' in the image and return them as a sequence of points - as opposed to applying, say, just a Canny or Laplacian edge detector, which will also find the edges, but will return a B&W image with a black background and white edges. The latter (Canny, Laplacian etc.) find the edges *visually*, while cvFindContours goes one step further and returns the edge *data* in a computer-readable way, i.e. an array of points, so you can parse through this array in your code and see where these edges are. (cvFindContours also returns other information regarding the 'blobs', like area, centroid etc., but that is irrelevant for this application.)

Now that we have the edge data, can we triangulate it? No, because it's way too dense - a coordinate for every pixel. So some simplification is in order. Again for this I used a number of techniques. A very crude method is just to omit every n'th point. Another method is to omit a point if the dot product of the (normalized) vector leading up to that point from the previous point, and the (normalized) vector leading away from that point to the next point, is greater than a certain threshold (that threshold is the cosine of the minimum angle you desire). In English: omit a point if it is on a relatively straight line. Or: if we have points A, B and C, omit point B if: normalize(B-A) . normalize(C-B) > cos(angle threshold). Another method is to resample along the edges at fixed distance intervals. For this I use my own MSA::Interpolator class ( http://msavisuals.com/msainterpolator ). (I think there may have been a few more techniques, but I cannot remember as it's been a while since I wrote this app!)
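A sketch of this contour route - find contours, then thin the point list with the dot-product straightness test described above - might look like the following (the every-n'th-point and fixed-distance resampling methods are left out for brevity; again this is illustrative, not the app's code):

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

std::vector<std::vector<cv::Point2f>> interestingPoints(const cv::Mat& binary,
                                                        float minAngleDeg = 10.f) {
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_NONE);

    const float cosThresh = std::cos(minAngleDeg * (float)CV_PI / 180.f);
    std::vector<std::vector<cv::Point2f>> simplified;
    for (const auto& c : contours) {
        std::vector<cv::Point2f> keep;
        for (size_t i = 0; i < c.size(); i++) {
            const cv::Point& A = c[(i + c.size() - 1) % c.size()];
            const cv::Point& B = c[i];
            const cv::Point& C = c[(i + 1) % c.size()];
            cv::Point2f vin((float)(B.x - A.x), (float)(B.y - A.y));    // vector leading into B
            cv::Point2f vout((float)(C.x - B.x), (float)(C.y - B.y));   // vector leading out of B
            float ni = std::sqrt(vin.dot(vin)), no = std::sqrt(vout.dot(vout));
            if (ni < 1e-6f || no < 1e-6f) continue;
            float d = vin.dot(vout) / (ni * no);           // cos of the turning angle at B
            if (d < cosThresh)                             // keep only where the contour actually turns
                keep.push_back(cv::Point2f((float)B.x, (float)B.y));
        }
        if (keep.size() >= 3) simplified.push_back(keep);
    }
    return simplified;
}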

Independently of the cvFindContours point-finding method, I also looked at using 'corner detection' (feature detection / feature extraction). For this I looked into three algorithms: Shi-Tomasi and Harris (both of which are implemented in OpenCV's cvGoodFeaturesToTrack function) and SURF (using the OpenSURF library). Of these three, Shi-Tomasi gave the best visual results. I wanted a relatively large set of points that would not flicker too much (given a relatively low 'tracking quality'). Harris was painfully slow, whereas SURF would just return too few features, and adjusting the parameters to return more features just made the feature tracking too unstable. Once I had the set of points returned by Shi-Tomasi (cvGoodFeaturesToTrack), I tracked these with sparse Lucas-Kanade optical flow (cvCalcOpticalFlowPyrLK) and omitted any stray points. Again, a few parameters to simplify, set thresholds etc.
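A minimal sketch of this second route, with the C++ API standing in for cvGoodFeaturesToTrack / cvCalcOpticalFlowPyrLK and placeholder parameters where the app had sliders:

#include <opencv2/opencv.hpp>
#include <vector>

struct FeatureTracker {
    cv::Mat prevGray;
    std::vector<cv::Point2f> points;

    void update(const cv::Mat& gray) {
        if (prevGray.empty() || points.size() < 50) {
            // (re)detect: up to 400 Shi-Tomasi corners, quality 0.01, at least 8px apart
            cv::goodFeaturesToTrack(gray, points, 400, 0.01, 8);
        } else {
            std::vector<cv::Point2f> next;
            std::vector<uchar> status;
            std::vector<float> err;
            // sparse pyramidal Lucas-Kanade: track last frame's points into this frame
            cv::calcOpticalFlowPyrLK(prevGray, gray, points, next, status, err);
            std::vector<cv::Point2f> kept;
            for (size_t i = 0; i < next.size(); i++)
                if (status[i]) kept.push_back(next[i]);   // omit stray / lost points
            points = kept;
        }
        prevGray = gray.clone();
    }
};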

Phase #2 is quite straightforward. I used "Delaunay triangulation" (as many people have pointed out on Twitter, Flickr and Vimeo). This is a process for creating triangles from a set of arbitrary points on a plane (see http://en.wikipedia.org/wiki/Delaunay_triangulation for more info). For this I used the 'Triangle' library by Jonathan Shewchuk: I just feed it the set of points obtained in Phase #1, and it outputs a set of triangle data.
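I can't reproduce the Triangle integration here, but as a self-contained stand-in, OpenCV's cv::Subdiv2D performs the same Delaunay triangulation (input points are assumed to lie inside the frame):

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Vec6f> triangulate(const std::vector<cv::Point2f>& pts, cv::Size frameSize) {
    cv::Subdiv2D subdiv(cv::Rect(0, 0, frameSize.width, frameSize.height));
    for (const auto& p : pts) subdiv.insert(p);

    std::vector<cv::Vec6f> tris;
    subdiv.getTriangleList(tris);   // each Vec6f = (x0,y0, x1,y1, x2,y2)

    // Subdiv2D works with outer "virtual" vertices; drop any triangle touching them
    std::vector<cv::Vec6f> inside;
    cv::Rect2f bounds(0.f, 0.f, (float)frameSize.width, (float)frameSize.height);
    for (const auto& t : tris) {
        cv::Point2f a(t[0], t[1]), b(t[2], t[3]), c(t[4], t[5]);
        if (bounds.contains(a) && bounds.contains(b) && bounds.contains(c))
            inside.push_back(t);
    }
    return inside;
}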

Phase #3 is also quite straightforward. As you can see from the GUI shots below, a bunch of options for triangle outline (wireframe) thickness and transparency, triangle fill transparency, original footage transparency etc. allowed customization of the final look (colors for the triangles were picked as the average color of the original footage underneath each triangle). There are also a few more display options on how to join up the triangulation, pin it to the corners etc.

Phase #4: the app allowed scrubbing, pausing and playback of the video while processing in (almost) realtime (it could have been realtime if optimizations were pushed, but it didn't need to be, so I didn't bother). The processed images were always output to the screen (so you can see what you're doing), but also optionally written to disk as the video was playing and new frames were processed. This allowed us to play with and adjust the parameters while the video was playing and being saved to disk - i.e. animate the parameters in realtime and play it like a visual instrument.
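A minimal sketch of that last step in openFrameworks (names and details are illustrative, not the app's actual code): while a recording toggle is on, whatever was just drawn gets grabbed and written out as a numbered PNG, so live parameter tweaks are baked straight into the rendered sequence.

#include "ofMain.h"
#include <cstdio>

class Recorder {
public:
    bool recording = false;
    int frameNum = 0;

    // call at the end of draw(), after the processed frame has been rendered
    void saveFrame() {
        if (!recording) return;
        ofImage grab;
        grab.grabScreen(0, 0, ofGetWidth(), ofGetHeight());   // grab what was just drawn
        char name[64];
        snprintf(name, sizeof(name), "render/frame_%05d.png", frameNum++);
        grab.save(name);   // numbered PNG sequence, ready for Final Cut / AfterEffects
    }
};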

The software was written in C++ with openFrameworks http://www.openframeworks.cc

libraries used (those mentioned above):

  • openFrameworks
  • OpenCV
  • OpenSURF
  • 'Triangle' by Jonathan Shewchuk
  • MSA::Interpolator


Jul 15 19:11

iSteveJobs

In case you've been living under a rock for the past week, this happened recently:
http://mashable.com/2011/07/07/secret-service-apple-store-art-2/
http://www.bbc.co.uk/news/technology-14080438
http://fffff.at/people-staring-at-computers/
http://eyeteeth.blogspot.com/2011/07/feds-visit-artist-behind-people-sta...
http://en.wikipedia.org/wiki/People_Staring_at_Computers
http://www.google.com/search?q=%22people+staring+at+computers%22

(Cease & Desist letters may have affected the content on these sites since posting).

Inspired by the events and the FAT Lab censor, I knocked up this project. It slaps a Steve Jobs mask on any face it finds in a live webcam feed.

Feel free to install it on Apple Stores around the world. It should be legal (though don't quote me on that).

Download the source and mac binary at https://github.com/memo/iSteveJobs
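The gist of it, sketched with OpenCV's C++ API (the released app is an openFrameworks project - see the repo above for the real code): a Haar cascade finds faces in the webcam feed, and a mask image with an alpha channel is scaled onto each detection. File paths here are placeholders.

#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cam(0);
    cv::CascadeClassifier faceDetector("haarcascade_frontalface_default.xml");  // placeholder path
    cv::Mat mask = cv::imread("stevejobs_mask.png", cv::IMREAD_UNCHANGED);      // BGRA, placeholder path
    if (!cam.isOpened() || faceDetector.empty() || mask.channels() != 4) return 1;

    cv::Mat frame, gray;
    while (cam.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);
        std::vector<cv::Rect> faces;
        faceDetector.detectMultiScale(gray, faces, 1.2, 3, 0, cv::Size(60, 60));

        for (const cv::Rect& r : faces) {
            cv::Mat scaled;
            cv::resize(mask, scaled, r.size());
            // alpha-blend the mask over the detected face region
            for (int y = 0; y < r.height; y++) {
                for (int x = 0; x < r.width; x++) {
                    cv::Vec4b m = scaled.at<cv::Vec4b>(y, x);
                    float a = m[3] / 255.f;
                    cv::Vec3b& dst = frame.at<cv::Vec3b>(r.y + y, r.x + x);
                    for (int c = 0; c < 3; c++)
                        dst[c] = (uchar)(m[c] * a + dst[c] * (1.f - a));
                }
            }
        }
        cv::imshow("iSteveJobs", frame);
        if (cv::waitKey(1) == 27) break;   // ESC to quit
    }
    return 0;
}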


Mar 17 01:14

Tweak, tweak, tweak. 41 pages of GUI, or "How I learned to stop worrying and love the control freak within"

I often tell people that I spend 10% of my time designing + coding, and the rest of my time number tweaking. The actual ratio may not be totally accurate, but I do spend an awful lot of time playing with sliders. Usually getting the exact behaviour that I want is simply a balancing act between lots (and lots (and lots (and lots))) of parameters. Getting that detail right is absolutely crucial to me, the smallest change in a few numbers can really make or break the look, feel and experience. If you don't believe me, try 'Just Six Numbers' by Sir Martin Rees, Astronomer Royal.

So as an example I thought I'd post the GUI shots for one of my recent projects - interactive building projections for Google Chrome, a collaboration between my company (MSA Visuals), Flourish, Seeper and Bluman Associates. MSA Visuals provided the interactive content, software and hardware.

In this particular case, the projections were run by a dual-head Mac Pro (and a second for backup). One DVI output went to the video processors/projectors, the other to a monitor where I could preview the final output content, input camera feeds, see individual content layers and tweak a few thousand parameters - through 41 pages of GUI! To quickly summarize some of the duties carried out by the modules seen in the GUI:

  • configure layout for mapping onto building architecture and background anim parameters
  • setup lighting animation parameters
  • BW camera input options, warping, tracking, optical flow, contours etc.
  • color camera input options
  • contour processing, tip finding, tip tracking etc.
  • screen saver / timeout options
  • fluid sim settings
  • physics and collision settings
  • post processing effects settings (per layer)
  • tons of other display, animation and behaviour settings

(This installation uses a BW IR camera and Color Camera. When taking these screenshots the color camera wasn't connected, hence a lot of black screens on some pages.)
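The production app used my ofxSimpleGuiToo addon to expose those few thousand parameters across 41 pages. Purely as a sketch of the same idea with the stock ofxGui addon (parameter names here are just examples picked from the list above): one panel per 'page', switched with the number keys.

#include "ofMain.h"
#include "ofxGui.h"

class ofApp : public ofBaseApp {
public:
    ofxPanel pageCamera, pageFluid, pagePost;
    std::vector<ofxPanel*> pages;
    int currentPage = 0;

    ofParameter<float> blurAmount, flowThreshold, fluidViscosity, bloomAmount;

    void setup() override {
        pageCamera.setup("camera input");
        pageCamera.add(blurAmount.set("blur", 3, 0, 15));
        pageCamera.add(flowThreshold.set("optical flow threshold", 0.1, 0, 1));
        pageFluid.setup("fluid sim");
        pageFluid.add(fluidViscosity.set("viscosity", 0.001, 0, 0.01));
        pagePost.setup("post fx");
        pagePost.add(bloomAmount.set("bloom", 0.5, 0, 1));
        pages = { &pageCamera, &pageFluid, &pagePost };   // ...41 of these in the real thing
    }
    void draw() override { pages[currentPage]->draw(); }
    void keyPressed(int key) override {
        if (key >= '1' && key < '1' + (int)pages.size())
            currentPage = key - '1';   // jump straight to a page
    }
};

int main() {
    ofSetupOpenGL(1280, 720, OF_WINDOW);
    ofRunApp(new ofApp());
}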

Check out the GUI screen grabs below, or click here to see them fullscreen (where you can read all the text)

Feb 19 00:16

Speed Project: RESDELET 2011

Back in the late 80s/early 90s I was very much into computer viruses - the harmless, fun kind. To a young boy, no doubt the concept of an invisible, mischievous, self-replicating little program was very inviting - and a great technical + creative challenge.

The very first virus I wrote was for an 8088, and it was called RESDELET.EXE. This was back in the age of DOS, before Windows. In those days, to 'multitask' - i.e. keep your own program running in the background while the user interacted with another application in the foreground - was a dodgy task. It involved hooking into interrupt vectors and keeping your program in memory using the good old TSR: the Terminate and Stay Resident interrupt call, 27h.

So RESDELET.EXE would hang about harmlessly in memory while you worked on other things - e.g. typing up a spreadsheet in Lotus 123 - then when you pressed the DELETE key on the keyboard, the characters on the screen would start falling down - there and then inside Lotus 123 or whatever application you were running.

RESDELET 2011 is an adaptation of the original. It hangs about in the background, and when you press the DELETE or BACKSPACE key, whatever letters you have on your screen start pouring down - with a bit of added mouse interactivity. This version does *not* self-replicate - it is *not* a virus, just a bit of harmless fun.

Source code coming real soon (as soon as I figure out how to add a git repo inside another repo)

This is a speed project developed in just over half a day, so use at your own risk!

Sorry for the flickering; there was a conflict with the screen recording application I couldn't resolve. Normally there is no flicker - it's as smooth as silk.

Nov 14 19:46

First tests with Kinect - gestural drawing in 3D

Yes I'm playing with hacking Kinect :)

The Xbox Kinect is connected to my MacBook Pro, and I wrote a little demo that analyses the depth map for gestural 3D interaction. One hand to draw in 3D, two hands to rotate the view. Very rough, early prototype.

You can download the source for the above demo (GPL v2) at
https://github.com/memo/ofxKinect-demos
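Not the demo's actual code (that's all in the repo above), but a minimal ofxKinect sketch of the idea: treat the closest point inside a depth band as the 'hand' and append it to a 3D line. The depth thresholds are assumptions to be tuned for the room, and the two-handed rotation is left out.

#include "ofMain.h"
#include "ofxKinect.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofPolyline stroke;
    ofEasyCam cam;

    void setup() override {
        kinect.init();
        kinect.open();
    }
    void update() override {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        float nearest = 10000;   // mm
        ofVec3f hand;
        bool found = false;
        for (int y = 0; y < (int)kinect.getHeight(); y += 4) {       // coarse scan is plenty
            for (int x = 0; x < (int)kinect.getWidth(); x += 4) {
                float d = kinect.getDistanceAt(x, y);                // mm, 0 = no reading
                if (d > 500 && d < 1200 && d < nearest) {            // only look in a 0.5-1.2m band
                    nearest = d;
                    hand = kinect.getWorldCoordinateAt(x, y);
                    found = true;
                }
            }
        }
        if (found) stroke.addVertex(hand.x, hand.y, hand.z);   // draw in 3D with the closest point
    }
    void draw() override {
        ofBackground(0);
        cam.begin();
        stroke.draw();
        cam.end();
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}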

Within a few hours of receiving his Kinect, Hector Martin released source code for Linux to read the RGB image and depth map from the device.
http://git.marcansoft.com/?p=libfreenect.git

Within a few hours of that, Theo Watson ported it to Mac OS X and released his source, which - with the help of others - became an openFrameworks addon pretty quickly.
https://github.com/ofTheo/ofxKinect

Now demos are popping up all over the world as people are trying to understand the capabilities of this device and how it will change Human Computer Interaction on a consumer / mass level.

Nov 05 15:38

OpenCL Particles at OKGo's Design Miami 2009 gig

For last year's Design Miami (2009) I created realtime visuals for an OKGo performance in which they used guitars modded by Moritz Waldemeyer, shooting lasers out of the headstock. I created software to track the laser beams and project visuals onto the wall where they hit.

This video is an open-source demo - written with openFrameworks - of one of the visualizations from that show, using an OpenCL particle system and the MacBook multitouch pad to simulate the laser hit points. The demo is audio-reactive and is controlled by my fingers (more than one) on the MacBook multitouch pad (each 'attractor' is a finger on the pad). It runs at a solid 60fps on a MacBook Pro, but unfortunately the screen capture killed the fps - and of course half the particles aren't even visible because of the video compression.

The app is written to use the MacBook Pro multitouch pad, so it will not compile for platforms other than OS X, but by simply removing the multitouch pad sections (and hooking something else in), the rest should compile and run fine (assuming you have an OpenCL-compatible card and implementation on your system).

Uses ofxMultiTouchPad by Jens Alexander Ewald with code from Hans-Christoph Steiner and Steike.
ofxMSAFFT uses core code from Dominic Mazzoni and Don Cross.

Source code (for OF 0062) is included and includes all necessary non-OFcore addons (MSACore, MSAOpenCL, MSAPingPong, ofxMSAFFT, ofxMSAInteractiveObject, ofxSimpleGuiToo, ofxFBOTexture, ofxMultiTouchPad, ofxShader) - but bear in mind some of these addons may not be latest version (ofxFBOTexture, ofxMultiTouchPad, ofxShader), and are included for compatibility with this demo which was written last year.

More information on the project at
http://msavisuals.com/okgo_fendi_design_miami_show

Most of the magic happens in the OpenCL kernel, so here it is (or download the full zip with Xcode project at the bottom of this page):

typedef struct {
    float2 vel;
    float mass;
    float life;
} Particle;
 
 
typedef struct {
    float2 pos;
    float spread;
    float attractForce;
    float waveAmp;
    float waveFreq;
} Node;
 
#define kMaxParticles       512*512
 
#define kArg_particles          0
#define kArg_posBuffer          1
#define kArg_colBuffer          2
#define kArg_nodes              3
#define kArg_numNodes           4
#define kArg_color              5
#define kArg_colorTaper         6
#define kArg_momentum           7
#define kArg_dieSpeed           8
#define kArg_time               9
#define kArg_wavePosMult        10
#define kArg_waveVelMult        11
#define kArg_massMin            12
 
 
float rand(float2 co) {
    float i;
    return fabs(fract(sin(dot(co.xy ,make_float2(12.9898f, 78.233f))) * 43758.5453f, &i));
}
 
 
__kernel void update(__global Particle* particles,      //0
                     __global float2* posBuffer,        //1
                     __global float4 *colBuffer,        //2
                     __global Node *nodes,              //3
                     const int numNodes,                //4
                     const float4 color,                //5
                     const float colorTaper,            //6
                     const float momentum,              //7
                     const float dieSpeed,              //8
                     const float time,                  //9
                     const float wavePosMult,           //10
                     const float waveVelMult,           //11
                     const float massMin                //12
                     ) {                
 
    int     id                  = get_global_id(0);
    __global Particle   *p      = &particles[id];
    float2  pos                 = posBuffer[id];
 
    int     birthNodeId         = id % numNodes;
    float2  vecFromBirthNode    = pos - nodes[birthNodeId].pos;                         // vector from birth node to particle
    float   distToBirthNode     = fast_length(vecFromBirthNode);                        // distance from birth node to particle
 
    int     targetNodeId        = (id % 2 == 0) ? (id+1) % numNodes : (id + numNodes-1) % numNodes;
    float2  vecFromTargetNode   = pos - nodes[targetNodeId].pos;                        // vector from target node to particle
    float   distToTargetNode    = fast_length(vecFromTargetNode);                       // distance from target node to particle
 
    float2  diffBetweenNodes    = nodes[targetNodeId].pos - nodes[birthNodeId].pos;     // vector between nodes (from birth to target)
    float2  normBetweenNodes    = fast_normalize(diffBetweenNodes);                     // normalized vector between nodes (from birth to target)
    float   distBetweenNodes    = fast_length(diffBetweenNodes);                        // distance between nodes (from birth to target)
 
    float   dotTargetNode       = fmax(0.0f, dot(vecFromTargetNode, -normBetweenNodes));
    float   dotBirthNode        = fmax(0.0f, dot(vecFromBirthNode, normBetweenNodes));
    float   distRatio           = fmin(1.0f, fmin(dotTargetNode, dotBirthNode) / (distBetweenNodes * 0.5f));
 
    // add attraction to other nodes
    p->vel                      -= vecFromTargetNode * nodes[targetNodeId].attractForce / (distToTargetNode + 1.0f) * p->mass;
 
    // add wave
    float2 waveVel              = make_float2(-normBetweenNodes.y, normBetweenNodes.x) * sin(time + 10.0f * 3.1415926f * distRatio * nodes[birthNodeId].waveFreq);
    float2 sideways             = nodes[birthNodeId].waveAmp * waveVel * distRatio * p->mass;
    posBuffer[id]               += sideways * wavePosMult;
    p->vel                      += sideways * waveVelMult * dotTargetNode / (distBetweenNodes + 1);
 
    // set color
    float invLife = 1.0f - p->life;
    colBuffer[id] = color * (1.0f - invLife * invLife * invLife);// * sqrt(p->life);    // fade with life
 
    // update life; respawn at birth node if dead or reached the target
    p->life -= dieSpeed;
    if(p->life < 0.0f || distToTargetNode < 1.0f) {
        posBuffer[id] = posBuffer[id + kMaxParticles] = nodes[birthNodeId].pos;
        float a = rand(p->vel) * 3.1415926f * 30.0f;
        float r = rand(pos);
        p->vel = make_float2(cos(a), sin(a)) * (nodes[birthNodeId].spread * r * r * r);
        p->life = 1.0f;
//      p->mass = mix(massMin, 1.0f, r);
    } else {
        posBuffer[id+kMaxParticles] = pos;
        colBuffer[id+kMaxParticles] = colBuffer[id] * (1.0f - colorTaper);  
 
        posBuffer[id] += p->vel;
        p->vel *= momentum;
    }
}
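The demo drives this kernel through the MSAOpenCL addon; purely to illustrate how the kArg_* indices above map to host code, here is a hedged sketch using the raw OpenCL C API (the variable names are made up):

#include <OpenCL/opencl.h>   // <CL/cl.h> on non-Apple platforms

void updateParticles(cl_command_queue queue, cl_kernel update,
                     cl_mem particles, cl_mem posBuffer, cl_mem colBuffer,
                     cl_mem nodes, cl_int numNodes, cl_float4 color,
                     cl_float colorTaper, cl_float momentum, cl_float dieSpeed,
                     cl_float time, cl_float wavePosMult, cl_float waveVelMult,
                     cl_float massMin) {
    clSetKernelArg(update, 0,  sizeof(cl_mem),    &particles);   // kArg_particles
    clSetKernelArg(update, 1,  sizeof(cl_mem),    &posBuffer);   // kArg_posBuffer
    clSetKernelArg(update, 2,  sizeof(cl_mem),    &colBuffer);   // kArg_colBuffer
    clSetKernelArg(update, 3,  sizeof(cl_mem),    &nodes);       // kArg_nodes
    clSetKernelArg(update, 4,  sizeof(cl_int),    &numNodes);    // kArg_numNodes
    clSetKernelArg(update, 5,  sizeof(cl_float4), &color);       // kArg_color
    clSetKernelArg(update, 6,  sizeof(cl_float),  &colorTaper);  // kArg_colorTaper
    clSetKernelArg(update, 7,  sizeof(cl_float),  &momentum);    // kArg_momentum
    clSetKernelArg(update, 8,  sizeof(cl_float),  &dieSpeed);    // kArg_dieSpeed
    clSetKernelArg(update, 9,  sizeof(cl_float),  &time);        // kArg_time
    clSetKernelArg(update, 10, sizeof(cl_float),  &wavePosMult); // kArg_wavePosMult
    clSetKernelArg(update, 11, sizeof(cl_float),  &waveVelMult); // kArg_waveVelMult
    clSetKernelArg(update, 12, sizeof(cl_float),  &massMin);     // kArg_massMin

    size_t globalSize = 512 * 512;   // kMaxParticles, one work-item per particle
    clEnqueueNDRangeKernel(queue, update, 1, NULL, &globalSize, NULL, 0, NULL, NULL);
}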

Sep 22 18:13

Impromptu, improvised performance with Body Paint at le Cube, Paris.

My Body Paint installation is currently being exhibited at le Cube festival in Paris. At the opening night two complete strangers, members of the public, broke into an impromptu, improvised performance with the installation. Mind blowing and truly humbling. Thank you. My work here is done.

Sep 15 18:07

"Who am I?" @ Science Museum

MSA Visuals' Memo Akten was commissioned by All of Us to provide consultancy on the design and development of new exhibits for the Science Museum's relaunch of their "Who Am I?" gallery, as well as to design and develop one of the installations. MSAV developed the 'Threshold' installation, situated at the entrance of the gallery: a playful, interactive environment inviting visitors to engage with the installation whilst learning about the gallery and its key messages.

http://www.wired.co.uk/news/archive/2010-06/25/science-museum-revamps-who-am-i-gallery

Sep 08 16:06

"Waves" UK School Games 2010 opening ceremony

MSA Visuals' Memo Akten was commissioned by Modular to create interactive visuals for the UK School Games 2010 opening ceremony at Gateshead Stadium in Newcastle. The project used an array of cameras to convert the entire runway into an interactive space for the opening parade of 1600+ participants walking down the track, as well as for a visual performance accompanying a breakdance show by the Bad Taste Cru. All of the motion tracking and visuals were created using custom software written in C++ with openFrameworks, combining visual elements created in Quartz Composer. Using custom mapping software, the visuals were mapped and displayed on a 30m LED wall alongside the track. The event was curated and produced by Modular Projects for commissioners Newcastle Gateshead Initiative.

Aug 06 18:14

Announcing Webcam Piano 2.0

Jun 03 16:42

Metamapping 2010

I was recently at a workshop / residency at the Mapping Festival 2010 in Geneva, Switzerland. Many collaborators - artists, musicians, performers, visualists - took over various spaces at La Parfumerie to create audio-visual performances and installations. Constructing a large scaffold structure in the main hall, armed with DMX-controlled lights, microphones, cameras, sensors and projectors, we converted the space into a giant audio-visual-light instrument for the audience to explore, play with, be part of, and experience as a non-linear narrative performance. The project involved live projection mapping, motion tracking, audio-reactive visuals, piezo-reactive audio and visuals, DMX-controlled lights, rope gymnasts, acrobats and much more!

Video coverage of the event coming soon.

More information can be found at Mapping Festival and 1024 Architecture

 

Jan 21 23:31

Laser tracking visuals for OKGo & Fendi @ Design Miami 2009

I designed and programmed visuals (projections) for the OKGo performance and Fendi installation at Design Miami 2009. OKGo were playing modified Les Paul guitars with lasers mounted in the headstock, designed by Fendi in collaboration with Moritz Waldemeyer. Using a PC equipped with a high-speed FireWire camera, I developed custom software to track the laser beams and generate visuals around the spots where they hit the wall. These visuals were also audio-reactive, responding to the live audio feed coming from the sound desk.
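Not the show code, but the core trick is easy to sketch: with the camera exposure turned right down, the laser dots are by far the brightest things in frame, so a hard threshold plus contour centroids gives you the hit points to draw visuals around. A rough OpenCV C++ sketch, with a placeholder threshold:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Point2f> findLaserHits(const cv::Mat& frameBGR) {
    cv::Mat gray, bright;
    cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bright, 240, 255, cv::THRESH_BINARY);   // keep only near-saturated pixels

    std::vector<std::vector<cv::Point>> blobs;
    cv::findContours(bright, blobs, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    std::vector<cv::Point2f> hits;
    for (const auto& b : blobs) {
        cv::Moments m = cv::moments(b);
        if (m.m00 > 2)   // ignore single-pixel noise
            hits.push_back(cv::Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00)));   // blob centroid
    }
    return hits;
}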

More images & video coverage coming soon.

P.S. For additional laser tracking goodness, check out the Graffiti Research Lab's Laser Tag.

Dec 16 23:02

Happy Holidays! OKGo 'wtf' effect in realtime

Inspired by the brilliant use of an age-old concept in the recent OKGo 'WTF' video, I created this little open-source demo in Processing. It works in realtime with a webcam, and you can download the app and source from http://www.msavisuals.com/xmas2009

Jun 10 18:30

Body Paint performance at Clicks or Mortar, March 2009

I finally got round to editing the footage from the Body Paint performances at Clicks or Mortar, March 2009.

designed & created by Mehmet Akten, http://www.memo.tv
choreography & performance by Miss Martini, http://www.myspace.com/maleficentmartini
music "Kill me" by Dave Focker, http://www.myspace.com/davefocker

Excerpts from performance at
“Clicks or Mortar”, Tyneside Cinema, March 2009
curated by Ed Carter / The Pixel Palace, http://www.thepixelpalace.org/

http://www.memo.tv/body_paint

Apr 10 10:36

MSAFluid in the wild

Jan 31 17:19

Reincarnation

"Reincarnation" is an off-shoot from a visual performance for the Rambert Dance Company's "Iatrogenesis" at the Queen Elizabeth Hall, South Bank, London UK - visuals for the latter directed by my good friends at flat-e.

The project began when I started working with footage of the Rambert dancers. Inspired by the footage, I wrote custom software to track the motion and generate the visuals you see in the video below... and "Reincarnation" was born. 

For the Rambert's "Iatrogenesis" piece, I created similar visuals, abstract visual layers containing subtle hints of human forms and motion, which would tie in with the movement of the dancers on stage. These were used by flat-e and composited with various other layers and projected on a see-through gauze in front of the stage.

The video (and music) you see below is not representative of the visuals (and music) of the Rambert's and flat-e's "Iatrogenesis". This is just a standalone piece born from working with the Rambert choreography.

When the clip starts, you probably won't recognize a human shape at first, but your eyes and mind will be searching, seeking mental connections between abstract shapes and recognizable patterns, like looking for shapes in clouds. You'll be questioning what you see, is that it? is it sitting? is it crouching? is it kneeling? Then all of a sudden, it'll be crystal clear. Then you'll try and keep it in focus, following it as it moves around, tracking each limb, using the motion to construct an image of the parts you can't see. It'll fade in and out of clarity. At times you'll be clinging onto just the tip of its hand swinging round, trying to identify any other recognizable parts. You might see another arm or leg and grab onto it, fighting not to lose it. Then it'll be crystal clear again, and then all of a sudden vanish, literally in a puff of smoke, and your eyes will start searching again.

Made with openFrameworks.


Dec 01 12:36

Interactive Stand for Toyota IQ

Working with Seeper, we created a vision-driven interactive stand for Brandwidth and Toyota IQ. The stand is touring shopping centers in the UK and is currently at the Westfield shopping center in London.

Made with openFrameworks.

Nov 19 01:02

Gold dust demo

Update

I'm delighted to announce that my "Gold" installation has been selected to be shown at the Tent London exhibition as part of the London Design Festival. 24-27th September 2009, at the Truman Brewery, London, UK. Stay tuned for more information.

The video below is an early demo of the installation.

“Gold” is an interactive installation which explores our obsession with super-stardom, and the extravagance that accompanies it. Through a ‘magic mirror’, revel in a world of excess where you are the super-star. Shower in glittery gold, experience almost omnipotent powers as you materialize, morph and dematerialize into pure sparkling gold dust. Immortalize yourself as a shimmering golden statue, before you collapse and fade away.

The installation uses custom software written with openFrameworks and the OpenCV computer vision library. The software analyzes the video feed from infra-red cameras in real-time and generates 1080p HD output using OpenGL.


An iPhone adaptation of this can be found here

Source code for the particle system in this demo (minus the fancy effects) can be found at http://memo.tv/vertex_arrays_vbos_and_point_sprites_with_openframeworks
This demonstrates how to use VBOs, Vertex Arrays and Point Sprites.

Nov 13 23:50

Interactive Windows for Citizens Bank

Working with Arnold Agency and Todd Vanderlin + Ryan Habbyshaw from their R&D team, we created an interactive display for Citizens Bank, shown in branches in major cities across the US.

Using motion tracking, pedestrians can interact with the display - signaling birds to come flying in and drop coins, growing plants, creating wind to blow the plants around and spray pollen, etc. The display is also time-reactive, automatically theming itself depending on the time of day.

Made with openFrameworks.

Photos can be seen at http://www.flickr.com/photos/habbyshaw/sets/72157608742269830/

Oct 27 03:47

Projection mapping / quad warping with Quartz Composer & VDMX

This is a demo of projection mapping with VDMX & Quartz Composer inspired by deepvisual's tutorial of doing it in modul8 (uk.youtube.com/watch?v=2bRfdn9lNO8).

VDMX unfortunately doesn't have this feature built-in, but fortunately has beautiful integration with Quartz Composer - allowing me to build a quad warper in QC using a GLSL vertex shader, which should be super fast.

Also, around the 4:30 mark you'll see me masking the video on the box in the back. This is also using a custom Quartz Composition which allows 4-point mask creation. Usage is almost identical to the QuadWarper, but instead of warping the image it just applies a mask; or you can invert the mask and it cuts a chunk out. You could do the same by creating new layers, grouping, using it as a layer mask etc., but it's a bit more hassle I think. Using the QuadMask is a lot quicker, and you can put multiple QuadMasks on the same layer to draw more complex masks.
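For the curious, one simple way to implement the QuadWarper's warp is to bilinearly interpolate the four destination corners across a grid of vertices (a full projective/homography warp is the other common route). The QC patch does this on the GPU in the vertex shader; here is the same maths sketched in C++, with an assumed corner order of top-left, top-right, bottom-right, bottom-left:

#include <vector>

struct Vec2 { float x, y; };

// u,v in [0,1] across the flat quad -> position inside the warped quad
Vec2 warpPoint(float u, float v, const Vec2 c[4]) {
    Vec2 top    = { c[0].x + (c[1].x - c[0].x) * u, c[0].y + (c[1].y - c[0].y) * u };   // along the top edge
    Vec2 bottom = { c[3].x + (c[2].x - c[3].x) * u, c[3].y + (c[2].y - c[3].y) * u };   // along the bottom edge
    return { top.x + (bottom.x - top.x) * v, top.y + (bottom.y - top.y) * v };          // then between the two
}

// warp every vertex of a (cols x rows) grid, e.g. the mesh the video is drawn onto
std::vector<Vec2> warpGrid(int cols, int rows, const Vec2 corners[4]) {
    std::vector<Vec2> verts;
    for (int j = 0; j < rows; j++)
        for (int i = 0; i < cols; i++)
            verts.push_back(warpPoint(i / float(cols - 1), j / float(rows - 1), corners));
    return verts;
}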

Sep 11 17:46

ofLab '08 @ Ars Electronica 2008

I was fortunate enough to be part of the OF Lab team this year at Ars Electronica in Linz, Austria.

Apart from building a 3-storey lab, tons of random little bits of software and visualizations for our environment, we also had the challenge of creating pieces inspired by 5 words submitted by the public - with only a few hours up to a day per project.

Below is an excerpt from "one second before big bang". The visuals are all realtime and purely controlled by motion.

Made with openFrameworks.

Sep 11 16:49

Roots @ Minitek Festival 2008

"Roots" is an interactive musical/visual installation for the Brick Table tangible and multi-touch interface, where multiple people can collaborate in making generative music in a dynamic & visually responsive environment. It is a collaborative effort between myself and the Brick Table creators Jordan Hochenbaum & Owen Vallis. It will premiere at the Minitek Music + Innovation Festival September 12-14, 2008 in New York.

The essence of the interaction, is that you control parameters of a chaotic environment - which affect the behaviour of its inhabitants - which create and control music.

To break it down very briefly without going into much detail:

  • There are vinelike structures branching and wandering around on the table. They live and move in an environment governed by chaos.
  • Audio is triggered and controlled entirely by how and where the branches move.
  • You - the user - control various parameters of the chaotic environment. Parameters which range from introducing varying amounts of order, to simply changing certain properties to let the chaos evolve in different directions.

There are varying levels of interaction, ranging from traditional one-to-one correlations - 'this movement I make creates that sound' - to more complex relationships along the lines of 'this movement I make affects the environment in this way, which sends the music in that direction, where it evolves with a life of its own'. The visuals are purely generative, as is the audio, and as the user you can play with the parameters of that system and watch and listen to the results...

 

Demo of drawing with roots:

 

Demo of using fiducials to create magnetic force fields:

Jul 01 15:24

Pi @ Glastonbury 2008

"Pi" is an interactive audio/visual installation commissioned by Trash City of the Glastonbury Festival to be shown at the festival in June 2008.

Working with arts and technology collective Seeper, our concept was to take a 50ft tent, and convert it into a giant audio/visual instrument - all of the music, audio and visuals inside the tent are generated and controlled purely by the movements of the occupants.

The space was divided into 6 zones. Two of the zones were purely visual; this was the waiting area. Here people could dance, chill, run about and do what they pleased. Two cameras tracked their movement and applied it to the fluid/particles visuals - so people could 'throw' plasma balls at each other, or send colorful waves propagating around the space. The other 4 zones had the same visual interactions, but were also connected to an audio system. Each of these four zones was allocated an instrument type (drums/beats/percussion, pads, bass, strings etc.), and movement within these zones would also trigger notes or beats - depending on precisely where in the zone the movement was triggered. A lot of effort went into designing the sounds and notes triggered to make sure the end result would almost always sound pleasant and not be complete cacophony.
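As a hedged sketch of that zone logic (not the installation's code): split the camera image into a grid of zones, measure motion in each by frame differencing, and when a zone's motion passes a threshold map it to a note from a fixed scale. Sending the actual MIDI/OSC is left out, and all values are placeholders.

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

class ZoneTrigger {
public:
    cv::Mat prevGray;
    int cols = 4, rows = 2;
    std::vector<int> scale = {60, 62, 65, 67, 69, 72, 74, 77};   // MIDI notes chosen to always sound pleasant

    void update(const cv::Mat& frameBGR) {
        cv::Mat gray, diff;
        cv::cvtColor(frameBGR, gray, cv::COLOR_BGR2GRAY);
        if (prevGray.empty()) { prevGray = gray.clone(); return; }
        cv::absdiff(gray, prevGray, diff);   // frame differencing = crude motion measure
        prevGray = gray.clone();

        int zw = diff.cols / cols, zh = diff.rows / rows;
        for (int j = 0; j < rows; j++) {
            for (int i = 0; i < cols; i++) {
                cv::Mat zone = diff(cv::Rect(i * zw, j * zh, zw, zh));
                double motion = cv::mean(zone)[0];          // average pixel change in this zone
                if (motion > 12.0) {                        // movement threshold
                    int note = scale[(j * cols + i) % (int)scale.size()];
                    printf("zone %d,%d -> note %d\n", i, j, note);   // stand-in for the MIDI/OSC output
                }
            }
        }
    }
};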

 

The first psychedelic fluid/particles interaction prototype developed in processing.org:

 

Camera -> osc/midi interaction tests (developed in Quartz Composer):

 

The two concepts strung together and written in C++ with openFrameworks:

Made with openFrameworks.

Jun 23 17:42

Audio Visual Interactive Installation Teaser for Glastonbury 2008

This is a little teaser for an audio-visual interactive installation I'm working on for Glastonbury 2008. It'll be projected around (almost) the entire 65ft interior of a 50ft round tent, with multiple channels of audio. Everyone inside will be contributing to the audio/visual experience. It's located behind the Laundrettas' crashed plane / laundrette in Trash City.

All visuals and music are entirely camera-driven (by my waving arms and hands) and realtime. I originally started this app in Processing, but realized I needed as much power as possible, so I switched to C++ / openFrameworks. I'm not using the GPU as much as I'd have liked due to time constraints - v2 will be fully GPU, hopefully ;)

Made with openFrameworks.