Live Performance at the Opening Reception of ACM Multimedia 2013

A little more than a year ago, Alex Jaimes and I collaborated on a live intermedia performance for the Opening Reception of the ACM Multimedia 2013 conference in Barcelona, Spain. The video below records part of the performance, but it was made with a small pocket digital camera using the built-in mic, and I was very preoccupied with the performance itself. Unfortunately, this is the only record I have of the performance.

It was held October 24, 2013 at the Foment de les Arts i del Disseny, which is housed in an ancient stone building next to Barcelona’s Contemporary Art Museum. We worked with contemporary dancer Laida Azkona and violinist Paloma de Juan, one of Alex’s colleagues at Yahoo Research Labs in Barcelona. Paloma’s day job is research engineer, whereas Laida is a professional contemporary dancer who has studied and performed around the world.

Again the quality of this video recording is not good, but perhaps it gives some impression of the performance.

The next video is from the first rehearsal for the live performance. Alex, Laida, and I met at my apartment the same day I arrived in Barcelona, and we had our first rehearsal a day or two later. We had two more rehearsals after that, including a brief one after we had set up and sound-checked on the day of the opening reception. Since Alex and I were working remotely, we could not prepare much beforehand, though I had already written and tested the programs needed for capturing movements and converting them to OSC for controlling the video projections. The video clips themselves were edited at the last moment, partly during a visit to Alex’s home in Barcelona. It was a busy but exciting trip!
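The movement-to-OSC conversion mentioned above is simple at heart: pack a value into an OSC message and send it over UDP to the video software. As a hypothetical sketch (not my actual programs — the address `/dancer/hand` and port 9000 are made up for illustration), here is a minimal OSC message encoder in pure Python, following the OSC 1.0 rule that strings are null-terminated and everything is padded to 4-byte boundaries:

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-pad to a multiple of 4 bytes, as OSC requires."""
    return data + b"\x00" * ((4 - len(data) % 4) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Build a minimal OSC message carrying float32 arguments."""
    msg = osc_pad(address.encode() + b"\x00")                      # address pattern
    msg += osc_pad(("," + "f" * len(floats)).encode() + b"\x00")   # type tag string
    for f in floats:
        msg += struct.pack(">f", f)                                # big-endian float32
    return msg

# Hypothetical usage: send a tracked hand position to the projection patch.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/dancer/hand", 0.42, 0.87), ("127.0.0.1", 9000))
```

Because OSC rides on UDP, the capture program and the projection software can run on separate machines with no handshaking, which is what makes this kind of last-minute remote collaboration workable.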

Danse Neurale: NeuroSky + Kinect + OpenFrameworks

This performance makes use of the NeuroSky EEG sensor as well as the Kinect. Visuals and music are driven by EEG and registered with the performer’s body using the Kinect. It seems their system runs under openFrameworks; in fact, I noticed this video in the OF gallery. The second half of the video consists of an interview with the technical team and the performer.

This performance uses off-the-shelf technology but is cutting edge in more than one sense. No one can accuse these guys of lacking commitment.

A project page may be found here: Danse Neurale.

They generously shared the code used to acquire signals from the NeuroSky server in the OF forum. This part of the system is written in Processing (P5).

Here are a few details on the technical background of the work, given by one of the creators in the OF forum:

Sensors:

– breath: it’s sensed with a wireless mic positioned inside Lukas’ mask; its signal goes directly through a mixer controlled by the audio workstation

– heart: it’s sensed with a modified stethoscope connected to a wireless mic; the signal works just like the breath (we’re not sure, but in the future we may decide to apply some DSP to it)

– EEG: we use the cheaper sensor from NeuroSky; it streams brainwaves (already split into frequency bands) via radio in a serial-like protocol; these radio packets arrive at my computer, where they’re parsed, converted into OSC, and broadcast via wifi (we only have 2 computers on stage, but the idea is that if there’s a kindred hacker soul in the audience, he/she can join the jam session 🙂 )

– skeleton tracking: it’s obviously done with ofxOpenNI (as you can see in the video we also stage the infamous “calibration pose”, because we wanted to let people understand as much as possible what was going on)
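The "serial-like protocol" in the EEG bullet above is NeuroSky's ThinkGear stream: framed packets whose payload contains data rows (attention, meditation, band powers, and so on), guarded by a one-byte checksum. As a hypothetical sketch of the parsing step — not the group's actual Processing code — here is a minimal payload parser in Python for a few common row codes:

```python
# ThinkGear payload checksum: one's complement of the low byte of the payload sum.
def checksum_ok(payload: bytes, checksum: int) -> bool:
    return ((~sum(payload)) & 0xFF) == checksum

# ASIC_EEG_POWER (code 0x83) carries 8 bands, 3 bytes each, big-endian.
EEG_BANDS = ("delta", "theta", "low-alpha", "high-alpha",
             "low-beta", "high-beta", "low-gamma", "mid-gamma")

def parse_payload(payload: bytes) -> dict:
    """Parse ThinkGear data rows into a dict, e.g. {'attention': 53, ...}."""
    out, i = {}, 0
    while i < len(payload):
        code = payload[i]; i += 1
        if code >= 0x80:                       # multi-byte row: next byte is length
            vlen = payload[i]; i += 1
            value = payload[i:i + vlen]; i += vlen
            if code == 0x83:                   # band power row
                for b, name in enumerate(EEG_BANDS):
                    out[name] = int.from_bytes(value[3*b:3*b+3], "big")
        else:                                  # single-byte rows
            value = payload[i]; i += 1
            if code == 0x04:
                out["attention"] = value
            elif code == 0x05:
                out["meditation"] = value
    return out
```

Once parsed into a dict like this, each band value can be dropped straight into an OSC message and broadcast over wifi, which is exactly the open-jam-session setup the creators describe.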

The audio part maps the brainwave data onto volumes and scales, while the visual part uses spikes (caused, for example, by the piercings and by the winch pulling on the hooks) to trigger events; so, conceptually speaking, the wings are a correct representation of Lukas’s neural response, and they really lift him off the ground.
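The spike-triggering idea is worth unpacking: an event fires when the signal jumps well above its recent baseline. As a hypothetical sketch (my own illustration, not the creators' code), a simple running-statistics detector looks like this:

```python
from collections import deque
from statistics import mean, stdev

class SpikeDetector:
    """Flag samples that jump more than k standard deviations
    above the recent baseline (illustrative sketch only)."""

    def __init__(self, window: int = 32, k: float = 3.0):
        self.buf = deque(maxlen=window)  # sliding baseline window
        self.k = k

    def update(self, x: float) -> bool:
        spike = False
        if len(self.buf) >= 8:  # wait until we have some baseline
            mu, sigma = mean(self.buf), stdev(self.buf)
            spike = sigma > 0 and (x - mu) > self.k * sigma
        self.buf.append(x)
        return spike
```

A detector like this adapts as the performer's baseline drifts, so only genuinely abrupt events — a piercing, a sudden pull on the winch — would trigger the wing visuals.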

Point cloud painting with the Kinect

This short video by Daniel Franke & Cedric Kiefer is one of the most aesthetically impressive uses of the Microsoft Kinect I have seen yet. Apparently they used three Kinects. I’m not sure whether the visuals could be rendered in real time, because there is clearly interpolation between the 3D views involved in producing this video. For real-time use this would probably involve programming in C++, or at least openFrameworks. For anyone interested in the Kinect, it’s worth trying to find out more about what went into producing the video. Some links are given:

onformative.com
chopchop.cc

There’s a full-quality version of the video available online:

daniel-franke.com/unnamed_soundsculpture.mov

And a ‘making-of’ video on Vimeo:

Here is a statement by the artists:

The basic idea of the project is built upon the consideration of creating a moving sculpture from the recorded motion data of a real person. For our work we asked a dancer to visualize a musical piece (Kreukeltape by Machinefabriek) as closely as possible by movements of her body. She was recorded by three depth cameras (Kinect), in which the intersection of the images was later put together to a three-dimensional volume (3d point cloud), so we were able to use the collected data throughout the further process.

The three-dimensional image allowed us a completely free handling of the digital camera, without limitations of the perspective. The camera also reacts to the sound and supports the physical imitation of the musical piece by the performer. She moves to a noise field, where a simple modification of the random seed can consistently create new versions of the video, each offering a different composition of the recorded performance. The multi-dimensionality of the sound sculpture is already contained in every movement of the dancer, as the camera footage allows any imaginable perspective.

The body – constant and indefinite at the same time – “bursts” the space already with its mere physicality, creating a first distinction between the self and its environment. Only the body movements create a reference to the otherwise invisible space, much like the dots bounce on the ground to give it a physical dimension. Thus, the sound-dance constellation in the video does not only simulate a purely virtual space. The complex dynamics of the body movements is also strongly self-referential. With the complex quasi-static, inconsistent forms the body is “painting”, a new reality space emerges whose simulated aesthetics goes far beyond numerical codes. Similar to painting, a single point appears to be still very abstract, but the more points are connected to each other, the more complex and concrete the image seems.

The more perfect and complex the “alternative worlds” we project (Vilém Flusser) and the closer together their point elements, the more tangible they become. A digital body, consisting of 22 000 points, thus seems so real that it comes to life again.
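The merging step the artists describe — registering three depth views into one 3D point cloud — amounts to applying each camera's extrinsic calibration (a rotation and a translation into a shared world frame) and concatenating the results. A minimal sketch in pure Python, assuming the extrinsics are already known from calibration:

```python
def transform(point, rotation, translation):
    """Apply a 3x3 rotation (row-major tuples) and a translation to one 3D point."""
    x, y, z = point
    return tuple(
        rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z + translation[i]
        for i in range(3)
    )

def merge_clouds(clouds, extrinsics):
    """Map each camera's points into the shared world frame and concatenate.

    clouds:     list of point lists, one per camera
    extrinsics: list of (rotation, translation) pairs, one per camera
    """
    merged = []
    for points, (R, t) in zip(clouds, extrinsics):
        merged.extend(transform(p, R, t) for p in points)
    return merged
```

In practice a production pipeline would do this on the GPU with matrices rather than per-point Python loops, but the geometry is exactly this: one rigid transform per camera, then a union of the points.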

Virtual Kyoto Gardens

Kyoto’s own Mercury Software has put their excellent 360° panoramas of Kyoto gardens online, as I learned yesterday afternoon from shachō Ian Shortreed when I bumped into him buying bread. The virtual tours of two dozen or so of Kyoto’s finest temple gardens run in Flash, served from Amazon S3 servers. Definitely worth an extended contemplative visit!

A decade ago, I made use of some of Ian’s work in my presentation at the ACM SIGGRAPH 2002 conference in San Antonio, Texas. Related work on a shape-processing-based analysis of dry landscape gardens (枯山水) was later published as a short paper in Nature and as a more extensive one in the philosophy journal Axiomathes.

Though their main business is in multi-lingual writing tools, Mercury also sells the iTabi, a wabi-sabi iPhone/iPod pouch available in traditional Kyoto textile designs.